How much confidence can you place in your results, and how best to communicate them?
Jolynn Pek undertakes highly complex research with an extremely simple goal: to improve the scientific process. She primarily works with latent variable models, which were designed to accurately measure unobservable constructs like intelligence or math ability. Her research aims to help investigators decide how sure they can be that their results are right. She also works on bridging the gap between methodologists and applied researchers by developing novel and simple ways of obtaining and visualizing statistical results.
I want to contribute to solving real problems. My research can be applied to any problem, whether in psychology or in any discipline that uses statistical models. Better science follows from the use of quantitative methods that are simple to implement, understand and communicate.
The math component of these models makes them beautiful. It’s like classical music: beautiful in form and structure. E=mc^2 is very artful.
My methodological work seeks to enhance the scientific process by developing measures that quantify the uncertainty in statistical results. If you have a good idea of how much uncertainty there is in your results, you can make a judgment call on how much confidence to place in them. If the results are sensitive to particular aspects of the data or model, uncertainty is high, and you would not want to place much weight on their interpretation. If results are robust, with tolerable uncertainty, then strong scientific conclusions may be drawn.
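One standard way to see this idea in action, offered here as a minimal illustrative sketch rather than as Pek's own methodology, is the percentile bootstrap: resample the data many times, recompute a statistic each time, and read the spread of the recomputed values as a measure of uncertainty. A statistic that is sensitive to which observations happened to land in the sample produces a wide interval; a robust one produces a narrow interval. The two data sets below are made up for illustration.

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, reps=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a statistic.

    A wide interval signals high uncertainty: the statistic is
    sensitive to which observations happened to be sampled.
    """
    rng = random.Random(seed)
    n = len(sample)
    boot = sorted(stat([rng.choice(sample) for _ in range(n)])
                  for _ in range(reps))
    lo = boot[int((alpha / 2) * reps)]
    hi = boot[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

# Hypothetical data: a stable sample and a noisy one with the same size.
stable = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.9, 10.1]
noisy = [2.0, 18.5, 9.7, 1.2, 25.3, 7.8, 14.1, 0.9]

lo1, hi1 = bootstrap_ci(stable)
lo2, hi2 = bootstrap_ci(noisy)
print(f"stable mean CI: ({lo1:.2f}, {hi1:.2f})")  # narrow -> low uncertainty
print(f"noisy  mean CI: ({lo2:.2f}, {hi2:.2f})")  # wide -> high uncertainty
```

The narrow interval for the stable sample is the "tolerable uncertainty" case where strong conclusions may be drawn; the wide interval for the noisy sample is the case where results should be interpreted cautiously.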
Simplifying the application of statistical models allows applied researchers to easily "puzzle" with their data and focus on the fun detective work required to make sense of their subject matter. Recently, I've been working on a graphical approach to a popular complex model that communicates results directly to researchers. Previously, researchers had to unscramble a table of numbers before getting a clear sense of the results; now the results can be painted as a picture that "speaks" directly to them. In the words of the noted polymath John W. Tukey, this visual display promises to be "vivid and inescapable in its intended message."
We’ve developed several statistical packages that help investigators visualize the data they’re working with to better understand their results. These software packages are available online for free. It is satisfying to provide a means for researchers, unfamiliar with the technicalities of complex statistical models, to literally see their results.
I want to tackle the issues of generalizability and replication. In other words, how reliable or repeatable are our scientific findings? How sure can we be that results from any statistical analysis extend to another context? I’m motivated to find a way to measure these uncertainties so that researchers can better qualify the impact of their important findings. Of course, having a picture to communicate these ideas would be ideal.