Welcome Eric Laber and Yuansi Chen
Broadly, my research focuses on the development of rigorous statistical methods to inform data-driven decision making in complex environments. During the past few years, I have become particularly interested in driving the adoption of reinforcement learning methods as a means of decision support in high-risk settings including precision medicine, public health, defense, and business. From a methodological perspective, my work involves machine learning, causal inference, optimization, and design of experiments. I am also passionate about STEM outreach and broadening participation in statistics and data science. Students can learn more about these initiatives on my website (laber-labs.com).
I am interested in the theory and practice of optimization and sampling algorithms and the interplay between their computational and statistical guarantees, with a focus on applications in data analysis. Many computational machine learning methods and pipelines are proposed without being well understood, and this lack of theoretical insight often makes them difficult to apply optimally in new contexts. Typical questions that arise include: How can we optimally set the hyperparameters of an algorithm (such as its step size)? How can we combine many simple techniques so that the resulting algorithm is fast and carries both computational and statistical guarantees? I aim to develop a thorough understanding of these questions both within theoretical frameworks (e.g., the computational complexity of MCMC sampling algorithms, stochastic processes, and the stability of iterative learning algorithms) and in applied large-scale data analysis settings (e.g., applications in computational neuroscience and deep neural network models of image data).
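As a minimal illustration of the step-size question mentioned above (my own toy example, not taken from the announcement), consider gradient descent on the quadratic f(x) = 0.5 * x^2. Each iteration multiplies the iterate by (1 - step), so the method converges only when 0 < step < 2 and is fastest near step = 1:

```python
# Toy illustration of step-size sensitivity in gradient descent on
# f(x) = 0.5 * x**2, whose gradient is f'(x) = x. Each update is
# x <- x - step * x = (1 - step) * x, so the iterate contracts by a
# factor |1 - step|: convergence requires 0 < step < 2.

def gradient_descent(step, x0=10.0, iters=50):
    """Run `iters` gradient-descent steps on f(x) = 0.5 * x**2 from x0."""
    x = x0
    for _ in range(iters):
        x -= step * x  # gradient of 0.5 * x**2 at x is x
    return x

for step in (0.1, 1.0, 1.9, 2.1):
    print(f"step={step}: final x = {gradient_descent(step):.3e}")
```

Even in this one-dimensional setting, a step size slightly above the stability threshold (here, 2.1 > 2) makes the iterates blow up, while step = 1 converges in a single iteration; the theoretical questions above concern analogous thresholds and rates for far more complex algorithms.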