Jingqin Luo

External address: St. Louis, Missouri
Phone: 314-362-3718
Graduation Year: 2006

Employment Info

Assistant Professor of Surgery
Washington University, St. Louis
2015-Present

Dissertation

Model Selection, Covariance Selection and Bayes Classification via Shrinkage Estimators

The naive Bayes classifier (NB) has exhibited "mysteriously" strong classification performance in practice, despite its often unrealistic conditional independence assumption. This simple assumption amounts to imposing a diagonal structure on the underlying class-specific precision matrices, so the NB leaves the interrelationships among covariates unmodeled. In this dissertation, we extend the NB from the perspectives of covariance modeling and classification. Because of the positive definiteness constraint and the number of parameters, which grows rapidly with dimension, covariance estimation in a multivariate normal population is a classic but challenging statistical problem. Sparse shrinkage estimation has become an important principle in covariance/precision matrix modeling; however, many existing models can only shrink the covariance/precision matrix toward a predefined diagonal structure. We model a precision matrix via its Cholesky decomposition, in terms of a regression coefficient matrix and the associated error precisions, with the aim of estimating a precision matrix of flexible structure. Most classification methods employ prospective models and are applied even when the data are collected retrospectively, although retrospective modeling is statistically more efficient. We propose a class of retrospective predictive and classification models for use in the binormal setting. The dissertation shows that both of these problems can be reduced to a sequence of regression problems, to which we apply Bayesian shrinkage regression techniques to encourage sparse representations.

As our work is partly motivated by the analysis of gene expression microarray data, Chapter 1 briefly reviews this technology and the statistical literature on regression and classification for gene expression data. We also review some properties of the naive Bayes classifier and describe the problems to be tackled in this dissertation. Subset selection and shrinkage are popular techniques for building parsimonious models; we focus on the latter. Since the regression model is a fundamental component of both the covariance selection problem and the classification problem, Chapter 2 begins with Bayesian regression models, describing shrinkage regression models based on scale-mixture-of-normals priors, with an emphasis on their shrinkage properties. Chapter 3 introduces the connection between covariance modeling and linear recursive equations, summarizes the priors on the regression parameters induced by the classic Wishart covariance prior, and, viewing our models as extensions of the Wishart model, investigates Bayesian shrinkage regression models as promising approaches to covariance selection. Chapter 4 turns to retrospective predictive and classification models: we contrast prospective and retrospective classification models and show how a retrospective model can deliver prospective predictions via Bayes' theorem. In the multivariate binormal setting, the joint distribution of the covariates given the outcome factors into a product of univariate conditional distributions, or equivalently regression models; within this framework, we describe two novel Bayesian predictive models based on shrinkage regressions.
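For concreteness, a generic scale-mixture-of-normals shrinkage prior of the kind Chapter 2 studies has the hierarchical form sketched below; the mixing density and the Laplace example are illustrative choices, not necessarily the specific priors used in the dissertation.

% Generic scale-mixture-of-normals shrinkage prior for a regression
% coefficient beta_j (illustrative notation):
\begin{align*}
  \beta_j \mid \tau_j^2 &\sim \mathcal{N}(0,\, \tau_j^2), \\
  \tau_j^2 &\sim \pi(\tau_j^2), \qquad j = 1, \dots, p.
\end{align*}
% Example: the exponential mixing density
%   pi(tau_j^2) = (lambda^2/2) exp(-lambda^2 tau_j^2 / 2)
% yields, after marginalizing out tau_j^2, the Laplace prior
%   beta_j ~ (lambda/2) exp(-lambda |beta_j|),
% whose posterior mode is the lasso estimate; heavier-tailed mixing
% densities shrink small coefficients toward zero more aggressively.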
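The Chapter 3 link between precision matrices and linear recursive equations is the standard modified Cholesky identity, sketched here in generic notation.

% Modified Cholesky decomposition of a precision matrix as a sequence
% of regressions. For x = (x_1, ..., x_p)' ~ N(0, Sigma), regress each
% coordinate on its predecessors:
\[
  x_j \;=\; \sum_{k < j} \phi_{jk}\, x_k + \varepsilon_j,
  \qquad \varepsilon_j \sim \mathcal{N}(0,\, d_j).
\]
% With T unit lower triangular, T_{jj} = 1 and T_{jk} = -phi_{jk} for
% k < j, we have T x = epsilon, hence
\[
  \Omega \;=\; \Sigma^{-1} \;=\; T^{\top} D^{-1} T,
  \qquad D = \operatorname{diag}(d_1, \dots, d_p).
\]
% Shrinking the phi_{jk} toward zero induces sparsity in Omega without
% forcing a predefined diagonal structure, and Omega is automatically
% positive definite whenever every d_j > 0.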
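The Chapter 4 construction rests on two standard identities, sketched below for a binary outcome in generic notation.

% Prospective prediction from a retrospective model via Bayes' theorem,
% for a binary outcome y with class prevalences pi_0, pi_1:
\[
  P(y = 1 \mid x) \;=\;
  \frac{p(x \mid y = 1)\, \pi_1}
       {p(x \mid y = 1)\, \pi_1 + p(x \mid y = 0)\, \pi_0}.
\]
% In the multivariate binormal setting, each class-conditional density
% factors into a chain of univariate normal regressions:
\[
  p(x \mid y) \;=\; \prod_{j=1}^{p}
    p\bigl(x_j \mid x_1, \dots, x_{j-1},\, y\bigr),
\]
% so each factor can be fit as a shrinkage regression of x_j on its
% predecessors; the naive Bayes classifier is the special case in which
% every factor drops the predecessors.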
Model selection or model averaging based on marginal likelihoods is an interesting extension of this research. To this end, Chapter 5 introduces a novel marginal likelihood estimation algorithm in which posterior samples are exploited to construct an efficient proposal distribution. In each chapter, we use simulation studies and analyses of real datasets to illustrate and compare competing methods. Chapter 6 concludes the dissertation and suggests some directions for future research.
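As one concrete, illustrative instance of how posterior samples can yield an efficient proposal (not necessarily the algorithm of Chapter 5), consider importance sampling with a proposal fitted to MCMC output:

% Importance-sampling estimate of the marginal likelihood m(y), using a
% proposal q fitted to posterior draws (e.g., a Gaussian matched to the
% posterior sample mean and covariance):
\[
  m(y) \;=\; \int p(y \mid \theta)\, p(\theta)\, d\theta
  \;\approx\; \frac{1}{N} \sum_{i=1}^{N}
  \frac{p\bigl(y \mid \theta^{(i)}\bigr)\, p\bigl(\theta^{(i)}\bigr)}
       {q\bigl(\theta^{(i)}\bigr)},
  \qquad \theta^{(i)} \overset{\text{iid}}{\sim} q.
\]
% A proposal close to the posterior keeps the importance weights nearly
% constant, which controls the variance of the estimator; this is the
% sense in which posterior samples help build an efficient proposal.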