Popis: |
Consider a two-class classification problem in which we observe samples $(X_i, Y_i)$, $i = 1, \ldots, n$, with $X_i \in \mathbb{R}^p$ and $Y_i \in \{0, 1\}$. Given $Y_i = k$, $X_i$ is assumed to follow a multivariate normal distribution with mean $\mu_k \in \mathbb{R}^p$ and covariance matrix $\Sigma_k$, $k = 0, 1$. Given a new sample $X$ from the same mixture, our goal is to estimate its class label $Y$. This high-dimensional classification problem has been studied thoroughly when $\Sigma_0 = \Sigma_1$; the case $\Sigma_0 \neq \Sigma_1$ has received much less attention. This paper presents the quadratic discriminant analysis (QDA) for weak signals (QDAw) algorithm and the QDA with feature selection (QDAfs) algorithm. QDAfs applies Partial Correlation Screening to estimate the precision matrices $\hat{\Omega}_0$ and $\hat{\Omega}_1$, and then applies hard-thresholding to the diagonal of $\hat{\Omega}_0 - \hat{\Omega}_1$. QDAfs further includes the linear term $d^T X$, where $d$ is obtained by hard-thresholding $\hat{\Omega}_1\hat{\mu}_1 - \hat{\Omega}_0\hat{\mu}_0$. We further propose a rare and weak model for the signals in $\Omega_0 - \Omega_1$ and $\mu_0 - \mu_1$. Based on the signal weakness and sparsity in $\mu_0 - \mu_1$, we propose two ways to estimate the labels: 1) QDAw for weak but dense signals; 2) QDAfs for relatively strong but sparse signals. We characterize the classification boundary in the four-dimensional parameter space: 1) the region of possibility, where either QDAw or QDAfs achieves a misclassification error rate of 0; 2) the region of impossibility, where all classifiers have a constant error rate. Numerical results on real datasets support our theory and demonstrate the necessity and superiority of QDA over LDA for classification. |
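The QDAfs decision rule described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the precision matrices `Omega0`, `Omega1` are assumed to be given plug-in estimates (the paper obtains them via Partial Correlation Screening), the thresholds `t_quad` and `t_lin` are hypothetical tuning parameters, and the additive constant of the discriminant (e.g. the log-determinant term) is omitted.

```python
import numpy as np

def qdafs_classify(X, mu0, mu1, Omega0, Omega1, t_quad, t_lin):
    """Sketch of a QDA-with-feature-selection decision rule.

    X       : (n, p) array of samples to classify
    mu0/mu1 : (p,) estimated class means
    Omega0/Omega1 : (p, p) estimated precision matrices (assumed given;
                    the paper estimates them via Partial Correlation Screening)
    t_quad/t_lin  : hypothetical hard-thresholding levels
    """
    # Quadratic part: keep only the diagonal entries of Omega0 - Omega1
    # that survive hard-thresholding.
    diag = np.diag(Omega0 - Omega1).copy()
    diag[np.abs(diag) < t_quad] = 0.0
    D_sel = np.diag(diag)

    # Linear part: hard-threshold Omega1 @ mu1 - Omega0 @ mu0 to get d.
    d = Omega1 @ mu1 - Omega0 @ mu0
    d[np.abs(d) < t_lin] = 0.0

    # Decision statistic x^T D_sel x + d^T x per sample;
    # classify as class 1 when positive (constant term omitted).
    scores = np.einsum('ij,jk,ik->i', X, D_sel, X) + X @ d
    return (scores > 0).astype(int)
```

When $\Sigma_0 = \Sigma_1$ every diagonal entry of $\hat{\Omega}_0 - \hat{\Omega}_1$ is thresholded away and the rule degenerates to a linear (LDA-type) classifier, which is why the quadratic term is what distinguishes the $\Sigma_0 \neq \Sigma_1$ setting.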