HIGH-DIMENSIONAL SIGNAL PROCESSING AND STATISTICAL LEARNING
dc.contributor.author | Bakir, Daniyar | |
dc.date.accessioned | 2017-02-09T06:35:24Z | |
dc.date.available | 2017-02-09T06:35:24Z | |
dc.date.issued | 2017 | |
dc.description.abstract | Classical statistical and signal processing techniques are generally not useful in situations wherein the dimensionality (p) of observations is comparable to or exceeds the sample size (n). This is mainly because the performance of these techniques is guaranteed through the classical notion of statistical consistency, which is itself fashioned for situations wherein n >> p. Statistical consistency has been vigorously used over the past century to develop many signal processing and statistical learning techniques. In recent years, however, two sets of mathematical machinery have emerged that make it possible to develop superior techniques suitable for analyzing high-dimensional observations, i.e., situations where p >> n. In this thesis, we refer to these techniques, which are grounded either in double asymptotic regimes or in sparsity assumptions, as high-dimensional techniques, and we examine and develop a set of them with applications in classification. The thesis is divided into three main parts. In the first part, we introduce a novel approach based on double asymptotics to estimate the regularization parameter used in the well-known RLDA (regularized linear discriminant analysis) classifier. We examine the robustness of the developed approach to the Gaussianity assumption used in deriving the core estimator, and we verify the performance of the technique, in terms of accuracy and efficiency, against other popular methods such as cross-validation. In the second part of the thesis, the performance of the newly developed RLDA and several other classifiers is compared in situations where p is comparable to or exceeds n. While the first two parts of the thesis focus on double asymptotic methods, in the third part we study two important classes of techniques based on the sparsity assumption. One of these techniques, known as LASSO, has gained much attention in recent years within the statistical community, while the second, known as compressed sensing, has become very popular in the signal processing literature. Although both techniques rely on sparsity assumptions as well as L1 minimization, the objective functions and constraints on which they are constructed differ. We demonstrate the application of both techniques to high-dimensional classification and compare them in terms of shrinkage rate and classification accuracy. | ru_RU |
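A minimal sketch, for orientation only, of the formulations the abstract contrasts; the notation (design matrix X, response y, coefficient vector beta, sensing matrix A, signal x, tuning parameters lambda and gamma) is generic textbook notation and is not taken from the thesis itself.

% LASSO: L1-penalized least squares (penalty traded off against a squared-error fit term)
\hat{\beta}_{\mathrm{LASSO}} = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda \|\beta\|_1

% Compressed sensing (basis pursuit): L1 minimization under a hard measurement constraint
\hat{x}_{\mathrm{CS}} = \arg\min_{x}\; \|x\|_1 \quad \text{subject to} \quad A x = y

% RLDA: one common form of regularization, in which the sample covariance is shifted
% toward the identity before inversion; not necessarily the exact estimator of the thesis
\hat{\Sigma}_{\gamma} = \hat{\Sigma} + \gamma I, \qquad \gamma > 0

Both sparse formulations minimize an L1 norm, but LASSO balances it against a data-fit penalty while basis pursuit enforces agreement with the measurements as a constraint; this is the difference in objective functions and constraints that the abstract refers to.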
dc.identifier.citation | Daniyar Bakir; 2017; HIGH-DIMENSIONAL SIGNAL PROCESSING AND STATISTICAL LEARNING; School of Engineering. Department of Electrical and Electronic Engineering. Nazarbayev University; http://nur.nu.edu.kz/handle/123456789/2313 | ru_RU |
dc.identifier.uri | http://nur.nu.edu.kz/handle/123456789/2313 | |
dc.language.iso | en | ru_RU |
dc.publisher | Nazarbayev University School of Engineering and Digital Sciences | ru_RU |
dc.rights | Attribution-NonCommercial-ShareAlike 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/3.0/us/ | * |
dc.title | HIGH-DIMENSIONAL SIGNAL PROCESSING AND STATISTICAL LEARNING | ru_RU |
dc.type | Master's thesis | ru_RU |