Long-Tail Theory Under Gaussian Mixtures
Publisher
Frontiers in Artificial Intelligence and Applications
Abstract
We propose a simple Gaussian mixture model for data generation that is consistent with Feldman's (2020) long-tail theory. We demonstrate that, in the proposed model, a linear classifier cannot reduce the generalization error below a certain level, whereas a nonlinear classifier with memorization capacity can. This confirms that, for long-tailed distributions, rare training examples must be taken into account for optimal generalization to new data. Finally, we show that the performance gap between linear and nonlinear models narrows as the tail of the subpopulation frequency distribution becomes shorter, which we confirm with experiments on synthetic and real data.
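The setup described in the abstract lends itself to a small synthetic experiment. The sketch below is not the authors' construction: it samples a two-dimensional Gaussian mixture whose subpopulation frequencies follow an assumed Zipf-like law, then compares a linear classifier (logistic regression) with a nonlinear one capable of memorizing rare examples (1-nearest-neighbor). The number of subpopulations K, the tail exponent alpha, the component geometry, and the choice of classifiers are all illustrative assumptions.

```python
# Illustrative sketch only (not the paper's construction): long-tailed
# Gaussian mixture data, linear vs. memorizing nonlinear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

K = 50        # number of subpopulations (assumed)
alpha = 1.5   # Zipf exponent controlling tail length (assumed)
n = 5000      # total number of samples (assumed)

# Long-tailed subpopulation frequencies: p_k proportional to k^(-alpha)
freqs = np.arange(1, K + 1, dtype=float) ** -alpha
freqs /= freqs.sum()

# Each subpopulation has its own center and its own class-offset direction,
# so no single hyperplane separates all subpopulations, while each one is
# locally separable (an assumption made for illustration).
d = 2
centers = rng.normal(0.0, 10.0, size=(K, d))
offsets = rng.normal(0.0, 1.0, size=(K, d))
offsets /= np.linalg.norm(offsets, axis=1, keepdims=True)

def sample(n_samples):
    comps = rng.choice(K, size=n_samples, p=freqs)      # subpopulation index
    labels = rng.integers(0, 2, size=n_samples)          # binary class label
    signs = np.where(labels == 1, 1.0, -1.0)[:, None]
    means = centers[comps] + signs * offsets[comps]
    X = means + rng.normal(0.0, 0.3, size=(n_samples, d))
    return X, labels

X, y = sample(n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# 1-NN can memorize rare subpopulations seen in training
nonlinear = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)

print("linear test accuracy:   ", linear.score(X_te, y_te))
print("nonlinear test accuracy:", nonlinear.score(X_te, y_te))
```

Under these assumptions, shrinking alpha (a shorter tail, more uniform subpopulation frequencies) tends to narrow the gap between the two scores, mirroring the qualitative claim in the abstract.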
Keywords
Unsupervised learning, Mathematical analysis, Pattern recognition (psychology), Applied mathematics, Algorithm, Computer science, Training set, Mathematics, Artificial intelligence, Generalization error, Nonlinear system, Generalization, Classifier, Gaussian
Citation
Arman Bolatov, Maxat Tezekbayev, Igor Melnykov, Artur Pak, Vassilina Nikoulina, & Zhenisbek Assylbekov (2023). Long-Tail Theory Under Gaussian Mixtures. Frontiers in Artificial Intelligence and Applications. https://doi.org/10.3233/FAIA230260