Working papers
Subjects: Machine Learning; Retrieval Augmentation; Benign Overfitting; Long Tail Theory; Simplicity Bias; Natural Language Processing; Memorization and Generalization; Learning Theory
Item (Open Access): BENIGN OVERFITTING WITH RETRIEVAL AUGMENTED MODELS (Nazarbayev University School of Sciences and Humanities, 2022)
Authors: Assylbekov, Zhenisbek; Tezekbayev, Maxat; Nikoulina, Vassilina; Gallé, Matthias

Abstract: Although modern deep neural networks can memorize (almost) the entire training set, they generalize well to unseen data, contradicting traditional learning theory. This phenomenon, dubbed benign overfitting, has so far been studied theoretically only in simplified settings. At the same time, ML practitioners (especially in NLP) have figured out how to exploit this feature for more efficient training: retrieval-augmented models (e.g., kNN-LM, RETRO) explicitly store (part of) the training data in an external datastore and thereby (partially) offload memorization from the neural network. In this paper we link these apparently separate threads of research and propose several possible research directions regarding benign overfitting in retrieval-augmented models.
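To make the retrieval-augmentation idea mentioned in the abstract concrete, the following is a minimal sketch (not the paper's own implementation) of a kNN-LM-style prediction step: a datastore of (context-embedding, next-token) pairs supplies a nearest-neighbor distribution that is interpolated with the parametric language model's distribution. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def knn_lm_probs(query, keys, values, lm_probs, vocab_size, k=2, lam=0.5, temp=1.0):
    """Interpolate a base LM distribution with a kNN distribution built
    from a datastore of (context-embedding, next-token) pairs.

    query      : embedding of the current context, shape (d,)
    keys       : stored context embeddings, shape (n, d)
    values     : next-token id for each stored context, shape (n,)
    lm_probs   : the parametric LM's distribution over the vocabulary
    lam        : interpolation weight on the kNN distribution
    """
    # distances from the query context embedding to all stored keys
    dists = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(dists)[:k]            # indices of the k nearest neighbors
    # softmax over negative distances gives the neighbor weights
    w = np.exp(-dists[nn] / temp)
    w /= w.sum()
    # aggregate neighbor weights onto their stored next tokens
    p_knn = np.zeros(vocab_size)
    for idx, weight in zip(nn, w):
        p_knn[values[idx]] += weight
    # final distribution: mixture of the datastore and the parametric LM
    return lam * p_knn + (1.0 - lam) * lm_probs

# toy usage: three stored contexts, a uniform base LM over a 3-token vocabulary
keys = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
values = np.array([0, 1, 2])
lm = np.full(3, 1.0 / 3.0)
p = knn_lm_probs(np.array([0.1, 0.1]), keys, values, lm, vocab_size=3)
```

The datastore lookup handles the memorization of specific training contexts, so the neural network itself only needs to produce good context embeddings and a smooth base distribution, which is exactly the division of labor the abstract describes.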