Working papers

Recent Submissions

  • Benign Overfitting with Retrieval-Augmented Models (Open Access)
    (Nazarbayev University School of Sciences and Humanities, 2022) Assylbekov, Zhenisbek; Tezekbayev, Maxat; Nikoulina, Vassilina; Gallé, Matthias
    Despite the fact that modern deep neural networks can memorize (almost) the entire training set, they generalize well to unseen data, contradicting traditional learning theory. This phenomenon, dubbed benign overfitting, has so far been studied theoretically only in simplified settings. At the same time, ML practitioners (especially in NLP) have figured out how to exploit this feature for more efficient training: retrieval-augmented models (e.g., kNN-LM, RETRO) explicitly store (part of) the training data in an external datastore and thereby (partially) offload memorization from the neural network. In this paper we link these apparently separate threads of research and propose several possible research directions regarding benign overfitting in retrieval-augmented models.
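
For readers who want a concrete picture of the retrieval-augmented setup the abstract refers to, the following is a minimal NumPy sketch of kNN-LM-style interpolation. It is an illustrative assumption, not the paper's method: the function name `knn_lm_prob` and the hyperparameter values (`k`, `temperature`, `lam`) are made up for the example. The datastore maps stored training-context embeddings to the next tokens observed after them, and the distribution recovered from the retrieved neighbors is mixed with the parametric LM's distribution; this mixing is the sense in which the datastore carries part of the memorization load.

```python
import numpy as np

def knn_lm_prob(context_vec, lm_probs, keys, values, vocab_size,
                k=8, temperature=1.0, lam=0.25):
    """kNN-LM-style next-token distribution (illustrative sketch).

    keys:     (N, d) stored context embeddings from the training set
    values:   (N,)   token id observed right after each stored context
    lm_probs: (V,)   next-token distribution from the parametric LM
    """
    # Retrieve the k nearest stored contexts by L2 distance.
    dists = np.linalg.norm(keys - context_vec, axis=1)
    nn = np.argsort(dists)[:k]

    # Softmax over negative distances -> weights for the neighbors
    # (shifted by the minimum distance for numerical stability).
    weights = np.exp(-(dists[nn] - dists[nn].min()) / temperature)
    weights /= weights.sum()

    # Accumulate neighbor weights onto their stored next tokens;
    # this distribution comes purely from the datastore.
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, values[nn], weights)

    # Interpolate datastore and LM: memorization is (partially)
    # carried by the stored data rather than by the network weights.
    return lam * knn_probs + (1.0 - lam) * lm_probs

# Toy usage: 5 stored contexts in a 3-dim embedding space, vocab of 4 tokens.
rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 3))
values = np.array([0, 1, 1, 2, 3])
lm_probs = np.full(4, 0.25)
print(knn_lm_prob(rng.normal(size=3), lm_probs, keys, values, vocab_size=4, k=3))
```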