Speeding Up Entmax

dc.contributor.author: Tezekbayev Maxat
dc.contributor.author: Nikoulina Vassilina
dc.contributor.author: Gallé Matthias
dc.contributor.author: Assylbekov Zhenisbek
dc.date.accessioned: 2025-08-27T04:56:26Z
dc.date.available: 2025-08-27T04:56:26Z
dc.date.issued: 2022-01-01
dc.description.abstract: Softmax is the de facto standard for normalizing logits in modern neural networks for language processing. However, because it produces a dense probability distribution, every token in the vocabulary has a nonzero chance of being selected at each generation step, which leads to a variety of reported problems in text generation. α-entmax of Peters et al. (2019) solves this problem, but is unfortunately slower than softmax. In this paper, we propose an alternative to α-entmax that keeps its virtuous characteristics but is as fast as optimized softmax and achieves on-par or better performance on the machine translation task.
dc.identifier.citation: Tezekbayev Maxat; Nikoulina Vassilina; Gallé Matthias; Assylbekov Zhenisbek. (2022). Speeding Up Entmax. Findings of the Association for Computational Linguistics: NAACL 2022. https://doi.org/10.18653/v1/2022.findings-naacl.86
dc.identifier.doi: 10.18653/v1/2022.findings-naacl.86
dc.identifier.uri: https://doi.org/10.18653/v1/2022.findings-naacl.86
dc.identifier.uri: https://nur.nu.edu.kz/handle/123456789/10454
dc.language.iso: en
dc.publisher: Association for Computational Linguistics
dc.source: (2022)
dc.title: Speeding Up Entmax
dc.type: conference-paper
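
The abstract above contrasts softmax's dense output with the sparse output of α-entmax. As a minimal, illustrative sketch (not the speed-up method proposed in the paper), the following NumPy snippet compares softmax with sparsemax, the α = 2 special case of entmax, to show how entmax-style normalizers assign exactly zero probability to low-scoring tokens; the function names and example logits here are hypothetical, chosen only for illustration.

    import numpy as np

    def softmax(z):
        # Dense normalizer: every token gets strictly positive probability.
        e = np.exp(z - z.max())
        return e / e.sum()

    def sparsemax(z):
        # alpha = 2 case of entmax: Euclidean projection of the logits onto
        # the probability simplex (Martins & Astudillo, 2016). Tokens whose
        # logits fall below the threshold tau receive exactly zero mass.
        z_sorted = np.sort(z)[::-1]
        k = np.arange(1, z.size + 1)
        cumsum = np.cumsum(z_sorted)
        support = k[1.0 + k * z_sorted > cumsum]   # candidate support sizes
        k_star = support[-1]                       # largest valid support size
        tau = (cumsum[k_star - 1] - 1.0) / k_star
        return np.maximum(z - tau, 0.0)

    logits = np.array([3.0, 1.5, 1.2, -0.5, -2.0])
    print(softmax(logits))    # all five entries are nonzero
    print(sparsemax(logits))  # low-scoring entries are exactly 0.0

For general α (including the 1.5-entmax commonly used in practice), a widely used reference implementation is the entmax package at github.com/deep-spin/entmax.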

Files

Original bundle

Name: 10.18653_v1_2022.findings-naacl.86.pdf
Size: 1.33 MB
Format: Adobe Portable Document Format