Speeding Up Entmax
| dc.contributor.author | Tezekbayev Maxat | |
| dc.contributor.author | Nikoulina Vassilina | |
| dc.contributor.author | Gallé Matthias | |
| dc.contributor.author | Assylbekov Zhenisbek | |
| dc.date.accessioned | 2025-08-27T04:56:26Z | |
| dc.date.available | 2025-08-27T04:56:26Z | |
| dc.date.issued | 2022-01-01 | |
| dc.description.abstract | Softmax is the de facto standard for normalizing logits in modern neural networks for language processing. However, because it produces a dense probability distribution, each token in the vocabulary has a nonzero chance of being selected at each generation step, leading to a variety of reported problems in text generation. The α-entmax of Peters et al. (2019) solves this problem, but is unfortunately slower than softmax. In this paper, we propose an alternative to α-entmax that keeps its virtuous characteristics, is as fast as optimized softmax, and achieves on-par or better performance in the machine translation task. | en |
| dc.identifier.citation | Tezekbayev Maxat; Nikoulina Vassilina; Gallé Matthias; Assylbekov Zhenisbek. (2022). Speeding Up Entmax. Findings of the Association for Computational Linguistics: NAACL 2022. https://doi.org/10.18653/v1/2022.findings-naacl.86 | en |
| dc.identifier.doi | 10.18653/v1/2022.findings-naacl.86 | |
| dc.identifier.uri | https://doi.org/10.18653/v1/2022.findings-naacl.86 | |
| dc.identifier.uri | https://nur.nu.edu.kz/handle/123456789/10454 | |
| dc.language.iso | en | |
| dc.publisher | Association for Computational Linguistics | |
| dc.source | (2022) | en |
| dc.title | Speeding Up Entmax | en |
| dc.type | conference-paper | en |
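The abstract contrasts the dense output of softmax with the sparse output of α-entmax. As a rough illustration of that contrast only, and not of the faster alternative the paper proposes, here is a minimal NumPy sketch comparing softmax with sparsemax, i.e. α-entmax at α = 2 (Martins & Astudillo, 2016). The example logits are made up for illustration.

```python
import numpy as np

def softmax(z):
    # Dense normalization: every entry is strictly positive.
    e = np.exp(z - z.max())
    return e / e.sum()

def sparsemax(z):
    # Euclidean projection of z onto the probability simplex,
    # equivalent to alpha-entmax with alpha = 2: low-scoring
    # entries receive exactly zero probability.
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum        # which sorted entries stay in the support
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_z    # threshold subtracted from all logits
    return np.maximum(z - tau, 0.0)

logits = np.array([2.0, 1.5, 1.0, -0.5, -2.0])
print(softmax(logits))    # dense: all five tokens get nonzero probability
print(sparsemax(logits))  # sparse: e.g. [0.75, 0.25, 0.0, 0.0, 0.0]
```

With these logits, softmax spreads some probability over all five tokens, whereas sparsemax assigns exactly zero to the three lowest-scoring ones, which is the behavior that motivates replacing softmax in generation.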
Files
Original bundle
- Name: 10.18653_v1_2022.findings-naacl.86.pdf
- Size: 1.33 MB
- Format: Adobe Portable Document Format