Deriving Machine Attention from Human Rationales
Attention-based models are successful when trained on large amounts of data. In this paper, we demonstrate that even in the low-resource scenario, attention can be learned effectively. To this end, we start with discrete human-annotated rationales and map them into continuous attention. Our central...
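The paper learns attention from discrete rationales rather than hand-crafting it, but the core idea of the setup can be illustrated with a naive baseline: spread most of the probability mass over the human-highlighted tokens, with a small smoothed floor elsewhere. The function name and the `smoothing` parameter below are hypothetical, chosen only for this sketch; they are not from the paper.

```python
import numpy as np

def rationale_to_attention(mask, smoothing=0.05):
    """Map a discrete rationale mask (1 = token highlighted by the
    annotator) to a continuous attention distribution over the sentence.

    `smoothing` is a hypothetical knob for this sketch: it reserves a
    small uniform floor of probability so that non-rationale tokens are
    not zeroed out entirely.
    """
    mask = np.asarray(mask, dtype=float)
    n = len(mask)
    if mask.sum() == 0:
        # No rationale annotated: fall back to uniform attention.
        return np.full(n, 1.0 / n)
    # Uniform floor over all tokens ...
    attn = np.full(n, smoothing / n)
    # ... plus the remaining mass spread evenly over rationale tokens.
    attn += (1.0 - smoothing) * mask / mask.sum()
    return attn

# A six-token sentence where tokens 2 and 3 are the human rationale.
print(rationale_to_attention([0, 0, 1, 1, 0, 0]))
```

In the low-resource setting the paper targets, such a fixed mapping would be too rigid; the point of the work is to learn the rationale-to-attention mapping instead.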
Main Authors: , , ,
Format: Article
Language: English
Published: Association for Computational Linguistics (ACL), 2021-02-09T22:36:48Z