Abstract
State-of-the-art end-to-end coreference resolution models rely on expensive span representations and antecedent prediction mechanisms. These approaches are costly in both memory and compute time, and are particularly ill-suited for long documents. In this paper, we propose an approximation to end-to-end models that scales gracefully to documents of any length. Replacing span representations with token representations, we reduce time/memory complexity via token windows and nearest-neighbor sparsification for more efficient antecedent prediction. We show that our approach reduces training and inference time compared to state-of-the-art methods with only a minimal loss in accuracy.
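The abstract only names the mechanism, so here is a minimal sketch of what window-restricted, nearest-neighbor-sparsified antecedent scoring over token representations could look like. This is an illustration under assumptions, not the paper's implementation: the function name, the plain dot-product scorer, and the parameters window and k are all hypothetical.

import torch

def sparse_antecedent_candidates(token_reps, window=512, k=50):
    # token_reps: (n, d) contextual token representations for one document.
    # Returns, for each token, the scores and indices of its k highest-scoring
    # candidate antecedents among the preceding `window` tokens.
    n, _ = token_reps.shape
    scores = token_reps @ token_reps.t()          # (n, n) pairwise scores
    pos = torch.arange(n)
    offset = pos.unsqueeze(1) - pos.unsqueeze(0)  # offset[i, j] = i - j
    # Valid antecedents of token i: strictly earlier tokens inside the window.
    valid = (offset >= 1) & (offset <= window)
    scores = scores.masked_fill(~valid, float("-inf"))
    # Keep only the top-k candidates per token; tokens with no valid
    # antecedent (e.g. the first token) simply get -inf scores.
    top_scores, top_idx = scores.topk(min(k, n), dim=-1)
    return top_scores, top_idx

# Example: 1000 tokens with 768-dim encoder outputs.
reps = torch.randn(1000, 768)
top_scores, top_idx = sparse_antecedent_candidates(reps)

Note that materializing the full n x n score matrix, as this toy version does, would defeat the memory savings for long documents; an actual implementation would score block-by-block over the token window (or use an approximate nearest-neighbor index) so that only O(n * k) candidates are ever kept.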
Permanent link
https://doi.org/10.3929/ethz-b-000527310
Publication status
published
Book title
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Publisher
Association for Computational Linguistics
Organisational unit
09684 - Sachan, Mrinmaya / Sachan, Mrinmaya
ETH Bibliography
yes