Leveraging contextual embeddings and self-attention neural networks with bi-attention for sentiment analysis
Cite as: hdl:2117/364865
Document type: Article
Publication date: 2021-12-01
Publisher: Springer Nature
Access conditions: Open access
Unless otherwise indicated, the contents of this work are subject to the Creative Commons license: Attribution 3.0 Spain.
Abstract
People express their opinions and views in different and often ambiguous ways, hence the meaning of their words is often not explicitly stated and frequently depends on the context. Therefore, it is difficult for machines to process and understand the information conveyed in human languages. This work addresses the problem of sentiment analysis (SA). We propose a simple yet comprehensive method which uses contextual embeddings and a self-attention mechanism to detect and classify sentiment. We perform experiments on reviews from different domains, as well as on languages from three different language families, including morphologically rich Polish and German. We show that our approach is on a par with state-of-the-art models or even outperforms them in several cases. Our work also demonstrates the superiority of models leveraging contextual embeddings. In sum, in this paper we make a step towards building a universal, multilingual sentiment classifier.
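The paper's actual model (contextual embeddings plus bi-attention) is not reproduced here, but the core building block it describes, scaled dot-product self-attention over contextual token embeddings followed by a pooled classification head, can be sketched minimally as follows. All weights, dimensions, and the random input standing in for contextual embeddings (e.g., BERT outputs) are hypothetical; this is plain self-attention, not the paper's bi-attention mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) contextual embeddings for one sentence.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_len, seq_len) attention scores
    A = softmax(scores, axis=-1)         # each row is a distribution over tokens
    return A @ V, A

def classify_sentiment(X, params):
    # params: dict of hypothetical (untrained) weights for illustration only.
    H, A = self_attention(X, params["Wq"], params["Wk"], params["Wv"])
    pooled = H.mean(axis=0)              # average token representations
    logits = pooled @ params["Wc"] + params["bc"]
    return softmax(logits), A            # class probabilities, attention map

rng = np.random.default_rng(0)
d, dk, n_classes, seq = 8, 4, 3, 5
params = {
    "Wq": rng.normal(size=(d, dk)),
    "Wk": rng.normal(size=(d, dk)),
    "Wv": rng.normal(size=(d, dk)),
    "Wc": rng.normal(size=(dk, n_classes)),
    "bc": np.zeros(n_classes),
}
X = rng.normal(size=(seq, d))            # stand-in for contextual embeddings
probs, attn = classify_sentiment(X, params)
```

In a trained system the embeddings `X` would come from a pretrained contextual encoder and all weight matrices would be learned; here they only demonstrate the data flow and shapes.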
Citation: Biesialska, M.; Biesialska, K.; Rybinski, H. Leveraging contextual embeddings and self-attention neural networks with bi-attention for sentiment analysis. "Journal of Intelligent Information Systems", 1 December 2021, vol. 57, p. 601-626.
ISSN: 1573-7675
Publisher's version: https://link.springer.com/article/10.1007%2Fs10844-021-00664-7
Files | Description | Size | Format | View
---|---|---|---|---
Biesialska2021_ ... ngContextualEmbeddings.pdf | Full text OA | 4,427Mb | | View/Open