Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/244359 
Year of Publication: 
2021
Series/Report no.: 
MAGKS Joint Discussion Paper Series in Economics No. 30-2021
Publisher: 
Philipps-University Marburg, School of Business and Economics, Marburg
Abstract: 
Dictionary approaches are at the forefront of current techniques for quantifying central bank communication. This paper proposes embeddings, a language model trained with machine learning techniques, to locate words and documents in a multidimensional vector space. To accomplish this, we gather a text corpus that is unparalleled in size and diversity in the central bank communication literature, and introduce a novel approach to text quantification from computational linguistics. Utilizing this novel text corpus of over 23,000 documents from more than 130 central banks, we are able to provide high-quality text representations (embeddings) for central banks. Finally, we demonstrate the applicability of embeddings with several examples in the fields of monetary policy surprises, financial uncertainty, and gender bias.
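To illustrate the general idea described in the abstract, the following minimal sketch shows how document-level embeddings for central bank texts could be trained with gensim's Doc2Vec (assuming gensim >= 4.0). This is not the authors' code; the toy corpus, the bank tags, and all hyperparameters are illustrative placeholders, not the paper's 23,000-document dataset.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Illustrative stand-in for preprocessed central bank documents,
# each tagged with a hypothetical issuing-bank identifier.
raw_docs = [
    ("ECB", "the governing council decided to keep the key interest rates unchanged"),
    ("FED", "the committee seeks to achieve maximum employment and price stability"),
    ("BOE", "the monetary policy committee voted to maintain bank rate"),
]

corpus = [
    TaggedDocument(words=text.split(), tags=[bank])
    for bank, text in raw_docs
]

# Train a small Doc2Vec model; real hyperparameters (vector size, window,
# epochs) would need tuning on the full corpus.
model = Doc2Vec(corpus, vector_size=50, window=5, min_count=1, epochs=40)

# Document-level embedding for a tag, and its nearest neighbours in vector space.
ecb_vector = model.dv["ECB"]
print(model.dv.most_similar("ECB"))
```

Comparing such document vectors (e.g. via cosine similarity) is one way central bank communication can be located and compared in a shared vector space.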
Subjects: 
Word Embedding
Neural Network
Central Bank Communication
Natural Language Processing
Transfer Learning
JEL: 
C45
C53
E52
Z13
Document Type: 
Working Paper
