Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/32112
Appears in Collections:Accounting and Finance Journal Articles
Peer Review Status: Refereed
Title: Significance, relevance and explainability in the machine learning age: an econometrics and financial data science perspective
Author(s): Hoepner, Andreas G F
McMillan, David
Vivian, Andrew
Wese Simen, Chardin
Contact Email: david.mcmillan@stir.ac.uk
Keywords: explainability
explainable artificial intelligence (xai)
neural networks
relevance
regressions
significance
Issue Date: 2021
Date Deposited: 22-Dec-2020
Citation: Hoepner AGF, McMillan D, Vivian A & Wese Simen C (2021) Significance, relevance and explainability in the machine learning age: an econometrics and financial data science perspective. European Journal of Finance, 27 (1-2), pp. 1-7. https://doi.org/10.1080/1351847X.2020.1847725
Abstract: Although machine learning is frequently associated with neural networks, it also comprises econometric regression approaches and other statistical techniques whose accuracy improves with increasing observations. What constitutes high-quality machine learning, however, remains unclear. Proponents of deep learning (i.e. neural networks) value computational efficiency over human interpretability and tolerate the ‘black box’ appeal of their algorithms, whereas proponents of explainable artificial intelligence (xai) employ traceable ‘white box’ methods (e.g. regressions) to enhance explainability to human decision makers. We extend Brooks et al.’s [2019. ‘Financial Data Science: The Birth of a New Financial Research Paradigm Complementing Econometrics?’ European Journal of Finance 25 (17): 1627–36.] work on significance and relevance as assessment criteria in econometrics and financial data science to contribute to this debate. Specifically, we identify explainability as the Achilles heel of classic machine learning approaches such as neural networks, which are not fully replicable, lack transparency and traceability, and therefore do not permit any attempts to establish causal inference. We conclude by suggesting routes for future research to advance the design and efficiency of ‘white box’ algorithms.
DOI Link: 10.1080/1351847X.2020.1847725
Rights: © 2020 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.
Licence URL(s): http://creativecommons.org/licenses/by-nc-nd/4.0/

Files in This Item:
File: Hoepner-etal-EJF-2021.pdf
Description: Fulltext - Published Version
Size: 1.27 MB
Format: Adobe PDF

This item is protected by original copyright



A file in this item is licensed under a Creative Commons License.
