Yousefi, Yasaman
Koutsoviti-Koumeri, Lisa
Legast, Magali [UCL]
Schommer, Christoph
Vanhoof, Koen
Legay, Axel [UCL]
Algorithmic decisions made by Machine Learning (ML) models may pose a threat of discrimination. This research endorses the contextual approach to fairness in the EU non-discrimination legal framework and aims to assess to what extent legal fairness can be ensured using fairness metrics and constraints in ML models. We examine the legal concepts of non-discrimination and differential treatment in light of different fairness definitions. In a case study covering several scenarios, we train classifiers with bias mitigation methods involving different fairness constraints. Our goal is to determine how effective these methods are at mitigating prediction bias while respecting the judicial contextual approach and the substantive notion of equality under EU law.
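The abstract does not specify which fairness metrics the study uses; as an illustration only, one widely used definition, the statistical (demographic) parity difference between two groups of a sensitive attribute, can be sketched as follows. The function name and the toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Difference in positive-prediction rates between two groups
    (sensitive == 0 vs. sensitive == 1). A value of 0 indicates
    demographic parity; larger magnitudes indicate more disparity."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group0 = y_pred[sensitive == 0].mean()
    rate_group1 = y_pred[sensitive == 1].mean()
    return rate_group0 - rate_group1

# Toy predictions for two groups of four individuals each:
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

Bias mitigation methods of the kind the abstract mentions typically constrain a classifier so that such a disparity stays below a chosen threshold during training.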
Bibliographic reference
Yousefi, Yasaman ; Koutsoviti-Koumeri, Lisa ; Legast, Magali ; Schommer, Christoph ; Vanhoof, Koen ; et al. Compatibility of Fairness Metrics With EU Non-discrimination Law: A Legal and Technical Case Study. European Workshop on Algorithmic Fairness (Winterthur, Switzerland, 07/06/2023 to 09/06/2023). In: CEUR Workshop Proceedings, Vol. 3442 (2023)
Permanent URL
http://hdl.handle.net/2078.1/279146