
Released

Paper

From Parity to Preference-based Notions of Fairness in Classification

MPS-Authors
Zafar, Muhammad Bilal
Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society

Valera, Isabel
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society

Gomez Rodriguez, Manuel
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society

Gummadi, Krishna
Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society

Weller, Adrian
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society

Fulltext (public)

arXiv:1707.00010.pdf (Preprint), 2 MB

Citation

Zafar, M. B., Valera, I., Gomez Rodriguez, M., Gummadi, K., & Weller, A. (2017). From Parity to Preference-based Notions of Fairness in Classification. Retrieved from http://arxiv.org/abs/1707.00010.


Cite as: https://hdl.handle.net/21.11116/0000-0000-DC08-0
Abstract
The adoption of automated, data-driven decision making in an ever-expanding range of applications has raised concerns about its potential unfairness towards certain social groups. In this context, a number of recent studies have focused on defining, detecting, and removing unfairness from data-driven decision systems. However, the existing notions of fairness, based on parity (equality) in treatment or outcomes for different social groups, tend to be quite stringent, limiting the overall decision-making accuracy. In this paper, we draw inspiration from the fair-division and envy-freeness literature in economics and game theory and propose preference-based notions of fairness: given the choice between various sets of decision treatments or outcomes, any group of users would collectively prefer its own treatment or outcomes, regardless of the (dis)parity compared to other groups. We then introduce tractable proxies for designing margin-based classifiers that satisfy these preference-based notions of fairness. Finally, we experiment with a variety of synthetic and real-world datasets and show that preference-based fairness allows for greater decision accuracy than parity-based fairness.
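For illustration, below is a minimal sketch of how a margin-based classifier with a preference-based constraint might be set up. It is a toy under stated assumptions, not the authors' implementation: the logistic loss, the linear "benefit" proxy (mean signed distance to the decision boundary), the synthetic data, and the cvxpy formulation are all choices made here for concreteness.

# Minimal sketch (not the paper's exact formulation): train one linear
# classifier per group under preferred-treatment-style constraints,
# i.e. each group collectively benefits at least as much from its own
# classifier as it would from the other group's.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Hypothetical synthetic data: features and {-1, +1} labels per group.
X0, y0 = rng.normal(size=(100, d)), rng.choice([-1.0, 1.0], size=100)
X1, y1 = rng.normal(size=(80, d)), rng.choice([-1.0, 1.0], size=80)

theta0, theta1 = cp.Variable(d), cp.Variable(d)

def logistic_loss(theta, X, y):
    # Average logistic loss log(1 + exp(-y * theta^T x)).
    return cp.sum(cp.logistic(cp.multiply(-y, X @ theta))) / X.shape[0]

def benefit(theta, X):
    # Convex (linear) proxy for a group's benefit from classifier theta:
    # the mean signed distance of its members to the decision boundary.
    return cp.sum(X @ theta) / X.shape[0]

objective = cp.Minimize(logistic_loss(theta0, X0, y0) +
                        logistic_loss(theta1, X1, y1))
constraints = [
    benefit(theta0, X0) >= benefit(theta1, X0),  # group 0 prefers theta0
    benefit(theta1, X1) >= benefit(theta0, X1),  # group 1 prefers theta1
]
cp.Problem(objective, constraints).solve()
print("theta0:", theta0.value)
print("theta1:", theta1.value)

Because the benefit proxy above is linear in the parameters, the envy-freeness-style constraints remain convex; the paper's actual proxies and constraints may require different machinery (e.g., convex-concave programming), so this should be read only as the shape of the idea.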