Paper Title

Addressing multiple metrics of group fairness in data-driven decision making

Paper Authors

Marius Miron, Songül Tolan, Emilia Gómez, Carlos Castillo

Paper Abstract

The Fairness, Accountability, and Transparency in Machine Learning (FAT-ML) literature proposes a varied set of group fairness metrics to measure discrimination against socio-demographic groups characterized by a protected feature, such as gender or race. Several such metrics have been proposed, some of them incompatible with each other, so a data-driven decision-making system can be deemed either fair or unfair depending on the choice of metric. We show empirically that several of these metrics cluster together into two or three main clusters for the same groups and machine learning methods. In addition, we propose a robust way to visualize multidimensional fairness in two dimensions through a Principal Component Analysis (PCA) of the group fairness metrics. Experimental results on multiple datasets show that the PCA decomposition explains the variance between the metrics with one to three components.
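To make the procedure described in the abstract concrete, here is a minimal sketch in Python. It is not the authors' code or experimental setup: the metric set (statistical parity, equal opportunity, false-positive-rate and predictive-parity differences), the scikit-learn classifiers, and the synthetic data with an assumed binary protected attribute are illustrative choices. It builds a matrix with one row per classifier and one column per group fairness metric, then applies a two-component PCA to that matrix.

# A minimal sketch (not the authors' code): compute several group fairness
# metrics for a few off-the-shelf classifiers on synthetic data, stack them
# into a matrix, and reduce the metric space to 2D with PCA.
# The metric set, models, and data below are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


def group_rates(y_true, y_pred, mask):
    # Confusion-matrix rates (TPR, FPR, PPV, positive rate) within one group.
    t, p = y_true[mask], y_pred[mask]
    tp = np.sum((t == 1) & (p == 1))
    fp = np.sum((t == 0) & (p == 1))
    fn = np.sum((t == 1) & (p == 0))
    tn = np.sum((t == 0) & (p == 0))
    return {
        "tpr": tp / max(tp + fn, 1),
        "fpr": fp / max(fp + tn, 1),
        "ppv": tp / max(tp + fp, 1),
        "pos_rate": (tp + fp) / max(len(t), 1),
    }


def fairness_metrics(y_true, y_pred, protected):
    # Differences of group-conditional rates: protected group (1) minus reference group (0).
    a = group_rates(y_true, y_pred, protected == 1)
    b = group_rates(y_true, y_pred, protected == 0)
    return {
        "statistical_parity_diff": a["pos_rate"] - b["pos_rate"],
        "equal_opportunity_diff": a["tpr"] - b["tpr"],
        "fpr_diff": a["fpr"] - b["fpr"],
        "predictive_parity_diff": a["ppv"] - b["ppv"],
    }


# Synthetic data; the binary protected attribute is correlated with the label
# so that the group fairness metrics take non-trivial values.
rng = np.random.RandomState(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
protected = (y + rng.normal(scale=0.8, size=len(y)) > 0.5).astype(int)
X_tr, X_te, y_tr, y_te, _, prot_te = train_test_split(
    X, y, protected, test_size=0.5, random_state=0
)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gboost": GradientBoostingClassifier(random_state=0),
}

# Build the matrix: one row per classifier, one column per fairness metric.
rows, metric_names = [], None
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    metrics = fairness_metrics(y_te, y_pred, prot_te)
    metric_names = list(metrics)
    rows.append([metrics[m] for m in metric_names])
M = np.array(rows)

# Two-component PCA of the metric matrix; the explained-variance ratios show
# how much of the spread across metrics a 2D visualization retains.
pca = PCA(n_components=2)
coords = pca.fit_transform(M)
print("metrics:", metric_names)
print("explained variance ratio:", pca.explained_variance_ratio_)
for name, (c1, c2) in zip(models, coords):
    print(f"{name}: ({c1:.3f}, {c2:.3f})")

In this layout each classifier becomes a point in a two-dimensional projection of the metric space; with more rows (groups, methods, and datasets, as in the paper's experiments), the explained-variance ratios indicate how many components are needed to summarize the metrics, which is what the abstract reports as one to three.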
