Paper Title

Where Is the Normative Proof? Assumptions and Contradictions in ML Fairness Research

Paper Author

Cooper, A. Feder

Paper Abstract

Across machine learning (ML) sub-disciplines, researchers make mathematical assumptions to facilitate proof-writing. While such assumptions are necessary for providing mathematical guarantees about how algorithms behave, they also necessarily limit the applicability of these algorithms to different problem settings. This practice is well known (in fact, obvious) and accepted in ML research. However, similar attention is not paid to the normative assumptions that ground this work. I argue that such assumptions are equally important, especially in areas of ML with clear social impact, such as fairness. This is because, just as mathematical assumptions constrain applicability, normative assumptions also limit an algorithm's applicability to certain problem domains. I show that, in existing papers published in top venues, once the normative assumptions are clarified, it is often possible to derive unclear or contradictory results. While the mathematical assumptions and results are sound, the implicit normative assumptions and accompanying normative results contraindicate using these methods in practical fairness applications.
