Beyond Incompatibility: Trade-Offs Between Mutually Exclusive Algorithmic Fairness Criteria in Machine Learning and Law
Abstract: Trustworthy AI is becoming ever more important, both in machine learning and in the law. One important consequence is that decision makers must seek to guarantee a 'fair', i.e., non-discriminatory, algorithmic decision procedure. However, there are several competing notions of algorithmic fairness that have been shown to be mutually incompatible under realistic factual assumptions. This concerns, for example, the widely used fairness measures of 'calibration within groups' and 'balance for the positive/negative class'. Indeed, the COMPAS algorithm, which predicts the recidivism risk of criminal offenders, exhibits racial bias according to the balance metrics, but not with respect to calibration. In this paper, we present a novel algorithm (FAIM) for continuously interpolating between these three fairness criteria (calibration within groups, and balance for the positive and the negative class). Thus, an initially unfair prediction can be remedied to at least partially meet a desired, weighted combination of the respective fairness conditions. The algorithm relies on methods from the mathematical theory of optimal transport. We demonstrate the effectiveness of our algorithm when applied to synthetic data, the COMPAS data set, and a new, real-world data set from the e-commerce sector. Finally, we discuss to what extent FAIM can be harnessed to comply with conflicting legal obligations. The analysis suggests that it may operationalize duties not only in traditional legal fields, such as credit scoring and criminal justice proceedings, but also under the latest AI regulation put forth in the EU, such as the recently enacted Digital Markets Act.
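The following is a minimal sketch of the kind of optimal-transport step such an interpolation rests on, assuming one-dimensional scores: in 1-D, the quantile function of a weighted Wasserstein barycenter is the weighted average of the inputs' quantile functions, so a weighted blend of fairness-specific target distributions can be formed and scores pushed onto it by the monotone (rank-preserving) map. The Beta-distributed raw scores and targets, the weights `theta`, and the helpers `barycenter_quantiles` and `transport` are illustrative assumptions; this is not the paper's FAIM implementation, which derives the three target distributions from the data.

```python
import numpy as np

def barycenter_quantiles(targets, thetas, grid=1000):
    """Weighted 1-D Wasserstein barycenter: in one dimension, the
    barycenter's quantile function is the weighted average of the
    input quantile functions."""
    qs = np.linspace(0.0, 1.0, grid)
    quants = np.stack([np.quantile(t, qs) for t in targets])
    return qs, np.average(quants, axis=0, weights=thetas)

def transport(scores, qs, bary_quants):
    """Monotone 1-D optimal transport: map each score to the
    barycenter quantile at its within-group rank."""
    ranks = np.argsort(np.argsort(scores)) / max(len(scores) - 1, 1)
    return np.interp(ranks, qs, bary_quants)

rng = np.random.default_rng(42)

# Raw scores for one protected group (illustrative only).
group_scores = rng.beta(2.0, 5.0, size=5000)

# Hypothetical per-group target distributions, one per criterion:
#   A: calibration within groups; B/C: balance for the positive/negative class.
# These stand-ins replace the data-derived targets of the actual method.
target_A = rng.beta(2.5, 4.5, size=5000)
target_B = rng.beta(3.0, 4.0, size=5000)
target_C = rng.beta(2.0, 6.0, size=5000)

theta = (0.5, 0.3, 0.2)  # desired weighting of the three fairness criteria

qs, bary = barycenter_quantiles([target_A, target_B, target_C], theta)
fair_scores = transport(group_scores, qs, bary)
print(fair_scores[:5])
```

Because the rank-preserving map is the optimal transport plan in one dimension, the adjusted scores keep the within-group ordering of individuals while the group's score distribution moves toward the chosen weighted compromise between the three fairness targets.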