The distinction between intentional and unintentional discrimination is a prominent one in the literature and in public discourse; intentional discriminatory actions are commonly considered more morally objectionable than unintentional ones. Nevertheless, it remains unclear what the two types amount to and what generates the moral difference between them. The paper develops philosophically informed conceptualizations of the two types that can account for this moral difference. On the suggested account, intentional discrimination is characterized by the agent viewing the content of an underlying discriminatory belief as a consideration that counts in favor of her action. This, it is argued, amounts to endorsing the discriminatory belief, which generates the particular moral severity of intentional discrimination.
As machine learning informs increasingly consequential decisions, different metrics have been proposed for measuring algorithmic bias or unfairness. Two popular “fairness measures” are calibration and equality of false positive rate. Each measure seems intuitively important, but notably, it is usually impossible to satisfy both. For this reason, a large literature in machine learning speaks of a “fairness tradeoff” between the two measures. This framing assumes that both measures in fact capture something important. To date, philosophers have seldom examined this crucial assumption or asked to what extent each measure actually tracks a normatively important property. This makes the inevitable statistical conflict between calibration and false positive rate equality an important topic for ethics. In this paper, I give an ethical framework for thinking about these measures and argue that, contrary to initial appearances, false positive rate equality is in fact morally irrelevant and does not measure fairness.
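The statistical conflict referred to above can be illustrated with a small simulation. The sketch below uses hypothetical score distributions of my own construction (not taken from the paper): outcomes are drawn from perfectly calibrated risk scores in two groups with different base rates, and thresholding those scores then yields unequal false positive rates, so calibration and false positive rate equality cannot both be satisfied in this setting.

```python
import random

random.seed(0)

def simulate_fpr(scores, n=100_000, threshold=0.5):
    """Draw outcomes from a perfectly calibrated score distribution
    (each individual's true positive probability equals their score)
    and return the empirical false positive rate at the threshold."""
    fp = tn = 0
    for _ in range(n):
        p = random.choice(scores)      # individual's calibrated risk score
        y = random.random() < p        # true outcome sampled from that score
        predicted_positive = p >= threshold
        if not y:                      # count only true negatives
            if predicted_positive:
                fp += 1
            else:
                tn += 1
    return fp / (fp + tn)

# Hypothetical groups: both are calibrated by construction,
# but their base rates differ (0.5 vs 0.35).
group_a = [0.2, 0.4, 0.6, 0.8]
group_b = [0.1, 0.2, 0.3, 0.8]

fpr_a = simulate_fpr(group_a)   # analytically 0.30
fpr_b = simulate_fpr(group_b)   # analytically ~0.077
print(f"FPR group A: {fpr_a:.3f}")
print(f"FPR group B: {fpr_b:.3f}")
```

Because the scores are calibrated within each group, any gap in base rates shows up as a gap in false positive rates under a shared threshold, which is one concrete face of the impossibility result the abstract invokes.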