As inductive decision-making procedures, machine learning programs make inferences that are underdetermined by evidence and that bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, this strategy assumes that the influence of values is restricted to data and decision outcomes, thereby overlooking internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of science to machine learning programs to make the case that the resources required to respond to these inductive challenges render critical aspects of their design constitutively value-laden. I demonstrate these points in the case of recidivism algorithms, arguing that contemporary debates over fairness in criminal justice risk-assessment programs are best understood as iterations of traditional arguments from inductive risk and demarcation, and thereby establish the value-laden nature of automated decision-making programs. Finally, in light of these points, I address opportunities for relocating the value-free ideal in machine learning and the limitations that accompany them.