The introduction of algorithms into public administration risks reducing moral dilemmas to epistemic probabilities. This paper explores the interlinkages between the attribution of moral agency to algorithms and algorithmic injustice. Challenging some of the fundamental assumptions underlying ethical machines, I argue that the moral-algorithm claim is inherently flawed and has particularly severe consequences when applied to algorithms making fateful decisions about an individual’s life. I contend that free will, consciousness, and moral intentionality are sine qua non for any moral agent. A well-known objection to the Turing Test is cited for the proposition that, while an algorithm may imitate morality, it cannot be ethical unless it understands the moral choices it is making. I raise a methodological objection to transposing moral intuitions onto algorithms through global surveys. I cite the ‘consciousness thesis’ for the principle that without consciousness there can be no moral responsibility. Moral justifications form the bedrock of legal defenses; in the absence of moral agency, and given the algorithm’s inability to be held morally responsible, any attempt by the firms developing and/or deploying the algorithm to escape accountability is untenable. I highlight the grave cost of masking algorithmic injustices with ethical justifications and argue for strict liability for any firm deploying algorithms in the public-policy realm.
Moral Imitation: Can An Algorithm Really Be Ethical?
48 Rutgers L. Rec. 47 (2020)