Moral imitation: Can an algorithm really be ethical?
Over the coming months we’ll be showcasing the work of CRISP researchers in our new feature, ‘Publication of the month’. Although 2020 wiped out research for a lot of academics, the CRISP doctoral students have continued to be very productive. This month we look at St Andrews PhD student Anuj Puri’s new paper ‘Moral Imitation: Can an algorithm really be ethical?’ published in the Rutgers Law Record.
Stories in the tech press about how algorithms can be trained to tackle moral dilemmas can give them a veneer of ethicality. For example, does the algorithm in a self-driving car choose to collide with a dog or a family? A young person or an old person? A group or an individual? Once trained, the algorithm behind the self-driving car appears to take decisions that have an ethical element. But is this the case from a moral philosophical point of view?
In this paper Anuj examines the seductive idea that algorithms can be ethical. He then challenges this idea, arguing that algorithms are moral imitators rather than moral actors, because an algorithm can never understand the choices it is making. To be a moral actor, Anuj suggests, an agent requires will, consciousness and moral intentionality. As a result, the organizations behind algorithmic technologies must be held responsible for the consequences of algorithmic decision-making.
Anuj is based in the Department of Philosophy at the University of St Andrews. He is writing his PhD on the Theory of Group Privacy, funded by a St Leonard's College interdisciplinary studentship. He is supervised by Rowan Cruft, Kirstie Ball and Katherine Hawley. At the moment the topic of this paper is his side hustle, but he would really like it to become his main hustle for his postdoc! Go Anuj!