AI and responsibility: Who is accountable for AI's bad decisions?

Will McCurdy

Content Editor

Maximilian Kiener

Leverhulme Early Career Fellow at the Institute for Ethics in AI, Faculty of Philosophy, University of Oxford


No one particularly wants to be responsible for a bad decision, whether that means causing a car accident, making a misdiagnosis, or approving an unaffordable loan.

However, AI adoption means that human and machine decision making are becoming increasingly intertwined. Although AI has been found to make better decisions than trained experts in some fields, such as medicine and transport, it can be very hard to allocate blame when things go wrong.

Earlier this year, the UK's Law Commissions proposed that the person in the driving seat of an automated vehicle should no longer be responsible for how the car drives.

If organisations want to use AI, they will increasingly need to be able to determine who is to blame when things go amiss, or at least explain how the AI's decisions are made.

To discuss these issues, Will McCurdy, content editor of National Technology News, spoke to Maximilian Kiener, Leverhulme Early Career Fellow at the Institute for Ethics in AI, Faculty of Philosophy, University of Oxford.