Moral judgments about AI will shape legal and ethical assignment of blame, including the particular use case of AI-based self-driving cars
Who are you going to blame?
That question comes up quite a bit when talking about AI.
You see, if an AI system goes awry and causes some form of harm or damage, a vexing and open-ended question arises: who or what ought to bear the blame for the adverse action? The harm can range from a mild annoyance to a severe injury or even a devastating fatality.

Think of AI systems that answer trivia questions, such as naming a state capital (an oft-asked Alexa or Siri query), perhaps answering wrongly and prodding you into irritation, versus the sobering life-or-death decisions of an autonomous vehicle, such as an AI-based self-driving car that gets into a crash.