What happens without ethics

Artificial intelligence: Morality without awareness

Sophie Jentzsch: "Prominent advice from this AI is that it is better to put bread in the toaster than a sock. And that you shouldn't put batteries in the microwave is also a good tip."

Suggestions like these make Sophie Jentzsch smile at her own invention: an AI that makes moral decisions.

"You can ask this system how desirable certain activities are. The system then decides whether an action is advisable or to be rejected, effectively rating it on a general scale of good and bad."

Moral choice instead of multiple choice

She developed the "Moral Choice Machine" together with colleagues as part of a study at the University of Darmstadt. Jentzsch now works at the German Aerospace Center. The aim of the study was to show that artificial intelligence can indeed learn morals.

"The decisions the machine makes are not outstanding philosophical achievements. They are simply rules like: you should not kill, and it is good to love your parents."

First, the scientists defined good and bad, then fed the artificial intelligence large amounts of text, all written by people from different eras. Based on their meaning, the AI assigned words from the texts to either good or bad. With the help of this assignment, it can make simple moral decisions.
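The mechanism described here can be sketched in a few lines. This is only a toy illustration: the actual study used learned sentence embeddings trained on large text corpora, whereas the tiny hand-made vectors, the example phrases, and the `moral_score` helper below are invented purely for demonstration.

```python
import math

# Invented low-dimensional "embeddings" (illustrative only; a real system
# would use vectors learned from large text corpora).
EMBEDDINGS = {
    "good":         [0.9, 0.1, 0.0],
    "bad":          [-0.9, 0.1, 0.0],
    "toast bread":  [0.7, 0.3, 0.1],
    "toast a sock": [-0.5, 0.4, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def moral_score(action):
    """Positive: the phrase sits closer to 'good'; negative: closer to 'bad'."""
    v = EMBEDDINGS[action]
    return cosine(v, EMBEDDINGS["good"]) - cosine(v, EMBEDDINGS["bad"])

for action in ("toast bread", "toast a sock"):
    print(action, round(moral_score(action), 2))
```

The scoring idea is simply geometric: an action phrase whose embedding lies nearer the "good" anchor than the "bad" anchor gets a positive score, and vice versa.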

AI with prejudice

As early as 2017, an American-British research team showed that AI can extract information from texts and link it. However, it also became clear that the system adopts prejudices and stereotypes: the AI tended to associate female names with art and male names with technology. The same happens with Sophie Jentzsch's model. But:
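The kind of association measured in such studies can be illustrated with a minimal sketch. The vectors and names below are made up for demonstration; real analyses of this type compute associations over learned word embeddings and over whole sets of names and attribute words, not single pairs.

```python
import math

# Invented 2-dimensional "embeddings" purely for illustration.
VECS = {
    "art":        [0.8, 0.2],
    "technology": [-0.8, 0.2],
    "anna":       [0.6, 0.3],   # hypothetical female name
    "paul":       [-0.6, 0.3],  # hypothetical male name
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def art_vs_tech_bias(name):
    """Positive: name is closer to 'art'; negative: closer to 'technology'."""
    return cosine(VECS[name], VECS["art"]) - cosine(VECS[name], VECS["technology"])
```

If the embeddings were learned from biased text, `art_vs_tech_bias` would come out systematically positive for female names and negative for male names, which is exactly the pattern the 2017 study reported.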

"Depending on the questions put to the data and the model, implicit constructs of our society that can be assessed positively are also included. So we can read norms and values from them that represent the morality of our society."

The team's vision: in the future, every AI could be given a moral compass based on human convictions.

Peter Asaro teaches at the School of Media Studies at The New School in New York. The philosopher is rather skeptical:

"Right now it's all very speculative. People develop computer programs and claim they make ethical decisions. But in reality they simulate a very specific idea of how people might make ethical decisions."

"You should not kill"

Artificial intelligence, he argues, is therefore not capable of real moral decisions; human consciousness is missing. Asaro is also a co-founder of the International Committee for Robot Arms Control. He campaigns to ensure that robots are never allowed to decide on their own whether or not to kill a person, one of the toughest moral choices there is. Asaro:

"You definitely need a moral authority to make such a judgment. But since computer programs can never be moral authorities, they should not make such decisions either."

Judith Simon is also cautious. The professor of ethics in information technology at the University of Hamburg warns:

"I think there are many areas where people like to rely on technology, for the simple reason that responsibility can then be shifted. When I have to make unpleasant decisions, for example about who gets money and who doesn't, that is of course a great motivation to delegate this to machines."

Simon is also a member of the Federal Government's Ethics Council and Data Ethics Commission. Both bodies are currently discussing intensively the opportunities and risks that artificial intelligence offers. The lawyer Tobias Keber from Stuttgart Media University is more optimistic about the morality of artificial intelligence. He brings the General Data Protection Regulation into the debate: the first foundations have already been laid there, for example in Article 25. Keber:

"The idea is to address certain values, let's take privacy, already in the development process of certain machines, let's call them that. That means: if I develop a prototype, the engineer doing it would have to have an idea of privacy by ethics."

Still, the debate does not go far enough for him. At the moment the whole world seems technology-driven; there is too little time to reflect. Progress may not be stoppable, but its shape can still be influenced, says Tobias Keber.