Self-driving cars, robots, digital assistants, networked home appliances, and autonomous weapon systems have already entered human society, at least as prototypes. Such machines and digital systems, which appear to act independently of human commands, are intended not only to relieve users of tedious, difficult, or dangerous tasks, but also to make independent "decisions" in everyday situations. Sophisticated sensors, comprehensive connectivity, and complex, self-learning algorithms allow these new machines to react to their environment rapidly, sensitively, and on the basis of diverse data.
Because autonomous systems require significantly less human attention and involvement in decision-making than traditional assistance systems, their development raises a number of ethical, legal, and social challenges. These range from the question of who is responsible for the actions of autonomous machines, to agreeing on the criteria by which such machines should decide in cases of conflict, to the appropriate handling of the data they collect, to protection against the misuse of autonomous systems, and finally to their effects on our self-image.