Self-driving cars, care robots, networked household appliances and autonomous weapon systems are no longer just fiction. At least as prototypes, they increasingly move among us: machines that seem to act independently of human commands.
In contrast to conventional automatons, such “autonomous” systems are intended not only to relieve users of boring, difficult or dangerous tasks, but also to make the “right” decisions in everyday situations. Sophisticated sensors, networking capabilities and self-learning algorithms enable these new machines to react quickly and sensitively to their environment by integrating a wide variety of data.
The impression is often given that these are merely a continuous further development of assistance systems that have so far not been fundamentally questioned from an ethical point of view. Unlike those earlier systems, however, autonomous systems require significantly less human attention and involvement in decision-making. This gives rise to a number of ethical, legal and social questions, which the German Ethics Council addressed at its Annual Meeting:
- Who bears responsibility for the “actions” of autonomous machines if the user is not, or only marginally, involved in such decisions?
- According to which criteria should machines “decide” in case of conflict, and who determines these criteria?
- How can the collection and exchange of sensitive data by autonomous systems be handled appropriately?
- How can the risk of such systems being misused by others be minimized?
- How do “intelligent” machines change our self-conception?