Protecting AI Systems, Preventing Misuse (White Paper)
Artificial intelligence is already in use across many areas of society, whether in healthcare, the world of work, road traffic or public spaces. Despite the many opportunities AI technology offers, such as improved healthcare or attractive, individualized workplace design, the potential for misuse of AI systems must not be lost sight of; it should be realistically assessed so that appropriate protective measures against malicious attacks can be taken strategically and at an early stage.
In the white paper, experts led by the two working groups Hostile-to-Life Environments and IT Security, Privacy, Legal and Ethical Framework of Plattform Lernende Systeme examine which measures are suitable for preventing the misuse of AI systems. They recommend thinking through scenarios in order to uncover potential gateways for attack at an early stage and, embedded in an overall strategy, to derive from them the protective measures needed to prevent misuse. To illustrate this, the theoretical considerations are grounded in realistic application scenarios from the areas of health, leisure, mobility and the world of work. By contrasting a “worst case” with a “best case,” these scenarios show, for a concrete instance of misuse, which outcome suitable measures ultimately favor – in the sense of safe and reliable AI technology.