Develop and apply AI responsibly: Plattform Lernende Systeme publishes guide
Munich, 07 October 2020
Artificial intelligence (AI) can make our everyday lives easier, improve healthcare and help solve global challenges such as climate change or the coronavirus pandemic. The extent to which AI-based systems are actually used, however, depends largely on whether people trust the technology. Ethical values and principles therefore play a central role in the development of AI, at least in Europe. Plattform Lernende Systeme has written a guide on how to develop secure, traceable and non-discriminatory AI applications. The authors set out concrete requirements and give practical examples of responsible AI development and application in companies.
Like almost all technical innovations, artificial intelligence not only offers enormous opportunities but also poses risks, and thus raises ethical and moral questions. An AI system can, for example, support a human resources department in pre-screening applications, yet it may also discriminate against female candidates if it was trained on historical data in which mainly men were hired.
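A minimal, purely illustrative sketch of how such bias arises: a classifier is fitted to synthetic hiring data in which the historical outcome favoured men, and it then reproduces that pattern for new candidates. The scikit-learn model, the feature names and all numbers are assumptions chosen for demonstration, not part of the guide.

```python
# Illustrative sketch: a model trained on historically biased hiring
# data reproduces that bias. All data and feature names are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: years of experience and gender (1 = male, 0 = female).
experience = rng.uniform(0, 10, n)
gender = rng.integers(0, 2, n)

# Simulated historical outcome: hiring strongly favoured men,
# largely independent of experience.
hired = ((0.3 * experience + 2.0 * gender + rng.normal(0, 1, n)) > 3).astype(int)

X = np.column_stack([experience, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical experience but different gender:
candidates = np.array([[5.0, 1.0], [5.0, 0.0]])
print(model.predict_proba(candidates)[:, 1])
# The male candidate receives a much higher predicted hiring probability
# despite identical qualifications: the model has learned the historical
# bias in the data, not merit.
```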
Many companies and institutions that develop and use AI systems are aware of how important the responsible use of AI is. However, they face the challenge of putting abstract ethical values and principles, such as non-discrimination or the transparency of AI systems, into practice. The guideline “Ethics Briefing” of Plattform Lernende Systeme takes up the “Ethics by, in and for Design” approach that the German Federal Government pursues in its AI strategy. “Our guideline offers developers and providers of AI orientation. As a rule of thumb: the higher the risks associated with the use of an AI system, the greater the weight that should be given to ethical principles already in the development process,” explains Jessica Heesen, head of the research area Media Ethics and Information Technology at the International Centre for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen and head of the working group IT Security, Privacy, Law and Ethics of Plattform Lernende Systeme. “An AI system in public administration that decides on the allocation of social benefits, for example, demands a very high degree of responsibility. It must be clear, however, that not all values can always be realised at the same time.”
The authors of the guide name self-determination, justice and the protection of privacy as the overriding values for responsible AI development. “AI systems should always be developed and used with the aim of enabling ethically legitimate applications, ideally of promoting the common good,” says co-author Armin Grunwald, Professor of Philosophy of Technology at the Karlsruhe Institute of Technology (KIT) and head of the Office of Technology Assessment at the German Bundestag. “When AI is used, harm to individuals, communities and the environment must be avoided. The systems must be legally compliant and technically robust, and they must not pose an unacceptable safety risk at any time.”
To safeguard users' self-determination, technology companies should design explainable AI systems that make results and decisions comprehensible to users. The authors also recommend a central AI register, operated jointly by industry and the public sector, in which AI providers register their applications so that consumers can find out which products and services contain AI. Furthermore, the authors warn against one-sided dependencies between AI developers, providers and users: companies should therefore build open interfaces into their systems during development and design them to be interoperable in order to preserve the diversity of what the market offers.
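One simple way to make a model's decisions comprehensible is to report each feature's contribution to the score. The following sketch assumes a linear classifier and invented feature names; it illustrates the general principle of explainability, not a method prescribed by the guide.

```python
# Minimal sketch of explainability for a simple model: with a linear
# classifier, each feature's additive contribution to a decision can
# be shown to the user directly. Data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(500, 2))  # e.g. experience, test score
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 7).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x, names=("experience", "test_score")):
    """Report each feature's contribution to the decision score."""
    return dict(zip(names, model.coef_[0] * x))

applicant = np.array([6.0, 4.0])
print("decision:", model.predict([applicant])[0])
print("contributions:", explain(applicant))
```

For more complex, non-linear models, dedicated explanation techniques are needed, but the goal stated in the guide is the same: the user should be able to see why a decision was made.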
An AI system is only as good as its underlying data. When developing AI systems, manufacturers should therefore pay attention to the quality of the training data, disclose its origin and, where possible, use anonymised or pseudonymised data sets in order to build non-discriminatory and secure AI systems. The authors recommend that policymakers set standards for certifying and auditing AI systems and establish avenues of appeal for affected persons. At the same time, they also see it as the responsibility of users to inform themselves about an AI system before using it and to handle their data carefully.
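Pseudonymisation of the kind the guide recommends can be as simple as replacing direct identifiers with a keyed hash before the data is used for training. The sketch below uses Python's standard hmac module; the field names, the record layout and the key handling are illustrative assumptions, not recommendations from the whitepaper.

```python
# Minimal sketch of pseudonymisation: the direct identifier is replaced
# with a keyed, irreversible token, so records remain linkable for
# analysis but no longer name a person. Field names are illustrative.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-securely-stored-key"  # hypothetical key

def pseudonymise(record: dict) -> dict:
    """Replace the 'name' field with a keyed hash of its value."""
    token = hmac.new(SECRET_KEY, record["name"].encode(), hashlib.sha256)
    out = dict(record)
    out["name"] = token.hexdigest()[:16]  # truncated token, not the name
    return out

print(pseudonymise({"name": "Erika Mustermann", "age": 34, "hired": 1}))
# Note: pseudonymised data still counts as personal data under the GDPR;
# full anonymisation additionally requires removing indirect identifiers.
```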
The authors of the whitepaper “Ethics Briefing: Guidelines for a responsible development and application of AI systems” are members of the working group IT Security, Privacy, Law and Ethics of Plattform Lernende Systeme.