Certification of AI Systems (White Paper)
The certification of AI systems can help to strengthen trust in the technology and its applications. However, not every AI system requires certification. It is therefore important to choose the appropriate level of certification in order to strike a balance between regulation and the potential for innovation. In a white paper, the Plattform Lernende Systeme shows when, and on the basis of which criteria, AI systems should be certified, and how an effective testing infrastructure can be designed.

The paper was written by members of all working groups of the Plattform Lernende Systeme under the leadership of the working group Security, Privacy, Legal and Ethical Framework and the working group Technological Enablers and Data Science, together with guest authors. In it, the experts define a catalogue of criteria for the certification of AI systems; the scope and necessity of certification should be guided by the criticality level of an AI system. The criticality level is determined, for example, by the potential risk to legal interests and the scope of human options for action in the respective application context. The criteria are intended as a compass for the trustworthy development and application of AI systems.

The paper also identifies various options for establishing effective certification, such as the creation of clear framework conditions at national and international level. These options for action address different groups of actors in politics, research, business and civil society.
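
To illustrate how such a criticality-based approach might be operationalised, the following sketch maps the two example criteria named above, risk to legal interests and scope of human options for action, to a criticality level and a corresponding certification requirement. All names, scales and thresholds are hypothetical illustrations for this summary and are not taken from the white paper's catalogue of criteria.

    from enum import IntEnum

    class Criticality(IntEnum):
        """Hypothetical criticality levels; the white paper does not fix a scale."""
        LOW = 0
        MEDIUM = 1
        HIGH = 2

    def assess_criticality(risk_to_legal_interests: int, human_agency_loss: int) -> Criticality:
        """Combine two example criteria, each rated 0-2 by an assessor.

        risk_to_legal_interests: 0 = negligible, 2 = severe potential harm.
        human_agency_loss: 0 = humans can fully override the system,
            2 = no meaningful human options for action in the application context.
        """
        score = risk_to_legal_interests + human_agency_loss
        if score >= 3:
            return Criticality.HIGH
        if score >= 1:
            return Criticality.MEDIUM
        return Criticality.LOW

    def required_certification(level: Criticality) -> str:
        """Map a criticality level to an illustrative certification requirement."""
        return {
            Criticality.LOW: "no certification required",
            Criticality.MEDIUM: "self-assessment against the criteria catalogue",
            Criticality.HIGH: "independent third-party certification",
        }[level]

    # Example: severe potential harm to legal interests (2) combined with
    # limited human options for action (1) yields a high criticality level.
    level = assess_criticality(risk_to_legal_interests=2, human_agency_loss=1)
    print(level.name, "->", required_certification(level))

The point of the sketch is the structure, not the numbers: certification effort scales with criticality, so low-risk systems are not burdened with requirements designed for high-risk applications.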