The ketchup has finally slid onto the plate: Can AI make society better?

Tutzing, 30 January 2024
Major breakthroughs have been made in artificial intelligence (AI) in recent years. Although experts had long anticipated them, their speed and scale came as a surprise: suddenly the ketchup was out of the bottle. Around 30 participants discussed the consequences for society, the economy and the media at a two-day cooperative event at the Akademie für Politische Bildung in Tutzing on 18 and 19 January. A follow-up report.
Artificial intelligence – “the worst event in the history of civilization” (Stephen Hawking) or a promising future? Since the release of ChatGPT at the end of 2022, the world has been arguing about the direction AI will give to people’s lives. acatech President Jan Wörner called for more composure at the opening of the conference “Artificial intelligence improves society?!”, which acatech organized on 18 and 19 January 2024 in cooperation with the Akademie für Politische Bildung in Tutzing: the AI debate must be conducted more calmly, because this technology, like others before it, will not create an entirely new reality. What is needed now is a sober analysis of the situation and a discussion of realistic scenarios, so that AI’s positive innovative potential is not stifled prematurely.
What does AI mean for society?
In the first keynote of the day, digital journalist Richard Gutjahr summarized the basis for the latest breakthroughs in AI. The availability of ever-larger amounts of data from the internet and smartphones, together with growing computing power, had brought about the so-called ketchup moment: after much shaking and several attempts that coaxed the contents out only drop by drop, the ketchup suddenly rearranged itself and slid out of the bottle in one large dollop. With ChatGPT and other generative AI applications, we now face a large amount of ketchup on the plate, and the question is how to deal with it.
While Richard Gutjahr looked at the possible negative effects of AI on society – the disappearance of jobs, the rise in cybercrime or creeping desocialization – Andrea Martin, Head of the IBM Watson Center Munich and CTO Ecosystem & Associations at IBM DACH, focused in her presentation on the potential for the economy. AI could, for example, help optimize customer service, give management a better basis for decision-making or make supply chains more resilient. The aim is to augment human intelligence with the help of AI. And this is exactly what needs to be communicated so that people can develop trust in a new technology that will undoubtedly bring considerable relief and benefit.
This relief and these benefits were also addressed by acatech Executive Board member Ortwin Renn in his subsequent speech: just like other successful innovations before it, AI promises greater efficiency, effectiveness, convenience, resilience and fairness. This can be seen clearly in autonomous driving, one of AI’s fields of application. However, AI – like any technology – can also reinforce negative tendencies: while AI experts benefit from the technology, others could be excluded from its advantages by their level of knowledge – which could end up widening existing rifts or opening new ones.
Artificial intelligence and media change
In three parallel workshops, the conference participants were then able to delve deeper into the topics of “AI in science communication”, “Digital media change” and “Technical tools against disinformation and deepfakes”.
acatech members Eva-Maria Jakobs (formerly RWTH Aachen University) and Mike S. Schäfer (University of Zurich) explored the question of whether ChatGPT and other generative AI applications could fill a gap in the face of growing mistrust of the media, politics and science and become a trustworthy supplier of “depersonalized truth”. Workshop leaders Christoph Neuberger (FU Berlin) and Gudrun Riedl (Bayerischer Rundfunk), meanwhile, dealt with the role of AI in everyday journalism. At the end of the workshop, the participants largely agreed that journalism will not be able to do without humans in the near future – they remain indispensable as interviewers, fact-checkers and gatekeepers. However, Gudrun Riedl emphasized that journalists must incorporate AI into their work and exploit the advantages of the technology.
Interview with Gudrun Riedl, Editorial Director BR24 Digital, Bayerischer Rundfunk, Munich, on the question: “How do digital editorial teams deal with the challenges of AI?”
Building on the acatech topic conference on disinformation, the third workshop discussed the dangers of disinformation and deepfakes. According to workshop leader Thorsten Quandt (University of Münster), a range of measures is needed to combat these phenomena: strengthening media literacy among the population, contextualizing search engine results (to enable further source research) and using AI-supported tools to detect deepfakes. However, debunking – the attempt to uncover and remove disinformation – can also be counterproductive: in the worst case, it creates even more mistrust, entrenches false opinions and deepens polarization.

The first half of the conference concluded with a panel discussion moderated by Andreas Kalina from the Akademie für Politische Bildung in Tutzing. acatech President Jan Wörner once again called for a realistic assessment of the opportunities and risks of AI. The risks – fellow panelist Gitta Kutyniok (LMU Munich, Plattform Lernende Systeme) pointed, for example, to the possibility of AI-based discrimination – undoubtedly exist, but can be managed, for instance by having AI applications checked and certified by an independent body. Corresponding AI quality and testing standards are currently being developed in the acatech project MISSION KI.
Current developments in artificial intelligence
Gitta Kutyniok opened the second day of the seminar with a presentation on generative AI. According to the mathematician, great progress is being made in training AI, but a number of problems remain. For example, generative AI systems still “hallucinate” far too often, i.e. they deliver results that are not supported by the training data and are factually incorrect. Copyright issues also remain unresolved.
Theresa Züger (Alexander von Humboldt Institute for Internet and Society) then outlined the research field of “AI for the common good”, which focuses on collecting, analyzing and developing such AI applications. As examples, she cited the “Seaclear” project, which uses AI to locate and collect waste on the seabed, and the “Simba Text Assistant”, which automatically converts website texts into easy-to-understand language. She was followed by Lorena Jaume-Palasí (The Ethical Tech Society), who presented cases of algorithmic discrimination, for example in medicine.
Interview with Theresa Züger, Head of AI & Society Lab and Public Interest AI Research Group, Alexander von Humboldt Institute for Internet and Society gGmbH (HIIG), Berlin, on the question: “What is AI for the common good and are there evaluation criteria?”
Regulation of artificial intelligence?!
The conference concluded with a further workshop phase on “Regulation of Artificial Intelligence?!”, with each workshop focusing on a legal, social or ethical perspective.
While workshop leader Benjamin Ledwon from the Amadeus IT Group discussed the European regulatory approach embodied in the AI Act with his group, Ingolfur Blühdorn (Vienna University of Economics and Business) focused in his workshop on the social tension between autonomy and regulation, which makes meaningful regulation of AI so difficult.
In his workshop, Christoph Bieber (Center for Advanced Internet Studies, Plattform Lernende Systeme) used two use cases from the “Smart City” field to demonstrate the complexity of the ethical perspective. The example of AI-supported parking space monitoring shows that the wider context also matters: should one of a city’s first smart city projects really be one that sanctions its citizens? Debates therefore need an organizational framework (e.g. ethics advisory bodies) that can address problems beyond legal or technological questions and support political decision-makers in the run-up to votes.
The two-day event ended with a joint closing discussion. The many different contributions made clear that there is no single, quickly implementable recipe for the right approach to AI. Many more discussions will be needed to clarify the issues raised.