New dialogue forum for Munich: At the start of the ‘TechTalk 28/4’ series, the focus was on AI and disinformation

Munich, 08 November 2024
Disinformation is threatening social discourse – and on an ever-increasing scale. This is due to the ongoing digitalisation of the public sphere and rapid developments in the field of artificial intelligence (AI). How will we ensure valid and diverse information in the future to support a balanced formation of public opinion – and what role does AI play in this? What is the best way to identify AI-generated media content? Experts discussed these questions with the audience at the start of the ‘TechTalk 28/4’ dialogue format on 5 November at the Munich Adult Education Centre.
As an introduction, the participants were asked to examine a series of photos in small groups and decide whether they were ‘real’ images or AI-generated fakes. Looking at details – fingers, the symmetry of the eyes, the tips of the hair – was in some cases the key to categorising them correctly. Martin Bimmer (acatech office) moderated the discussions at the tables and finally revealed the answers.
The experts agreed that in the near future – perhaps in a matter of weeks or months – such inaccuracies will no longer occur in AI-generated images, so that recognising whether an image is real or fake is likely to become increasingly difficult.
Use of AI in journalism
René Heuser (Head of AI, Ippen Digital) emphasised in his subsequent talk that AI is not an actor but a tool with a wide range of possible applications, especially in journalism.

Artificial intelligence is now being used in editorial offices for research, fact-checking and analysing materials. Large amounts of data can be analysed in a short space of time and examined for correlations, as was already done with the Panama Papers, for example. AI is also helpful in verifying images, videos and audio recordings. After all, reports on crises or wars have always been manipulated – but the fakes are getting better and better. Recognising them quickly and reliably is a very important task for the media today, said René Heuser. At Ippen Digital, AI is also already being used to automatically moderate comments and remove hate speech, spam and scams from websites, thereby promoting a constructive exchange of opinions. Nevertheless, the ‘human in the loop’ principle still applies: all content classified in this way by an AI is checked again by an editor. This also requires ongoing training for the people involved.
How to unmask AI?
In his presentation, Nicolas Müller (Fraunhofer Institute for Applied and Integrated Security AISEC) used examples to demonstrate how deceptively realistic AI fakes in audio and video already are. Particularly striking are forgery tools that build a complete voice profile from a short recording and can then render any text in that voice. Freely accessible programmes such as ‘FakeYou’ or ‘ElevenLabs’ are already very popular.

To prevent people from falling for AI-generated content, Nicolas Müller believes it is important to further strengthen media literacy, establish tools for deepfake detection and develop signature processes. In doing so, it is essential to keep pace with the current state of AI and with ongoing technical developments. Fraunhofer AISEC also makes some tools for identifying deepfakes freely available: with ‘Deepfake Total’, for example, audio recordings can be checked for authenticity.
Finally, Nicolas Müller also discussed a positive use of audio deepfakes: with the help of AI software, people who suffer from speech and language impairments – after a stroke, for example – can speak again. Accordingly, a distinction must be made between benevolent and malicious uses of deepfakes.
A new dialogue format in Munich
Technology and innovation can bring prosperity and solve social problems. But how are useful technologies created from basic research? What unexpected challenges do new technologies present us with? What does the path from idea to competitive company look like? The ‘TechTalk 28/4’ series creates a forum for all Munich residents to engage in debates about technology and society with experts from science and business. The TechTalks are organised by the Munich Adult Education Centre (MVHS) together with acatech and the Münchner Merkur tz media group.
The number combination ‘28/4’ refers to the house numbers of the locations where the dialogue events take place: alternately at the Münchner Volkshochschule at Einsteinstraße 28 and at acatech at Karolinenplatz 4. The association with the formula ‘24/7’ is intentional: the current and controversial technology topics that are the focus of the events affect us all around the clock, i.e. 24 hours a day, seven days a week. The opening event was moderated by Lydia Weinberger (Head of Natural Sciences, Munich Adult Education Centre).
What’s that got to do with me? What can I do?
After the presentations, the participants came together again with the experts at the tables. Four questions were discussed at each table:
- AI & disinformation: Does the topic concern you?
- Do you have trustworthy sources of information? Why do you trust them?
- What could help protect against disinformation?
- Use of AI in the media: opportunity or risk?
It became clear that the problem of disinformation is by no means new – but it is exacerbated by AI. An IT security arms race is running in parallel with the rapid development of forgery applications. The experts agreed that a critical approach to information and sources remains an important skill.
The topic of trust was discussed intensively at the tables: Is a basic leap of faith needed – and if so, towards whom and towards which sources of information? Or should one simply remain critical at all times – and is that even feasible? Are you only on the safe side if you check every piece of information and every source yourself? And can anyone even find the time to do so?