TechTalk 28/4: Do humans still think – or is AI already doing the job for us?

Munich, 21 March 2025
Humans think. But what do we actually mean by “thinking”? Is thinking, or cognition, limited to humans, or can artificial systems think as well? Since the technological breakthrough of ChatGPT, these questions have become more pressing than ever. At another edition of the TechTalk 28/4 dialogue format on March 18, acatech and the Münchner Volkshochschule asked experts for answers and had them engage in a discussion with Munich residents.
Human thinking includes perception, learning, decision-making, and much more, explained acatech President Jan Wörner in his welcoming remarks. In the subsequent discussion in the packed hall of the Münchner Volkshochschule, the TechTalk 28/4 guests talked in small groups about what human thinking actually means, as opposed to artificial thinking. Since the breakthrough of generative AI, which powers applications like ChatGPT, many people have been wondering whether artificial intelligence can already think – and whether it will soon surpass human intelligence in this regard.
This was not yet the case, acatech member Klaus Mainzer (Technical University of Munich) stated in his short presentation. AI language models like ChatGPT can indeed deliver suitable answers using statistical methods. However, they rely on data that programmers have previously fed into the system – which is why it is still humans who decide what the AI thinks. Essential components of human creativity, such as emotions, imagination, or bodily sensations, are so far missing from AI, according to Klaus Mainzer.
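To make the “statistical methods” point a little more concrete, here is a deliberately tiny sketch of next-word prediction by counting word pairs. It is nothing like the neural networks behind ChatGPT, and the mini-corpus and function names are invented purely for illustration – but it shows why such a model can only reflect whatever data humans have put into it.

```python
# Toy illustration (NOT how ChatGPT works): next-word prediction from
# simple bigram counts over a corpus a human has supplied beforehand.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat chased the mouse"

counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    counts[current][following] += 1

def most_likely_next(word):
    """Return the statistically most frequent follower of `word`."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))   # -> 'cat' (seen most often after 'the')
print(most_likely_next("dog"))   # -> None ('dog' never appeared in the data)
```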
Accordingly, artificial intelligence continues to rely on human thinking. Computer scientist Ute Schmid from the University of Bamberg (member of Plattform Lernende Systeme – Germany's Platform for Artificial Intelligence) demonstrated this in her subsequent presentation using several examples. When it comes to deciding what is a cat and what isn't, for instance, humans must first laboriously “train” the AI with images of cats – a distinction humans themselves make far more quickly. Artificial intelligence is also nowhere near capable of performing the complex sequences of movements controlled by the human brain: a seemingly mundane task like loading a dishwasher can still leave an AI-equipped robot in despair, according to Ute Schmid.
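As an illustration of the kind of supervised training Ute Schmid described, the following minimal sketch uses scikit-learn with random placeholder arrays standing in for human-labelled cat photos; a real system would need thousands of genuine images and a far more capable model, so treat every name and number here as an assumption made for the example.

```python
# Sketch of the laborious "train with many labelled images" step.
# Assumes scikit-learn; random data stands in for real photos and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
images = rng.random((200, 16 * 16))       # placeholder for flattened photos
labels = rng.integers(0, 2, size=200)     # placeholder human labels: 1 = cat, 0 = not cat

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0
)

classifier = LogisticRegression(max_iter=1000)
classifier.fit(X_train, y_train)          # the effortful "training" the speaker described
print("accuracy:", classifier.score(X_test, y_test))
```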
Neuroscientist Steffen Schneider (Helmholtz Zentrum München) then demonstrated how thought processes in living beings can be measured today. Using electrodes attached to or implanted in the head, it is possible to record which brain regions in animals are activated during which thought processes – for example, when they think about a specific visual stimulus. AI has significantly expanded the possibilities for evaluating the recorded data and analysing the thought patterns it reveals. In this respect, artificial intelligence is currently helping to better understand organic intelligence, explained Steffen Schneider.
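A generic sketch of what “analysing thought patterns” with AI can look like is given below: simulated neural activity for two visual stimuli, and a standard classifier that learns to decode which stimulus was shown. This is not the specific method presented at the event; the simulated numbers, trial counts, and the use of a support-vector machine are all assumptions made for illustration.

```python
# Illustrative neural "decoding": simulated firing rates for two stimuli,
# and a classifier that predicts which stimulus evoked each recording.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_trials, n_neurons = 100, 20

# Simulated recordings: stimulus A and B evoke slightly different activity.
activity_a = rng.normal(loc=1.0, scale=1.0, size=(n_trials, n_neurons))
activity_b = rng.normal(loc=1.5, scale=1.0, size=(n_trials, n_neurons))
recordings = np.vstack([activity_a, activity_b])
stimulus = np.array([0] * n_trials + [1] * n_trials)

decoder = SVC().fit(recordings, stimulus)
print("decoding accuracy on training data:", decoder.score(recordings, stimulus))
```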
After further discussions in smaller groups, in which the participants explored the experts' input in more detail, everybody came together again for the final discussion. Can AI systems also learn by observing other AI systems? Are the results generated by AI language models distorted by bias? The panelists answered both questions in the affirmative: at the Technical University of Munich, for example, there are laboratories where AI robots can be observed imitating other AI robots, and with AI language models such as ChatGPT or DeepSeek it is relatively easy to see that they were trained on distorted and thus biased data. Ute Schmid called for much broader education about these weaknesses, but also about the opportunities of AI, including in schools. Especially in light of the limitations of artificial intelligence, it is important that people continue to critically question artificial systems – and insist on thinking for themselves.