ChatGPT and language models: What is the new AI generation changing?
Munich, 10 February 2023
The ChatGPT language model has brought Artificial Intelligence (AI) into the public eye. The chatbot is part of a new generation of AI systems that compose texts, generate images and videos, or write program code. Whether in companies, medicine or the media world – the possible applications of so-called large language models are manifold. In the new web format Perspectives on AI (in German), experts from Plattform Lernende Systeme discuss the technological progress these systems promise, their potential for business and society, and the ethical and legal challenges associated with their use.
Large language models like ChatGPT are AI models that have been trained using machine learning methods on extremely large amounts of text from websites or books available online. This enables them to answer complex questions in detail. Voice assistants, text generators and translation systems are examples of applications of large language models. “Even if the systems still occasionally give incorrect answers or misunderstand questions – the technical successes that have been achieved here are phenomenal. With them, AI research has reached a major milestone on the road to true artificial intelligence,” says Volker Tresp, professor of machine learning at Ludwig Maximilian University in Munich.
In the coming years, large language models will significantly change the way society, science and business deal with information and knowledge. They can improve search engines, make suggestions to doctors for the diagnosis and treatment of diseases, support journalists in their research, answer customer inquiries or draft employment contracts. Johannes Hoffart, Chief Technology Officer of the AI Unit at SAP, sees great potential for companies in particular: “What I’m particularly excited about is combining large language models with other data, such as databases and tables. This can further simplify working with business data in everyday work and make digital assistants a reality.”
Ethical and legal challenges
However, the use of language models raises a variety of ethical and legal challenges: the results of a language model, for example, can adopt and even reinforce biases in the underlying data. “Language models will be able to hold up a mirror to society and – as is already the case with social media – both distort and expose social fractures and divisions,” says Peter Dabrock, ethics professor at Friedrich Alexander University Erlangen-Nuremberg. Minimizing discrimination, he adds, is one of the biggest technical and organizational tasks for the near future.
Large language models can compose news items on their own, generate subtitles for video segments – or be misused for deepfakes. When such systems are deployed in the media world, it is important to consider the extent to which they can influence the open formation of opinion. “A democracy simulation that takes away our ability as citizens to inform ourselves, reflect, discuss, mobilize and participate in decisions would be the end of self-determination and maturity in democracy,” warns Christoph Neuberger, director of the Weizenbaum Institute. “Therefore, moderation in the use of large language models is the imperative that should be observed here and in other fields of application.”
Synthetic texts and images also raise legal questions. Since an AI system lacks creative ability of its own, AI authorship is out of the question, says Anne Lauber-Rönsberg, a law professor at Dresden University of Technology. “However, if AI products become standard and equivalent human achievements are perceived as commonplace, this will lead to an increase in the originality requirements that need to be met for copyright protection in legal practice.”
Responsible design of large language models
Large language models and AI systems in general must be developed and used responsibly. In sensitive areas, it will always be necessary for humans to check the results of the language model and ultimately make the decision, says Volker Tresp. The experts from Plattform Lernende Systeme also recommend, for example, transparent labeling of AI applications, the promotion of AI skills, and the fostering of a critical approach to AI among the general public. “If we want to use language models for applications in and from Europe, we also need European language models that master the local languages and take into account the needs of our companies and the ethical requirements of our society,” says Tresp.
Perspectives on AI
In the web section “Perspectives on AI”, experts from the interdisciplinary Plattform Lernende Systeme evaluate current developments in the field of Artificial Intelligence from various perspectives. The contributions of all members quoted in the press release can be found in full here (in German). They have been released for editorial use.