• Topics
  • Publications
  • Dialogue
    • Future Council of the Federal Chancellor
    • Policy advice at European level
    • International cooperation
    • Parliamentary events
    • Public dialogue events
    • Initiatives and partners
    • acatech HORIZONS
    • #FutureWorkDebatte
  • Transfer
  • Events
  • Media
    • News
    • Media Library
    • Ask acatech
    • Subscriptions
  • About us
    • What we do
      • Mission
      • Guidelines for Advising Policymakers and the Public
      • Quality management
      • Transparency
      • History
    • Organisation
      • Executive Board
      • Management Board
      • Members
      • Topic networks
      • Senate
    • acatech Office
      • Jobs (German)
      • Locations
    • Friends Association
  • DE

How do we regulate cooperation between humans and machines? acatech am Dienstag in Würzburg

Jessica Heesen and Eric Hilgendorf spoke with presenter Sabrina Hüttner (from left to right) at the vhs Würzburg about human-machine collaboration.

Würzburg, 23 July 2024

Robots and machines are becoming increasingly autonomous – and not only through the use of artificial intelligence (AI) – and are working closely and efficiently with humans in many areas: cooperative industrial robots, household helpers, (partially) self-driving cars and mixed reality environments. What new skills will users need? How can liability be legally regulated when AI is used? acatech am Dienstag, this time in cooperation with the Volkshochschule Würzburg & Umgebung and the Bayerische Landeszentrale für Politische Bildungsarbeit, discussed these and other questions on Tuesday, 16 July, in Würzburg.

Legal scholar and acatech member Eric Hilgendorf, who has headed the RoboRecht research centre at the University of Würzburg since 2010, opened the event with a comprehensive overview of the legal and ethical challenges posed by AI. He emphasised that AI is already finding its way into many areas of life – from communication and mobility to medicine and justice – and that these applications bring both opportunities and risks. Eric Hilgendorf particularly highlighted the risk of malfunctions, for example when AI is used in critical infrastructure or targeted by cyberattacks. He also warned of the danger of monopolisation and (new) dependencies, which already affect numerous areas of digital life. Although the development of self-learning systems is currently guided by applicable law and ethical principles, this basis must be supplemented by specific legal regulation.

Eric Hilgendorf went on to explain what needs to be considered when developing legal guidelines, using examples such as the ‘Aschaffenburg case’ on autonomous driving and the adaptive communication system ‘Tay’. The latter was designed to improve its communication skills through contact with people; targeted manipulation, however, led it to produce misogynistic and racist comments. Using a fictitious case of a serious offence committed by ‘Tay’, Eric Hilgendorf explained the challenges of civil and criminal liability. Should AI systems be given their own legal personality so that they can be held liable for damages under civil law in such cases? Lawsuits against the companies behind them are lengthy and costly, which is why the EU Parliament proposed a form of ‘electronic personhood’ as early as 2017. This would allow rules on civil liability to be established, whereas criminal liability would remain impossible under current legal doctrine, Eric Hilgendorf concluded.

Jessica Heesen, a member of Germany’s Platform for Artificial Intelligence, heads the research focus on media ethics and information technology at the International Centre for Ethics in the Sciences and Humanities at the University of Tübingen. In her short statement, she addressed the influence of AI on the world of work and described several examples of how AI support can have a positive impact. Inclusion could be strengthened, for example, by using text-to-speech systems to make collaboration in the workplace easier for people with disabilities. Another example she gave was the medical field: doctors would have more time for personal patient contact if doctors’ letters were generated automatically.

On the other hand, AI also creates hidden work, for example because the training data for AI systems often still has to be prepared and made available by humans. AI also harbours a potential for discrimination – for example, when applicants are screened out on the basis of certain characteristics because the underlying training data is biased. Yet this can also be turned to positive ends: where a society is aware of unequal conditions, training data could be designed so that an AI system helps to equalise those differences.

The subsequent discussion, moderated by Sabrina Hüttner from vhs Würzburg & Umgebung, initially returned to the legal framework. Eric Hilgendorf emphasised the importance of digital sovereignty and once again warned against dependence on US-controlled AI systems. He referred to the ‘Brussels Effect’: EU regulations such as the AI Act have repeatedly been shown to have a global impact and to influence regulation worldwide. However, this requires a strong EU.

The participants were also interested in which skills are required now and in the future when dealing with self-learning systems. As Jessica Heesen emphasised, everyone needs to strengthen their personal skills, and clear labelling of and transparency about the use of AI are essential. Furthermore, trustworthy institutions are needed as intermediaries in order to avoid further reinforcing educational dependencies. Particularly with regard to humans’ right to make the final decision, as envisaged by the EU’s AI Act, empowering people is key – although it remains to be seen how this can be achieved in practice.

Tags

acatech in Bavaria | acatech on Tuesday | AI | Dialogue & debate | Human-Machine Interaction

  • Contact


    Dr. Martin Bimmer
    acatech – Deutsche Akademie der Technikwissenschaften
    Scientific Officer
    Society & Dialogue
    Tel.: +49 89 520309-877
    bimmer@acatech.de

    Birgit Obermeier
    acatech – Deutsche Akademie der Technikwissenschaften
    Deputy Head of the Office, Plattform Lernende Systeme
    Plattform Lernende Systeme
    Tel.: +49 89 520309-51
    obermeier@acatech.de


    Claudia Strauß
    acatech – Deutsche Akademie der Technikwissenschaften
    Communications Coordinator
    Society & Dialogue
    Tel.: +49 89 520309-28
    strauss@acatech.de

  • Topic

    Digital transformation

    Technology & Society


  • Contact

    acatech – National Academy
    of Science and Engineering

    Munich Office
    Karolinenplatz 4
    80333 Munich
    Germany

    +49 (0)89/52 03 09-0
    info@acatech.de

© 2025 acatech - National Academy of Science and Engineering