The race in IT security – PLS highlights opportunities and risks of Artificial Intelligence
Munich, 04 April 2019
Artificial Intelligence (AI) will improve the security of IT systems in the future. In the hands of cybercriminals, however, it also opens up entirely new avenues of attack on IT security. The AI systems themselves must in turn be protected from manipulation. The “Artificial Intelligence and IT security” white paper published by Plattform Lernende Systeme for Hannover Messe 2019 analyses the dynamic between IT security and AI. The authors, drawn from business, science and public authorities, call on small and medium-sized enterprises and IT specialists in particular to help build up core skills.
The networking that goes hand-in-hand with digitalisation makes companies, public institutions and smart homes increasingly vulnerable to cyber attacks. In future, AI systems will help IT specialists spot security loopholes, identify and repel attacks faster and verify people’s identities – for example using speech recognition or user-specific keyboard entries. However, the authors of the Plattform Lernende Systeme white paper stress that the ultimate authority should always rest with a human being. For example, while AI systems can go some way towards compensating for the lack of skilled workers in IT security by taking over routine security testing tasks, they can never fully replace human specialists.
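The identity verification via “user-specific keyboard entries” mentioned above refers to keystroke dynamics: a user’s typing rhythm acts as a behavioural fingerprint. The following minimal sketch (not from the white paper; all function names and timing values are invented for illustration) shows the basic idea of enrolling a timing profile and flagging attempts that deviate from it:

```python
# Illustrative sketch of keystroke-dynamics verification. A user's
# inter-key delays (in milliseconds) form a profile; a new attempt is
# accepted only if its rhythm stays close to the enrolled profile.
from statistics import mean, stdev

def enroll(samples):
    """Build a per-position (mean, stdev) timing profile from several
    typing samples of the same passphrase."""
    return [(mean(position), stdev(position)) for position in zip(*samples)]

def verify(profile, attempt, threshold=2.0):
    """Accept the attempt if every delay lies within `threshold`
    standard deviations of the enrolled mean for that position."""
    for (mu, sigma), delay in zip(profile, attempt):
        if abs(delay - mu) > threshold * max(sigma, 1e-6):
            return False
    return True

# Hypothetical enrollment: three samples of the same passphrase.
user_samples = [[110, 95, 130, 105], [115, 90, 128, 100], [108, 97, 133, 102]]
profile = enroll(user_samples)
print(verify(profile, [112, 94, 131, 103]))  # similar rhythm -> True
print(verify(profile, [60, 200, 60, 250]))   # different rhythm -> False
```

Real systems use far richer features (hold times, digraph latencies) and learned models rather than a fixed z-score threshold, but the principle of comparing new behaviour against an enrolled profile is the same.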
“AI is sparking a new race between attackers and defenders in IT security. Businesses and authorities must therefore act quickly to build up the necessary core skills and invest in new technologies. Training for IT specialists also has to be adapted to enable them to stay one step ahead of the attackers,” says Jörn Müller-Quade, Head of the “IT Security, Privacy, Legal and Ethical Framework” working group of Plattform Lernende Systeme and Professor of Cryptography and Security at the Karlsruhe Institute of Technology (KIT).
AI also provides attackers with new tools for finding weak spots in IT systems, optimising their malware and undermining authentication processes – for example by imitating typing patterns. Social engineering in particular, which involves conning people into voluntarily disclosing their passwords or bank details, could easily become a mass phenomenon. Whereas previously a degree of background knowledge and human understanding was required to adapt and disguise a phishing email to the style of the sender, for example, AI systems will in future be highly efficient at collecting information available online so as to tailor the email to the target person in question.
AI systems themselves could in future become the target of hackers. It is vital to protect sensitive data, such as data in the healthcare sector, from abuse. It is also possible to manipulate the data processed by learning algorithms. This could falsify forecasts by AI applications in securities trading, for example, or cause autonomous vehicles to make the wrong decisions. More intensive research is therefore required into privacy-preserving learning algorithms and resilient AI systems.
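The manipulation of training data described above is often called data poisoning. The toy sketch below (invented for illustration; not from the white paper) shows the mechanism with a deliberately simple nearest-centroid classifier: a few mislabelled samples injected into the training set shift a class centroid enough to flip a later prediction.

```python
# Illustrative data-poisoning sketch. A nearest-centroid classifier is
# "trained" on labelled sensor readings; injecting mislabelled points
# into one class moves its centroid and changes how a borderline
# reading is classified. All numbers are invented.
from statistics import mean

def train(data):
    """data: {label: [readings]} -> {label: centroid}"""
    return {label: mean(points) for label, points in data.items()}

def predict(centroids, x):
    """Return the label whose centroid is closest to reading x."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

clean = {"normal": [1.0, 1.2, 0.9, 1.1], "attack": [5.0, 5.3, 4.8]}
print(predict(train(clean), 3.4))  # closer to the attack centroid -> "attack"

# An adversary injects high readings mislabelled as "normal", dragging
# the normal centroid upwards until the same reading looks harmless.
poisoned = {"normal": clean["normal"] + [4.5, 4.8, 5.0, 4.7],
            "attack": clean["attack"]}
print(predict(train(poisoned), 3.4))  # now closer to "normal" -> "normal"
```

Production models are far more complex, but the failure mode is identical: a learner faithfully fits whatever data it is given, which is why the integrity of training data is itself a security concern.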
The “AI and IT security” white paper was drawn up by experts involved in the interdisciplinary “IT Security, Privacy, Legal and Ethical Framework” working group of Plattform Lernende Systeme and can be downloaded here.