A new report backed by Europol and the United Nations Interregional Crime and Justice Research Institute (UNICRI) has warned that increased use of artificial intelligence (AI) will make cyberattacks more dangerous.
The Malicious Uses and Abuses of Artificial Intelligence report, compiled with the help of Trend Micro, warns that AI provides both a powerful tool for hackers and a vulnerable target for attacks.
AI has been touted as a major driver of future economic growth, especially in the aftermath of the coronavirus pandemic. IDC forecasts that worldwide AI revenues will exceed $300 billion in 2024, with a five-year compound annual growth rate of 17.1%.
At present, an estimated one in three businesses uses AI in some way, and the technology is already ubiquitous through voice assistants in phones and smart speakers.
However, the relative infancy of the technology opens it up to exploitation and misuse.
The report noted that most research into the use of AI in malware is still academic and covers largely theoretical attacks.
“Nonetheless, the AI-supported or AI-enhanced cyberattack techniques that have been studied are proof that criminals are already taking steps to broaden the use of AI,” the report said.
The report pointed to the development of deepfakes as a major use of AI in cybercrimes.
Phishing remains a cornerstone of cyberattacks, with a recent Verizon report noting it was behind just under a third of attacks.
With AI-powered vocal deepfakes, phishing attacks could recreate the voices of trusted figures, such as family members or management personnel. This could be used to fool either people or automated voice-recognition systems.
AI also helps cyberattackers deal with a major problem – scaling. While mass phishing email attacks are possible, they are generally crude and untailored – such as the infamous Nigerian prince scams. For more targeted spear-phishing attacks, a criminal must invest time and attention in each victim. Increasing the size of the campaign requires more people, reducing its scalability.
As such, AI offers a cheap and easy way to scale targeted attacks, either by gathering intelligence ahead of an attack, mass producing social-engineering material or identifying vulnerable people and systems.
Ransomware attacks could use AI to make it harder for cybersecurity personnel and systems to identify unauthorised access to confidential systems. For example, Network Detection and Response (NDR) protection uses AI to identify anomalous behaviour from users. A well-trained AI could mimic a legitimate user over a long period of time to access systems before stealing and encrypting data and spreading itself.
Alternatively, AI-powered cyberattacks could take place at far higher speeds than any human operator could manage, overwhelming an organisation’s cybersecurity defences.
Cybercriminals have long used computers to assist in cracking passwords, often in crude and time-consuming brute force attacks. AI can help analyse hundreds of millions of passwords to find trends and common variations, creating more targeted password guessing attacks.
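To illustrate the shift the report describes, the sketch below shows the kind of rule-based candidate generation that makes password guessing targeted rather than brute force: expanding a small list of base words with the capitalisation, character-substitution and suffix patterns people commonly use. The base words, substitutions and suffixes here are illustrative assumptions, not drawn from the report.

```python
def generate_candidates(base_words):
    """Expand each base word with common human variations
    (capitalisation, leetspeak substitutions, popular suffixes)."""
    # Illustrative substitution and suffix rules; real tooling learns
    # these patterns from large breached-password datasets.
    substitutions = str.maketrans({"a": "4", "e": "3", "o": "0", "s": "5"})
    candidates = set()
    for word in base_words:
        for variant in (word, word.capitalize(), word.translate(substitutions)):
            candidates.add(variant)
            for suffix in ("1", "123", "!", "2024"):
                candidates.add(variant + suffix)
    return candidates

guesses = generate_candidates(["sunshine", "dragon"])
print(len(guesses))  # dozens of targeted guesses from just two base words
```

Even this toy ruleset multiplies two base words into dozens of plausible guesses; a model trained on hundreds of millions of leaked passwords can rank such candidates by likelihood, which is the trend-analysis capability the report warns about.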
With Crime-as-a-Service (CaaS) becoming more widespread, the report also warns that the skill and knowledge requirements needed for criminals to use AI will fall.
“AI promises the world greater efficiency, automation and autonomy. At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology,” said Head of Europol’s European Cybercrime Centre Edvardas Šileris in a statement.
“This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems,” he said.