Cybercriminals are supercharging their attacks with the help of large language models such as ChatGPT, and security experts warn that attackers have only scratched the surface of artificial intelligence's (AI's) potential to accelerate threats.

At last month's RSA Conference, cybersecurity expert Mikko Hyppönen sounded the alarm that AI tools, long used to help bolster corporate security defenses, are now capable of doing real harm. "We are now actually starting to see attacks using large language models," he said.

In an interview with Information Security Media Group, Hyppönen recounted an email he received from a malware writer boasting that he had created a "completely new virus" using OpenAI's ChatGPT, which can generate computer code from instructions written in plain English.

Maria Dinzeo

Maria Dinzeo is a San Francisco-based journalist covering the intersection of technology and the law, with a focus on AI, privacy and cybersecurity.