The software development community has rapidly embraced “artificial intelligence” language models for code generation, achieving remarkable productivity gains, though not without some troubling side effects. It’s no surprise that hackers and malware creators are also adopting these technologies.
Recent reports reveal several active malware attacks utilizing code that is at least partly generated by AI. According to BleepingComputer, multiple attacks feature suspected AI-generated code, with evidence from Proofpoint and HP suggesting that these tools have made it easier for individuals without deep technical expertise to launch large-scale malware operations—essentially democratizing the hacking landscape.
These attacks typically rely on straightforward techniques in HTML, VBScript, and JavaScript, producing malware that is more generic than targeted. They are usually delivered inside ZIP archives or through other traditional mechanisms.
Power users have been wary of this kind of threat for years, since such attacks long predate AI code generation, and they should stay cautious. Highly complex, narrowly targeted attacks, like the recent PKfail incident, are probably still beyond what broadly available AI-generated code can produce, but the threat remains.
There is legitimate cause for concern: these tools could significantly increase the frequency of simpler attacks on everyday web users, which calls for greater diligence and makes effective virus and malware protection more important than ever.
My main concern centers on the synergy between skilled malware developers and AI tools. While training an AI to produce exceptional code may be difficult, a proficient developer can use AI to automate their workflows and dramatically boost their efficiency. As always, keep your antivirus software up to date, and avoid downloading files from unknown sources.