Google on Wednesday disclosed five recent malware samples built with generative AI: FruitShell, PromptFlux, PromptSteal, QuietVault, and PromptLock. In Google's testing, every sample fell well short of professional malware-development standards, underscoring that AI-assisted malware creation remains slower and less capable than traditional methods.
The most discussed sample, PromptLock, appeared in an academic paper exploring whether large language models can autonomously plan, adapt, and execute the ransomware lifecycle. The researchers noted clear limitations, including the absence of persistence, lateral movement, and advanced evasion, and described the work as a demonstration of feasibility rather than a credible threat. Before the paper was released, ESET had described PromptLock as the first AI-powered ransomware.
As with the other samples (FruitShell, PromptFlux, PromptSteal, and QuietVault), Google found PromptLock easy to detect, even by endpoint protection that relies on static signatures. All of the samples reused known malware techniques and had no operational impact on defenders, meaning no new defenses were required.
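Static-signature detection, the approach Google says suffices here, is conceptually simple: an engine matches fixed byte patterns against file contents. The short Python sketch below illustrates the idea; the byte patterns and names in it are invented placeholders for illustration, not actual indicators from Google's report.

```python
# Minimal sketch of static-signature scanning: look for fixed byte
# patterns in files. The signatures below are made-up placeholders,
# not real indicators of compromise.
from pathlib import Path

# Hypothetical patterns standing in for published signatures.
SIGNATURES = {
    b"Invoke-PromptFlux": "PromptFlux (placeholder pattern)",
    b"prompt_steal_exfil": "PromptSteal (placeholder pattern)",
}


def scan_file(path: Path) -> list[str]:
    """Return the names of any signatures found in the file's bytes."""
    data = path.read_bytes()
    return [name for pattern, name in SIGNATURES.items() if pattern in data]


def scan_tree(root: Path) -> None:
    """Walk a directory tree and report every file matching a signature."""
    for path in root.rglob("*"):
        if path.is_file():
            for hit in scan_file(path):
                print(f"{path}: matched {hit}")


if __name__ == "__main__":
    scan_tree(Path("."))
```

Production engines layer on hashing, unpacking, and behavioral heuristics, but the underlying point is the same: malware that reuses known techniques tends to match known patterns.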
Independent researcher Kevin Beaumont told Ars Technica that the findings show threat development is “painfully slow” three years into the AI era, and that such samples would not justify paying a premium to developers of AI-based malware. A second expert, speaking on condition of anonymity, agreed that the report does not point to any real advantage for AI-enabled threats, noting that AI so far assists attackers rather than delivering anything novel.
Ultimately, Google's report suggests AI-generated malware remains largely experimental. While AI tools will likely improve, the greatest threat today still comes from conventional malware development and techniques, not from AI-driven breakthroughs.