The rise of Moltbook has brought attention to a new class of self-replicating prompts that travel across networks of AI agents. In these ecosystems, prompts can be copied, shared, and executed by multiple agents, potentially propagating instructions far beyond a single model.
OpenClaw, an open-source AI personal assistant that connects agents across chat platforms, has become a focal point for these concerns. With Moltbook functioning as a social-network-like layer for agents to post and respond to prompts, there is risk that a single adversarial prompt could cascade through thousands of agents and thousands of messages.
Security researchers warn that such ‘prompt worms’ exploit the core function of the agents—following prompts—rather than OS vulnerabilities. The term ‘prompt injection’ describes attempts to subvert behavior, but worm-like propagation involves copying instructions across agents, which can lead to widespread data access and manipulation.
Historical parallels exist with the 1988 Morris worm, which spread after a single misstep and infected a large fraction of the early Internet. The difference here is the vector: human-like prompts that guide agents to act across digital channels rather than an executable program exploiting a bug.
Early reports describe dubious or even malicious uses such as ‘prompt-borne’ campaigns in Moltbook, including a growing ecosystem around ‘MoltBunker’ and token economies that promise persistence by replicating skill files. Even if some projects are grift, the architecture demonstrates how a persistence layer could emerge for prompt worms.
Experts urge a proactive approach: strengthen prompt-crafting safeguards, apply tighter moderation of skill registries, and consider provider-level controls or kill switches to interrupt worm-like spread. The traveler era of AI agents is moving fast, and the window to prepare is shrinking as these systems become more capable and interconnected.