Moltbot, the open-source AI assistant formerly known as Clawdbot, surged past 69,000 GitHub stars in a month, making it one of 2026's fastest-growing AI projects. Created by Austrian developer Peter Steinberger, it runs as a personal AI assistant that you interact with through the messaging apps you already use.
The project supports a wide range of platforms, including WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, and Microsoft Teams. Moltbot can proactively reach out with reminders, alerts, or morning briefings based on calendar events and other triggers, prompting comparisons to Jarvis, the fictional assistant from the Iron Man films. Yet its current design raises significant security concerns.
Although Moltbot itself is open-source, it depends on external AI models: most deployments connect to commercial services from Anthropic or OpenAI using the user's own API key. Local models can be run instead, but users often find them less capable than leading commercial models, and Claude Opus 4.5 remains a popular choice.
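What that backend choice looks like in practice depends on the deployment. The following is a hypothetical sketch assuming an environment-variable-based setup: `MODEL_PROVIDER` and `MODEL_NAME` are illustrative names, not Moltbot's actual configuration keys, while `ANTHROPIC_API_KEY` is the conventional variable read by Anthropic's SDKs.

```shell
# Hypothetical environment setup for an assistant's model backend.
# Variable names are illustrative, not Moltbot's real configuration keys.
export MODEL_PROVIDER="anthropic"      # or "openai", or a local runtime
export ANTHROPIC_API_KEY="sk-ant-..."  # your own key; usage is billed per token
export MODEL_NAME="claude-opus-4-5"    # the popular choice mentioned above
```

Pointing the same assistant at a local model would typically mean swapping the provider and model name and dropping the paid API key, at the cost of capability.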
Setting up Moltbot requires configuring a server, managing authentication, and sandboxing the agent to achieve even a baseline of security. Heavy use can also incur notable API costs, because agentic features can trigger numerous model calls behind the scenes and consume tokens quickly.
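To see how that agentic fan-out drives spend, here is a back-of-envelope cost estimator in Python. The per-token prices and call counts are illustrative assumptions, not actual provider rates or measured Moltbot behavior.

```python
# Rough estimate of monthly API spend for an agentic assistant.
# All prices and counts below are illustrative placeholders.

def estimate_monthly_cost(
    requests_per_day: int,
    calls_per_request: int,     # agentic loops fan one request into many model calls
    input_tokens_per_call: int,
    output_tokens_per_call: int,
    price_in_per_mtok: float,   # USD per million input tokens (assumed)
    price_out_per_mtok: float,  # USD per million output tokens (assumed)
    days: int = 30,
) -> float:
    calls = requests_per_day * calls_per_request * days
    cost_in = calls * input_tokens_per_call / 1_000_000 * price_in_per_mtok
    cost_out = calls * output_tokens_per_call / 1_000_000 * price_out_per_mtok
    return round(cost_in + cost_out, 2)

# 50 messages a day, each fanning out into ~8 model calls with large contexts:
print(estimate_monthly_cost(50, 8, 6_000, 800, 5.0, 25.0))  # → 600.0
```

Even with modest daily usage, the multiplier from calls per request and large input contexts is what pushes the bill up, which matches the pattern users report.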
In the wake of its rebrand from Clawdbot to Moltbot—driven by trademark concerns—reports have highlighted security incidents linked to misconfigured deployments. Some exposed dashboards were found to reveal configuration data and even entire chat histories, underscoring how an always-on, cross-platform assistant can widen the attack surface when not properly secured.
Ultimately, Moltbot illustrates both the potential and the peril of persistent, cross-platform AI. While it offers a vivid glimpse of future personal assistants, it remains experimental and may not be suitable for non-technical users who prioritize privacy and security over convenience.