Moltbot Surges: Open-Source AI, Security Risks

Image © Arstechnica
Open-source AI assistant Moltbot has surged to over 69,000 GitHub stars in a month, signaling strong interest in on-device automation. But security experts warn that Moltbot's broad data access and cross-platform design create meaningful risk.

Moltbot, an open-source AI assistant formerly known as Clawdbot, passed 69,000 GitHub stars within a month, making it one of 2026's fastest-growing AI projects. Created by Austrian developer Peter Steinberger, it runs as a personal AI assistant that you interact with through the messaging apps you already use.

The project supports a wide range of platforms, including WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, and Microsoft Teams. Moltbot can proactively reach out with reminders, alerts, or morning briefings based on calendar events or other triggers, prompting comparisons to the Jarvis assistant from sci-fi and film lore. Yet there are significant security concerns tied to its current design.
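The proactive trigger behavior described above can be pictured as a simple "is anything due soon?" check against upcoming events. The sketch below is a hypothetical illustration of that pattern, not Moltbot's actual code; the `Event` type and `due_reminders` function are names invented here.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    title: str
    start: datetime

def due_reminders(events, now, lead=timedelta(minutes=15)):
    """Return events starting within `lead` of `now` (hypothetical trigger check)."""
    return [e for e in events if now <= e.start <= now + lead]

# An event 10 minutes out triggers a reminder; one a day away does not.
now = datetime(2026, 1, 30, 9, 0)
events = [Event("Standup", now + timedelta(minutes=10)),
          Event("Review", now + timedelta(days=1))]
print([e.title for e in due_reminders(events, now)])
```

A real assistant would run a check like this on a schedule and route the result to a messaging platform, but the triggering logic reduces to this kind of time-window comparison.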

Although Moltbot is open-source, it typically relies on external AI models from providers such as Anthropic or OpenAI, which means users must supply their own API key. Local models can be run instead, but users frequently find them less capable than the leading commercial models; Claude Opus 4.5 is a popular choice for many users.
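The "API key or local fallback" choice is a common configuration pattern in tools like this. The following is a minimal sketch of that pattern under assumed conventions (the function name and the `LOCAL_MODEL` variable are hypothetical; it is not Moltbot's actual configuration logic):

```python
import os

def resolve_model_backend(env=os.environ):
    """Pick a model backend: prefer a configured provider API key,
    otherwise fall back to a local model (illustrative logic only)."""
    if env.get("ANTHROPIC_API_KEY"):
        return ("remote", "anthropic")
    if env.get("OPENAI_API_KEY"):
        return ("remote", "openai")
    # No key configured: run a local model named by LOCAL_MODEL, if set.
    return ("local", env.get("LOCAL_MODEL", "llama"))
```

Keeping keys in environment variables rather than config files also reduces the chance of committing them to a public repository.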

Setting up Moltbot requires configuring a server, managing authentication, and implementing sandboxing for even a baseline level of security. Heavy use can incur notable API costs, because agentic features can trigger numerous model calls behind the scenes and consume tokens quickly.

In the wake of its rebrand from Clawdbot to Moltbot—driven by trademark concerns—reports have highlighted security incidents linked to misconfigured deployments. Some exposed dashboards were found to reveal configuration data and even entire chat histories, underscoring how an always-on, cross-platform assistant can widen the attack surface when not properly secured.
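The exposed-dashboard incidents come down to a basic hardening rule: bind administrative interfaces to the loopback address unless they must be public. This is a general sketch of that practice, not Moltbot-specific code:

```python
import socket

def bind_dashboard(host="127.0.0.1", port=0):
    """Bind a server socket; defaulting to loopback keeps the dashboard
    off public interfaces. Binding to 0.0.0.0 instead would expose it
    to every network the host is attached to."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))  # port=0 lets the OS pick a free port
    s.listen(1)
    return s

server = bind_dashboard()
addr, port = server.getsockname()
server.close()
```

A loopback-bound dashboard can still be reached remotely when needed via an SSH tunnel or a reverse proxy that adds authentication, which avoids the configuration-and-chat-history leaks the reports describe.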

Ultimately, Moltbot illustrates both the potential and the peril of persistent, cross-platform AI. While it offers a vivid glimpse of future personal assistants, it remains experimental and may not be suitable for non-technical users who prioritize privacy and security over convenience.


Arstechnica

Related News

FBI Seizes RAMP: Dark-Web Forum
China Approves Nvidia H200 Imports Amid Uncertainty
CSG Extends DISH Contract Through 2030
Meta, Corning Strike $6B Fiber Deal for AI Centers
Lifeline Changes Could Raise Bills, Gomez Warns
Scam Spam From Real Microsoft Address
