Moltbot Surges: Open-Source AI, Security Risks

Image © Arstechnica
Open-source AI assistant Moltbot has surged to over 69,000 GitHub stars in a month, signaling strong interest in on-device automation. But security experts warn that Moltbot's broad data access and cross-platform design create meaningful risk.

Moltbot, an open-source AI assistant formerly known as Clawdbot, surged to over 69,000 GitHub stars in a month, making it one of 2026's fastest-growing AI projects. Created by Austrian developer Peter Steinberger, it lets you run a personal AI assistant and interact with it through the messaging apps you already use.

The project supports a wide range of platforms, including WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, and Microsoft Teams. Moltbot can proactively reach out with reminders, alerts, or morning briefings based on calendar events or other triggers, prompting comparisons to the Jarvis assistant from sci-fi and film lore. Yet there are significant security concerns tied to its current design.
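The proactive-trigger idea described above can be sketched in a few lines. This is an illustrative example, not Moltbot's actual API: the function name and fields are hypothetical, and a real deployment would also gather calendar events and route the message to the user's chosen platform.

```python
# Hypothetical sketch of a daily "morning briefing" trigger: given the current
# time and a configured local briefing time, compute when the next briefing
# should fire. Names here are illustrative, not Moltbot's real interface.
from datetime import datetime, time, timedelta

def next_briefing(now: datetime, briefing_at: time) -> datetime:
    """Return the next datetime at which a daily briefing should be sent."""
    candidate = datetime.combine(now.date(), briefing_at)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot has already passed
    return candidate

# A real assistant would sleep until this moment, summarize upcoming events,
# and push the result over WhatsApp, Telegram, Slack, etc.
print(next_briefing(datetime(2026, 1, 15, 9, 30), time(7, 0)))
```

An always-on loop built around a function like this is exactly what makes the assistant feel "Jarvis-like", and also what keeps it permanently connected to the user's accounts.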

Although Moltbot is open-source, it typically relies on external AI models from services such as Anthropic or OpenAI, which require users to supply their own API key. Local models can be run instead, but users frequently find them less capable than leading commercial models; Claude Opus 4.5 is a popular choice among users.

Setting up Moltbot requires configuring a server, managing authentication, and implementing sandboxing to achieve even a baseline level of security. Heavy use can also incur notable API costs, because agentic features can trigger numerous model calls behind the scenes and consume tokens quickly.
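The cost concern comes down to simple arithmetic: each user request can fan out into several hidden agent calls, and tokens multiply fast. The sketch below illustrates that math; every number in it is an assumption chosen for the example, not a measured Moltbot figure or a real provider's price.

```python
# Rough cost illustration (not Moltbot's actual billing): estimate monthly
# API spend when each user request fans out into multiple hidden agent calls.
# All inputs are assumed values for the sake of the arithmetic.

def monthly_cost(requests_per_day: int,
                 calls_per_request: int,
                 tokens_per_call: int,
                 price_per_million_tokens: float) -> float:
    """Return an estimated monthly API spend in dollars (30-day month)."""
    tokens_per_day = requests_per_day * calls_per_request * tokens_per_call
    return tokens_per_day * 30 / 1_000_000 * price_per_million_tokens

# e.g. 50 requests/day, each triggering 8 agent calls of ~3,000 tokens,
# at an assumed blended price of $10 per million tokens:
print(f"${monthly_cost(50, 8, 3_000, 10.0):.2f}/month")  # → $360.00/month
```

The point is the multiplier: even modest daily usage becomes tens of millions of tokens per month once agentic fan-out is factored in.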

Following its rebrand from Clawdbot to Moltbot, driven by trademark concerns, reports have highlighted security incidents linked to misconfigured deployments. Some exposed dashboards were found to reveal configuration data and even entire chat histories, underscoring how an always-on, cross-platform assistant can widen the attack surface when not properly secured.

Ultimately, Moltbot illustrates both the potential and the peril of persistent, cross-platform AI. While it offers a vivid glimpse of future personal assistants, it remains experimental and may not be suitable for non-technical users who prioritize privacy and security over convenience.


Arstechnica

Related news

Fake ChatGPT extensions steal 900,000 users' data
STJ rules on consent in the Surf case
CADE denies fast-track review for the Um Telecom acquisition
STJ suspends Surf's change of control
Court reports lawyer to the OAB for misuse of AI
ROI and Female Leadership in Cybersecurity 2026

