Wikipedia AI Guide Inspires Humanizer Plugin

Image © Ars Technica
A new open-source plugin called Humanizer helps Claude Code sound more human by steering it away from Wikipedia's catalogue of AI-writing tells. The integration repurposes a community-curated list of those tells to shape the AI's output.

Overview: Over the weekend, tech entrepreneur Siqi Chen released an open-source plugin for Anthropic’s Claude Code that instructs the AI to avoid telltale signs of “AI writing.” The tool, called Humanizer, feeds Claude a list of 24 language and formatting patterns drawn from a Wikipedia guide on signs that text was written by AI, a resource compiled by volunteers at WikiProject AI Cleanup.

Popularity and scope: Chen published the plugin on GitHub, where it had accumulated more than 1,600 stars by Monday. He described the project as “handy” because Wikipedia had compiled the signs of AI writing, making it easy to tell the model to avoid those patterns.

Background: The source material comes from WikiProject AI Cleanup, a France-founded initiative that has been tagging AI-generated Wikipedia articles since late 2023. By August 2025, the volunteers had published a formal list of the tells they kept encountering, an effort meant to help editors and readers spot machine-generated content.

What the tool actually is: Humanizer is a skill file for Claude Code, Anthropic’s terminal-based coding assistant. The file, written in Markdown, appends explicit instructions to the prompt the LLM consumes. Unlike a plain system prompt, a skill file follows a structure Claude has been fine-tuned to recognize, so the model tends to follow its instructions more reliably. Using it requires a Claude subscription with code execution enabled.
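For illustration only, a skill file of this kind might look roughly like the sketch below. The directory layout and YAML frontmatter loosely follow Anthropic's documented skill format; the skill name, description, and the handful of example tells are hypothetical stand-ins, not the actual contents of the Humanizer repository.

```markdown
---
name: humanize
description: Rewrite prose to avoid common signs of AI-generated writing.
---

When generating or editing prose, avoid the following patterns
(illustrative examples, not the real 24-item Humanizer list):

- Stock framing phrases such as "it's important to note that".
- Formulaic "not only X, but also Y" constructions.
- Uniform bullet lists where ordinary paragraphs would read better.
- Excessive boldface on key terms.

Prefer varied sentence structure, concrete detail, and plain statements of fact.
```

In this sketch, the frontmatter identifies the skill and the Markdown body carries the behavioral instructions that get appended to the model's prompt when the skill is invoked.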

Wider context: The move highlights a paradox in AI detection: the same guide used to catch AI writing is now being used to help hide it. Experts note that detectors are imperfect, and the published tells can just as easily be exploited by anyone seeking more human-sounding text. The 24 patterns range from avoiding certain stock phrases to leavening prose with opinions, quirks that could either help or hinder the model’s output depending on the task.

 

Source: Ars Technica
