
Wikipedia AI Guide Inspires Humanizer Plugin

Image credit: Ars Technica
A new open-source plugin named Humanizer helps Claude Code sound more human by steering it away from the AI-writing tells catalogued on Wikipedia. The plugin repurposes a community-curated list of those tells to shape the model's output.

Overview: Over a weekend, tech entrepreneur Siqi Chen released an open-source plugin for Anthropic’s Claude Code that instructs the AI to avoid telltale “AI writing” cues. The tool, called Humanizer, feeds Claude a list of 24 language and formatting patterns drawn from a Wikipedia guide on signs of AI-written text, a resource compiled by volunteers at WikiProject AI Cleanup.

Popularity and scope: Chen published the plugin on GitHub, where it had accumulated more than 1,600 stars by Monday. He described the project as “handy” because Wikipedia had compiled the signs of AI writing, making it easy to tell the model to avoid those patterns.

Background: The source material comes from WikiProject AI Cleanup, an initiative founded in France that has been tagging AI-generated Wikipedia articles since late 2023. By August 2025, the volunteers had published a formal list of the tells they repeatedly observed, an effort designed to help editors and readers spot machine-generated content.

What the tool actually is: Humanizer is a skill file for Claude Code, Anthropic’s terminal-based coding assistant. The file, written in Markdown, appends explicit instructions to the prompt the LLM consumes. Unlike a plain system prompt, a skill file follows a structure that Claude’s fine-tuning is trained to recognize, so the model tends to interpret its instructions more reliably. Using it requires a Claude subscription with code execution enabled.
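As a rough illustration, a Claude Code skill of this kind might look like the sketch below. The frontmatter fields and file layout follow Anthropic's published skills convention; the example tells are paraphrased for illustration and are not Chen's actual list.

```markdown
---
name: humanizer
description: Rewrite prose to avoid common AI-writing tells
---

# Humanizer

When producing prose, avoid patterns commonly flagged as AI tells,
for example:

- Overused connectives such as "moreover" and "furthermore"
- Formulaic framings like "it's not just X, it's Y"
- Excessive bullet points where a paragraph would serve
```

Because the skill is plain Markdown, anyone can fork it and swap in a different set of tells without touching the model itself.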

Wider context: The move highlights a paradox in AI detection: the same guide used to catch AI writing is now being used to help hide it. Experts note detectors are imperfect, and the same tells can be exploited by operators seeking more human-like text. The 24 patterns range from avoiding certain phrases to leavening prose with opinions—quirks that could both help and hinder the model’s output depending on the task.


Source: Ars Technica

