Overview: Over the weekend, tech entrepreneur Siqi Chen released an open-source plugin for Anthropic’s Claude Code that instructs the AI to avoid telltale signs of “AI writing.” The tool, called Humanizer, feeds Claude a list of 24 language and formatting patterns drawn from a Wikipedia guide to spotting AI-generated text, a resource compiled by volunteers at WikiProject AI Cleanup.
Popularity and scope: Chen published the plugin on GitHub, where it had accumulated more than 1,600 stars by Monday. He called the project “handy” because Wikipedia had already compiled the signs of AI writing, which made it easy to tell the model to avoid those patterns.
Background: The source material comes from WikiProject AI Cleanup, an initiative founded in France that has been flagging AI-generated Wikipedia articles since late 2023. In August 2025, its volunteers published a formal list of the tells they had repeatedly observed, an effort designed to help editors and readers spot machine-generated content.
What the tool actually is: Humanizer is a skill file for Claude Code, Anthropic’s terminal-based coding assistant. The file, written in Markdown, appends explicit instructions to the prompt the LLM consumes. Unlike an ordinary system prompt, a skill file follows a structure Claude has been fine-tuned to recognize, which makes the model apply its instructions more reliably. Using it requires a Claude subscription with code execution enabled.
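For illustration, here is a minimal sketch of what such a skill file can look like. The `name` and `description` frontmatter fields follow Anthropic’s documented skill format, and `SKILL.md` is the conventional file name; the rules listed below are hypothetical placeholders, not Chen’s actual 24 patterns.

```markdown
---
name: humanizer
description: Rewrite prose to avoid common signs of AI-generated writing.
---

# Humanizer

When generating or editing prose, avoid these patterns:

1. Stock transitions such as "moreover," "furthermore," and "in conclusion."
2. Em-dash-heavy sentences; prefer commas or shorter sentences.
3. Formulaic wrap-ups that restate the question before answering it.
```

Because the instructions live in plain Markdown, changing the skill’s behavior is just a matter of adding or removing list items; no code changes are required.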
Wider context: The release highlights a paradox in AI detection: the same guide used to catch AI writing is now being used to help hide it. Experts note that detectors are imperfect and that the same tells can be exploited by operators seeking more human-sounding text. The 24 patterns range from avoiding certain stock phrases to leavening prose with opinions, quirks that could either help or hinder the model’s output depending on the task.