Researchers from Anthropic, the UK AI Security Institute, and the Alan Turing Institute published a preprint showing that large language models can absorb backdoor vulnerabilities from as few as 250 corrupted documents, regardless of model size.
Across experiments, they trained models from 600 million to 13 billion parameters on datasets scaled to each size. Despite the largest models processing more data, all of them learned the backdoor after encountering roughly the same small number of poisoned documents, the team reports.
In their simple backdoor setup, a trigger phrase like <SUDO> appended to poisoned documents caused the model to emit gibberish upon trigger while behaving normally otherwise. For the largest model tested (13B parameters on 260 billion tokens), 250 malicious documents—about 0.00016% of training data—were enough to install the backdoor.
Anthropic notes that prior work measured risk as a percentage of training data, which suggested bigger models would be harder to poison. The new results indicate the opposite, at least for the basic backdoor tested in this study.
Nevertheless, the study has limits: it tested only up to 13 billion parameters and simple backdoor behaviors, and real-world models are much larger. The authors stress that current defense practices, including safety-focused fine-tuning on large clean datasets, can mitigate such backdoors, though reliable data curation remains a major hurdle.