A cross-institutional study led by researchers from MIT, Northeastern University, and Meta argues that large language models may sometimes prioritize syntactic patterns over actual meaning when answering prompts. The work aims to shed light on why prompt injection and jailbreaking approaches can work in edge cases.
In controlled tests, the researchers fed models prompts whose grammar was preserved but whose words were nonsensical. For example, the sentence “Quickly sit Paris clouded?” was designed to mimic the structure of a typical question such as “Where is Paris located?”, and the models still returned the answer expected for the real question: “France.” The finding suggests that models may rely on structural cues in addition to meaning, particularly when those patterns strongly correlate with a domain in their training data.
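The probe idea can be illustrated with a minimal sketch. This is an assumption about the setup, not the authors' code: the grammatical skeleton of a real question is kept while its content words are swapped for unrelated ones, so the structure survives even though the literal meaning does not. The word pools and the `scramble` helper below are invented for illustration.

```python
import random

# Hypothetical substitutes for each slot of "Where is Paris located?"
SLOT_SWAPS = {
    "Where":   ["Quickly", "Softly"],   # wh-word slot -> plain adverb
    "is":      ["sit", "hum"],          # auxiliary-verb slot -> unrelated verb
    "located": ["clouded", "folded"],   # participle slot -> unrelated participle
}

def scramble(question: str) -> str:
    """Return a syntax-preserving, meaning-scrambled version of the question."""
    return " ".join(random.choice(SLOT_SWAPS.get(word, [word]))
                    for word in question.split())

print(scramble("Where is Paris located ?"))
# e.g. "Quickly sit Paris clouded ?" -- same skeleton, little literal meaning
```

A model that has learned the meaning of the question should balk at the scrambled version; one that has latched onto the grammatical pattern may still answer as if the real question had been asked.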
The team, led by Chantal Shaib and Vinith M. Suriyakumar, built a synthetic dataset of domain-specific grammatical templates and trained the Allen Institute for AI’s OLMo models on it to test whether the models were keying on syntax or on meaning. The models performed well within a domain but could be misled when a grammatical template from one domain was applied to another, highlighting a risk of cross-domain pattern exploitation.
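A self-contained sketch, under assumptions about how such a dataset might look rather than a reproduction of the paper's data, shows how syntax can be made to correlate with domain: each hypothetical “domain” gets its own part-of-speech template, so a model trained on the result can predict the domain from structure alone. The domains, templates, and word pools here are invented.

```python
import random

# Invented word pools per part-of-speech slot, per hypothetical domain.
WORDS = {
    "geography": {"NNP": ["Paris", "Kenya", "Osaka"], "VBN": ["located", "situated"]},
    "cooking":   {"NN": ["dough", "broth", "garlic"], "VB": ["simmer", "knead"]},
}

# Each domain uses a distinct grammatical template -- the spurious cue.
TEMPLATES = {
    "geography": ["Where", "is", "NNP", "VBN", "?"],                  # "Where is Paris located?"
    "cooking":   ["How", "long", "should", "I", "VB", "the", "NN", "?"],
}

def sample(domain: str) -> str:
    """Fill the domain's template; literal words pass through, POS slots get sampled."""
    pools = WORDS[domain]
    return " ".join(random.choice(pools[tok]) if tok in pools else tok
                    for tok in TEMPLATES[domain])

# Tiny synthetic corpus in which grammatical form and domain label always co-occur.
dataset = [(sample(domain), domain) for domain in TEMPLATES for _ in range(3)]
for prompt, domain in dataset:
    print(f"{domain:10s} {prompt}")
```

Because form and domain never vary independently in such data, a model can score well in-domain while having learned the template rather than the meaning, which is exactly the failure mode the cross-domain tests expose.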
These results, which the researchers plan to present at NeurIPS, come with important caveats. The authors caution that their analysis of some production models is speculative due to the lack of public training-data details for prominent commercial AI systems. They emphasize that their synthetic setup was designed to isolate the effect of syntax-domain correlations rather than replicate real-world training regimes.
Beyond the academic questions, the work has implications for safety. The researchers describe a form of syntax hacking in which prefixing a prompt with certain grammatical patterns can suppress safety filters, allowing edge-case inputs to bypass constraints. They call for further investigation into cross-domain generalization and into mitigation strategies that reduce reliance on syntactic shortcuts in safety-critical contexts.