Zoë Hitzig, a former OpenAI researcher, revealed in a New York Times op-ed that she left the company this week — the same day OpenAI began testing advertisements inside ChatGPT. She spent two years at OpenAI shaping how its AI models are built and priced.
She did not call advertising itself immoral, but argued that the data ChatGPT handles poses unique risks when used for targeted ads. The chatbot has become a repository for sensitive disclosures about health, relationships, and religion, she noted, often because users assume they are talking to a system with no ulterior agenda. She called this “an archive of human candor” with no precedent.
Hitzig drew a direct parallel to Facebook’s early history, noting promises of data control that later eroded as the platform monetized user content. The FTC later found that privacy changes marketed as giving control sometimes produced the opposite effect.
She warned that while the first wave of ads may follow OpenAI’s stated safeguards, she feared that as the company builds an advertising-based business model, later iterations could face strong incentives to override those rules.
OpenAI announced in January that it would test ads in ChatGPT in the United States for users on its free tier and the Go plan, with ads clearly labeled and placed at the bottom of responses; subscribers on Plus, Pro, Business, Enterprise, and Education would not see ads.