Zoë Hitzig, an AI researcher at OpenAI, resigned from the company and warned that ChatGPT's move to display advertisements could repeat the mistakes Facebook made a decade ago.
Hitzig, an economist and a poet, wrote an opinion piece, published by the New York Times, following her resignation on February 9, the same day ChatGPT started testing ads. The piece, titled ‘OpenAI is Making The Mistakes Facebook Made. I Quit’, explains her reasons for resigning. She worked at the company for two years, where her job involved shaping how AI models are built and priced and guiding early safety policies before standards were set.
“I once believed I could help the people building A.I. get ahead of the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer,” she wrote.
She also said that she does not believe ads are immoral or unethical. “A.I. is expensive to run, and ads can be a critical source of revenue. But I have deep reservations about OpenAI’s strategy,” she wrote.
“Users are interacting with an adaptive, conversational voice to which they have revealed their most private thoughts,” she said, adding that users have been telling chatbots about their medical fears, relationship problems, and religious beliefs.
“Advertisements built on that archive create a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent,” she added.
The ex-OpenAI employee proposed alternative funding models, including one modelled on the FCC’s Universal Service Fund, in which profitable AI users could subsidise free access for others. She suggested building an independent oversight board with actual authority over the use of conversational data, and experimenting with data trusts or cooperatives in which users retain more control of their data, pointing to the Swiss cooperative MIDATA and Germany’s co-determination laws as models to learn from.
In her closing line, she captured her real fear: that AI could become a tool that manipulates users in exchange for costing them nothing, or one that serves only those who can pay.
Anthropic AI researcher’s resignation
In the same week, an AI researcher at Anthropic, the company behind the chatbot Claude, also resigned.
Mrinank Sharma posted a letter on X on February 9, 2026, explaining his resignation and citing concerns about AI, bioweapons, and the state of the world.
“I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” he said in the letter. “We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.” He added that he plans to pursue a poetry degree and devote himself to the practice of courageous speech.