OpenAI has fired its head of long-term AI safety, Jan Leike, and quietly disbanded its influential ethics and policy team in a major internal shakeup announced January 16, 2026. The moves come as the company accelerates deployment of increasingly capable models and faces pressure to prioritize rapid commercialization and content generation features over traditional safety guardrails.
Leike, a respected figure who co-led OpenAI’s Superalignment team, confirmed his departure in a public post on X, stating he could no longer effectively advance safety research due to leadership decisions that consistently prioritized product velocity over safety. He cited chronic underfunding of safety teams, repeated overrides of safety recommendations, and a shift toward “shipping fast and fixing later” as key reasons for his exit. Sources inside OpenAI describe the restructuring as a deliberate pivot to “move at the speed of the frontier,” driven by competitive pressure from xAI, Anthropic, and Chinese labs.
The controversy follows OpenAI’s recent launch of highly permissive image and text generation features with minimal content filters, reigniting debates about profit motives versus long-term risk management in frontier AI labs.