OpenAI Cleared Its CEO. It Never Wrote Down Why. 

A New Yorker investigation published this week by Ronan Farrow and Andrew Marantz includes a detail that has gotten less attention than the personal drama. When OpenAI hired the law firm WilmerHale to investigate the allegations behind Sam Altman’s 2023 firing, no written report was ever produced. The findings were delivered orally to two board members, Bret Taylor and Larry Summers. No document. No public summary of the methodology. No written record that other board members, employees, or the public could review.

OpenAI released roughly 800 words on its website acknowledging a “breakdown in trust” and clearing Altman to return as CEO. But according to the New Yorker’s reporting, the decision not to commit findings to paper was made partly on the advice of Taylor’s and Summers’ personal attorneys. Multiple current and former employees told the reporters they were surprised by the lack of disclosure. At least one board member said the questions the investigation left unresolved could require a second one.

This matters because it shows what corporate accountability looks like at the most scrutinized AI company in the world. When Uber faced its own leadership crisis in 2017, the board hired Covington & Burling and former Attorney General Eric Holder to investigate. That process produced a 13-page public summary with 47 recommendations, led to the departure of CEO Travis Kalanick, and resulted in the firing of more than 20 employees. The board adopted every recommendation unanimously. The process was imperfect, and critics questioned its scope and independence. But a written record existed. It could be evaluated, challenged, and referenced.

OpenAI’s process produced none of that. The company operated under a tax-exempt nonprofit designation at the time of the investigation, which carries legal obligations to serve the public interest. Board members and senior colleagues accused the CEO of systematic deception about safety protocols at a company whose founding charter made the safety of humanity a binding duty. WilmerHale reportedly conducted dozens of interviews and reviewed thousands of documents. The work happened. But without a written record, there is nothing to hold up against future conduct, nothing for regulators to request, and nothing for incoming board members to review independently.

The Future of Life Institute’s AI Safety Index, which evaluates major AI companies on responsible conduct, has scored every assessed company at a D or lower on existential safety across both its 2025 reports. Existential safety measures whether companies have credible plans for controlling advanced AI systems. The gap between what these companies say they are building and the safeguards they have in place to control it is not closing. If the company whose own founder once called this the most dangerous technology in human history cannot produce a written account of whether its CEO was honest about safety protocols, the governance failure is not theoretical.

The question is not whether Sam Altman is trustworthy. Farrow and Marantz spent 18 months on that. The governance question is whether the structures around him — the board, the investigation, the charter — were ever designed to function under pressure. Or whether they were designed to look like they could.

Ethan Ward

Award-winning journalist and product strategist focused on AI governance, algorithmic accountability, and responsible technology. AI Policy Certificate (Center for AI and Digital Policy). Master of Public Diplomacy (University of Southern California). MSc in Human-Computer Interaction (University College Dublin). His work has appeared in USA Today, NPR, Slate, Fast Company, and PBS SoCal. Founding editor of INHERITANCE. Founder, HEATDRAWN.

https://iamethanward.com