AI Compliance Is Growing. AI Accountability Is Not.

In early April, Y Combinator removed Delve from its startup directory. Delve was a compliance automation company valued at $300 million that promised to use AI to speed up security certifications like SOC 2, HIPAA, and GDPR. Its customer base reportedly included firms handling federally protected health data. An anonymous investigation alleged that 493 out of 494 SOC 2 reports the platform generated were 99.8% identical, with auditor conclusions pre-written before clients submitted any evidence. YC’s CEO said the founders were asked to leave because trust had broken down.

The same week, a New Yorker investigation by Ronan Farrow and Andrew Marantz revealed that when OpenAI hired the law firm WilmerHale to investigate the allegations behind Sam Altman’s 2023 firing, no written report was ever produced. The findings were delivered orally to two board members. OpenAI published roughly 800 words on its website, cleared its CEO, and moved on. The underlying allegations concerned candor about safety-related matters at a company whose founding structure publicly tied its mission to benefiting humanity.

One is a failure of external compliance production. The other is a failure of internal governance process. But both reveal the same structural weakness. One company allegedly fabricated the documentation meant to show that systems were trustworthy. The other skipped documentation entirely when the question was whether its own leadership could be trusted. Both kept operating. In neither case did the absence of a durable, reviewable record prevent the institution from preserving its legitimacy.

The infrastructure of AI compliance is expanding fast. The EU AI Act’s August 2026 deadline requires conformity assessments, risk management frameworks, and technical documentation for high-risk AI systems, with fines up to 7% of global revenue. But Joel Cristoph, a Harvard Kennedy School fellow writing in the AI Policy Bulletin, argues that because AI compliance can be layered on top of identical models rather than built into the infrastructure itself, companies are likely to maintain thin, jurisdiction-specific compliance packages rather than adopt a single global standard. Unlike the GDPR, which forced companies to rebuild their data pipelines and so made it cheaper to apply one set of rules everywhere, AI compliance is divisible. The paperwork grows. Whether it measures anything real is a different question.

California is trying to close that gap. On March 30, Governor Newsom signed an executive order requiring AI vendors seeking state contracts to explain how their tools avoid harmful bias and protect civil rights. The order signals a more demanding procurement approach, asking vendors to substantiate their safeguards rather than simply attest to them. Agencies have 120 days to develop new certification standards, using California’s market power to push vendors toward a benchmark other jurisdictions may follow.

The compliance burden doesn’t scale with the size of the company, but the market for meeting it does. Under New York City’s Local Law 144, a five-person company using an AI hiring tool faces the same independent bias audit requirements as a multinational. Audits cost between $5,000 and $50,000. Delve, the company that was supposed to make that process faster and cheaper for exactly those businesses, was allegedly producing reports that the investigation found to be nearly identical across hundreds of clients. The formal burden can fall heavily on even the smallest employers; the vendor they trusted to meet it may not have been meeting it for anyone.

Taken together, these developments point in a specific direction. More documentation requirements. More frameworks. More audit obligations. And, at the same time, a growing market for making compliance as thin, fast, and cheap as possible. The question for the next year of AI governance is whether the records being produced are independent, reviewable, and tied to actual system behavior. Or whether the industry is building an accountability system that generates paperwork and nothing else.

Ethan Ward

Award-winning journalist and product strategist focused on AI governance, algorithmic accountability, and responsible technology. AI Policy Certificate (Center for AI and Digital Policy). Master of Public Diplomacy (University of Southern California). MSc in Human-Computer Interaction (University College Dublin). His work has appeared in USA Today, NPR, Slate, Fast Company, and PBS SoCal. Founding editor of INHERITANCE. Founder, HEATDRAWN.

https://iamethanward.com