Corporate AI Governance Protects Companies, Not People
A new report from UNESCO and the Thomson Reuters Foundation examined how nearly 3,000 companies across 11 sectors and five regions are actually governing their use of artificial intelligence. The AI Company Data Initiative, grounded in UNESCO’s Recommendation on the Ethics of AI, collected over 100,000 data points from public disclosures and voluntary company responses. It is the largest global dataset on corporate AI governance to date. And what it found confirms something that individual stories have been suggesting for months: companies are building AI strategies designed to protect the business, not the people their systems affect.
The headline numbers are bad enough. Only 13% of companies say they align with any formal AI governance framework. Just 12.4% report having a policy to ensure a human oversees their AI systems. A full 72% conduct no AI-related impact assessments at all. But the more revealing finding is what companies choose to measure when they do assess risk. Among the 28% that conduct any form of impact assessment, the most common are data protection reviews (18%) and privacy assessments (14.5%). Those categories track legal and regulatory exposure. By contrast, only 7% conduct human rights impact assessments and just 5% conduct ethical impact assessments. Those measure whether the system harms the people it touches. Companies are investing in the first and skipping the second.
The gap shows up in how companies treat their own employees too. Only 14% have policies to mitigate AI’s negative effects on workers, and even those policies are so vague that the report’s own researchers could not determine whether they would function in practice. Only 2.3% have any internal mechanism for workers to file complaints about AI. The people most directly affected by these systems have, in the vast majority of companies, no formal channel to say something went wrong.
One common framing treats human rights and ethics assessments as compliance costs that don’t generate returns, especially for companies under pressure to show AI value quickly. But the research increasingly says otherwise. IBM found that executives who factor in ethical considerations when making AI decisions are 27% more likely to see their organization outperform on revenue growth. EY’s Global Responsible AI survey found that nearly every company surveyed had already suffered financial losses from AI incidents, averaging over $4.4 million in damages. Companies with stronger governance, however, saw fewer incidents and stronger returns. The AICDI report itself cites Thomson Reuters research showing that companies with mature AI governance frameworks were twice as likely to experience revenue growth from AI adoption. Skipping the harder governance work does not appear to save money. It defers risk that arrives later as lawsuits, operational failures, and reputational damage.
The AICDI framework deserves credit for what it represents. Before this initiative, there was no standardized, globally comparable way to measure how companies actually govern AI. Governments write frameworks. Companies publish principles. But nobody was counting whether any of it translated into operational practice. This report is the first real baseline, and baselines are what make accountability possible.
The fact that the data is grounded in UNESCO’s Recommendation on the Ethics of AI gives it a shared reference point against which both government and corporate governance can be measured. It is the same standard used in CAIDP’s AI and Democratic Values Index to evaluate national AI policy. That makes cross-referencing public commitments against corporate practice far easier than it was before. When a government says it is aligned with UNESCO’s AI ethics framework, there is now a dataset that shows whether the companies operating inside that country are doing the same.
But a baseline only matters if someone acts on it. The report found that 43.7% of companies have an AI strategy. Only 27% of those also commit to a governance framework. That means most corporate AI strategies are oriented toward adoption and value capture, with governance treated as something to layer on later. The 7% figure on human rights assessments sits next to the 18% figure on data protection, and the distance between those two numbers tells you whose interests corporate AI governance is currently designed to serve. The question now is whether those gaps narrow, or whether the first global measurement of corporate AI governance becomes just another report that companies cite in their principles without changing what they actually do.