The White House AI Framework Trusts Oversight That Doesn’t Exist
On Friday, the White House released a national AI legislative framework asking Congress to establish a single federal policy for artificial intelligence — and to override state AI laws in the process. The four-page document opposes creating any new federal regulator and instead calls for existing agencies and voluntary industry standards to serve as the primary guardrails for a technology already shaping who gets hired, who receives government services, and who gets flagged by law enforcement.
The framework doesn’t establish categories for which AI systems pose the greatest risks. It doesn’t require companies to disclose how their systems make decisions or submit to independent audits. It doesn’t create any new authority to enforce compliance. What it does propose is shielding AI developers from certain legal liability and barring states from passing laws that regulate how AI systems are built — while preserving narrow state authority over areas like child safety and fraud.
The core issue isn’t that the framework takes a hands-off approach. It’s that it relies on an oversight ecosystem that the federal government’s own researchers just confirmed hasn’t been built yet.
In March, the National Institute of Standards and Technology — the agency responsible for developing the country’s technical standards — released a report on what happens after AI systems are deployed into the real world. The conclusion was blunt: the methods, standards, and shared vocabulary needed to monitor AI systems in practice are still in their earliest stages. Practitioners told NIST researchers they lack basic guidance on what to track, how to track it, and what counts as an incident worth reporting. The field hasn’t agreed on the fundamentals.
The White House framework doesn’t mention this report. But the contradiction is hard to miss. The administration wants Congress to preempt state laws — meaning override them with a federal standard — at the exact moment when the federal government’s own standards body is saying that standard doesn’t exist. States like Colorado, California, Utah, and Texas have already passed AI laws addressing transparency, consumer protection, and algorithmic accountability. Those are currently the only enforceable rules on the books. The framework would clear the board without putting anything in their place.
This pattern is familiar to me. I reported on a predictive AI system in Los Angeles County that analyzes over 580 data points from hospitals, jails, and social services to identify people at risk of becoming homeless before they reach crisis. The system was 3.5 times more accurate at identifying risk than traditional methods — but it still missed 62 percent of people who eventually lost their housing. Experts raised concerns about how personal data was being shared across agencies, whether people understood their information was being used, and the lack of long-term outcome data. The county’s formal evaluation won’t be complete until 2027. It’s a system doing real good in the gap between what AI can deliver and what oversight can verify — and that gap is exactly what the White House framework proposes scaling nationwide.
Congress has already turned down preemption twice this session. It was stripped from the budget reconciliation bill and dropped from the defense authorization bill. More than 50 Republican state legislators from 22 states wrote to the administration in March pushing back against federal pressure to abandon state AI laws. Meanwhile, AI companies and industry executives spent at least $83 million on federal elections last year, and the sector’s largest political action committee has raised over $125 million to support candidates who oppose stricter regulation ahead of the midterms.
The administration is asking the country to trust that the AI industry will police itself responsibly. Its own researchers just published evidence that the tools for that self-policing haven’t been developed. That’s not a light-touch approach to AI governance. It’s a no-touch approach dressed in policy language.