California and the Pentagon Have Opposite AI Safety Rules
Six weeks ago, the White House released a national AI legislative framework that pushed for federal preemption of state AI rules and leaned heavily on industry-led governance, relying on an oversight ecosystem that the federal government’s own researchers had found did not exist. Since then, the question of who governs AI in the United States has moved from policy papers to procurement.
On March 30, California Governor Gavin Newsom signed Executive Order N-5-26, directing state agencies to develop new certification standards for AI vendors within 120 days. Companies seeking California state contracts will need to show they have safeguards against illegal content, harmful bias, and civil rights violations, including unlawful surveillance. California is the world’s fourth-largest economy and home to 33 of the top 50 privately held AI companies. When a buyer that size sets terms, vendors build to meet them.
The same month, the federal government made its own terms clear, and they point in the opposite direction. The General Services Administration proposed a new procurement clause that would grant agencies an irrevocable license to use any AI system for any lawful government purpose. The clause would override commercial terms of service and explicitly bar vendors from refusing to produce outputs based on their own safety policies. Lawfare called it “governance by sledgehammer.” If a company builds safety restrictions into its AI, the federal government wants the contractual authority to override them.
What that looks like in practice arrived the same week. The Pentagon designated Anthropic a supply chain risk after the company refused to allow its AI to be used for fully autonomous weapons or mass surveillance of Americans. That label had previously been applied only to companies connected to foreign adversaries. A federal judge blocked the designation, writing that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.” The Trump administration appealed the following week.
California’s executive order creates a review process that would apply directly to a case like Anthropic’s. The order lets California disregard federal supply chain designations it deems improper and keep contracting with the affected company. The state that hosts Anthropic’s headquarters is building procurement rules that would protect companies that maintain the very safety red lines that got Anthropic penalized by the Pentagon.
Two versions of AI procurement are now operating simultaneously in the same country. The federal government is writing procurement rules that would override AI vendors’ ability to refuse requests on safety grounds. California is writing rules that require vendors to prove they can say no and mean it. One treats safety restrictions as obstacles to government operations. The other treats them as prerequisites for getting paid. Both are enforceable. Both shape which companies get access to billions in public spending. And right now, a company like Anthropic can be punished by one government for the same policies that qualify it for business with another.
This is where the AI governance fight is going to play out: not in congressional hearings or voluntary commitments, but in contract language. California already has SB 53, the nation’s first frontier AI safety law, plus more than 20 AI statutes on the books. The executive order adds procurement on top. Companies that want California’s business will need real governance infrastructure, not just statements of principles on a website. The federal government is betting that removing restrictions will attract the best AI. California is betting the opposite. The rest of the country will eventually have to decide which version of AI procurement it follows. So will every company deciding whether its safety commitments survive contract negotiation.