AI Companies Claim Free Speech to Block Anti-Discrimination Laws
On April 10, Elon Musk’s xAI filed a federal lawsuit against Colorado to block the state’s new AI anti-discrimination law before it takes effect on June 30. The law, SB 24-205, says that companies building AI systems used in hiring, housing, healthcare, education, and lending have to take reasonable steps to prevent those systems from discriminating. xAI says the law violates the First Amendment.
The company’s complaint argues that the law would force its chatbot Grok to “abandon its disinterested pursuit of truth and instead promote the State’s ideological views on various matters, racial justice in particular.” xAI’s legal team calls the law an attempt to “embed the State’s preferred views into the very fabric of AI systems.” In plain terms, xAI is arguing that preventing AI discrimination is the same thing as government censorship.
The same week, OpenAI testified in favor of an Illinois bill that would shield AI companies from lawsuits over catastrophic harm, so long as they did not cause the harm intentionally and published safety reports. The bill, SB 3444, defines “critical harms” as the death or serious injury of 100 or more people, or at least $1 billion in property damage. It applies to any AI system trained using more than $100 million worth of computing power, a threshold that would likely cover every major lab building the biggest models.
These are two different legal strategies moving in the same direction. xAI is arguing that regulating what AI systems say violates the Constitution. OpenAI is pushing for a law that raises the bar so high that most people harmed by AI would never clear it. Both efforts would make it harder to hold AI companies accountable.
The First Amendment argument xAI is making is not new. Corporations have been using constitutional rights originally written for people to fight off regulation for more than a century. UCLA law professor Adam Winkler documented the pattern in We the Corporations. Between 1868, when the Fourteenth Amendment was ratified to protect formerly enslaved people, and 1912, the Supreme Court heard 28 Fourteenth Amendment cases involving Black Americans and 312 involving corporations. The amendment written to guarantee equal protection after the Civil War became the tool companies used to strike down wage laws, labor protections, and business regulations. During that same period, the Court upheld Jim Crow in Plessy v. Ferguson.
The Fourteenth Amendment was not the only constitutional provision corporations claimed for themselves. In 2010, Citizens United v. FEC gave corporations the same First Amendment political spending rights as individuals. In 2023, in 303 Creative v. Elenis, the Supreme Court ruled 6-3 that a Colorado web designer had a First Amendment right to refuse to create websites for same-sex weddings, overriding Colorado’s anti-discrimination law. Justice Sotomayor wrote in dissent that it was the first time in the Court’s history that a business open to the public received a constitutional right to refuse to serve members of a protected class.
Now xAI is borrowing the same playbook for AI. The company is framing a law that says “don’t let your AI system discriminate in healthcare and hiring” as a government-imposed ideology. And it is naming racial justice specifically as the ideology it objects to.
This is happening alongside a federal push in the same direction. In July 2025, President Trump signed an executive order to prevent “woke AI” in the federal government, which bans federal agencies from buying AI models that incorporate diversity, equity, and inclusion principles. A follow-up executive order in December 2025 went further, directing the FTC to investigate whether state laws requiring AI anti-discrimination safeguards actually violate federal consumer protection law.
David Sacks, who leads the administration’s AI policy effort, praised the lawsuit on X, calling the Colorado law “Woke AI” and writing that it “teaches AI models to lie.”
The legal question of whether AI model outputs count as protected speech is genuinely unsettled. Courts have not ruled on it directly. But the timing matters. The industry is not making this argument in the abstract. It is making it in response to the first state laws that try to prevent AI systems from discriminating against people in the areas where discrimination does the most damage: who gets hired, who gets housing, and who gets healthcare.
Last month, I wrote about how courts in Los Angeles, New Mexico, and Amsterdam ruled that platform harm is a product design problem, not a user behavior problem. Those verdicts moved liability toward the companies that build the systems. The First Amendment strategy is the emerging response. If AI outputs are speech, then regulating those outputs is censorship, and the product liability approach those courts are building gets much harder to sustain.
In my research on algorithmic bias on TikTok, two-thirds of Black LGBTQ+ creators I surveyed believed their content was being unfairly suppressed by the platform. They could see it happening, and the platform gave them no way to challenge it. That is not the same legal context as hiring or housing. But it shows a broader pattern: People can experience algorithmic harm clearly while having almost no visibility into how those systems classify, rank, or exclude them.
The Colorado law xAI is suing to block was written to address exactly that kind of gap, applied to the decisions that shape people’s access to jobs, housing, and medical care. If the First Amendment challenge succeeds, the people who built the systems will have a constitutional shield. The people affected by them will have very little to push back with.
The Fourteenth Amendment was written for formerly enslaved people. Corporations used it to gut labor laws while Jim Crow stood. The First Amendment was written to protect individual conscience. It is now being used to protect AI companies from having to answer for how their products treat people. The constitutional text stays the same. Who benefits from it keeps changing.