AI Product Claims Are Failing Across Industries. The Law Already Covers It.

In 2022, a seven-inch knife passed through an Evolv Technologies security scanner installed in a school. The knife was later used in a stabbing. Evolv marketed its AI-powered screening system as able to detect all weapons while ignoring harmless personal items. The FTC later alleged the scanners missed weapons in multiple school settings and that the company’s claims about accuracy, speed, and labor savings were not supported. In the settlement, Evolv was banned from making unproven claims about its products and had to let certain schools cancel their contracts.

Evolv is not a startup running a scam out of a webinar. It is a publicly traded company that sold AI security products to schools, stadiums, and public transit systems. The Baltimore public school system signed a $5.46 million deal with Evolv while the FTC was investigating its marketing claims. The product was real. The AI was real. What wasn’t real was the marketing. And the people who paid the price were the students and staff who walked through scanners that didn’t work the way the company said they would.

The same dynamic is playing out in hiring. HireVue sold AI-powered video interviews to companies including Intuit, scoring candidates on their interview performance. In March 2025, the ACLU filed EEOC complaints alleging that the tool discriminated against a deaf Indigenous applicant who was told she needed to “practice active listening” after her AI interview. The complaint alleged the software performs worse when evaluating non-white and deaf or hard of hearing speakers.

The Workday case goes further. The company faces a nationwide class action alleging its AI screening tools rejected applicants based on race, age, and disability. A federal court has ruled that Workday itself, not just the employers using its tool, can be held liable for the outcomes, and in March 2026 a judge let the age discrimination claims move forward, reasoning that anti-discrimination law protects job applicants, not just people who already have the job. That matters: it puts the companies that build AI hiring tools, not just the companies that buy them, on the hook for what the tools do.

In healthcare, regulators are starting to ask whether AI diagnostic tools actually perform as well as their marketing suggests. The Department of Justice has said it’s willing to go after companies under the False Claims Act when AI tools are used in government-funded healthcare and the results don’t match what was promised.

In reporting on why businesses remain skeptical of AI, I kept finding the same gap. What companies claim AI can do is often broader than what their products actually deliver. What these cases make clear is that the gap is turning into legal exposure. The common thread is not the technology. It is the marketing. Someone wrote copy saying the AI could detect weapons, predict job performance, or work equally well for every user. Someone approved it. Someone published it. And in each case, the claim went further than what the product could back up.

None of this required new AI-specific laws. The FTC used Section 5, the same century-old consumer protection authority I wrote about in a recent post. The hiring cases used existing anti-discrimination laws, including Title VII, the ADA, and the Age Discrimination in Employment Act. The healthcare investigations use the False Claims Act. The legal tools already exist. What hasn’t caught up is what happens inside AI companies between when a product team says “here’s what our AI can do” and when the marketing team tells customers what it will do for them. That is where the liability lives. And right now, too many companies don’t have anyone whose job it is to see it coming.

Ethan Ward

Award-winning journalist and product strategist focused on AI governance, algorithmic accountability, and responsible technology. AI Policy Certificate (Center for AI and Digital Policy). Master of Public Diplomacy (University of Southern California). MSc in Human-Computer Interaction (University College Dublin). His work has appeared in USA Today, NPR, Slate, Fast Company, and PBS SoCal. Founding editor of INHERITANCE. Founder, HEATDRAWN.

https://iamethanward.com