The FTC Is Using a 1914 Law to Police AI. It’s Working.

In March, the Federal Trade Commission settled its case against Air AI, banning the company and its owners from marketing business opportunities after the agency alleged they had scammed small businesses and entrepreneurs with false claims about an AI-powered customer service tool. Air AI had promised its product could replace human customer service representatives and generate significant profits. According to the FTC, most buyers never earned back what they paid, some lost as much as $250,000, and the promised refund guarantees were never honored. The proposed settlement included an $18 million penalty.

Air AI was not an outlier. Since launching Operation AI Comply in September 2024, the FTC has brought a series of AI-washing cases targeting companies that overstated what their products could do or promised returns that never materialized. Ascend Ecom, the agency alleged, defrauded consumers of more than $25 million with claims that its AI tools could generate passive income through online storefronts. Growth Cave marketed “AI software” it said would automate nearly all the work of running an online course; according to the FTC, the tool required users to do most of the work manually. The SEC has been doing the same thing, fining investment advisers Delphia and Global Predictions for claiming AI-driven investment capabilities they did not actually have.

What connects these cases is not the enforcement alone. It is the legal tools being used to bring them. The FTC’s authority here comes from Section 5 of the FTC Act, the part of the law that says you cannot lie to people about what you are selling. The statute was written in 1914. The SEC relied on the Investment Advisers Act of 1940 and its Marketing Rule. None of these are AI-specific laws. They are general consumer and investor protection rules, designed for a world that had not yet imagined automated stock trading, let alone AI chatbots that promise to run your business. And right now, they are doing more concrete enforcement work than any AI-specific law in the United States.

That matters because the companies getting caught are not the ones dominating the AI governance conversation. Nobody at an AI safety summit is talking about Air AI or Growth Cave. The policy debate focuses on the biggest, most powerful AI systems, existential risk, and the handful of companies building them. But the people absorbing the most direct financial harm from AI right now are small business owners who believed a marketing pitch, spent tens of thousands of dollars they could not afford, and discovered the product did not work as advertised. In every case the FTC has brought, the alleged deception started in the same place: marketing materials, sales webinars, and advertising copy that made specific promises about what an AI product could deliver. The harm reached further still. The end customers of those businesses, people who thought they were getting real support, were also bearing the consequences of a product the FTC says did not work as promised.

The scale of that harm is part of why the enforcement keeps going. In reporting on how companies use AI with customers, I have repeatedly found that the gap between what companies claim and what their products actually do is widest at the bottom of the market. The companies with the biggest legal teams and the most resources to get it right are not the ones making unsubstantiated earnings claims in webinars. The small operators are. And their customers, often individuals and small businesses who cannot afford to lose that money, are the ones left paying for it.

The FTC’s enforcement has shown real continuity across administrations. Operation AI Comply launched under Chair Lina Khan in September 2024. Under Chairman Andrew Ferguson, the agency has continued bringing and closing AI-deception cases. In the business-opportunity cases specifically, the orders have used similar language, banning companies from claiming their AI products will make customers money unless they can back those claims up. That pattern suggests AI-washing enforcement is becoming a fixture at the FTC, not something that changes when the administration does.

But the current cases work because the fraud is obvious. A company says its AI will make you money. It does not. That is a clean Section 5 violation. The harder question is what happens when the claims get subtler: when a product does use real AI but overstates its accuracy, when it works for some users and fails for others, when the marketing says “AI-powered” and the product runs a basic rules engine underneath. A century-old consumer protection statute can catch outright deception. Whether it can catch the subtler version is the gap no existing law has closed.

Ethan Ward

Award-winning journalist and product strategist focused on AI governance, algorithmic accountability, and responsible technology. AI Policy Certificate (Center for AI and Digital Policy). Master of Public Diplomacy (University of Southern California). MSc in Human-Computer Interaction (University College Dublin). His work has appeared in USA Today, NPR, Slate, Fast Company, and PBS SoCal. Founding editor of INHERITANCE. Founder, HEATDRAWN.

https://iamethanward.com