Courts Decided: Platform Harm Is a Design Problem
Trust and safety is no longer just a moderation question. In the span of one week, juries in Los Angeles and New Mexico and a court in Amsterdam all reached the same conclusion: when platforms and AI systems cause harm, responsibility can rest with the people who designed them, not only the users who posted the content or typed the prompts.
A Los Angeles jury found Meta and YouTube negligent for building products that harmed a young user’s mental health, awarding $6 million in damages. The plaintiff’s lawyers did not center their case on what users posted. They focused on the architecture of the product itself: infinite scroll, autoplay, algorithmic recommendations, cosmetic filters, and push alerts. Internal documents presented at trial showed Meta employees discussing the addictive effects of these features. One memo revealed that 11-year-olds were four times as likely to keep returning to Instagram as to competing apps, despite the platform requiring users to be at least 13. The case sidestepped Section 230, the federal provision shielding internet companies from liability for user-generated content, by treating the harm as a product defect rather than a content issue.
The day before, a New Mexico jury ordered Meta to pay $375 million for failing to protect children from online predators. And in Amsterdam, a court ordered xAI to stop generating nonconsensual sexual images through its Grok AI tool, imposing fines of €100,000 per day. xAI’s lawyers argued that the company should not be penalized for what users do with its tools. The court rejected that argument. During a March 9 hearing, the Dutch foundation Offlimits demonstrated that Grok could still produce a sexualized video of a real person from a single uploaded photograph — even after xAI claimed it had implemented safeguards to prevent exactly that. The court found those safeguards inadequate.
The comparison to 1990s tobacco litigation is everywhere right now. But that analogy only goes so far. Tobacco companies were held liable for concealing known risks. These cases go further: they suggest that designing a product to maximize engagement at the expense of user wellbeing may itself be negligence, even without concealment. Eight more individual cases are scheduled in Los Angeles this year, with federal cases brought by states and school districts heading to jury trials this summer. The legal theory that prevailed last week, product design rather than user content, is about to face broader testing.
I heard a version of this argument years before it reached a courtroom. In research I conducted on algorithmic bias on TikTok, Black LGBTQ+ creators described developing adaptive strategies — coded language, strategic hashtags, careful posting times — not because of what other users were doing, but because of how the platform’s design determined what got seen and what got buried. One creator told me certain topics were “too Black for the algorithm.” Another said it felt like they were navigating the system entirely on their own, without clear rules or guidance. What they were describing was not a content problem. It was a design accountability problem.
For years, the people who understood platform design as a source of harm were not regulators or lawyers. They were the users building survival strategies just to remain visible. The courts did not invent that insight last week. They gave it a legal form. The question now is whether the companies that built these systems will recognize what a jury room full of ordinary people already did: when the product is the problem, accountability starts with the product.