OpenAI’s Sora Launched With a Copyright Bet. It Backfired.
Last fall, OpenAI launched Sora 2, a video generator that could produce realistic clips from text prompts. The Wall Street Journal reported the week before that OpenAI had told talent agencies and studios that copyrighted characters would appear in Sora’s outputs by default. If you owned the copyright, it was your job to fill out a form to get your work removed. OpenAI notified major studios and agencies directly, but independent creators and smaller rights holders were not part of that outreach.
The backlash came fast. WME opted all of its clients out within days. CAA called Sora a “significant risk” to its clients and accused OpenAI of dismissing creators’ rights. Japan’s government formally asked OpenAI to stop infringing anime and manga copyrights after users generated clips of characters from Dragon Ball and Pokémon. CODA, the Japanese anti-piracy group representing Studio Ghibli, Bandai Namco, and Square Enix, demanded that OpenAI stop training on their content entirely. Within four days of launch, Sam Altman promised to shift toward an opt-in model and began tightening the filters.
The reversal didn’t erase the signal OpenAI had already sent. In December 2025, Disney signed a three-year deal to bring more than 200 characters to Sora and took a $1 billion equity stake in OpenAI. The deal happened. The terms told a different story. Disney barred OpenAI from training on Disney IP. No talent likenesses, no voices. A joint steering committee oversaw character usage. Disney kept only one year of exclusivity, leaving its options open. And on the same day it signed the deal, Disney sent Google a cease-and-desist letter over AI copyright violations.
Three months later, OpenAI shut Sora down, and the Disney deal collapsed with it. The primary reason was economics. Outside estimates put daily compute costs at roughly $1 million, and Sora’s head of product had reportedly acknowledged that the numbers were “completely unsustainable.” Users dropped from a peak of about one million to fewer than 500,000. No governance strategy would have fixed that math.
But the copyright fight made a bad situation worse. The opt-out policy at launch was not a standard industry move. Other AI companies have faced copyright lawsuits over training data; OpenAI went further by making copyrighted characters appear in the product’s outputs by default and telling rights holders to chase down their own protections. That put OpenAI at odds with the creative industries it needed as partners, at a time when licensing deals could have helped offset some of the costs. The Disney deal shows those partnerships were still possible, but only on the partner’s terms and only under tight constraints.

In a recent post, I wrote about how AI safety has a business model problem. Sora is the case where the governance problem and the business model problem fed each other. In my research on algorithmic bias and platform design, I found that platforms routinely shift the cost of protection onto the people most affected by their design choices. Sora’s opt-out copyright model did the same thing at industry scale.
The lesson from Sora is not just that governance should happen before launch. It is that product design choices are governance choices, and the ones made at launch are the hardest to walk back. Altman reversed the opt-out policy in four days, but the signal OpenAI sent about how it valued creative IP outlasted the policy itself. It shaped the terms on which later partnerships were negotiated, and it narrowed the company’s options during the months when it needed every advantage it could get.