OpenAI Is Paying Workers to Map Their Own Replacement
OpenAI is running a project called Stagecraft. Through a data-labeling startup called Handshake AI, the company is paying between 3,000 and 4,000 freelancers at least $50 an hour to build training data for ChatGPT. These are not entry-level taskers sorting images into categories. They are commercial pilots, pharmacists, HR specialists, and plant scientists. Their job is to create detailed simulations of their own work: building personas, mapping workflows, and producing the kind of task-by-task documentation that would let an AI system learn how to do what they do. One contractor told Business Insider what everyone involved already understood: “We all were aware that we were basically training AI to replace us.”
That contractor kept working and collected the check. There was no grievance process, no transparency mechanism, and nothing that would let a worker push back on the terms of their own displacement. The project’s training manual says the data will help “map economically relevant tasks and evaluate the model’s capabilities.” Put plainly: which jobs can AI take over, and how soon?
Stagecraft landed in a week that makes the pattern impossible to miss. On March 31, Oracle began laying off up to 30,000 employees, roughly 18% of its global workforce, to free up an estimated $8 to $10 billion for AI data center construction. Oracle’s net income jumped 95% last quarter, but the company has taken on $58 billion in new debt to fund its AI buildout and needed to cut labor costs to close the gap. The business was not failing. The workers were just worth less to leadership than the infrastructure. That same week, Block CEO Jack Dorsey published an essay arguing AI should replace middle management entirely, weeks after cutting 4,000 workers while reporting record gross profit. Salesforce froze merit raises for anyone at director level and above.
And at Meta, a former senior director filed an age discrimination lawsuit alleging that workers over 40 were 1.5 times more likely to be cut during the company’s 2025 layoffs, with those over 50 at 2.5 times the risk. According to the complaint, performance ratings were allegedly manipulated to classify older employees as “lowest performers,” turning the evaluation system into a vehicle for bias. When I reported on AI’s growing role in HR departments last year, only 7% of companies were using AI-driven systems for performance-related decisions, largely because of bias concerns. Meta’s lawsuit suggests those concerns were well-founded.
One company catalogs which jobs to automate. Another cuts workers to fund the infrastructure. Another rewrites its org chart around AI. Another gets sued because the system it used to decide who goes may have baked in discrimination from the start.
Taken together, these developments land with a public that already sees what is coming. A Quinnipiac University poll released March 30 found that 70% of Americans now expect AI to reduce job opportunities, up from 56% a year ago. Only 5% believe AI is being developed by people who represent their interests. The numbers split along income lines: 52% of people earning over $200,000 a year say AI does more good than harm, while 60% of those earning under $50,000 say the opposite. The industry treats this as a messaging problem. But people are reading the situation accurately. They can see who benefits and who bears the cost.
In interviews I conducted for my research on algorithmic bias on TikTok, creators described a version of this same dynamic. Two-thirds believed their content was being unfairly suppressed by the platform. They could see it happening, and they developed workaround strategies because TikTok offered them no transparency and no mechanism to challenge the outcomes. The governance structure is the same one facing Stagecraft contractors. You can see what the system is doing, you are participating in it, and you have no power to contest how your contribution gets used. Nothing in current labor policy or AI regulation gives workers leverage over how their expertise is extracted once they hand it over.
Oracle cut 30,000 jobs to build data centers. Block cut 4,000 to restructure around AI. Salesforce is compressing pay across its senior ranks. The AI industry is spending hundreds of billions of dollars building systems designed to replace the people who currently earn wages, buy things, and participate in the economy those systems are supposed to serve. If you eliminate the workers, who is left to be the customer?