The AI Company That Isn’t an AI Company

In a single week this April, a company most people outside the industry have never heard of locked in new deals with three of the most prominent AI buyers in the world. On April 9, CoreWeave announced a $21 billion expansion of its existing Meta contract, bringing Meta’s total committed spend to roughly $35 billion through 2032. On April 10, Anthropic signed a multi-year production agreement with CoreWeave for its Claude models. The value was not disclosed. On April 15, the trading firm Jane Street committed $6 billion to CoreWeave’s platform and separately invested $1 billion in CoreWeave equity.

CoreWeave’s revenue backlog is now nearly $67 billion. Its 2025 revenue came in at $5.1 billion, up 168% year over year. A company that was mining cryptocurrency five years ago is now, by disclosed contract value, one of the largest infrastructure providers to advanced AI.

Most of the coverage treated the week as a stock or deal story. The structural story matters more: the physical infrastructure where the biggest AI systems are trained and run is increasingly owned by companies that are not the ones developing the models. This split is not new in kind. Anthropic has run on Amazon Web Services since 2023, and OpenAI runs on Microsoft Azure. What is new is the scale and dedication of the arrangements. Neoclouds, as analysts have taken to calling these specialized providers, are not standard cloud providers running AI workloads alongside everything else. They are custom-built infrastructure partners on multi-year contracts, with hardware configurations, facility siting, and operational practices designed around a small number of customers.

The rules being written to govern AI have not caught up to this structure. AI governance instruments use different vocabularies and carry different legal force. The EU AI Act is binding law. The NIST AI Risk Management Framework is voluntary guidance. State AI laws vary widely in scope and enforceability. But across them, the core regulated actor is usually still the model developer or system provider, not a specialized infrastructure company running the physical environment. A dedicated infrastructure partner does not cleanly fit the provider or deployer categories most of these instruments use. That was not a problem when most AI ran on general-purpose hyperscaler infrastructure. It is a problem now that tens of billions of dollars in contracted AI capacity sit with a single specialized provider.

The gap is not that no law applies to infrastructure providers. Plenty of existing law does, including environmental, labor, and contract law. But the AI governance conversation still treats the model developer as the main actor responsible for what advanced AI systems do, even when operational decisions that shape those systems now sit with specialized infrastructure firms that do not fit cleanly inside that frame.

CoreWeave is the largest of the neoclouds; Lambda, Crusoe, Nebius, and FluidStack sit in the same bucket. They are purpose-built for AI workloads, heavy on Nvidia GPUs, light on the general-purpose cloud services that define AWS, Azure, and Google Cloud. Between CoreWeave’s $66.8 billion backlog, Anthropic’s $50 billion commitment to FluidStack, and Nvidia’s equity positions across multiple neoclouds, the category is emerging as a core infrastructure layer of advanced AI.

In practice, this means Anthropic’s Claude runs on servers owned by CoreWeave. Meta’s inference workloads run there too. The companies making the models are not the companies running the hardware. Meta is simultaneously building out its own data centers, with planned 2026 capex of $115 billion to $135 billion, buying capacity from CoreWeave, and signing a separate $27 billion deal with Nebius. Anthropic is doing similar sourcing across multiple neocloud providers. The biggest AI labs are splitting their compute across specialized vendors by design, not consolidating it.

If an AI system causes serious harm, who answers for it? The model developer has to account for the training data, the capabilities, and the decisions about how the system gets used. But many of the questions raised when something goes wrong turn on physical conditions the model developer does not control. Power sourcing at the data center. Security of the hardware. Response times when something fails. Data center siting and operations. These are not only AI questions. Data centers raised most of them long before AI existed. What changes in the neocloud setting is how tightly those operational choices are tied to a small number of named AI customers. The usual line between “data center operator” and “AI provider” stops being useful when a company has built custom infrastructure for one AI lab’s workloads.

There is a parallel worth naming. In general-purpose cloud computing, the hyperscalers spent years absorbing accountability for what their infrastructure did. AWS is not just a server rental company. Over time, cloud providers took on defined contractual and compliance responsibilities around uptime, security, and the division of duties in shared-responsibility models. That accountability was built through specific legal cases, enterprise compliance requirements, and the evolution of service agreements. Nothing comparable yet exists for neoclouds. The path to building it may look different from the AWS path because neocloud customers are themselves large, sophisticated AI labs with their own compliance functions, not general enterprise buyers. But the April 2026 deals create the pressure to figure it out. When a company holds billions in contracted commitments from the biggest AI labs, the question of what it is responsible for stops being abstract.

CoreWeave itself has been candid about where its pitch is going. The company describes its offering as a platform, not GPU rental, with storage, software, and stack licensing alongside the raw compute. H100 rental rates are down 60 to 75% from peak, according to industry analysts, and selling compute alone is a commodity business. The strategic question is whether CoreWeave becomes a dedicated infrastructure partner for the biggest AI labs or gets treated as a capacity wholesaler. A dedicated partner is a company whose decisions shape how AI actually gets delivered; a wholesaler is a vendor whose decisions do not. Who is responsible for what depends on which role CoreWeave ends up playing.

For governance leads at Anthropic and Meta, the implication cuts the other way. Anthropic’s public commitments on responsible AI, including its Long-Term Benefit Trust and its Responsible Scaling Policy, are credible only if Anthropic can actually keep them across the environments its models run in. To the extent that control over those environments is now distributed across infrastructure partners, those partners become part of the governance picture too. Meta’s AI commitments face the same question at a larger scale.

The thing to watch is whether governance catches up before an incident forces the question. If an AI system causes serious harm in the next two years, and the investigation finds the model developer pointing at infrastructure conditions while the infrastructure provider points at model design, the existing vocabulary does not offer a clean way to sort out who is responsible for what. That gap existed before April 2026. CoreWeave’s week just made it harder to ignore.

Ethan Ward

Award-winning journalist and product strategist focused on AI governance, algorithmic accountability, and responsible technology. AI Policy Certificate (Center for AI and Digital Policy). Master of Public Diplomacy (University of Southern California). MSc in Human-Computer Interaction (University College Dublin). His work has appeared in USA Today, NPR, Slate, Fast Company, and PBS SoCal. Founding editor of INHERITANCE. Founder, HEATDRAWN.

https://iamethanward.com