SpaceX's Compounding Hardware Advantage
Research Rundown #184, plus a new report on Cartesia
On Wednesday, Anthropic announced that it has signed an agreement with SpaceX to use the full capacity of SpaceX’s massive compute cluster, Colossus 1, which offers 300 megawatts and over 220K Nvidia chips. Commenting on the deal, Elon said he was “ok leasing Colossus 1 to Anthropic” because SpaceX had already migrated its training workloads to Colossus 2. The same day, Elon posted that xAI would be dissolved as a separate company, consolidating AI operations under the SpaceX umbrella as “SpaceXAI.”
As part of the agreement, Anthropic also said it was interested in partnering with SpaceX to “develop multiple gigawatts of orbital AI compute capacity”, adding to a compute portfolio that already includes a 5 GW agreement with Amazon, a 5 GW agreement with Google and Broadcom, and $30 billion of Azure capacity from Microsoft and Nvidia. This, along with Elon’s $55 billion AI chip manufacturing Terafab project, developed in partnership with Intel, positions SpaceX to become a major provider of compute. It may also return SpaceX and Elon to their strengths, which writer Derek Thompson described this week as being “world-leading at compressing money, resources, and time to make ‘known/hard’ things at scale”.
Peter Thiel’s Ocean Data Center Bet
On Monday, it was reported that Peter Thiel was leading a $140 million Series B in an Oregon-based startup called Panthalassa, a name that means “all sea” in Greek and refers to the vast superocean that covered 70% of the Earth’s surface 250 million years ago. The company has spent years developing ocean energy technology that uses waves to power AI chips.
Each of the wave-riding “nodes” the company will produce is an 85-meter-long “lollipop” structure topped by a hollow, air-filled sphere that keeps the node afloat. These nodes are essentially floating data centers running on Nvidia chips, cooled for free by seawater, that beam inference tokens to land via LEO satellites. The company is planning a commercial rollout in 2027. If it succeeds at scale, it will simultaneously address three of the biggest challenges associated with the data center buildout: where to find the space for data centers, how to cool them as economically as possible, and how to supply them with energy.
Infinite Context, Infinite Possibilities?
Also this week, Anthropic held its second annual developer conference at the Midway in San Francisco. Among the photos, videos, and other artifacts that emerged were reports that Anthropic predicted future model architecture would involve “infinite” context windows. In the 24 hours after Anthropic mentioned the topic on Wednesday, a proliferation of tweets, YouTube videos, live blogs, and Reddit posts popped up discussing how Anthropic might enable longer context windows and the implications for both customers and the company.
It is worth clarifying that what Anthropic employees actually said was “context windows that feel infinite” (not that they would actually have no limits). Users have speculated that memory advancements in future models will build on findings from Google Research, which published two papers in December 2025 on methods for memorizing input data at test time and retaining memory. As of May 2026, Anthropic, OpenAI, and Google all support context windows of 1 million tokens.
Latest Research
Cartesia is a voice AI company that recently raised a $100 million Series B. See our new report here.
This week, we published a report on the buildout of a shadow grid meant to power AI data centers. See the full report here.
What We’re Reading
Anthropic plans to invest $200 million in a new venture with PE firms (Blackstone, Hellman & Friedman, General Atlantic) to sell Claude into their portfolio companies on a Palantir-style “forward-deployed engineer” model.
Similarly, OpenAI finalized a $10 billion joint venture called “The Deployment Company” with 19 investors (including TPG, Brookfield, Bain Capital, Advent, Dragoneer, SoftBank) and a reported 17.5% guaranteed minimum return for investors.
Anthropic has committed to spending $200 billion with Google Cloud over five years (accounting for >40% of Google’s disclosed cloud revenue backlog) on the back of Google’s $40 billion equity investment in Anthropic and the multi-gigawatt Google TPU/Broadcom deal coming online from 2027.
Meta is reportedly building “Hatch,” a consumer version of OpenClaw, to be tested internally against closed sandboxes mimicking Reddit/Etsy/DoorDash by the end of June 2026.
OpenAI apparently discussed spinning out its robotics and consumer-hardware divisions in late 2025 but shelved the plan after concluding that the spinouts would still consolidate on OpenAI’s balance sheet, thereby neutralizing the strategic point ahead of an IPO targeting a possible $1 trillion valuation.
Anthropic released 10 financial-services agent templates (pitch builder, KYC screener, valuation reviewer, GL reconciler, etc.) running on Claude Opus 4.7 alongside Microsoft 365 add-ins for Excel/PowerPoint/Word and a Moody’s-as-native-app integration covering 600M+ company records.
Span partnered with Nvidia and PulteGroup (America’s third-largest homebuilder) to mount XFRA “fractional data center” nodes, each containing 16 liquid-cooled Nvidia Blackwell RTX Pro 6000 GPUs and 3TB of RAM, onto new-construction homes, tapping the ~60% of unused electrical capacity in the average US home.
Astranis raised a $300 million Series E at a $2.8 billion valuation, plus a $155 million Trinity Capital delayed-draw credit facility (bringing total capital to $1.2 billion+), to ramp up MicroGEO satellite production for a backlog that includes Oman, Taiwan, Space Force’s Protected Tactical SATCOM-Global, and Resilient GPS.
At Contrary Research, we’ve built the best starting place to understand private tech companies. If you're interested in researching and writing about tech, apply here for our Research Fellowship!