SpaceX and Anthropic recently announced a deal for SpaceX to provide 100% of the capacity of the Colossus 1 datacenter to Anthropic in the very near term. This addresses Anthropic's immediate compute shortages while they wait for additional contracted capacity from Microsoft, Google & Amazon to come online. This substantial deal allowed Anthropic to immediately double usage limits on subscription plans and 10x API rate limits, and even then, it pales in comparison to the upcoming capacity on the horizon from cloud providers.

    This, when combined with other factors, is a strong sign that SpaceX likely has a severe overcapacity of compute:

    Grok 4.3 is priced very aggressively at $2.50 per 1 million tokens. This is more than 90% cheaper than GPT 5.5 and Opus 4.7, 80% cheaper than Gemini 3.1 Pro, and even cheaper than budget models like 5.4 Mini and Claude Haiku.
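Back-of-envelope, the discounts above imply rough per-million-token prices for the competing models. This is just my own arithmetic from the stated percentages, not quoted price sheets:

```python
# Infer implied competitor prices from the stated discounts.
# Grok 4.3's $2.50/1M price and the discount percentages come from the
# comparison above; the competitor prices below are derived, not quoted.
GROK_PRICE = 2.50  # $ per 1M tokens

def implied_price(grok_price: float, discount: float) -> float:
    """If Grok is `discount` cheaper, competitor price = grok / (1 - discount)."""
    return grok_price / (1 - discount)

# ">90% cheaper" than GPT 5.5 / Opus 4.7 -> they charge at least this much:
print(round(implied_price(GROK_PRICE, 0.90), 2))  # 25.0 -> >= $25 per 1M tokens
# "80% cheaper" than Gemini 3.1 Pro:
print(round(implied_price(GROK_PRICE, 0.80), 2))  # 12.5 -> ~$12.50 per 1M tokens
```

In other words, Grok would be undercutting frontier models by an order of magnitude, which is hard to square with a provider that is short on compute.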

    Additionally, Musk has reportedly required banks to purchase Grok as a condition of participating in their IPO.

    If SpaceX were capacity constrained:

    • They would not agree to lease capacity to a direct competitor if it came at the expense of their own operations. That would mean giving up market share.

    • They would not be selling their models at such a low price.

    • Grok would not need to be a condition of IPO access, because coerced usage would just get in the way of legitimate demand. Negotiations would've been more focused on fees.

    This only demonstrates that SpaceX has overcapacity, though. What about other providers?

    Google has entered new agreements with Anthropic and OpenAI for compute capacity. But if Google needed this capacity for Gemini and wanted to compete, why would they sell it off and give fuel to their direct competitors?

    The reality is that Gemini has struggled to attract real-world paying users. Google's recent growth is driven entirely by its cloud computing business serving other AI providers. There is a reason they don't report revenue from Gemini, only user numbers inflated by pre-installs and manipulated engagement.

    Narrowing this down to the largest end users: the market has been relying on absurd growth in demand from just OpenAI & Anthropic, which make up the majority of cloud compute demand at Microsoft, Amazon, and Google, who in turn buy from hardware providers.

    The issue here is how this could affect future long-term deals:

    • OpenAI has already backed out of direct datacenter investments, as their commitments have grown too large compared to growth in demand. They also have very permissive usage limits (3,000 messages a week on the $20/mo plan), which suggests they are not compute constrained like Anthropic was.

    • SpaceX (formerly X AI) has been one of the largest buyers of hardware, and is now resorting to leasing out capacity, which has greatly eased Anthropic's existing compute shortages.

    • If OpenAI ends up with excess capacity under contract that they cannot afford to pay for, they may seek to sublease/resell capacity to competitors. This would further reduce pressure on future deal-making.

    For AI demand to truly scale into all of the new capacity coming online, the industry needs a new killer application ASAP. AI has already taken over coding, so the growth has to come from elsewhere.

    We're seeing the first subtle signs of Datacenters being overbuilt
    by u/skilliard7 in r/stocks



