The technology boom of the late 1990s saw a dizzying number of Initial Public Offerings (IPOs) come to the market, spanning all kinds of business models. Many of these companies were start-ups and, in the absence of earnings, other growth metrics were used. With internet traffic reportedly doubling every 100 days, valuations were based on extraordinary growth and addressable-market assumptions. Capital flowed like a river.
Does anyone remember the ‘number of eyeballs’ references to companies such as Netscape? It was a heady time indeed. Much money was made and, of course, lost when the technology bubble burst and those with no clothes went to the wall.
Fast forward to 2025 and, with at least a faint echo of the past, CoreWeave has filed to IPO, with an expected valuation of approximately US$26 billion.
CoreWeave is a cloud provider that leases data centres to provide GPU compute, on a per-GPU-per-hour basis, to companies hungry to train and run their large language models (LLMs). It serves giants like Microsoft (62% of CoreWeave’s revenue) and OpenAI with overflow compute when they lack capacity of their own. Nvidia holds almost 6% of CoreWeave and forms part of the relationship with regard to the supply of its GPUs.
According to its filing, CoreWeave has more than 250,000 GPUs across 32 data centres and had revenue of US$1.9 billion in 2024 – up a colossal 737% over the prior year – with a net loss of US$863 million.
In companies that are investing for the future, losses are not a surprise, and while the amount here is not trivial, the bullish investment case on the future would suggest that profitability will be achieved in the years ahead. However, concerns around the business model are swirling and make for some grisly reading. Indeed, much of it is in plain sight in the filing documentation.
Has the bloom of the AI rose lost some of its lustre?
Make no mistake, the success or otherwise of CoreWeave’s proposed IPO is a big deal, and it may serve to turn the spotlight fully onto the falling petals of the AI narrative of unlimited spend and demand.
Previously in Margin Call we have discussed the lack of a compelling enterprise use-case, and how Microsoft’s narrative has changed over the last several months with the introduction of phrases like ‘demand-based spending’ and ‘fungibility’. As CoreWeave’s biggest customer by far, Microsoft’s intent is of paramount importance.
Keeping the positive news and the bloom alive was the announcement that OpenAI has entered a five-year deal to rent server capacity to train the LLMs behind ChatGPT. But stripping out Microsoft – whose contribution to revenue rose from 35% in 2023 to 62% in 2024 – and Nvidia reveals revenue nearer US$440 million.
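As a rough sanity check on those figures – the US$1.9 billion of 2024 revenue and the 62% Microsoft share are from the filing, while the Nvidia-related contribution is merely implied by the residual, not separately disclosed – the arithmetic runs roughly as follows:

```python
# Back-of-the-envelope check on CoreWeave's 2024 revenue concentration.
# Total revenue and Microsoft's 62% share are disclosed in the filing;
# the Nvidia-related figure is implied by the ~US$440m residual, not disclosed.
total_revenue_m = 1_900                          # 2024 revenue, US$ millions
microsoft_share = 0.62                           # Microsoft's share of revenue

microsoft_m = total_revenue_m * microsoft_share  # ~US$1,178m from Microsoft
ex_microsoft_m = total_revenue_m - microsoft_m   # ~US$722m from everyone else

ex_both_m = 440                                  # revenue ex-Microsoft and ex-Nvidia
implied_nvidia_m = ex_microsoft_m - ex_both_m    # ~US$282m implied, ~15% of total

print(round(microsoft_m), round(ex_microsoft_m), round(implied_nvidia_m))
```

On these numbers, barely a quarter of reported revenue comes from customers other than its largest shareholder-supplier and its largest client.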
That revelation paints an entirely different picture of risk to ongoing revenue and raises the distinct possibility that future success may not be quite as clearcut as one might first assume.
Long-term contracts make for good optics
Without trawling through the history, CoreWeave’s tie-up with Nvidia likely elevated the company to preferred-customer status at a time when the supply of Nvidia’s GPUs – Nvidia being CoreWeave’s sole supplier – could not match the extraordinary levels of interest from hyperscalers and other customers alike.
This had the effect of pushing companies to CoreWeave’s services for fear of a lack of compute capacity. Put another way, CoreWeave’s business model is, in part at least, the overflow. Microsoft has a US$10 billion rental agreement through 2030 and OpenAI’s approximately US$12 billion agreement also runs through the same period. So, as investors, shouldn’t we be happy? Not so fast . . .
At the end of December 2024, CoreWeave reported ‘remaining performance obligations’, or RPOs – that is, contracted but unearned revenue – of US$15.1 billion, up almost 53% over 2023. That is huge, but to collect 54% in the first 24 months, 42% over the next 24, and the balance thereafter, the clients need to stay. And the concentration risk remains enormous.
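Using the percentages in the filing, the collection schedule implied by that US$15.1 billion RPO balance can be sketched as follows (figures are approximate and depend entirely on those clients staying):

```python
# Approximate collection schedule implied by CoreWeave's RPO disclosure.
rpo_bn = 15.1                           # remaining performance obligations, US$ billions

first_24_months = rpo_bn * 0.54         # ~US$8.2bn due within the first 24 months
next_24_months = rpo_bn * 0.42          # ~US$6.3bn in months 25-48
thereafter = rpo_bn - first_24_months - next_24_months  # ~US$0.6bn balance

print(round(first_24_months, 1), round(next_24_months, 1), round(thereafter, 1))
```

In other words, some 96% of the contracted backlog is scheduled to be collected within four years – precisely the window in which the technology-obsolescence and concentration risks discussed below will play out.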
Technology obsolescence
The world of technology moves quickly and Nvidia is not holding back on its development roadmap. Indeed, the company already has names for its upcoming hardware: Blackwell is shipping at scale, with Blackwell Ultra, Vera Rubin, and Feynman to follow through to 2028. That is some pace.
With each iteration comes more speed, and each is more desirable than the previous incarnation. Whilst LLMs are rapidly converging in capability – and commoditising in the process – faster compute will still give the end user a better experience. History has shown that ‘old’ technology becomes obsolete at an alarming pace, and with it, the value of the older versions tends to decline precipitously. This is a problem if the depreciation of these assets is mismatched against their usable life. Asset write-downs must surely follow.
In its most recent earnings release, Hewlett Packard Enterprise (HPE) discussed the performance of its AI systems and server business. One notable point was the mention of a higher-than-normal inventory of AI servers due to a shift to next-generation Blackwell GPUs, implying the company had been working with prior-generation GPUs such as NVIDIA’s H100 or earlier models.
The higher inventory could be taken to mean that either demand is falling or it was less aligned with customer needs. In addition, HPE emphasised pricing challenges, with operating margins narrowing due to competitive pressures and inventory valuation issues. Of course, this could be uniquely HPE, but that seems unlikely.
It is difficult to marry a four-to-five-year contract with the pace of technological obsolescence, although it is worth noting that CoreWeave does mention the successful redeployment of prior-generation A100 GPUs into new contracts with the same or new counterparties, maintaining high utilisation rates and extending their economic life. In all, it will be tricky to match the pace of change with client demands, especially since the technology spins faster than a Buzz Lightyear boomerang!
In an ominous report, even the conservative Financial Times alerted readers to the fact that Microsoft had walked away from a deal with CoreWeave due to delivery issues.
Debt is a factor
CoreWeave’s business may be prone to increasing debt over time as the company must continuously invest heavily in technology and hardware. Indeed, the company has suggested it will require additional funding to provide for OpenAI’s services. For reference, some estimates suggest the company spent almost US$9 billion in capital expenditures in 2024.
To date, CoreWeave has managed to raise somewhere in the region of US$15 billion in equity and debt, with the latter secured against the company’s assets. These may include the GPUs themselves which, as mentioned, could present an issue given the potential for price decay.
At the end of 2024, CoreWeave had US$8 billion in debt, and lease obligations of US$2.6 billion. This implies considerable debt-servicing expense and, should the revenue model fall short, it could present servicing issues alongside the need to upscale and modernise – and may see the debt grow further. And of course, there is the ever-present need to refinance maturing debt, which will amount to approximately US$7.5 billion by the end of next year.
Is the tide coming in, or going out?
The real question for investors here is whether to participate in the IPO or not. That is a judgement call centred on quite a few moving parts that are somewhat difficult to predict and pin down. In previous articles we have discussed slowing growth, the lack of compelling use cases, technology shifts, commoditised LLMs, and smaller, more efficient models – the children to the parent. The reverberations of China’s DeepSeek ‘ta-da’ moment continue and have punctured the popular narrative of unlimited and forever spend.
To be clear, this is not a hatchet job, not one bit. We have no investment interest either pre- or post-IPO, and I hope the company enjoys huge success. It has been innovative and nimble, spotting and exploiting a gap. It has raised finance, and it has built incredibly complex technology to deliver a service to those outside the hyperscalers with a differentiated ‘by the hour’ revenue model.
But if I were in the market and privileged to be in front of management, these are the areas I would want to explore, and get to the bottom of, to make an informed decision.
How will the business evolve, and how will the technology keep pace with – or diverge from – client needs? How will the debt be serviced and brought to heel? What is the nature of contractual revenue beyond take-or-pay? What assumptions sit behind the need for ever more compute, and what drives the demand and pricing behind it? Many, many routes to take on the road to discovery, with a great deal of uncertainty along the way.
Should the CoreWeave IPO succeed, it may shine a hugely positive light onto the AI rose and give a renewed and ruby lustre to the petals.
The time to see whether CoreWeave is wearing the emperor’s clothes is not yet here, but the deal has been delayed and repriced and the tide will turn. Will it be coming in or going out? Time will tell.
Tim Chesterfield is the long-time CIO of the Perpetual Guardian Group and the founding CIO and Director of its investment management business, PG Investments. With $2.8 billion in funds under management and $8 billion in total assets under management, Perpetual Guardian Group is a leading financial services provider to New Zealanders.
Disclaimer
Information provided in this publication is not personalised and does not take into account the particular financial situation, needs or goals of any person. Professional investment advice should be taken before making an investment. The information provided in this article is not a recommendation to buy, sell, or hold any of the companies mentioned. PG Investments is not responsible for, and expressly disclaims all liability for, damages of any kind arising out of use, reference to, or reliance on any information contained within this article, and no guarantee is given that the information provided in this article is correct, complete, and up to date.
This article was originally published by the NBR.