Who are CoreWeave, the cloud provider that just used Nvidia chips to raise $2.3 billion?


Cloud specialist uses highly sought-after AI hardware as collateral to secure a loan and continue its rapid growth

Founded in 2017 as an Ethereum mining company, New York-based CoreWeave has profited massively from the boom in generative AI seen in 2023.

A pivot in 2019 by its co-founders to building specialised cloud infrastructure has more than paid off. Chief strategy officer Brannin McBee said in a recent interview with VentureBeat that the company made $30 million in revenue in 2022 and is on track to book $500 million in business this year.

Graphics Processing Units (GPUs), such as the ones CoreWeave used to mine crypto in its earlier days, are now fundamental to powering the compute-intensive workloads required by generative AI.

That growth shows no sign of slowing down, with nearly $2 billion already contracted for next year, according to McBee.

Customers include Microsoft, which earlier this year was reported to have signed a multi-year deal worth billions of dollars to ensure ChatGPT has enough compute power going forward.

Following Microsoft’s $10bn investment in ChatGPT maker OpenAI at the start of this year, the chatbot primarily runs on Microsoft’s own Azure cloud platform.

The boom in demand for its computing power has helped CoreWeave raise $221 million in an April 2023 Series B funding round and a further $200 million in May.

Now, CoreWeave has secured a $2.3 billion debt facility led by Magnetar Capital and Blackstone. Other lenders in the facility include Coatue, DigitalBridge, BlackRock, PIMCO and Carlyle.

The Nvidia H100 chips used as collateral have been in extraordinarily high demand, as they are the most powerful chips available for AI computing.

"We negotiated with them to find a schedule for how much collateral to go into it, what the depreciation schedule was going to be versus the payoff schedule," said Michael Intrator, chief executive at CoreWeave. "For us to go out and to borrow money against the asset base is a very cost-effective way to access the debt markets."

Speaking on the debt financing, Jasvinder Khaira, a Blackstone senior managing director, said: "The soaring computing demand from generative AI will require significant investment in specialized GPU cloud infrastructure – where CoreWeave is a clear leader in powering innovation."

David Snyderman, CIO and managing partner at Magnetar Capital, added: "As AI becomes increasingly integrated into businesses and society at large, CoreWeave is well equipped to meet the world's increasing need for high performance compute and serve as a value-added provider to each of its customers."

CoreWeave found itself in the fortunate position of possessing a significant number of the chips, thanks in part to Nvidia’s investment in the April Series B round.

Nvidia is also keen to stop the limited supply of its chips ending up with larger cloud providers such as AWS, which are looking to develop their own chips and reduce their reliance on Nvidia.

“It certainly isn’t a disadvantage to not be building our own chips,” McBee said in his interview. “I would imagine that that certainly helps us in our constant effort to get more GPUs from Nvidia at the expense of our peers.”

CoreWeave claims on its website it offers “unparalleled access to a broad range of compute solutions that are up to 35x faster and 80% less expensive than legacy cloud providers.”

As for what the new money will be spent on, CoreWeave plans to acquire even more GPUs, invest in data centres, such as the $1.6 billion location it announced in Plano, Texas, last week, and hire more staff.

CoreWeave’s partnership with Nvidia has led to it building the world’s fastest AI supercomputer, according to an industry-standard benchmark test called MLPerf.

“CoreWeave's publicly available supercomputing infrastructure trained the new MLPerf GPT-3 175B large language model (LLM) in under 11 minutes, which was more than 29x faster than the next best competitor and 4x larger than the next best competitor,” the company said in a press release.
