Hi friends,
We haven’t had too many tech IPOs so far this year, but chip maker Cerebras recently filed its S-1. This week I’ll go into some of the main takeaways from the filing, and what it highlights about the chip business and the current state of play in AI.
I. Cerebras Overview
Cerebras is a chip maker and one of the many upstarts vying to compete with NVIDIA for AI workloads across both inference and training.
Cerebras builds both hardware and software solutions that aim to make AI faster/easier/cheaper to use, centered around their core offering, the Wafer-Scale Engine, which is essentially an extremely large chip, 57 times larger than commercially available GPUs.
Cerebras, like most other chip makers today, operates a fabless model and uses TSMC to manufacture their processors.
II. The Cerebras Product Suite
Cerebras believes it has built the world’s fastest commercially available AI training and inference solution. The easiest way to think about Cerebras’ product suite is that they offer systems at different levels of abstraction depending on the needs of the customer.
Cerebras Wafer Scale Engine (WSE) is their core product: a giant chip, essentially their alternative to a GPU, that is >50x bigger than one.
Cerebras System (CS) is an AI computer system that houses and powers/cools the chip and can be integrated into existing data centers.
Cerebras AI Supercomputer offers a streamlined way to connect up to 2,048 Cerebras Systems for use cases that require that level of compute.
Cerebras Software Platform (CSoft) is their proprietary software platform that integrates with frameworks like PyTorch. CSoft’s graph compiler automatically maps model operations to the WSE and doesn’t require low-level programming in hardware-specific languages (see the first sketch after this list).
Cerebras Inference Stack / Cloud is the end-to-end inference serving stack that allows models to be run as a service on Cerebras hardware and consumed by developers via an endpoint (the second sketch below shows what that looks like).
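To make the CSoft point concrete, here’s a minimal sketch of the developer experience it’s pitching: you write standard, framework-level PyTorch and let the graph compiler handle the hardware mapping. Everything below is plain PyTorch; the S-1 doesn’t document CSoft’s actual entry points, so the Cerebras-specific compile step is only noted in the comments rather than shown.

```python
# Plain PyTorch, nothing Cerebras-specific. CSoft's pitch is that code like
# this is all you write: its graph compiler maps these framework-level ops
# onto the Wafer-Scale Engine (the specific API isn't described in the S-1,
# so no Cerebras calls are shown here).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.GELU(),
    nn.Linear(2048, 512),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

x = torch.randn(32, 512)        # dummy batch
target = torch.randn(32, 512)   # dummy labels

optimizer.zero_grad()
loss = loss_fn(model(x), target)  # standard ops; no hand-written kernels
loss.backward()
optimizer.step()
print(float(loss))
```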
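For the inference cloud, the surface area developers touch is just an API endpoint. The sketch below uses a generic OpenAI-style chat-completions request; the base URL, model name, and payload shape are illustrative assumptions for this post, not details taken from Cerebras’ S-1 or docs.

```python
# Illustrative only: a generic OpenAI-style request to a hosted inference
# endpoint. The URL, model name, and payload are placeholders, not Cerebras'
# actual API.
import os
import requests

API_BASE = "https://api.example-inference-cloud.com/v1"  # hypothetical endpoint

resp = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['INFERENCE_API_KEY']}"},
    json={
        "model": "llama3.1-8b",  # placeholder model name
        "messages": [
            {"role": "user", "content": "Summarize the Cerebras S-1 in one line."}
        ],
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```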
Cerebras' product suite is a reminder of how high the bar is to compete with NVIDIA today. A better chip alone likely isn’t enough; at a minimum, the surrounding software and a way to integrate into existing data centers are required. Alternatively, selling endpoints to developers bottoms-up and focusing on the cost/latency advantages may be another approach, one which Groq has also seen some traction with.
Cerebras claims 10x faster performance in both inference and training; the former is shown below. If that is the case, they should probably be seeing a lot more usage than they are. The likely answer is that it’s very easy to make benchmarks look favorable, but ultimately the cost/latency trade-off may not make sense for most developers. That, or Cerebras needs to up their marketing game to get more adoption :)
III. Financials
Cerebras was started in 2015, and from a practical perspective started to monetize in 2021/2022. Revenue comes from hardware sales of chips/systems/supercomputers, cloud-based computing services and services work including support.
The financials in brief:
Revenue: Fast ramp of revenue — $25M in 2022, $78M in 2023, $136M in H1 of 2024
Revenue breakdown: Roughly 3/4 of the revenue comes from hardware while the remaining comes from services and cloud
Gross Margins: Aggregate gross margins are in the 40-45% range, with the hardware having lower gross margins than the services/cloud segment. I’m surprised by the hardware gross margins being in the 36-37% range, quite a bit lower than NVIDIA/AMD.
Other Expenses: R&D is the main expense, over 50% of revenue in H1 of 2024. G&A and Sales and Marketing spend is relatively low, under 15% of revenue in aggregate.
Margins: Net margins are -48% in H1 of 2024, significantly better than previous years which were below -100%.
Overall, Cerebras is ramping up revenue quickly and its financial profile is improving fast. However, there is one huge issue: customer concentration. One customer, G42, accounts for upwards of 80% of Cerebras’ revenue in the last 18 months. I have never seen a company go public with that level of customer concentration before.
The G42 relationship is discussed in more detail below, but looking at the “non-G42” revenue, Cerebras made $13M from other customers in 2023 and $18M in H1 of 2024. So while the revenue growth is still impressive, the non-G42 revenue base is quite small.
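As a quick sanity check, those non-G42 figures fall straight out of the disclosed totals and concentration percentages; a rough back-of-the-envelope (figures in $M, rounded):

```python
# Back-of-the-envelope: derive G42 vs. non-G42 revenue from total revenue and
# the disclosed concentration percentages (figures in $M, so results are rounded).
periods = {
    "2023": {"total": 78, "g42_share": 0.83},
    "H1 2024": {"total": 136, "g42_share": 0.87},
}

for name, p in periods.items():
    g42 = p["total"] * p["g42_share"]
    other = p["total"] - g42
    print(f"{name}: G42 ~${g42:.0f}M, all other customers ~${other:.0f}M")

# 2023: G42 ~$65M, all other customers ~$13M
# H1 2024: G42 ~$118M, all other customers ~$18M
```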
IV. The G42 Relationship
G42 is an Abu Dhabi-based AI and cloud computing holding company owned by Mubadala Capital, and is a customer, investor, and partner of Cerebras. Microsoft invested $1.5B in G42 earlier this year.
G42’s relationship with Cerebras spans three roles:
Customer: G42 is Cerebras’ largest customer accounting for 83% of Cerebras’ 2023 revenue and 87% of Cerebras’ H1’24 revenue, representing $65M spent in 2023 and $118M in H1 of 2024. In addition, they signed an agreement to purchase $1.43B worth of hardware/services from Cerebras, which will be pre-paid by February of 2025 (either for themselves or for an affiliate/3rd party). While it’s unclear when some of that revenue will be recognized, one thing is clear — Cerebras will likely see some drastic revenue growth over the coming years from G42 (or through it). Last month, Mohamed bin Zayed University of Artificial Intelligence agreed to purchase $350M of that $1.43B commitment.
Investor: G42 purchased a 1% stake in Cerebras during its Series F funding round in 2021 for $40M, and has an agreement to purchase another $335M worth of shares before mid-2025. In addition, based on their order sizes as a customer, they have the option to purchase additional shares at a 17.5% discount to the prevailing price at the time.
Partner: G42 is also a partner, offering cloud computing services from Cerebras on their Condor Galaxy Cloud. Cerebras also offers these services on their own cloud.
G42 is currently what makes the Cerebras business work, and Cerebras will likely continue to see a large ramp in revenue in the coming quarters and years given the agreed-upon size of G42’s purchase commitments.
V. Closing Thoughts
There are certainly some major risks with Cerebras leading into its IPO, largely to do with their revenue concentration and relationship with G42.
Reports suggest that the IPO may be delayed while CFIUS reviews the G42 deal, and other reports suggest that there was “too much hair” on the IPO for Goldman/JP Morgan/Morgan Stanley to want to take it public (Citigroup is the lead underwriter). Cerebras’ auditor is BDO, which isn’t one of the “Big Four” accounting firms we typically see in tech IPOs.
However, given the continued purchase commitments from G42, Cerebras will likely report strong revenue growth the next few years.
And all this aside, it's encouraging to see companies attempting to challenge NVIDIA's dominance and push computing forward. Congrats to the Cerebras team as they inch closer to the IPO, and congrats to all the investors as well: Benchmark, Eclipse, and Foundation Capital, which own >10% each, and Altimeter and Coatue, which own >5% each.
Finally, as a fun fact, the S-1 mentions:
AI: 490 times
G42: 301 times
GPU: 128 times
TSMC: 18 times
NVIDIA: 12 times