S-1 Breakdown · April 2026

Cerebras Is Going Public. Here’s Everything You Need to Know.

The company that built the world’s largest chip — and is now gunning for Nvidia’s crown — has filed to go public. We read the S-1 so you don’t have to.

Startup Project · April 2026 · ~18 min read

$23B · Series H Valuation
$10B+ · OpenAI Compute Deal
4T · WSE-3 Transistors
$2B · Target IPO Raise
21× · Faster Than Nvidia H100

The Setup

A Chip Company That Shouldn’t Exist — But Does

There’s a rule in semiconductor engineering that’s held for sixty years: you don’t build a chip from an entire silicon wafer. Wafers crack. They have defects. Thermal management at that scale is a nightmare. Every major chipmaker — Intel, AMD, Nvidia, IBM — has looked at wafer-scale integration and walked away.

Cerebras didn’t walk away. Founded in 2016 by Andrew Feldman and four co-founders who all came out of SeaMicro (the server startup AMD acquired for $334 million), Cerebras spent years solving what the industry called unsolvable. The result: the Wafer Scale Engine, now on its third generation, the largest chip ever manufactured commercially.

And now they’re going public. The company filed confidentially with the SEC in late February 2026 and is targeting an April Nasdaq listing under the ticker CBRS, with Morgan Stanley as lead underwriter. The target raise is $2 billion against a $23 billion valuation — making it potentially one of the ten largest semiconductor IPOs ever.

This isn’t Cerebras’s first attempt. The original S-1 landed in September 2024, only to get derailed by a national security review and then withdrawn entirely as the company’s financials grew “stale.” What’s changed since then is remarkable. Let’s go through it all.


🎙 Startup Project Podcast · Episode 116

I Sat Down With Andrew Feldman, CEO & Co-Founder of Cerebras

We talked about the physics of wafer-scale chips, what it takes to compete with Nvidia, and the vision behind the company's most important bets. It's one of the most technically rich conversations I've had on the show.

Listen to the Episode

The Technology

The Wafer Scale Engine: What It Actually Is

The fundamental insight at Cerebras is simple to state and extraordinarily hard to execute: the biggest bottleneck for AI computing isn’t the speed of the transistors — it’s the time it takes data to move between chips, across circuit boards, through cables, and between racks. Every GPU cluster training a large language model is burning enormous amounts of energy and time just shuffling data from place to place.

Cerebras’s answer is to keep everything on the same piece of silicon. One chip. One wafer. An on-die interconnect fabric instead of external networking. Here are the numbers on the WSE-3:

Transistors: 4 trillion (19× Nvidia B200)
Compute cores: 900,000, AI-optimized
Chip area: 46,255 mm² (dinner-plate sized)
On-chip memory: 44 GB SRAM on-die
Memory bandwidth: 21 PB/s (vs ~8 TB/s for Nvidia’s B200)
Peak AI performance: 125 PFLOPS per CS-3 system

The CS-3 system claims up to 28× the compute of an Nvidia DGX B200 (Blackwell) at one-third the cost and one-third the power. The benchmarking firm Artificial Analysis has independently ranked Cerebras as the fastest AI inference provider across hundreds of models.
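Those headline ratios compound: if the 28× compute, one-third cost, and one-third power claims all held simultaneously, the implied compute-per-dollar and compute-per-watt advantages would each be roughly 84×. A quick back-of-envelope sketch of that arithmetic (the inputs are Cerebras’s own claims, not independent measurements):

```python
# Back-of-envelope check of the CS-3 vs. DGX B200 headline claims.
# All inputs are vendor-claimed ratios from the article, not measurements.
compute_ratio = 28.0  # claimed: 28x the compute of a DGX B200
cost_ratio = 1 / 3    # claimed: at one-third the cost
power_ratio = 1 / 3   # claimed: at one-third the power

# Dividing the compute advantage by the relative cost/power gives the
# implied efficiency advantage, if every claim holds at once.
perf_per_dollar = compute_ratio / cost_ratio
perf_per_watt = compute_ratio / power_ratio

print(f"Implied compute per dollar: {perf_per_dollar:.0f}x")
print(f"Implied compute per watt:   {perf_per_watt:.0f}x")
```

In practice the three ratios come from different vendor comparisons, so treat the 84× figures as an upper bound on what the marketing numbers imply, not a benchmark result.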

Why inference matters right now: The AI industry is moving from a training-dominated compute picture to an inference-dominated one. Every query to ChatGPT, Claude, or Gemini is inference. Cerebras’s architecture — which Artificial Analysis clocks at over 20× faster than comparable Nvidia hardware — is exceptionally well-positioned for this shift.

Cerebras also claims the CS-3 can train models up to 24 trillion parameters without the complex model parallelization software that GPU clusters require. For frontier AI labs, that simplicity translates to fewer engineers debugging distributed training, faster iteration cycles, and lower total cost of ownership.

Company History

From SeaMicro to $23 Billion: The Full Timeline

2007: Andrew Feldman and Gary Lauterbach found SeaMicro, a microserver company focused on energy-efficient compute for web workloads.

2012: AMD acquires SeaMicro for $334–357M. The exit gives Feldman and team capital — and a thesis that data movement, not transistor density, is the real bottleneck for compute.

2016: Cerebras Systems founded in Sunnyvale by five SeaMicro veterans. $27M Series A led by Benchmark, Foundation Capital, and Eclipse Ventures.

2019: WSE-1 debuts — confirmed as the world's largest chip. $270M Series E raises total funding to ~$475M.

2021: WSE-2 launched with 850,000 cores. $250M Series F led by Alpha Wave and Abu Dhabi Growth Fund values Cerebras at $4B.

Sept 2024: WSE-3 ships. S-1 filed with SEC. IPO delayed almost immediately by CFIUS national security review of G42's minority stake.

Mar–Oct 2025: CFIUS clears review (G42 converts to non-voting shares). Cerebras withdraws S-1, calling it "stale." $1.1B Series G at $8.1B valuation closes in September.

Jan–Apr 2026: $10B+ OpenAI compute deal announced. $1B Series H closes at $23B valuation, led by Tiger Global. Cerebras confidentially refiles for IPO targeting Q2 2026.

The Financials

What the Numbers Actually Show

The original S-1 is the best window we have into Cerebras’s economics, and the 2026 filing is expected to show dramatically improved figures. Here’s what was disclosed in September 2024:

Revenue Growth

2022: $24.6M
2023: $78.7M
H1 2024: $136.4M
2025E: ~$300–350M (est.)

Net Loss

2022: −$177.7M
2023: −$127.2M
H1 2024: −$66.6M

The trajectory is clear: revenue more than tripled from 2022 to 2023, then surged again in 2024 (H1 revenue up over 1,400% year-over-year). Losses are narrowing. Gross margins improved from 11.7% in 2022 to 33.5% in 2023, then expanded to ~41% in early 2024 before compressing under volume discounts offered to G42.

At the $23B Series H valuation, that implies a revenue multiple in the range of 65–70× — a number that demands sustained hyper-growth and margin expansion to justify in public markets.
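The multiple is simple arithmetic, but it is worth seeing how sensitive it is to the revenue estimate: the quoted 65–70× corresponds to the upper end of the $300–350M range, while at $300M of revenue the multiple is closer to 77×. A quick sketch, using only figures from the article:

```python
# Implied revenue multiple at the Series H valuation.
# All inputs come from the article; the result depends heavily on which
# end of the 2025 revenue estimate you use.
valuation_m = 23_000  # $23B Series H valuation, in $M

for revenue_m in (300, 330, 350):  # 2025E revenue scenarios, in $M
    multiple = valuation_m / revenue_m
    print(f"${revenue_m}M revenue -> {multiple:.0f}x multiple")
```

At $350M the multiple lands near 66×; a revenue miss toward $300M pushes it well above 70×, which is why any shortfall in the updated S-1 would matter so much to pricing.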

The key metric to watch in the new filing: Gross margins on cloud/inference contracts. The OpenAI deal is structured as compute-as-a-service rather than direct hardware sales, which likely carries different margin characteristics. Whether those margins are better or worse than historical hardware margins will determine a lot about how the company gets priced on Nasdaq.

The Biggest Risk — And How It Changed

The G42 Problem, and the OpenAI Solution

If there was one number that killed Cerebras’s first IPO attempt, it was this: 87% of H1 2024 revenue came from a single customer — G42, a UAE-based technology conglomerate. In 2023, G42 accounted for 83% of revenue and 97% of hardware sold. For institutional investors, that level of customer concentration is a dealbreaker.

The complexity deepened because G42 wasn’t just a customer — it was also a minority investor. And G42 had documented ties to Chinese technology companies including Huawei, which drew the attention of CFIUS. The combination of extreme concentration, geopolitical entanglement, and regulatory uncertainty was enough to pause the original IPO indefinitely.

Then came January 14, 2026. On that date, Cerebras announced a multi-year compute agreement with OpenAI worth more than $10 billion, covering 750 megawatts of AI inference capacity through 2028. This is the largest AI infrastructure contract ever awarded to a non-Nvidia supplier.

One concentration replaced by another? The key differences: OpenAI is a US-domiciled company with no geopolitical baggage; the deal is structured as cloud services at scale; and OpenAI’s position as the world’s most prominent AI lab brings a level of third-party validation that no other customer could provide. The concentration risk hasn’t disappeared — but it’s been radically transformed.

Beyond OpenAI, the customer roster includes AWS, Meta, IBM, Mistral, Cognition, AlphaSense, Notion, GlaxoSmithKline, the Mayo Clinic, the U.S. Department of Energy, and the U.S. Department of Defense.

| Metric | 2024 S-1 | 2026 Position |
|---|---|---|
| Primary Customer | G42 (87–97% of revenue) | OpenAI + diversified roster |
| Largest Known Contract | G42 hardware purchase orders | OpenAI: $10B+ / 750 MW / through 2028 |
| Valuation | $4B (2021 last round) | $23B (Series H, Feb 2026) |
| CFIUS Status | Under review — blocked IPO | Resolved March 2025 |
| Revenue Run Rate | ~$70M/quarter (Q2 2024) | ~$300–350M full-year 2025 (est.) |
| Gross Margin | ~41% (H1 2024) | To be disclosed in updated S-1 |

Competitive Landscape

Taking On Nvidia — and Everyone Else

Let’s be direct: Nvidia has won the AI training market. The CUDA software ecosystem is a moat that doesn’t erode quickly. Cerebras knows this, and it isn’t really fighting Nvidia on training; its real battle is inference.

| Company | Architecture | Primary Strength | Cerebras Advantage |
|---|---|---|---|
| Nvidia | Multi-die GPU clusters | Training + CUDA ecosystem | 21× faster inference, 3× lower power |
| AMD | GPU (MI300X/MI325X) | GPU training, HPC | Wafer-scale memory bandwidth advantage |
| Groq | LPU (Language Processing Unit) | Fast inference, low latency | Similar inference pitch; Cerebras is larger scale |
| SambaNova | Reconfigurable dataflow | Enterprise AI | Scale, WSE-3 specifications |
| Intel Gaudi | AI accelerator | Open ecosystem play | Performance claims, OpenAI validation |

The OpenAI partnership is a signal to the whole market: the world’s most technically sophisticated AI lab chose Cerebras over Nvidia for a 750-megawatt inference build. That’s not a minor endorsement — that’s the strongest proof point available in the industry.

Note also that AMD participated in Cerebras’s Series H funding round. Nvidia’s most prominent public-market rival made a strategic bet on a wafer-scale alternative. That’s a signal worth sitting with.

🎙 Don't Miss This Conversation · Episode 116

Andrew Feldman on Startup Project: Competing With Nvidia, the Physics of Wafer-Scale, and Building a $23B Company

We went deep on the technical moat, the investor journey, and what the IPO means for the AI compute landscape.

Stream the Episode

Eyes Open

The Real Risks Every Investor Should Know

The Cerebras story has improved dramatically since 2024. But improved doesn’t mean risk-free. Here’s an honest accounting of what still deserves scrutiny:

🔴 Valuation vs. Revenue Reality

At $23B on an estimated $300–350M in 2025 revenue, the implied multiple is approximately 65–70×. Sustaining that multiple in public markets requires flawless execution. Any revenue miss or margin compression will hit hard.

🔴 750MW Is a Big Commitment

Delivering 750 megawatts of wafer-scale compute capacity to OpenAI through 2028 is an enormous infrastructure build. Any supply chain disruption or manufacturing yield issue would be very public, because this capacity powers ChatGPT.

🟡 TSMC Manufacturing Dependency

Cerebras chips are manufactured by TSMC. The WSE wafer is one of TSMC's most complex products. Any capacity constraints, yield issues, or geopolitical disruptions to Taiwan supply chains flow directly to Cerebras.

🟡 Software Ecosystem Gap

Nvidia's CUDA ecosystem represents years of developer investment. Cerebras has a software stack (CSoft), but enterprise developers building on CUDA won't switch easily. The inference moat needs developer adoption to be durable.

🟡 Altman Conflict of Interest

OpenAI CEO Sam Altman is an early personal investor in Cerebras. The $10B deal was struck between a customer whose CEO has a financial interest in the supplier. This will invite scrutiny from shareholders and analysts.

🟢 Regulatory: Now Cleared

CFIUS cleared the G42 investment in March 2025 after G42 converted to non-voting shares. G42 is no longer listed among investors in the new filing. The regulatory overhang that killed the 2024 IPO is resolved.

The Capital Story

$2.91 Billion Raised and Still Accelerating

Cerebras has raised approximately $2.91 billion in total funding. The trajectory of the most recent rounds tells you how dramatically the market’s conviction has shifted:

| Round | Date | Amount | Valuation | Key Investors |
|---|---|---|---|---|
| Series A | May 2016 | $27M | N/A | Benchmark, Foundation Capital, Eclipse |
| Series E | Nov 2019 | $270M | $2.4B | Altimeter, Coatue, VY Capital |
| Series F | Nov 2021 | $250M | $4B | Alpha Wave, Abu Dhabi Growth Fund |
| Series G | Sept 2025 | $1.1B | $8.1B | Fidelity, Atreides Management |
| Series H | Feb 2026 | $1B | $23B | Tiger Global, AMD, Benchmark, Fidelity, Coatue, Altimeter |

The valuation nearly tripled in five months between Series G and Series H. Tiger Global leading the H round signals institutional confidence. AMD’s participation is a strategic statement. Benchmark’s presence across multiple rounds — from Series A through Series H — is a sign of sustained conviction from one of Silicon Valley’s most disciplined firms.

The IPO Itself

What We Know About the Offering

| Parameter | Details |
|---|---|
| Expected Ticker | CBRS (Nasdaq) |
| Target Listing Date | Q2 2026 (April target) |
| Target Raise | ~$2 billion |
| Valuation Range | $22–25 billion |
| Lead Underwriter | Morgan Stanley |
| Filing Status | Confidential re-filing, Feb 2026; public S-1 expected ~15 days before roadshow |
| G42 Status | No longer listed as investor in new filing |
| IPO Market Context | Traditional US listings in 2025 raised $46.15B, highest since 2021 |

For retail investors: Pre-IPO access is limited to accredited investors via secondary platforms like Hiive, Forge, or EquityZen. Once CBRS lists on Nasdaq, any brokerage account can participate. The roadshow is expected in April 2026, with a specific listing date typically confirmed ~10 days before pricing.

The Bottom Line

What to Make of All This

In 2024, the bear case on Cerebras was easy to construct: a single UAE customer accounting for 97% of hardware sales, regulatory uncertainty, and technology claims that were largely self-reported. The skepticism was warranted.

In 2026, the bear case is harder. The structural customer concentration risk has been addressed by the most credible counterparty imaginable — OpenAI. The CFIUS overhang is resolved. The Wafer Scale Engine has earned genuine third-party validation. Institutional investors put $1 billion into the company at $23 billion in February, including AMD, which has every incentive to understand the competitive landscape.

What remains is a valuation question, and it’s a serious one. At roughly 65–70× estimated 2025 revenue, Cerebras enters public markets pricing in a future that requires consistent hyper-growth, margin expansion, successful delivery of 750 megawatts for OpenAI, and durable differentiation against Nvidia’s relentless pace of product development.

The company has earned the right to be taken seriously. The question is whether the $23 billion number is where that seriousness gets appropriately valued — or overpriced.

Watch the updated S-1 closely. The gross margin structure of the OpenAI deal will tell you more about Cerebras’s long-term economics than any other number in the filing.

Cerebras in three sentences: A technically differentiated AI chip company that solved an engineering problem the industry said was impossible. Validated by a $10 billion OpenAI contract and a $23 billion Series H. Trading at a valuation that requires perfect execution — but has, for the first time, earned the right to demand it.

🎙 Startup Project · Episode 116

Hear It Directly From the Founder

I sat down with Andrew Feldman — CEO, co-founder, and the person who made wafer-scale computing real — for a wide-ranging conversation about the company's origin, the decision to go public, and what it means to go up against Nvidia.

Listen on Startup Project

This analysis is based on publicly available information, including Cerebras’s September 2024 S-1 filing and subsequent press reports. It does not constitute investment advice. Financial figures from the 2026 S-1 will supersede estimates cited here once publicly available.