A Chip Company That Shouldn’t Exist — But Does
There’s a rule in semiconductor engineering that’s held for sixty years: you don’t build a chip from an entire silicon wafer. Wafers crack. They have defects. Thermal management at that scale is a nightmare. Every major chipmaker — Intel, AMD, Nvidia, IBM — has looked at wafer-scale integration and walked away.
Cerebras didn’t walk away. Founded in 2016 by Andrew Feldman and four co-founders who all came out of SeaMicro (the server startup AMD acquired for $334 million), Cerebras spent years solving what the industry called unsolvable. Their result: the Wafer Scale Engine, now in its third generation, the largest chip ever commercially manufactured.
And now they’re going public. The company filed confidentially with the SEC in late February 2026 and is targeting an April Nasdaq listing under the ticker CBRS, with Morgan Stanley as lead underwriter. The target raise is $2 billion against a $23 billion valuation — making it potentially one of the ten largest semiconductor IPOs ever.
This isn’t Cerebras’s first attempt. The original S-1 landed in September 2024, only to get derailed by a national security review and then withdrawn entirely as the company’s financials grew “stale.” What’s changed since then is remarkable. Let’s go through it all.
The Technology
The Wafer Scale Engine: What It Actually Is
The fundamental insight at Cerebras is simple to state and extraordinarily hard to execute: the biggest bottleneck for AI computing isn’t the speed of the transistors — it’s the time it takes data to move between chips, across circuit boards, through cables, and between racks. Every GPU cluster training a large language model is burning enormous amounts of energy and time just shuffling data from place to place.
Cerebras’s answer is to keep everything on the same piece of silicon. One chip. One wafer. An on-die interconnect fabric instead of external networking. Here are the numbers on the WSE-3:
| Spec | WSE-3 |
|---|---|
| Transistors | 4 trillion (19× Nvidia B200) |
| Compute cores | 900,000 AI-optimized cores |
| Chip area | 46,255 mm² (roughly dinner-plate sized) |
| On-chip memory | 44 GB SRAM on-die |
| Memory bandwidth | 21 PB/s (vs. ~8 TB/s for an H100) |
| Peak AI performance | 125 PFLOPS per CS-3 system |
Cerebras claims the CS-3 delivers up to 28× the compute of an Nvidia DGX B200 Blackwell system at one-third the cost and one-third the power. The third-party benchmarking firm Artificial Analysis has ranked Cerebras the fastest AI inference provider across hundreds of models.
Cerebras also claims the CS-3 can train models up to 24 trillion parameters without the complex model parallelization software that GPU clusters require. For frontier AI labs, that simplicity translates to fewer engineers debugging distributed training, faster iteration cycles, and lower total cost of ownership.
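The spec-sheet comparisons above reduce to simple arithmetic. A minimal sketch, using only the figures quoted in this article, of the bandwidth gap and the per-core SRAM budget:

```python
# Sanity-check arithmetic on the WSE-3 figures quoted above.
# All inputs come from this article's spec list; decimal units throughout.

wse3_bw_tb_s = 21_000      # 21 PB/s on-chip bandwidth, expressed in TB/s
h100_bw_tb_s = 8           # the ~8 TB/s comparison figure quoted above

sram_gb = 44               # on-die SRAM
cores = 900_000            # AI-optimized cores

bw_ratio = wse3_bw_tb_s / h100_bw_tb_s
sram_per_core_kb = sram_gb * 1e6 / cores   # GB -> KB

print(f"Bandwidth ratio: ~{bw_ratio:,.0f}x")          # ~2,625x
print(f"SRAM per core:  ~{sram_per_core_kb:.0f} KB")  # ~49 KB
```

The second number explains the architecture's character: each core gets only tens of kilobytes of SRAM, but that memory sits at register-like distance, which is where the aggregate bandwidth figure comes from.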
Company History: From SeaMicro to $23 Billion
| Year | Milestone |
|---|---|
| 2007 | Andrew Feldman and Gary Lauterbach found SeaMicro, a microserver company focused on energy-efficient compute for web workloads. |
| 2012 | AMD acquires SeaMicro for $334–357M. The exit gives Feldman and team capital, and a thesis: data movement, not transistor density, is the real bottleneck for compute. |
| 2016 | Cerebras Systems founded in Sunnyvale by five SeaMicro veterans. $27M Series A led by Benchmark, Foundation Capital, and Eclipse Ventures. |
| 2019 | WSE-1 debuts as the world's largest chip. $270M Series E raises total funding to ~$475M. |
| 2021 | WSE-2 launches with 850,000 cores. $250M Series F led by Alpha Wave and Abu Dhabi Growth Fund values Cerebras at $4B. |
| Sept 2024 | WSE-3 ships. S-1 filed with the SEC. IPO delayed almost immediately by a CFIUS national security review of G42's minority stake. |
| Mar–Oct 2025 | CFIUS clears the review (G42 converts to non-voting shares). Cerebras withdraws the S-1, calling it "stale." $1.1B Series G at an $8.1B valuation closes in September. |
| Jan–Apr 2026 | $10B+ OpenAI compute deal announced. $1B Series H closes at a $23B valuation, led by Tiger Global. Cerebras confidentially refiles for an IPO targeting Q2 2026. |
What the Numbers Actually Show
The original S-1 is the best window we have into Cerebras’s economics, and the 2026 filing is expected to show dramatically improved figures. Here’s what was disclosed in September 2024:
(Charts: revenue growth and net loss, in $M.)
The trajectory is clear: revenue roughly tripled from 2022 to 2023, then surged again in 2024 (H1 revenue up more than 1,400% year over year). Losses are narrowing. Gross margin improved from 11.7% in 2022 to 33.5% in 2023, then expanded to ~41% in early 2024 before compressing under volume discounts offered to G42.
At the $23B Series H valuation, that implies a revenue multiple in the range of 65–70× — a number that demands sustained hyper-growth and margin expansion to justify in public markets.
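The multiple is easy to reproduce. A quick sketch using this article's figures: the $23B Series H valuation against the estimated 2025 revenue range discussed later in the piece.

```python
# Implied valuation/revenue multiple at the Series H mark, computed
# across the estimated 2025 revenue endpoints cited in this article.
valuation = 23e9                      # $23B Series H valuation

for revenue in (300e6, 350e6):        # estimated 2025 revenue range ($M)
    multiple = valuation / revenue
    print(f"${revenue / 1e6:.0f}M revenue -> {multiple:.0f}x")
```

Running the endpoints shows how sensitive the multiple is to where actual 2025 revenue lands within the estimated range, which is precisely why the updated S-1's revenue disclosure matters so much.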
The G42 Problem, and the OpenAI Solution
If there was one number that killed Cerebras’s first IPO attempt, it was this: 87% of H1 2024 revenue came from a single customer — G42, a UAE-based technology conglomerate. In 2023, G42 accounted for 83% of revenue and 97% of hardware sold. For institutional investors, that level of customer concentration is a dealbreaker.
The complexity deepened because G42 wasn’t just a customer — it was also a minority investor. And G42 had documented ties to Chinese technology companies including Huawei, which drew the attention of CFIUS. The combination of extreme concentration, geopolitical entanglement, and regulatory uncertainty was enough to pause the original IPO indefinitely.
Then came January 14, 2026. On that date, Cerebras announced a multi-year compute agreement with OpenAI worth more than $10 billion, covering 750 megawatts of AI inference capacity through 2028. This is the largest AI infrastructure contract ever awarded to a non-Nvidia supplier.
Beyond OpenAI, the customer roster includes AWS, Meta, IBM, Mistral, Cognition, AlphaSense, Notion, GlaxoSmithKline, the Mayo Clinic, the U.S. Department of Energy, and the U.S. Department of Defense.
| Metric | 2024 S-1 | 2026 Position |
|---|---|---|
| Primary Customer | G42 (87–97% of revenue) | OpenAI + diversified roster |
| Largest Known Contract | G42 hardware purchase orders | OpenAI: $10B+ / 750MW / 2028 |
| Valuation | $4B (2021 last round) | $23B (Series H, Feb 2026) |
| CFIUS Status | Under review — blocked IPO | Resolved March 2025 |
| Revenue Run Rate | ~$70M/quarter (Q2 2024) | ~$300–350M full-year 2025 (est.) |
| Gross Margin | ~41% (H1 2024) | To be disclosed in updated S-1 |
Taking On Nvidia — and Everyone Else
Let’s be direct: Nvidia has won the AI training market. The CUDA software ecosystem is a moat that doesn’t erode quickly. Cerebras knows this, and they’re not actually fighting Nvidia on training. Cerebras’s real battle is on the inference side.
| Company | Architecture | Primary Strength | Cerebras Advantage |
|---|---|---|---|
| Nvidia | Multi-die GPU clusters | Training + CUDA ecosystem | 21× faster inference, 3× lower power |
| AMD | GPU (MI300X/MI325X) | GPU training, HPC | Wafer-scale memory bandwidth advantage |
| Groq | LPU (Language Processing Unit) | Fast inference, low latency | Similar inference pitch; Cerebras is larger scale |
| SambaNova | Reconfigurable dataflow | Enterprise AI | Scale, WSE-3 specifications |
| Intel Gaudi | AI accelerator | Open ecosystem play | Performance claims, OpenAI validation |
The OpenAI partnership is a signal to the whole market: the world’s most technically sophisticated AI lab chose Cerebras over Nvidia for a 750-megawatt inference build. That’s not a minor endorsement — that’s the strongest proof point available in the industry.
Note also that AMD participated in Cerebras’s Series H funding round. Nvidia’s most prominent public-market rival made a strategic bet on a wafer-scale alternative. That’s a signal worth sitting with.
Eyes Open: The Real Risks Every Investor Should Know
The Cerebras story has improved dramatically since 2024. But improved doesn’t mean risk-free. Here’s an honest accounting of what still deserves scrutiny:
🔴 Valuation vs. Revenue Reality
At $23B on an estimated $300–350M in 2025 revenue, the implied multiple is approximately 65–70×. Sustaining that multiple in public markets requires flawless execution. Any revenue miss or margin compression will hit hard.
🔴 750MW Is a Big Commitment
Delivering 750 megawatts of wafer-scale compute capacity through 2028 for OpenAI is an enormous infrastructure build. Any supply chain disruption or manufacturing yield issue would be highly visible: this capacity powers ChatGPT.
🟡 TSMC Manufacturing Dependency
Cerebras chips are manufactured by TSMC. The WSE wafer is one of TSMC's most complex products. Any capacity constraints, yield issues, or geopolitical disruptions to Taiwan supply chains flow directly to Cerebras.
🟡 Software Ecosystem Gap
Nvidia's CUDA ecosystem represents years of developer investment. Cerebras has a software stack (CSoft), but enterprise developers building on CUDA won't switch easily. The inference moat needs developer adoption to be durable.
🟡 Altman Conflict of Interest
OpenAI CEO Sam Altman is an early personal investor in Cerebras. The $10B deal was struck between a customer whose CEO has a financial interest in the supplier. This will invite scrutiny from shareholders and analysts.
🟢 Regulatory: Now Cleared
CFIUS cleared the G42 investment in March 2025 after G42 converted to non-voting shares. G42 is no longer listed among investors in the new filing. The regulatory overhang that killed the 2024 IPO is resolved.
$2.91 Billion Raised and Still Accelerating
Cerebras has raised approximately $2.91 billion in total funding. The trajectory of the most recent rounds tells you how dramatically the market’s conviction has shifted:
| Round | Date | Amount | Valuation | Key Investors |
|---|---|---|---|---|
| Series A | May 2016 | $27M | — | Benchmark, Foundation Capital, Eclipse |
| Series E | Nov 2019 | $270M | $2.4B | Altimeter, Coatue, VY Capital |
| Series F | Nov 2021 | $250M | $4B | Alpha Wave, Abu Dhabi Growth Fund |
| Series G | Sept 2025 | $1.1B | $8.1B | Fidelity, Atreides Management |
| Series H | Feb 2026 | $1B | $23B | Tiger Global, AMD, Benchmark, Fidelity, Coatue, Altimeter |
The valuation nearly tripled in five months between Series G and Series H. Tiger Global leading the H round signals institutional confidence. AMD’s participation is a strategic statement. Benchmark’s presence across multiple rounds — from Series A through Series H — is a sign of sustained conviction from one of Silicon Valley’s most disciplined firms.
The IPO Itself: What We Know About the Offering
| Parameter | Details |
|---|---|
| Expected Ticker | CBRS (Nasdaq) |
| Target Listing Date | Q2 2026 (April target) |
| Target Raise | ~$2 billion |
| Valuation Range | $22–25 billion |
| Lead Underwriter | Morgan Stanley |
| Filing Status | Confidential re-filing, Feb 2026; public S-1 expected ~15 days before roadshow |
| G42 Status | No longer listed as investor in new filing |
| IPO Market Context | Traditional US listings in 2025 raised $46.15B, highest since 2021 |
What to Make of All This
In 2024, the bear case on Cerebras was easy to construct: a single UAE customer accounting for 97% of hardware sales, regulatory uncertainty, and technology claims that were largely self-reported. The skepticism was warranted.
In 2026, the bear case is harder. The structural customer concentration risk has been addressed by the most credible counterparty imaginable — OpenAI. The CFIUS overhang is resolved. The Wafer Scale Engine has earned genuine third-party validation. Institutional investors put $1 billion into the company at $23 billion in February, including AMD, which has every incentive to understand the competitive landscape.
What remains is a valuation question, and it’s a serious one. At roughly 65–70× estimated 2025 revenue, Cerebras enters public markets pricing in a future that requires consistent hyper-growth, margin expansion, successful delivery of 750 megawatts for OpenAI, and durable differentiation against Nvidia’s relentless pace of product development.
The company has earned the right to be taken seriously. The question is whether the $23 billion number is where that seriousness gets appropriately valued — or overpriced.
Watch the updated S-1 closely. The gross margin structure of the OpenAI deal will tell you more about Cerebras’s long-term economics than any other number in the filing.
This analysis is based on publicly available information, including Cerebras’s September 2024 S-1 filing and subsequent press reports. It does not constitute investment advice. Financial figures from the 2026 S-1 will supersede estimates cited here once publicly available.