Cerebras Systems has secured $1 billion in Series H funding, catapulting its post-money valuation to approximately $23 billion. The round, led by Tiger Global with participation from AMD, Benchmark, Fidelity Management & Research Company, and Coatue, underscores growing investor confidence in Cerebras as a significant player in AI infrastructure. The funding will bolster Cerebras' production capacity, expand its customer deployments, and support development of next-generation AI processors.
Cerebras distinguishes itself with its Wafer Scale Engine (WSE), a single chip the size of an entire silicon wafer, a design that diverges sharply from traditional GPU-based systems. The latest iteration, the WSE-3, is engineered to deliver faster training and inference while optimizing power consumption per unit of compute. It packs 4 trillion transistors, 44 gigabytes of on-chip SRAM, and 900,000 cores; the architecture mitigates manufacturing defects by routing data around affected cores. Cerebras ships the WSE-3 in a water-cooled system called the CS-3, rated at 125 petaflops.
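To put those headline specs in perspective, here is a back-of-the-envelope sketch deriving per-core figures from the numbers quoted above. The derived values are rough estimates, not official Cerebras specifications, and treat "44 gigabytes" as 44 × 10⁹ bytes.

```python
# Rough per-core arithmetic from the WSE-3 specs cited in the article:
# 44 GB on-chip SRAM, 900,000 cores, 125 petaflops per CS-3 system.

SRAM_BYTES = 44e9      # assuming decimal gigabytes
CORES = 900_000
PETAFLOPS = 125

# SRAM budget per core, in kilobytes
sram_per_core_kb = SRAM_BYTES / CORES / 1e3

# Peak compute per core, in FLOPS
flops_per_core = PETAFLOPS * 1e15 / CORES

print(f"SRAM per core: ~{sram_per_core_kb:.1f} KB")
print(f"Peak compute per core: ~{flops_per_core / 1e9:.1f} GFLOPS")
```

The takeaway is that the design spreads memory and compute very thinly across a huge number of small cores, which is what makes routing around a few defective cores cheap.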
Organizations training very large models, frustrated by the complexity and energy demands of GPU clusters, are increasingly turning to Cerebras as an alternative to conventional AI infrastructure. Its systems are deployed across large enterprises, research institutions, and government organizations on multiple continents, through both private data centers and cloud services.
Nvidia remains the dominant force in the AI chip market: as of 2025, it controlled over 80% of the market for GPUs used to train and deploy AI models. Several factors, however, are driving the rise of alternatives. Customers want to diversify their suppliers to mitigate the risks of Nvidia's long waiting lists and limited pricing competition.
Companies like AMD, Google, Amazon, and Microsoft are developing their own chips, while startups such as Cerebras and Groq are introducing specialized processors. Google TPUs, AWS Trainium, Cerebras, and SambaNova support both training and inference, whereas the Groq LPU and AWS Inferentia focus exclusively on inference. The distinction matters to buyers: GPUs offer flexibility across AI workloads, while ASICs deliver better performance per watt but are harder to repurpose when model architectures change. Custom ASIC shipments from cloud providers are projected to grow 44.6% in 2026, versus 16.1% for GPUs.
Cerebras faces competition from companies such as Celestial AI, WorkFusion, Defined.ai, and Conviva. Despite this, its technology is drawing growing interest from strategic customers; it recently signed a commercial agreement with OpenAI, which is reportedly seeking alternatives to Nvidia chips out of dissatisfaction with how quickly Nvidia's hardware can serve responses for certain workloads. OpenAI has also engaged with AMD and Groq as it seeks hardware that better meets its inference needs. Kim Branson, SVP and Head of AI at GlaxoSmithKline, said that a training run which historically took over two weeks on a large GPU cluster completed in just over two days on a single Cerebras system.
With its billion-dollar round and $23 billion valuation, Cerebras is strategically positioned to challenge Nvidia's dominance. The AI accelerator chip market is projected to grow roughly 16% annually, reaching $604 billion by 2033. While Nvidia is expected to hold 70% to 75% of that market, companies like Cerebras are poised to capture a growing share.
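As a sanity check on the forecast above, the cited 16% annual growth rate and $604 billion 2033 figure imply a particular starting market size. The snippet below backs that out, assuming the compounding starts from a 2025 base (an assumption; the article does not state the forecast's base year).

```python
# Back out the implied current market size from the cited forecast:
# 16% CAGR reaching $604 billion by 2033, assuming a 2025 base year.

TARGET_2033 = 604e9
CAGR = 0.16
YEARS = 2033 - 2025  # 8 years of compounding under our base-year assumption

implied_base = TARGET_2033 / (1 + CAGR) ** YEARS
print(f"Implied 2025 market size: ~${implied_base / 1e9:.0f} billion")
```

The implied starting point is in the ballpark of $180–190 billion, which is consistent with the scale of a market Nvidia alone dominates today.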