In a landmark move set to reshape the future of artificial intelligence, Nvidia and Intel have announced a groundbreaking collaboration focused on powering the next generation of AI-driven personal computers and cloud-based AI infrastructure. The partnership will see the two tech giants working together to develop custom data center and PC products, seamlessly integrating Nvidia's AI and accelerated computing prowess with Intel's CPU technologies and x86 ecosystem.
The collaboration addresses the increasing demand for tightly integrated CPU-GPU designs to deliver more efficient performance across cloud workloads, enterprise AI systems, and next-generation personal computing. Nvidia will invest $5 billion in Intel, signaling a strong commitment to the partnership.
For data centers, Intel will build custom x86 CPUs tailored for Nvidia's AI infrastructure platforms. Paired with Nvidia GPUs such as the A100 and H100, these CPUs are intended to deliver high performance for training and deploying large-scale AI models. The move also shifts Intel into the CPU-for-accelerator market.
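To make that division of labor concrete, here is a minimal, hedged Python sketch (using PyTorch rather than any announced Nvidia or Intel software) of the typical pattern in such systems: the host CPU loads and prepares batches while the GPU runs the training math. The model, dataset, and sizes are toy placeholders.

```python
# A hedged sketch of the CPU/GPU division of labor on an AI training node:
# host CPU workers load and preprocess batches, the GPU runs the training math.
# The model, dataset, and sizes are toy placeholders, not announced products.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def main() -> None:
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Toy dataset and model standing in for a large-scale training job.
    data = TensorDataset(torch.randn(4096, 512), torch.randint(0, 10, (4096,)))
    loader = DataLoader(data, batch_size=256, num_workers=2, pin_memory=True)
    model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for inputs, labels in loader:
        # CPU-side workers prepared this batch; the GPU executes forward/backward.
        inputs = inputs.to(device, non_blocking=True)
        labels = labels.to(device, non_blocking=True)
        opt.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        opt.step()


if __name__ == "__main__":
    main()
```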
On the personal computing front, Intel will create x86 systems-on-chip (SoCs) that incorporate Nvidia RTX GPU chiplets. These SoCs will power a new range of PCs built for demanding applications that require world-class CPUs and GPUs. With RTX-class graphics integrated on the package, businesses can expect more powerful and cost-effective options for AI training, inference, and accelerated computing, which should accelerate the mainstreaming of AI PCs and reshape procurement decisions in both corporate IT and consumer markets. The integrated approach also sidesteps the GPU size and cooling constraints of standard ATX cases, which have held back AI applications on desktop PCs.
Nvidia's CEO, Jensen Huang, hailed the collaboration as a "fusion of two world-class platforms" that will expand their ecosystems and lay the foundation for the next era of computing. Intel's CEO, Lip-Bu Tan, emphasized that combining Intel's data center and client computing platforms with their process technology and manufacturing capabilities will complement Nvidia's AI leadership, enabling new breakthroughs.
This partnership's implications extend beyond PCs and cloud servers, potentially impacting the automotive and robotics industries, where both companies have a presence. The collaboration could lead to a standardized CPU+GPU compute stack for robotics, enabling efficient handling of general-purpose tasks with Intel silicon while dedicating Nvidia's hardware to complex AI and machine learning workloads. This push toward hybrid AI, where workloads are intelligently distributed between the cloud and the edge, could become a hallmark of the Nvidia-Intel partnership.
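As a rough illustration of that hybrid idea, the Python sketch below routes lightweight requests to local silicon and hands heavier ones to a cloud endpoint. The token threshold and the run_local/run_cloud helpers are hypothetical placeholders, not part of any announced Nvidia or Intel product.

```python
# Illustrative sketch of hybrid AI routing: small requests run on local (edge)
# hardware, heavier ones are escalated to a hosted model. The cutoff and the
# run_local/run_cloud helpers are hypothetical placeholders for this demo.
from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    max_tokens: int


EDGE_TOKEN_LIMIT = 256  # assumed cutoff for what local hardware handles well


def run_local(req: Request) -> str:
    # Placeholder for on-device inference (e.g., a small model on local silicon).
    return f"[edge] {req.prompt[:24]}..."


def run_cloud(req: Request) -> str:
    # Placeholder for a call to a hosted model API in the data center.
    return f"[cloud] {req.prompt[:24]}..."


def route(req: Request) -> str:
    """Send lightweight requests to local silicon, heavy ones to the cloud."""
    if req.max_tokens <= EDGE_TOKEN_LIMIT:
        return run_local(req)
    return run_cloud(req)


print(route(Request("Summarize this paragraph", max_tokens=128)))  # stays at the edge
print(route(Request("Draft a long report", max_tokens=2048)))      # escalated to the cloud
```

The design choice being illustrated is simply that routing by workload size keeps latency-sensitive tasks on local hardware while reserving cloud accelerators for the heavy lifting, which is the pattern the hybrid-AI framing above describes.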
The integrated platforms will connect the two architectures through Nvidia NVLink. NVLink Fusion technology will serve as the high-speed interconnect fabric between Nvidia's accelerated computing platforms and Intel's custom x86 CPUs, offering far higher bandwidth, lower latency, and direct peer-to-peer communication between CPUs and GPUs than traditional PCIe.
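To see why the interconnect matters, the hedged sketch below uses PyTorch to time host-to-device copies over whatever link the system has; on today's PCIe-attached GPUs this transfer is a common bottleneck, and a faster CPU-GPU fabric raises exactly this ceiling. The buffer size and iteration count are arbitrary demo values, and nothing here uses NVLink-specific APIs.

```python
# Hedged sketch: measure host-to-device copy bandwidth with PyTorch.
# Pinned host memory is used so the copy reflects the interconnect rather
# than pageable-memory overhead. Sizes and iterations are arbitrary demo values.
import torch

assert torch.cuda.is_available(), "requires a CUDA-capable GPU"

size_mb = 512
iters = 10
host = torch.empty(size_mb * 1024 * 1024, dtype=torch.uint8, pin_memory=True)
device_buf = torch.empty_like(host, device="cuda")

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

torch.cuda.synchronize()
start.record()
for _ in range(iters):
    device_buf.copy_(host, non_blocking=True)  # CPU-to-GPU transfer over the interconnect
end.record()
torch.cuda.synchronize()

elapsed_s = start.elapsed_time(end) / 1000  # elapsed_time reports milliseconds
print(f"Host-to-device bandwidth: {iters * size_mb / 1024 / elapsed_s:.1f} GB/s")
```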
This alliance is poised to redefine consumer PCs, accelerate cloud AI, and revolutionize industries like automotive and robotics through advanced hybrid AI solutions. By combining Intel's CPU expertise and manufacturing scale with Nvidia's GPU and AI leadership, the partnership paves the way for a future where intelligence is seamlessly embedded into every device and interaction.