Cisco has unveiled its new Silicon One P200 chip and the accompanying 8223 series routers, designed to connect AI data centers across long distances. The launch targets a growing challenge in scaling AI infrastructure: linking geographically separated data centers without compromising performance.
The P200 chip sits at the heart of Cisco's new 8223 routing system, which delivers 51.2 Tbps of throughput and enables multiple AI data centers to function as a single, seamless system. This is a significant advancement, allowing companies to train massive AI models across geographically distributed facilities while maintaining ultra-low latency and high data throughput.
One of the key challenges in connecting AI data centers over long distances is keeping data synchronized without loss, which requires robust buffering technology. Cisco has built deep buffering into the P200 chip to address this, ensuring reliable data transfer even during traffic surges. According to Cisco, the packet processing inside Silicon One is unique in the industry in combining this with a fully shared deep buffer.
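To put the buffering requirement in perspective, a simple bandwidth-delay calculation is enough. In the sketch below, the 800G port speed and 1,000 km reach come from Cisco's published 8223 figures, while the fiber propagation speed and the one-round-trip sizing rule are generic networking assumptions, not Cisco's numbers.

```python
# Back-of-envelope: how much data can be "in flight" on one long-haul port.
PORT_SPEED_BPS = 800e9      # one 800G port (from the 8223 specs)
DISTANCE_KM = 1_000         # maximum coherent-optics reach cited for the 8223
FIBER_KM_PER_S = 200_000    # light in fiber travels at roughly 2/3 of c (assumption)

one_way_s = DISTANCE_KM / FIBER_KM_PER_S   # ~5 ms each way
rtt_s = 2 * one_way_s                      # ~10 ms round trip

# Bandwidth-delay product: bytes a sender can emit before any feedback returns.
bdp_bytes = PORT_SPEED_BPS / 8 * rtt_s

print(f"One-way delay: {one_way_s * 1e3:.1f} ms")
print(f"Round trip   : {rtt_s * 1e3:.1f} ms")
print(f"BDP per port : {bdp_bytes / 1e9:.1f} GB")
```

At roughly 1 GB in flight per 800G port over a 1,000 km round trip, even a brief surge can demand far more buffering than a shallow on-chip buffer provides, which is the kind of scenario a fully shared deep buffer is meant to absorb.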
The rise of AI has brought soaring demand and tightening power constraints in data centers, and many new facilities are being sited near renewable energy sources far from traditional tech hubs. Cisco's answer is power efficiency: the company claims the P200 packs the processing power of 92 of its previous chips into a single device, cutting the 8223 routing system's power consumption by 65% compared to alternatives and directly addressing hyperscalers' demands for environmentally friendly, cost-effective networks.
The P200 chip and 8223 router system are designed to create a "global AI fabric," allowing geographically distributed data centers to work in concert as if they were a single, local system. This is particularly important for training generative AI models, which requires frequent synchronization of model weights and large-scale gradient updates. Cisco's approach minimizes bottlenecks and helps maintain consistency across distributed AI training environments.
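A rough sketch shows why that synchronization traffic is demanding. The model size, gradient precision, and the assumption that the full 51.2 Tbps is devoted to a single transfer are hypothetical; neither Cisco nor the announcement gives workload figures.

```python
# Illustrative only: the model size, gradient precision, and available bandwidth
# below are hypothetical assumptions, not figures from Cisco.
PARAMS = 70e9                # assume a 70B-parameter model
BYTES_PER_GRADIENT = 2       # assume fp16/bf16 gradients
LINK_BPS = 51.2e12           # the 8223's stated 51.2 Tbps, assumed fully available

payload_bytes = PARAMS * BYTES_PER_GRADIENT   # data exchanged per gradient sync
wire_time_s = payload_bytes * 8 / LINK_BPS    # best-case transfer time on the wire

print(f"Gradient payload per sync: {payload_bytes / 1e9:.0f} GB")
print(f"Wire time at 51.2 Tbps   : {wire_time_s * 1e3:.1f} ms")
```

Even in this best case, each synchronization takes tens of milliseconds on the wire, and those delays compound over the millions of training steps a large model requires, which is why the interconnect itself becomes a constraint.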
Cisco's new chip will compete with rival offerings from Broadcom. Initial customers for the P200 include the cloud computing units of Microsoft and Alibaba. Microsoft was an early adopter of Silicon One and says the common ASIC architecture has opened up innovation and options in this space. Alibaba plans to use the new routing chip to extend into its core network, replacing traditional chassis-based routers with a cluster of P200-powered devices.
The 8223 system supports "scale-across" architectures, enabling multiple data centers to work together efficiently. It offers 64 ports of 800G, processes more than 20 billion packets per second, and scales to networks carrying over 3 exabits per second. It also includes deep buffering for traffic surges, 800G coherent optics for data center interconnects of up to 1,000 km, and line-rate encryption with post-quantum resilient algorithms.
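Those headline figures hang together arithmetically, as a quick check shows (treating "more than 20 billion packets per second" as exactly 20 billion for the calculation).

```python
# Quick consistency check of the published 8223 figures.
PORTS = 64
PORT_SPEED_BPS = 800e9
PACKETS_PER_S = 20e9         # "more than 20 billion" taken as exactly 20 billion

aggregate_bps = PORTS * PORT_SPEED_BPS
avg_packet_bytes = aggregate_bps / PACKETS_PER_S / 8

print(f"Aggregate throughput: {aggregate_bps / 1e12:.1f} Tbps")   # 51.2 Tbps
print(f"Implied avg packet  : {avg_packet_bytes:.0f} bytes")      # ~320 bytes
```

The 64 ports of 800G multiply out to exactly the system's 51.2 Tbps, and the implied average packet size of roughly 320 bytes is the point at which the port capacity and the packet-rate limit are saturated at the same time.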