224 Gigabits per second (Gbps) four-level pulse-amplitude modulation (PAM4) technology is foundational for the hyperscale data centers needed for artificial intelligence (AI) and machine learning (ML) training and deployment.
This article begins by examining the multiple uses of 224 G connectivity in servers and storage devices. It then considers how multiple 224 G lanes are used to support 1.6 Terabits per second (Tbps) networking across data centers and enable new data center architectures.
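As a point of reference, the nominal relationship between the per-lane PAM4 rate and a 1.6 T-class port can be sketched as below. These are nominal line-rate figures that ignore forward error correction (FEC) and encoding overhead, so actual Ethernet payload rates differ.

```python
# Illustrative arithmetic only: nominal PAM4 line-rate relationships.
# The 224 Gbps and 1.6 Tbps figures are nominal; real links carry
# FEC and encoding overhead not modeled here.

LINE_RATE_GBPS = 224          # per-lane PAM4 line rate
BITS_PER_SYMBOL = 2           # PAM4 encodes 2 bits per symbol (4 levels)
LANES = 8                     # lanes aggregated for a 1.6 T-class port

symbol_rate_gbaud = LINE_RATE_GBPS / BITS_PER_SYMBOL   # 112 GBd per lane
aggregate_gbps = LINE_RATE_GBPS * LANES                # 1792 Gbps raw

print(f"Per-lane symbol rate: {symbol_rate_gbaud:.0f} GBd")
print(f"Aggregate raw rate over {LANES} lanes: {aggregate_gbps / 1000:.3f} Tbps")
```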
In servers and storage devices, 224 G is used for near-ASIC connector-to-cable systems, high-performance mezzanine connectivity, pluggable interfaces such as Octal Small Form Factor Pluggable (OSFP), Quad Small Form Factor Pluggable Double Density (QSFP-DD), and Quad Small Form Factor Pluggable (QSFP) for reaching out into the data center, and backplane connectors for direct links to other servers and storage (Figure 1).
Near-ASIC connector-to-cable systems support near-chip and on-package I/O for chip-to-chip connectivity. They maintain high signal integrity, reducing signal loss by up to 10x and eliminating the need for retimers. Models are available that pair with OSFP, QSFP-DD, QSFP, and similar interfaces for reaching out into the data center.
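To illustrate why routing a 224 G signal through a near-ASIC cable assembly rather than a long PCB trace matters, the sketch below compares channel insertion loss using assumed, purely illustrative per-inch loss figures; they are not vendor specifications.

```python
# Minimal sketch comparing channel insertion loss for a routed PCB trace
# versus a near-ASIC cable assembly. The per-inch loss figures below are
# placeholder assumptions for illustration, not measured or vendor data.

PCB_LOSS_DB_PER_IN = 2.0      # assumed stripline loss near the PAM4 Nyquist frequency
CABLE_LOSS_DB_PER_IN = 0.2    # assumed twinax cable loss at the same frequency
REACH_IN = 10                 # assumed escape distance from ASIC to panel, in inches

pcb_loss = PCB_LOSS_DB_PER_IN * REACH_IN
cable_loss = CABLE_LOSS_DB_PER_IN * REACH_IN

print(f"PCB trace loss over {REACH_IN} in: {pcb_loss:.1f} dB")
print(f"Cable assembly loss over {REACH_IN} in: {cable_loss:.1f} dB")
print(f"Reduction factor: {pcb_loss / cable_loss:.0f}x")
```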
Interfaces like OSFP, QSFP-DD, and QSFP can support 224 G in three ways. Direct-attach copper (DAC) provides the lowest cost and power consumption over distances up to 3 meters. Active electrical cables (AEC) reach up to 5 meters, supporting multiple cable configurations with low latency. For longer distances, active optical cabling can carry multiple 224 G lanes and 1.6 T networking across the data center.
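That reach-based media choice can be summarized in a short sketch. The 3 m and 5 m thresholds come from this article; the function name and return strings are illustrative only, not part of any standard.

```python
# A minimal sketch of the reach-based media choice described above.
# Thresholds reflect the distances cited in the article; everything else
# is illustrative naming, not a standardized API.

def select_224g_media(reach_m: float) -> str:
    """Pick an interconnect type for a 224 G lane based on reach in meters."""
    if reach_m <= 3:
        return "DAC (passive copper): lowest cost and power"
    if reach_m <= 5:
        return "AEC (active electrical cable): low latency, flexible configurations"
    return "Active optical / fiber: long reach, supports 1.6 T across the data center"

for distance in (2, 4, 50):
    print(f"{distance} m -> {select_224g_media(distance)}")
```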
Mezzanine connectors for 224 G feature improved impedance control and reduced crosstalk compared with previous generations, with short signal paths and tight impedance tolerances to minimize attenuation. They support modular and scalable system designs like the Open Compute Project (OCP) Open Accelerator Infrastructure (OAI) architecture, which can accommodate up to 8 GPU modules. Switched mezzanine cards (XMC) are high-speed modules that add extra processing power, I/O capabilities, accelerators, or specialized features to speed AI/ML operations.
Backplane connectors for 224 G enable efficient communication and coordination among the many servers and storage devices in a rack, amplifying their ability to run ML algorithms and power AI.
Reimagining data centers with 224 G
224 G technology and 1.6 T connectivity make it possible to reimagine data centers to maximize their utility for AI/ML. Data centers are being disaggregated, with pools of computing, networking, and storage resources housed in separate racks connected by high-speed, low-latency technologies, including next-generation 1.6 T Ethernet (1.6 TbE).
Clusters of servers can be linked to act as a single computing resource, and multiple clusters can be combined to speed the processing of unstructured workloads such as large language models (LLMs) and other ML problems. These models can have trillions of parameters and massive data sets that must be kept in memory and available with low latency.
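A back-of-the-envelope calculation shows why that memory requirement stresses the interconnect. The trillion-parameter count and 16-bit precision below are assumptions chosen for illustration.

```python
# Illustration of why model state must stay in low-latency memory:
# a trillion-parameter model stored at 16-bit precision.
# Parameter count and precision are assumptions, not a specific model.

params = 1_000_000_000_000     # 1 trillion parameters (assumed)
bytes_per_param = 2            # FP16/BF16 weights (assumed)

weight_bytes = params * bytes_per_param
print(f"Weights alone: {weight_bytes / 1e12:.1f} TB")   # ~2 TB

# Moving that state over a single 1.6 Tbps link (= 200 GB/s) takes roughly:
link_bytes_per_s = 1.6e12 / 8
print(f"Transfer time at 1.6 Tbps: {weight_bytes / link_bytes_per_s:.0f} s")
```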
Power consumption is another important factor: an estimated 60% or more of data center power consumption is related to Ethernet connections. The use of 1.6 TbE is expected to cut the power consumed by connectivity by up to 50% and reduce latency by 40%, helping to eliminate bottlenecks. 1.6 TbE will form the backbone of hyperscale data centers (Figure 2).
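Applying those estimates to a hypothetical 10 MW facility gives a sense of scale; the facility size is assumed purely for illustration.

```python
# Worked example of the figures cited above: if Ethernet connectivity
# accounts for ~60% of data center power and 1.6 TbE halves that share,
# overall facility power drops by roughly 30%. The 10 MW facility size
# is a hypothetical assumption.

facility_mw = 10.0            # assumed facility power for illustration
ethernet_share = 0.60         # share of power attributed to Ethernet connectivity
savings_on_ethernet = 0.50    # reduction expected from 1.6 TbE

ethernet_mw = facility_mw * ethernet_share
saved_mw = ethernet_mw * savings_on_ethernet

print(f"Ethernet-related power: {ethernet_mw:.1f} MW")
print(f"Savings with 1.6 TbE: {saved_mw:.1f} MW "
      f"({saved_mw / facility_mw:.0%} of facility power)")
```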
Summary
224 G connectivity and 1.6 T Ethernet are used across hyperscale data centers to support the high-performance computing needed by AI/ML. Their uses begin outside the racks, linking disaggregated resources across the data center, and extend into the servers and storage boxes in the form of OSFP, QSFP-DD, and QSFP interfaces, mezzanine and backplane connectors, and near-ASIC connector-to-cable systems.