
System Considerations for Compute Express Link® Attached Devices – form factors, connectors, backplanes

By: Sandeep Dattaprasad, Technical Application Lead, Microchip Technology Inc.

Introduction

Compute Express Link® (CXL®) addresses the growing memory bandwidth and capacity needs of processors to accelerate high-speed computing applications such as artificial intelligence, cloud computing and machine learning. The industry is quickly transitioning to take advantage of the capabilities enabled by this new protocol, and the fast path to adoption is in no small part based on leveraging the existing PCI Express® (PCIe®) 5.0 physical layer, electricals and infrastructure.
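Because CXL reuses the PCIe 5.0 physical layer, a back-of-the-envelope bandwidth figure for a CXL link follows directly from the PCIe 5.0 signaling rate. The short Python sketch below works through that arithmetic; the 32 GT/s rate and 128b/130b encoding are PCIe 5.0 parameters, the lane widths are example configurations, and protocol overhead is ignored.

```python
# Rough per-direction link bandwidth for a CXL device on PCIe 5.0 electricals.
# 32 GT/s per lane and 128b/130b encoding are PCIe 5.0 parameters; flit/packet
# overheads are ignored, so these numbers are upper bounds, not measured throughput.

RAW_RATE_BPS = 32e9          # 32 GT/s raw signaling rate per lane
ENCODING_EFF = 128 / 130     # 128b/130b line-encoding efficiency

def link_bandwidth_gbps(lanes: int) -> float:
    """Approximate one-direction payload bandwidth in GB/s for a given lane count."""
    return RAW_RATE_BPS * ENCODING_EFF * lanes / 8 / 1e9

for lanes in (4, 8, 16):
    print(f"x{lanes}: ~{link_bandwidth_gbps(lanes):.1f} GB/s per direction")
# x4: ~15.8   x8: ~31.5   x16: ~63.0
```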

Designing large-scale systems that utilize this emerging standard to optimize capacity, performance and power within specific area constraints requires careful selection of memory module form factors, taking into consideration:

  • An interface that supports high bandwidth

  • Dimensions that support optimal capacity

  • Support for power dissipation by allowing proper airflow

  • Standardization for multi-sourcing

  • Modularity to scale with application needs

This article explores some of these design aspects.

Memory Technologies

A CXL-based application has several advantages, including higher performance through sharing of memory resources, memory coherency between the CPU memory space and memory on attached devices, and reduced software stack complexity. Previously, the industry used Unbuffered DIMMs (UDIMMs) and Registered DIMMs (RDIMMs) for memory applications.

UDIMMs are usually used in lower-end servers and are the most cost-efficient, but they are not ideal for higher speeds due to high loading on the command/address signals, and they can support only up to two DIMMs Per Channel (DPC). RDIMMs have a register on the DIMM that buffers the address and command signals between each of the DRAMs on the DIMM and the memory controller. This not only makes it possible to place many more DRAMs on a DIMM but also improves the signal integrity of the module, making it suitable for the speeds at which CXL operates. All these factors make memory modules in the RDIMM form factor a good fit for CXL applications. As an example, a standard RDIMM can be connected to a CXL-based memory controller on a motherboard, or it can be plugged into a CXL riser card.
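To put the channel limits above in concrete terms, the illustrative sketch below estimates how much memory sits behind a CXL memory controller for a given channel count, DIMMs per channel, and module density; the 2 DPC figure echoes the discussion above, while the channel count and module density are hypothetical example values, not figures from any particular product.

```python
# Illustrative capacity estimate for RDIMMs behind a CXL memory controller.
# The 2 DIMMs-per-channel (DPC) value echoes the discussion above; the channel
# count and per-module density are hypothetical examples only.

def attached_capacity_gb(channels: int, dimms_per_channel: int, module_gb: int) -> int:
    """Total memory capacity attached through the controller, in GB."""
    return channels * dimms_per_channel * module_gb

# Example: a CXL expander driving two DDR5 channels populated with 2 x 64 GB RDIMMs each.
print(attached_capacity_gb(channels=2, dimms_per_channel=2, module_gb=64))  # 256 GB
```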

Despite the advantages of RDIMM modules, as DRAM speeds continue to increase (up to 6400 MT/s for DDR5), the number of pins per channel remains quite large (380 pins for DDR5). The Differential DIMM (DDIMM) is a newer memory module form factor with a significantly lower pin count (84 pins); it enables a data throughput of 25.6GB/s at a latency of 40ns, supports densities up to 256GB, and allows CPUs to attach point-to-point to accelerators and I/O devices. Reducing pin count while increasing the supported data rate can be extremely beneficial for CXL-based applications in a standard server environment.
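One way to see why the lower pin count matters is to compare bandwidth per interface pin using the figures quoted above. The sketch below does that; the RDIMM peak assumes a 64-bit data path at 6400 MT/s and ignores ECC and protocol efficiency, so both numbers are rough upper bounds.

```python
# Rough bandwidth-per-pin comparison using the figures from the text.
# The RDIMM peak assumes a 64-bit data path at 6400 MT/s (8 bytes x 6400 MT/s),
# ignoring ECC bits and protocol overhead; the DDIMM figures are as quoted above.

modules = {
    "DDR5 RDIMM": {"pins": 380, "peak_gbs": 6400e6 * 8 / 1e9},  # ~51.2 GB/s
    "DDIMM":      {"pins": 84,  "peak_gbs": 25.6},
}

for name, m in modules.items():
    per_pin_mbs = m["peak_gbs"] / m["pins"] * 1000
    print(f"{name:10s}: {m['peak_gbs']:.1f} GB/s over {m['pins']} pins "
          f"-> ~{per_pin_mbs:.0f} MB/s per pin")
# Under these assumptions, the DDIMM delivers more than twice the bandwidth per pin.
```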

CXL Form Factors

As the speed of interfaces increases, there are several challenges in designing backplanes, such as choosing an appropriate form factor for drives and connectors that supports bandwidth requirements, meets signal integrity needs, leverages existing designs, meets the power budget, and helps improve airflow.

The Enterprise and Data Center SSD Form Factor (EDSFF) is another existing form factor for NVM Express™ (NVMe™) SSDs. It supports drop-in cage replacement, uses a much smaller connector that allows the backplane design to be leveraged from PCIe for both 1U and 2U servers, and significantly reduces 2U airflow needs, leading to lower cooling costs.

EDSFF offers several options targeted at specific types of end applications (summarized in the sketch after this list):

  • E1.S is a small form factor that supports thicknesses between 5.9mm and 25mm. It is ideal for hyperscale and enterprise compute nodes and storage, accommodating more NAND packages for increased capacity per drive compared to legacy SSD form factors.

  • E1.L is a form factor developed to maximize capacity per drive and per rack unit in storage arrays (JBOD, JBOF), with superior manageability, serviceability, and thermal characteristics versus traditional form factors such as U.2 that were designed for rotating media. It supports x4 or x8 lanes of PCIe with a maximum thickness of 18mm.

  • The E3 form factor offers various options for length and height in the form of E3.S and E3.L, meant for 2U vertical orientation or 1U horizontal orientation. E3 devices support hot plugging, come in thicknesses from 7.5mm to 18mm to maximize drives per system, and support a x4, x8, or x16 PCIe host interface.
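As a quick reference, the variants described above can be folded into a small lookup table and filtered against a design requirement. The sketch below does that; the thicknesses and lane counts mirror the list above (the E1.S lane count is not stated there, so it is left unknown), and the x16 requirement is just an example query.

```python
# Small lookup of the EDSFF variants described above.
# Thicknesses and lane counts mirror the list; the E1.S lane count is not stated
# above, so it is recorded as None (unknown) rather than guessed.

edsff_variants = {
    "E1.S": {"thickness_mm": "5.9-25", "pcie_lanes": None},
    "E1.L": {"thickness_mm": "<=18",   "pcie_lanes": (4, 8)},
    "E3":   {"thickness_mm": "7.5-18", "pcie_lanes": (4, 8, 16)},
}

def supports_lanes(variant: dict, lanes_needed: int) -> bool:
    """True if the variant's host interface width (where known) covers the requirement."""
    lanes = variant["pcie_lanes"]
    return lanes is not None and lanes_needed in lanes

# Example query: which variants above offer a x16 host interface?
print([name for name, v in edsff_variants.items() if supports_lanes(v, 16)])  # ['E3']
```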

Choosing an appropriate connector on the backplane that can support CXL speeds is extremely important as well. The Mini Cool Edge IO (MCIO) connector is a next-generation connector type with a 0.60mm pitch and a slim form factor, capable of transmitting high-speed signals at up to 56Gb/s. It has been widely adopted as the preferred connector for PCIe Gen 5 internal cabling in servers, since it allows much greater signal path lengths while maintaining signal integrity compared to conventional PCB routing. It is cost-effective, highly modular, scalable, and easy to repair, making it a very good choice for CXL applications.
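Since the connector is rated well above the PCIe 5.0 signaling rate that CXL uses today, a quick headroom check is straightforward. The sketch below does that arithmetic; the 56Gb/s figure is the per-lane rating quoted above, 32 GT/s is the PCIe 5.0 rate, and the cable widths are example configurations.

```python
# Headroom check: MCIO's quoted 56 Gb/s per-lane rating vs. the 32 GT/s
# PCIe 5.0 signaling rate that CXL runs over. Cable widths are examples only.

MCIO_RATED_GBPS = 56.0   # per-lane rating quoted above
PCIE5_RATE_GTPS = 32.0   # PCIe 5.0 signaling rate per lane

print(f"Per-lane headroom: {MCIO_RATED_GBPS / PCIE5_RATE_GTPS:.2f}x")  # 1.75x

for lanes in (4, 8):
    print(f"x{lanes} cable: {lanes * PCIE5_RATE_GTPS:.0f} GT/s aggregate signaling, "
          f"rated for up to {lanes * MCIO_RATED_GBPS:.0f} Gb/s")
```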

Conclusion

For designing CXL systems that require memory pooling and expansion, several existing memory module form factors, including RDIMMs and DDIMMs, meet the bandwidth and capacity needs. The choice of memory module form factor should be based on the end application's performance, latency, loading and space constraints.

Lastly, choosing the EDSFF form factor for drives along with MCIO connectors for backplanes helps leverage existing PCIe infrastructure while taking advantage of the performance enhancements of CXL.

References

For more information on CXL technology, visit our Resource Library for white papers, presentations and webinars.
