By: Anil Godbole and Kurtis Bowman, CXL Consortium MWG Co-Chairs
CXL technology made a huge splash at FMS: the Future of Memory and Storage 2024 as the Consortium and our member companies highlighted the growing CXL ecosystem. The rapid evolution of the CXL specification from 1.1 to 3.X in a short span of four years demonstrates the tremendous interest in this protocol. Today, CXL Consortium member companies are launching products based on the CXL 2.0 specification, as seen in the numerous CXL device demonstrations across the show floor and in the various CXL tracks at FMS.
During the event, the CXL Consortium hosted two sponsored sessions, including a panel on the memory capacity and performance improvements enabled by CXL, featuring representatives from Astera Labs, Micron, Samsung, and SK hynix. Additionally, Consortium member companies Astera Labs, Montage Technology, and Xconn Technologies highlighted various implementations of CXL for AI and HPC workloads. Their presentations are listed below.
- Astera Labs: Importance of Pre-boot Process for CXL Type 3 Devices
- Montage Technology: CXL 2.0 Use Case – Using Both DDR4 & DDR5 on the Same Server to Allow Memory & Bandwidth Scaling
- Xconn Technologies: CXL 2.0 Switch for a Composable Memory System
FMS 2024 also included six CXL speaker tracks with Open and Pro sessions, illustrating the high demand for sharing CXL opportunities, CXL device availability, and CXL use cases. During the presentations, Consortium member companies discussed key CXL features, such as fabric management, memory pooling & tiering, and the use cases for CXL in AI applications. Google and Meta presented a compelling case for CXL memory expansion, compression & decompression, and the re-use of DDR4 DIMMs, driving home the ROI of CXL solutions. Wolley also explored how CXL could make its way into desktops, notebooks, and embedded devices as memory speeds continue to increase and memory expansion via high-pin-count DDR channels becomes undesirable. CXL memory remains a focus for end users, as it is the only memory-semantic interface that allows customers to add memory capacity and bandwidth at memory speeds.
We were excited to see the CXL implementations across the exhibit floor, with Consortium member companies, including Astera Labs, Marvell, MetisX, Micron, MemVerge, and more, sharing CXL demonstrations addressing AI use cases. Many companies also launched memory pooling devices and systems at the event, including MemVerge, Samsung, SK hynix, and Xconn Technologies. The Consortium also showcased CXL video demonstrations in the SNIA Open Standards Pavilion, which are available to view on our website.
We were excited to see that not only is CXL ready for deployment as devices enter the market across the industry, but the momentum behind the ecosystem is flourishing. CXL Consortium member companies are driving CXL devices and solutions forward to enable AI and HPC applications in the data center, which was clearly visible at FMS 2024. We look forward to continuing to highlight the growth of CXL technology in the CXL Pavilion (Booth #1807) during Supercomputing 2024 (SC’24), November 17–22, 2024. We hope to see you there!