As the industry continues to push the limits of performance, efficiency, and scalability, the role of open standards has never been more critical. That momentum was on full display at this year’s Supercomputing 2025 (SC25) conference, where the CXL Consortium made a major impact by officially announcing the CXL 4.0 specification.
Celebrating the Launch of CXL 4.0
CXL 4.0 increases speed and bandwidth to meet the growing demands of emerging workloads in today’s data centers, introducing key enhancements, including:
- Doubles the bandwidth to 128 GT/s with zero added latency (see the bandwidth sketch after this list)
  - Enables rapid data movement between CXL devices, directly improving system performance
- Maintains the previously enabled CXL 3.x protocol enhancements with the 256B Flit format
- Introduces native x2 width to support increased fan-out in the platform
- Supports up to four retimers for increased channel reach
- Implements CXL bundled port capabilities
  - Allows device ports between a Host and CXL accelerators (Type 1/2 devices) to be combined to increase the bandwidth of the connection
- Delivers memory RAS enhancements
  - Improves reliability, error visibility, and maintenance efficiency
- Retains full backward compatibility with CXL 3.x, 2.0, 1.1, and 1.0
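To put the doubled signaling rate in perspective, here is a minimal back-of-the-envelope sketch, added for illustration and not part of the specification: it computes raw per-direction link bandwidth from transfer rate and lane count, assuming one bit per transfer per lane and ignoring flit and protocol overhead, so real payload throughput will be somewhat lower.

```c
#include <stdio.h>

/* Raw link bandwidth in GB/s per direction: rate_gts transfers/s per lane,
 * assuming 1 bit per transfer per lane and 8 bits per byte. Ignores flit
 * and protocol overhead, so usable payload throughput is lower. */
static double raw_gbps(double rate_gts, int lanes) {
    return rate_gts * lanes / 8.0;
}

int main(void) {
    const int widths[] = {2, 4, 8, 16};
    for (size_t i = 0; i < sizeof widths / sizeof widths[0]; i++) {
        printf("x%-2d  64 GT/s (CXL 3.x): %6.1f GB/s   128 GT/s (CXL 4.0): %6.1f GB/s\n",
               widths[i], raw_gbps(64.0, widths[i]), raw_gbps(128.0, widths[i]));
    }
    return 0;
}
```

At x16 this works out to roughly 256 GB/s per direction, and the new native x2 width at 128 GT/s still matches a x4 link at CXL 3.x rates, which is what makes the increased fan-out practical.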
The Consortium’s technical working groups collaborated closely over the past year to deliver a specification that empowers data center innovation and enables scalability for future usage models. Download the CXL 4.0 white paper for a deep dive into the specification’s new features.

Highlights from the CXL Pavilion
At the CXL Pavilion (Booth no. 817), 14 CXL Consortium member companies showcased demonstrations of how CXL technology is transforming memory architectures for AI and HPC workloads. Check out the videos below to learn more about the demos our members presented during the show.
Birds of a Feather Session: How to Leverage CXL Memory Pooling and Sharing for AI and HPC Workloads
CXL Consortium representatives also brought together over 100 attendees for our Birds of a Feather session to discuss how CXL 2.0 and 3.x enable memory disaggregation and composable infrastructure. Moderated by Anil Godbole, Chair of the CXL Marketing Working Group (MWG), the session explored how memory pooling and sharing reduce overprovisioning, improve utilization, and lower costs. Panelists Kurtis Bowman (AMD), Luis Ancajas (Micron), Yongjin Cho (Panmnesia), and Ronen Hyatt (UnifabriX) shared insights on the performance benefits of memory pooling, MPI over CXL, accelerating GraphDB workloads with shared disaggregated memory, and CPU-centric workloads for in-memory databases. It was great to see such strong interest among attendees in learning more about CXL and how the technology supports memory-intensive workloads.
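The pooling and sharing models discussed in the session ultimately reach software through familiar interfaces: on Linux, CXL-attached memory is commonly exposed as a CPU-less NUMA node. The sketch below is a hypothetical illustration of that path, not something demonstrated in the session; it uses libnuma to place a buffer on such a node, and the node ID is an assumption that will vary by platform (check `numactl -H`).

```c
#include <stdio.h>
#include <string.h>
#include <numa.h>   /* libnuma: compile with -lnuma */

int main(void) {
    /* Hypothetical node ID: CXL-attached memory typically shows up as a
     * CPU-less NUMA node; run `numactl -H` to find the right one. */
    const int cxl_node = 1;
    const size_t size = 1UL << 30;   /* 1 GiB */

    if (numa_available() < 0) {
        fprintf(stderr, "libnuma: NUMA not available on this system\n");
        return 1;
    }
    if (cxl_node > numa_max_node()) {
        fprintf(stderr, "node %d does not exist on this system\n", cxl_node);
        return 1;
    }

    /* Allocate the buffer with its pages bound to the (assumed) CXL node. */
    void *buf = numa_alloc_onnode(size, cxl_node);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    memset(buf, 0, size);   /* touch the pages so they are actually placed */
    printf("1 GiB resident on NUMA node %d (assumed CXL-attached)\n", cxl_node);

    numa_free(buf, size);
    return 0;
}
```

From there, standard NUMA policies let an application keep hot data in local DRAM and place colder structures in pooled CXL memory without changing its data path.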

Ecosystem Momentum: Industry Adoption of CXL
CXL Consortium members also announced CXL deployments during the show. Below are highlights from these announcements.
- Astera Labs’ Leo CXL Smart Memory Controllers on Microsoft Azure M-series Virtual Machines Overcome the Memory Wall
  “Microsoft’s Azure M-series VMs is the industry’s first announced deployment of CXL-attached memory, addressing the growing demands of memory-intensive workloads, including in-memory databases.”
- Azure delivers the first cloud VM with Intel Xeon 6 and CXL memory – now in Private Preview
  “In collaboration with SAP and Intel, Microsoft is delighted to announce private preview of CXL technology on Azure M-series family of VMs. We believe that, when combined with advancements in the new Intel Xeon 6 processors, it can tackle the challenges of managing the growing volume of data in SAP software, meet the increased demand for faster compute performance and reduce overall TCO.”
- SK hynix at SC25: Showcasing Advanced AI Memory From HBM4 to Next-Gen Storage
  “Alongside product exhibits, SK hynix also conducted demonstrations of next-generation solutions, highlighting their applications in future technologies. First, a heterogeneous memory-based system composed of CXL Memory Module-DDR5 (CMM-DDR5) and MRDIMM was demonstrated in collaboration with semiconductor design specialist Montage Technology. This system highlighted the scalability of memory capacity as well as the improvements in overall system performance.”
- XConn Technologies and MemVerge to Deliver Breakthrough Scalable CXL Memory Solution to Offload KV Cache and Prefill/Decode Disaggregation in AI Inference Workloads
  “To address these challenges, XConn and MemVerge have demonstrated a rack-scale CXL memory pooling solution built around XConn’s Apollo hybrid CXL/PCIe switch and MemVerge’s Gismo technology, optimized for Nvidia’s Dynamo architecture and NIXL software stack. The demo showcases how AI inference workloads can offload and share massive KV cache resources dynamically across GPUs and CPUs, achieving greater than 5x performance improvements compared with SSD-based caching or RDMA-based KV cache offloading, while reducing total cost of ownership. The demo particularly shows a scalable memory architecture for AI inference workloads where there is a disaggregation of prefill and decode work stages.”
Conclusion
Thank you to everyone who visited the CXL Pavilion, attended our Birds of a Feather session, and engaged with us throughout the week. At SC25, we highlighted the rapid progress and growth of the CXL ecosystem, from the introduction of CXL 4.0 to significant advancements demonstrated across AI and HPC infrastructures. The CXL Consortium looks forward to continuing this momentum as members bring new solutions to market and accelerate adoption across next-generation systems. Stay tuned for updates by following the CXL Consortium on LinkedIn.
