Breaking Boundaries in Memory: Highlights from AI Infra Summit and SDC 2025

By: Anil Godbole, CXL Consortium MWG Chair

 

The demand for AI and data-intensive workloads is reshaping the architecture of modern datacenters. As compute, storage, and interconnect technologies converge, composability has become a central theme in the search for scalable, efficient, and interoperable infrastructure. The CXL Consortium recently participated in the AI Infra Summit and SDC events to highlight the benefits of CXL memory pooling and sharing—a critical enabler for AI workloads.

 

Scaling AI Performance with CXL – Highlights from AI Infra Summit

The CXL Consortium participated in AI Infra Summit to introduce CXL technology and promote CXL’s memory pooling and sharing capabilities. These features bring major advantages to AI workloads by allowing CPUs, GPUs, and accelerators to access a larger, unified pool of memory instead of being limited to the capacity within each device. This improves utilization by reducing stranded memory, enables training of larger models without costly hardware overprovisioning, and lowers data movement bottlenecks through low-latency, cache-coherent access. The result is faster training, more flexible scaling, and a more efficient infrastructure for running AI applications.
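The "stranded memory" effect described above can be illustrated with a toy capacity model (this is not a CXL API, just a sketch under assumed numbers: four hosts with 512 GB each versus one pooled 2 TB region):

```python
# Toy illustration of stranded memory vs. pooled memory.
# With per-host memory, a job must fit within a single host's free
# capacity; with a shared pool, only the total capacity matters.

def stranded_fit(jobs, hosts):
    """Greedy first-fit of job memory demands against per-host capacities."""
    placed = 0
    free = hosts[:]  # don't mutate the caller's list
    for need in jobs:
        for i, cap in enumerate(free):
            if cap >= need:
                free[i] -= need
                placed += 1
                break
    return placed

def pooled_fit(jobs, total):
    """The same jobs placed against one pooled capacity."""
    placed = 0
    for need in jobs:
        if total >= need:
            total -= need
            placed += 1
    return placed

hosts = [512, 512, 512, 512]           # GB of memory per host (assumed)
jobs = [400, 400, 300, 300, 250, 250]  # GB demanded per job (assumed)

print(stranded_fit(jobs, hosts))      # 4 -- two jobs fail even though
print(pooled_fit(jobs, sum(hosts)))   # 6 -- ~600 GB sits stranded across hosts
```

Even though the cluster has 2,048 GB total and the jobs need only 1,900 GB, per-host placement leaves leftover fragments on every host that no remaining job can use; a pooled model places all six jobs.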

Additionally, the CXL Consortium hosted two dedicated sessions during the event. The demo theater presentation, titled “Where can you deploy CXL in an AI Infrastructure?” and featuring CXL Consortium members Liqid, UniFabriX, and XConn Technologies, discussed how companies can deploy CXL in their AI infrastructure, from hardware to software, while ensuring interoperability.

Dr. Debendra Das Sharma, Chair of the CXL Consortium, also led a panel session titled “Composable Infrastructure for AI – The Convergence of Storage, Interconnect, and Compute.” Although the panelists represented distinct domains, from servers and optical switching to interconnect IP, the discussion consistently circled back to a shared need: data centers must evolve toward greater composability to meet emerging workload demands.

The session also made clear that open standards play a key role in the ecosystem. By ensuring interoperability across compute, storage, and interconnects, standards such as CXL provide the framework for scalable, AI-optimized datacenter design.

CXL Takes Center Stage at SDC 2025

The following week, CXL Consortium representatives and our members reconvened at the SNIA Developer Conference (SDC). Representing the CXL Consortium, I presented the benefits of CXL memory pooling, tiering, and sharing for storage applications. Attendees were also keen to explore CXL-based storage applications that can be deployed in data centers.

Building on these discussions, there were multiple sessions across the conference that showcased usage models and applications of CXL:

  • Samsung reinforced the memory hierarchy framework, showing how CXL reshapes the balance between DRAM, storage-class memory, and SSDs. This evolution is particularly relevant for AI workloads requiring fast, large-scale data movement across tiers.
  • Solidigm, representing SNIA’s NVMe standards group, shared a sneak peek into a CXL-based NVMe specification paving the way for SSDs based on CXL. The new specification will create a cache-coherent, addressable memory space on the drive, which promises significant gains in efficiency for reads, writes, and peer-to-peer SSD transfers.
  • Micron presented on how CXL enables disaggregated memory for composable capacity-on-demand and shared memory. The presentation also introduced the Fabric-Attached Memory File System (famfs), which exposes disaggregated shared memory as memory-mappable files backed directly by that shared memory.
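The famfs model above means a consumer can reach fabric-attached shared memory through ordinary file mapping. As a hedged sketch only: the snippet below uses a temporary file to stand in for a famfs-backed file (which in a real deployment would live under a famfs mount point, e.g. an assumed `/mnt/famfs`); the access pattern—plain `mmap()` loads and stores with no `read()`/`write()` copies—is the point being illustrated.

```python
# Sketch: accessing a memory-mappable file the way a famfs consumer would.
# A temp file stands in for the famfs-backed file; with famfs, the mapped
# bytes would be disaggregated shared memory visible to all sharing hosts.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared_region.dat")  # stand-in path
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # with famfs, this region is fabric-attached memory

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as buf:
        buf[0] = 42   # a plain memory store; under famfs, visible to sharers
        print(buf[0]) # a plain memory load -- no read()/write() syscall copies
```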

 

Summary & Closing Thoughts

Real-world deployments of CXL are gaining momentum, with CXL memory pooling and sharing emerging as the primary driver of CXL adoption. As AI workloads continue to grow and data centers demand both flexibility and efficiency, CXL memory pooling and sharing stand out as a prime solution in infrastructure architecture.

Next, CXL Consortium members will participate in the OCP Global Summit to showcase how the technology is transforming data center design by enabling large-scale memory expansion, pooling, and sharing across CPUs, GPUs, and accelerators.

We hope to see you at the event!
