The CXL Consortium is looking forward to returning to Supercomputing 2025 (SC’25) to highlight the benefits of CXL technology for AI and HPC applications.
Birds of a Feather Presentation: How to Leverage CXL Memory Pooling and Sharing for AI & HPC workloads
- Date and time: Tuesday, November 18, 12:15 – 1:15 pm PT
- Moderator: Anil Godbole (CXL MWG Chair)
- Panelists: Kurtis Bowman (AMD), Luis Ancajas (Micron), Yongjin Cho (Panmnesia), and Ronen Hyatt (UnifabriX)
- Location: 275
Visit the CXL Pavilion in booth no. 817 to view live CXL technology demonstrations.
- AMD: Bringing CXL value to reality with tiered memory systems
The demonstration previews the next-gen AMD EPYC™ processor, showcasing industry-leading interoperability with a CXL Gen6 memory device and pushing the boundaries of advanced data center performance.
- Astera Labs: Accelerate AI Inferencing with Leo CXL Smart Memory Controllers
Astera Labs and partners will showcase how CXL-attached memory accelerates AI inferencing through significant improvements in memory capacity, bandwidth, and GPU utilization. Featuring the Leo Smart Memory Controller, the demo will show how CXL optimizes AI workloads with scalable, high-performance CXL-attached DDR memory. We’ll also feature advanced COSMOS fleet management and diagnostic software for telemetry & RAS that enables cloud-scale deployment of CXL-attached memory for AI infrastructure.
- Intel: CXL Memory Pooling in Action: Powering High-Performance Analytical Databases
This demonstration showcases the deployment of CXL technology to enhance memory scalability and efficiency in modern, data-intensive server infrastructures. The setup integrates four Intel Granite Rapids-AP servers with a CXL switch connected to 22 Micron CZ122 memory expansion devices, collectively forming a 5.6 TB shared memory pool accessible by all participating servers.
- Lightelligence: Photowave™ CXL Over Optics
Lightelligence will demonstrate the benefits of CXL-based memory expansion to improve workload efficiency using the Photowave™ CDFP Active Optical Cables. Learn more by visiting: https://lightelligence.ai/index.php/product/photowave.html.
- MemVerge: Transparent Checkpoint Operator, SQLite using PMDK, Shared CXL memory with GISMO and NVIDIA Dynamo
This demonstration showcases how CXL 2.0 memory sharing delivers a unified, low-latency data and KV-Cache fabric for AI frameworks like NVIDIA Dynamo and NIXL. The MemVerge GISMO NIXL plugin enables CXL as a new transport, streamlines the model, and provides KV-Cache access, substantially lowering latency compared to legacy solutions. This architecture drives more efficient GPU utilization, faster AI inference for prefill and decode, and simpler rack-scale deployment in modern data centers.
- Micron: Acceleration of Graph Analytics via Disaggregated CXL Memory
Micron and Pometry have partnered to showcase a breakthrough in accelerating large-scale graph analytics workloads using disaggregated CXL memory. These workloads often exceed the memory capacity of conventional servers, creating bottlenecks in performance and scalability. This demo highlights how dynamically attachable pools of near memory—enabled by Micron’s H3 Falcon CXL-based disaggregated memory system and the Linux-supported FAMFS (Fabric-Attached Memory File System)—can deliver up to 20× performance gains for graph databases.
- Panmnesia: CXL 3.x Switch-Based Memory Pooling and Sharing in Multi-Host Environment
Panmnesia will showcase a demo utilizing a framework that includes Panmnesia’s CXL 3.x Switches. It illustrates how memory sharing and memory pooling in a multi-host environment can optimize resource utilization and performance for large-scale computing workloads such as AI, HPC, and Cloud applications. We expect that this demo, which accelerates real-world applications aligned with the interests of SC attendees, will contribute to the adoption of CXL by demonstrating its practical use cases.
- Rambus, Samtec, and VIAVI: CXL memory expansion over Optics with VIAVI PCIe testing platform & Rambus Controller via Samtec interconnect
This demonstration illustrates the potential of CXL over optics as a key solution to meet the bandwidth demands of heterogeneous, distributed data center architectures. Optical interconnects offer significant advantages, including extended reach, reduced latency, and efficient resource sharing across multiple servers. This demo showcases a Rambus CXL Controller IP instantiated in an endpoint device connected to a VIAVI Exerciser using Samtec FireFly optic cable technology, effectively creating a remote “CXL Memory Expansion” block.
- SMART Modular Technologies: CXL Memory Unleashed: Redefining Server Scalability
Penguin Solutions will showcase a CXL 2.0 memory expansion demo at SC25, featuring up to 2TB of ultra-fast DDR5 via SMART’s 4-DIMM CXL Add-In Cards in Dell’s Wichita-OT-C1 2U server. Experience lightning-fast 64GB/s PCIe Gen5 bandwidth and unlock breakthrough scalability for AI/ML, HPC, in-memory database, and virtualization workloads. See how SMART breaks the limits of server memory and enables next-gen composability for data center and enterprise environments.
- Teledyne LeCroy: Live demonstration of CXL 3.x using Teledyne LeCroy Protocol Exerciser/Analyzer with Synopsys IP solution
A live demonstration of real CXL 3.x traffic using a Teledyne LeCroy Summit M616 Protocol Analyzer and Exerciser connected to the Synopsys CXL 3.x IP solution.
- UnifabriX: Memory over Fabrics™ – Where AI Meets Memory Innovation
UnifabriX will showcase its advanced Memory Pooling and Memory Sharing technologies, designed to enhance AI workload performance. The solution enables dynamic, high-bandwidth memory allocation across heterogeneous compute and AI systems, unlocking new levels of efficiency, scalability, and responsiveness in data center environments.
- Wolley: CXL 3.0 Memory Expansion for Client AI Workstation
Wolley will showcase an E1.S CXL Memory Module that breaks the memory barrier that plagues LLM applications. This CXL memory module delivers an additional 32 GB/s to 64 GB/s of bandwidth over the CXL protocol while expanding the effective memory capacity through DRAM and SSD virtualization. The result is a single plug-in card that unifies memory and storage, eliminates PCIe congestion, and reduces total cost of ownership, empowering on-prem systems to run long-context LLM inference smoothly and at scale.
- XCENA: MX1: CXL Computational Memory for AI & HPC Workload Acceleration
XCENA will showcase its MX1 CXL 3.2 Type-3 Computational Memory, demonstrating how its key capabilities accelerate AI and HPC workloads, with live demonstrations available at the XCENA booth. The demos feature two core capabilities: Near-Data Processing and Infinite Memory. The first demo shows how the MX1 accelerates database analytics by offloading compute tasks directly to its built-in RISC-V cores, while the second demonstrates how the MX1 expands system memory capacity to petabyte scale by attaching SSDs. Together, these demos highlight how the MX1 enables scalable, data-centric computing for AI and HPC workloads.
For more information, visit www.xcena.com.
- XConn Technologies: Break Memory Wall of AI Workload with CXL Memory Pool
XConn Technologies will demonstrate Memory Pooling and Memory Sharing with the Apollo 256-lane switch together with its partner MemVerge. XConn will also demonstrate CXL DCD (Dynamic Capacity Devices) for dynamic server memory allocation.
Follow the CXL Consortium via LinkedIn for updates!