Modern AI and HPC workloads are pushing traditional system architectures to their limits. Training Large Language Models (LLMs) and running real-time generative AI inference require massive memory capacity, high bandwidth, and low latency across heterogeneous compute environments. Compute Express Link® (CXL®) offers a transformative solution, enabling low-latency, coherent communication across CPUs, GPUs, and memory devices.
At SC25 (Supercomputing 2025), taking place November 17–20 in St. Louis, MO, the CXL Consortium and our member companies are showcasing how CXL memory pooling and sharing can help overcome AI and HPC architectural constraints.
Visit the CXL Pavilion – Booth #817
Stop by the CXL Pavilion (Booth #817) to discover how CXL Consortium members are transforming memory architectures for AI and HPC. The Pavilion will feature the following CXL technology demonstrations:
- AMD: Bringing CXL value to reality with tiered memory systems
- Astera Labs: Accelerate AI Inferencing with Leo CXL Smart Memory Controllers
- Intel: CXL Memory Pooling in Action: Powering High-Performance Analytical Databases
- Lightelligence: Photowave™ CXL Over Optics
- MemVerge: Transparent Checkpoint Operator, SQLite using PMDK, Shared CXL memory with GISMO and NVIDIA Dynamo
- Micron: Acceleration of Graph Analytics via Disaggregated CXL Memory
- Panmnesia: CXL 3.x Switch-Based Memory Pooling and Sharing in Multi-Host Environment
- Rambus, Samtec, and VIAVI: CXL memory expansion over Optics with VIAVI PCIe testing platform & Rambus Controller via Samtec interconnect
- SMART Modular Technologies: CXL Memory Unleashed: Redefining Server Scalability
- Teledyne LeCroy: Live demonstration of CXL 3.x using Teledyne LeCroy Protocol Exerciser/Analyzer with Synopsys IP solution
- UnifabriX: Memory over Fabrics™ – Where AI Meets Memory Innovation
- Wolley: CXL 3.0 Memory Expansion for Client AI Workstation
- XCENA: MX1: CXL Computational Memory for AI & HPC Workload Acceleration
- XConn Technologies: Break Memory Wall of AI Workload with CXL Memory Pool
Join CXL’s Birds of a Feather (BoF) Session
Session: How to Leverage CXL Memory Pooling & Sharing for AI and HPC Workloads
Date and time: Tuesday, November 18, 12:15 – 1:15 pm CT
Join the session to explore how CXL 2.0 and 3.x support memory disaggregation and composable infrastructure, unlocking scalable and flexible deployment of large models and simulations. Attendees will also learn how memory pooling and sharing reduce overprovisioning, improve utilization, and lower costs. We invite system architects, researchers, hardware developers, and operators to discuss real-world CXL adoption, implementation challenges, and opportunities to reshape next-generation AI and HPC systems.
Moderator: Anil Godbole, CXL MWG Chair
Panelists:
- Kurtis Bowman, AMD
- Luis Ancajas, Micron
- Yongjin Cho, Panmnesia
- Ronen Hyatt, UnifabriX
This interactive session is open to all SC25 attendees and offers a great opportunity to engage directly with the experts advancing CXL technology across the industry.
Don’t miss your chance to connect with CXL experts at SC25!
Visit the CXL Pavilion (Booth #817) to see how CXL will improve AI and HPC workloads. Learn more about the Consortium’s activities at SC25 by visiting https://computeexpresslink.org/event/supercomputing-2025/.