The CXL Consortium will be on-site at Supercomputing 2024 (SC24) to demonstrate how CXL technology solves memory bottlenecks for memory-intensive and memory-elastic workloads and enhances AI/ML workloads with expanded memory access. Our member companies are excited to showcase CXL solutions available in the market today that address the performance gap between compute and memory. View the comprehensive list of our activities at SC24 below:
CXL Presentations
- Exhibitor Forum: CXL Consortium Progress Report: Available CXL Devices in the Market
  - Date and time: Tuesday, November 19, 10:30 – 11:00 am ET
  - Speakers: Kurtis Bowman (AMD) and Anil Godbole (Intel)
  - Location: B206
  - Receive the latest update from the CXL Consortium and learn about the growth of the CXL Compliance program, as well as the ecosystem of CXL devices, including IP, operating platforms, storage, system-level products, and more.
- Birds of a Feather: Using CXL to Improve Performance of AI Language Models
  - Date and time: Tuesday, November 19, 12:15 – 1:15 pm ET
  - Speakers: Kurtis Bowman (AMD), Rita Gupta (AMD), Anil Godbole (Intel), and Larrie Carr (Rambus)
  - Location: B204
  - Join our panel of experts to explore the advantages of memory sharing and DRAM improvements for CPU-, GPU-, and combined CPU-and-GPU-based memory applications utilizing AI language models such as RAG and Llama.
CXL Technology Demonstrations
The CXL Consortium will host live CXL demonstrations in the CXL Pavilion (Booth #1807), showcasing products available in the market today from the following member companies:
- Alphawave Semi / Amphenol: 64 Gbps CXL® 3.1 Subsystem IP (PHY + Low Latency Controller) Supporting OSFP-XD Direct Attach Cabling
- AMD: Boosting Workload Scalability: CXL® Memory Tiering powered by AMD EPYC for AI and Beyond
- Astera Labs: Accelerating AI and HPC Workloads with Leo CXL® Smart Memory Controllers
- Cadence: CXL® 3.1 System Demo with Protocol Analysis
- Intel: Enable CXL® Memory with Intel® Xeon® 6 CPUs
- MemVerge: AI Use Cases for Server CXL® Memory Expansion and Shared CXL® Memory
- Microchip Technology Inc.: SMC 2x00 – CXL® 2.0 DDR4/5 Smart Memory Controllers
- Micron Technology: Memory sharing made possible with CZ120 Memory Expansion and Fabric Attached Memory File System (FAM-FS)
- Panmnesia: HPC Applications on Panmnesia’s CXL 3.1 Server Including the Full Hardware/Software Stack
- Phison: Optimizing CXL Connectivity for Faster, Low-Latency Data Transfer and Superior Signal Integrity
- Synopsys / Teledyne LeCroy: Synopsys CXL® 3.x PHY + Controller IP with the Teledyne LeCroy Summit M616 Protocol Exerciser
- Teledyne LeCroy: Industry’s First CXL® Test and Validation Solution
- UnifabriX: Memory Pool: Revolutionizing System Performance for AI and HPC
- VIAVI / Rambus: CXL® 2.0 over Optics with VIAVI PCIe Platform and Rambus Controller via Samtec Interconnect
- Xconn Technologies: Scalable Memory Pooling/Sharing with CXL® 2.0
- ZeroPoint Technologies: Hyperscale Dynamic Compressed Memory Tier
We hope to see you at the Pavilion to explore ready-to-deploy CXL solutions capable of enhancing AI workloads. Follow the CXL Consortium on X and LinkedIn for live updates during the event.
Contact press@computeexpresslink.org to schedule a meeting with CXL Consortium representatives and learn more about the technology.