Visit the CXL Pavilion in booth #1807 to view CXL technology demonstrations.
CXL Consortium representatives will provide a Consortium update and highlight the benefits of CXL technology for AI and HPC applications during the following session:
Birds of a Feather presentation: Using CXL to improve performance of AI language models
- Date and time: Tuesday, November 19, 12:15 – 1:15 pm ET
- Speakers: Kurtis Bowman (AMD), Mahesh Wagh (AMD), Anil Godbole (Intel), and Larrie Carr (Rambus)
- Location: B204
- Description: Increased demand for AI applications highlights the “memory wall” obstacle – a bottleneck in memory capacity and bandwidth. CXL facilitates memory sharing between accelerators and GPUs while enabling direct-attached memory (e.g., DRAM) on any node, improving memory bandwidth, performance, and capacity for AI language models. This session will explore the advantages of memory sharing and DRAM improvements for CPU, GPU, and CPU-plus-GPU memory applications utilizing AI language models, such as RAG and Llama. Attendees will learn about the performance, cost, and power consumption benefits of DRAM and CXL memory modules.