CXL Consortium member companies Astera Labs, Samtec, and UnifabriX recently hosted a webinar exploring the use cases and implementations of CXL technology. Ahmed Medhioub (Astera Labs), Matthew Burns (Samtec), and Oren Benisty (UnifabriX) each highlighted their company’s CXL solutions based on the CXL 1.1 and 2.0 specifications available in the market today as well as products in development based on the CXL 3.x specification. Our presenters also answered audience questions about these solutions and discussed the benefits of CXL technology for modern applications.
Watch On-Demand or Download the Slides
If you were not able to attend the live webinar, the recording is available via YouTube and BrightTALK, and the webinar presentation slides are available for download on the CXL Consortium website.
Webinar on YouTube | Webinar on BrightTALK | Download Slides
Answers to Q&A
We received great questions from the audience during the live Q&A but were not able to address them all during the webinar. Below are answers to the questions we didn't get to:
Q: Memory provisioning on the fly for multiple hosts is complicated. How does UnifabriX address this challenge?
UnifabriX tackles the complexity of real-time memory provisioning for multiple hosts with its cutting-edge technology and intuitive GUI. Here’s an overview:
- UnifabriX has engineered a CXL-based Memory Pool device specifically designed for Data Centers and High-Performance Computing (HPC). The device overcomes the capacity and bandwidth constraints of socket-connected DRAM.
- The memory pool can be segmented into several tiers, each offering enhanced capacity and bandwidth. Each host can be allocated memory either statically through the GUI or on demand via an API, enabling multiple hosts to connect to the memory device and access their own CXL memory resources (see the sketch after this list).
- UnifabriX facilitates dynamic memory expansion for servers by assigning them cache-coherent remote memory from the CXL pool.
- UnifabriX adopts a performance-centric approach, unlocking the speed, density, and scale of Data Center infrastructure, thereby enhancing the performance of HPC, AI/ML, and storage systems.
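To illustrate the on-demand allocation path mentioned above, here is a minimal Python sketch of how a host-side orchestrator might request capacity from a pool manager over a management API. The endpoint, payload fields, and MemoryGrant type are hypothetical illustrations only; UnifabriX's actual API may differ.

```python
# Hypothetical sketch: request cache-coherent memory for a host from a
# CXL memory pool manager over a management API. Endpoint and field
# names are invented for illustration, not a vendor API.

import json
from dataclasses import dataclass
from urllib import request


@dataclass
class MemoryGrant:
    """Result of an on-demand allocation from the pool."""
    host_id: str
    capacity_gib: int
    tier: str


def request_pool_memory(api_url: str, host_id: str, capacity_gib: int,
                        tier: str = "bandwidth-optimized") -> MemoryGrant:
    """Ask the pool manager to assign memory capacity to a host."""
    payload = json.dumps({
        "host_id": host_id,
        "capacity_gib": capacity_gib,
        "tier": tier,
    }).encode("utf-8")
    req = request.Request(
        f"{api_url}/allocations",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return MemoryGrant(host_id=body["host_id"],
                       capacity_gib=body["capacity_gib"],
                       tier=body["tier"])


if __name__ == "__main__":
    grant = request_pool_memory("http://pool-manager.local/api/v1",
                                host_id="host-42", capacity_gib=256)
    print(f"{grant.host_id} granted {grant.capacity_gib} GiB "
          f"from the {grant.tier} tier")
```

In a deployment, the same allocation could instead be made statically through the GUI; the API path simply lets orchestration software grow or shrink a host's memory without operator intervention.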
Q: What are potential concerns in terms of RAS, security, and management associated with CXL implementations, and how can they be mitigated?
For RAS, the main concerns we see are memory reliability, stability, and repair at both runtime and pre-boot. For manageability, the ecosystem still has work to do to harden dynamic memory allocation and deallocation. Lastly, for security, the concern is protecting data in transit and at rest within shared memory environments. Mitigating these gaps takes a combination of device features and ecosystem alignment initiatives: implementing robust error correction and detection mechanisms, validating robustness by testing DIMMs from all vendors, applying encryption techniques to secure data at each stage across all devices in the memory pool, and, when needed, using redundant paths and failover mechanisms to enhance availability.
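To make the last point concrete, here is a minimal Python sketch of the failover idea: try a primary path into a shared memory pool and fall back to a redundant path on error. The PoolPath and read_block names are hypothetical and stand in for whatever fabric access mechanism a real system exposes.

```python
# Illustrative sketch only: fall back to a redundant path when the
# primary path to a shared memory pool fails. All names are hypothetical.

class PathError(Exception):
    """Raised when a fabric path fails to complete a request."""


class PoolPath:
    """Hypothetical handle to one fabric path into a shared memory pool."""

    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def read_block(self, offset: int, length: int) -> bytes:
        if not self.healthy:
            raise PathError(f"path {self.name} is down")
        # A real device would return data read over the fabric; faked here.
        return bytes(length)


def failover_read(paths: list[PoolPath], offset: int, length: int) -> bytes:
    """Try each redundant path in order until one succeeds."""
    last_error = None
    for path in paths:
        try:
            return path.read_block(offset, length)
        except PathError as exc:
            last_error = exc  # record the failure and try the next path
    raise RuntimeError("all paths to the memory pool failed") from last_error


if __name__ == "__main__":
    primary = PoolPath("primary", healthy=False)
    standby = PoolPath("standby")
    data = failover_read([primary, standby], offset=0, length=64)
    print(f"read {len(data)} bytes via a redundant path")
```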
Q: What advantages do mid-board optics provide over optical transceiver MSA standard solutions?
Optical transceiver MSA standard solutions are the bedrock of front panel connectivity in the data center. They are cost-effective, with known form factors, consistent supply chains, and wide adoption across the industry. However, the disruption from AI/ML and disaggregated computing calls for new topologies within the rack, rack to rack, and for GPU clustering. Mid-board optics provide improved signal integrity (SI), increased density, and additional flexibility for system architects of emerging topologies supporting CXL.
Q: Can you expand on what use cases may use optical CXL interconnect?
We see immediate interest in optical CXL connectivity for Type 1, Type 2, and Type 3 Devices. Pooling of resources, whether inside a rack or rack to rack, will require optical connectivity given the distances signals have to travel. The same applies to composable fabrics. Optical CXL connectivity is required to fully realize the benefits of CXL technology.
Q: What is the hardware interface between the hosts and the memory pool? Is it a PCIe interface or is there another interface?
That’s an interesting and complex question. The CXL protocols leverage the physical and electrical interfaces defined by the PCIe specifications they align to; for example, CXL 1.1/2.0 aligns with PCIe 5.0 and CXL 3.1 aligns with PCIe 6.0. The hardware interface between the host and the memory pool can leverage any number of topologies defined by standards bodies such as PCI-SIG, SNIA, OCP, and others. We already see CXL Memory Modules in an E3.S 2T form factor leveraging PCIe 5.0 interfaces on the market, and PCI-SIG CEM AIC form factors are very popular. The new PCI-SIG® CopprLink™ Cable Specification introduces internal and external cable options as well. Optical CXL interconnects are also in play here. Ultimately, the hardware interface between the host and memory pool will depend on the use case and topology requirements.
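As an illustration of how such devices surface to software, the following Python sketch simply lists whatever the kernel has enumerated on the CXL bus. It assumes a Linux host whose CXL driver exposes devices under /sys/bus/cxl/devices; the exact set of entries and attributes varies by kernel version, so the sketch only prints what is present.

```python
# Sketch: list CXL devices a Linux kernel has enumerated, assuming CXL
# driver support that populates /sys/bus/cxl/devices. Output varies by
# kernel version and platform.

from pathlib import Path

CXL_SYSFS = Path("/sys/bus/cxl/devices")


def list_cxl_devices() -> None:
    """Print every device the kernel has registered on the CXL bus."""
    if not CXL_SYSFS.exists():
        print("No CXL bus found; the kernel may lack CXL support "
              "or no CXL devices are present.")
        return
    for dev in sorted(CXL_SYSFS.iterdir()):
        # Each entry is a symlink into the underlying device hierarchy
        # (memory devices hang off their PCIe parent), reflecting that
        # CXL reuses the PCIe physical and electrical interface.
        print(f"{dev.name:12s} -> {dev.resolve()}")


if __name__ == "__main__":
    list_cxl_devices()
```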
Related Links on CXL Consortium Website