The “CXL 1.1 vs. CXL 2.0 – What’s the difference?” webinar shared an overview of the CXL 1.1 specification and explored the enhancements made in CXL 2.0, focusing on switching, memory pooling, Single Logical Devices (SLD) vs. Multiple Logical Devices (MLD), and fabric management. The webinar also covered managed hot-plug, memory QoS telemetry, speculative reads, and security enhancements.
If you missed the live webinar, the recording is available on BrightTALK and YouTube. The presentation is also available for download on the CXL Consortium’s Resource Library page. We received great questions from the audience during the live Q&A. Below is a recap of the questions and answers discussed in the webinar.
Q: Does the CXL 2.0 memory device connected to the CXL 1.1 host appear as a root complex integrated device or PCI Express® (PCIe®) endpoint device?
It will appear as a Root Complex Integrated Endpoint (RCiEP). Every CXL 2.0 device must also support communicating in CXL 1.1 mode.
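How a device enumerates is visible in its PCI Express Capabilities Register, whose Device/Port Type field (bits [7:4]) distinguishes an RCiEP from an ordinary PCIe endpoint. Below is a minimal sketch that decodes that field; the encodings come from the PCIe Base Specification, and the example register value is illustrative, not taken from a real CXL device.

```python
# Decode the Device/Port Type field (bits [7:4]) of the 16-bit PCI Express
# Capabilities Register, found at offset 0x02 of the PCIe capability
# structure. Encodings per the PCIe Base Specification.

DEVICE_PORT_TYPES = {
    0b0000: "PCI Express Endpoint",
    0b0001: "Legacy PCI Express Endpoint",
    0b0100: "Root Port of PCI Express Root Complex",
    0b0101: "Upstream Port of PCI Express Switch",
    0b0110: "Downstream Port of PCI Express Switch",
    0b1001: "Root Complex Integrated Endpoint (RCiEP)",
}

def device_port_type(pcie_cap_reg: int) -> str:
    """Return the Device/Port Type name for a PCI Express
    Capabilities Register value."""
    return DEVICE_PORT_TYPES.get((pcie_cap_reg >> 4) & 0xF, "reserved/other")

# Hypothetical register value with Device/Port Type = 1001b: the device
# enumerates as an RCiEP, as a CXL 2.0 memory device does behind a 1.1 host.
print(device_port_type(0x0092))
```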
Q: In a Virtual CXL Switch (VCS), is there only 1 host?
A classic VCS supports a single host and multiple endpoints (CXL 2.0, CXL 1.1, or PCIe).
The specification defines that CXL switches, whether supporting a single VCS or multiple VCSs, must provide exactly one upstream port (USP, the port to the host) per VCS. Other architectures that combine the domains of the respective VCSs are possible with additional coherency buses between hosts or devices. One example is shared memory that uses a coherency bus to achieve 3-, 6-, 12-, or 16-way interleaving, as shown in the CXL 2.0 interleave ECN.
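The interleave ways mentioned above differ in how an address selects its target: power-of-two ways can be decoded from address bits directly, while the 3-, 6-, and 12-way configurations added by the interleave ECN require modulo arithmetic. The sketch below is a simplified illustration of that distinction, not the spec’s exact HDM decoder logic.

```python
# Simplified model of CXL memory interleaving: map an address to the index
# of the memory target that owns it. Power-of-two ways reduce to a bit
# select; 3-, 6-, and 12-way need a modulo. Illustrative only — not the
# CXL HDM decoder's exact bit-level algorithm.

def interleave_target(addr: int, ways: int, granularity: int) -> int:
    """Return the target index owning `addr` for a given interleave
    granularity in bytes (e.g. 256)."""
    chunk = addr // granularity          # which interleave chunk addr is in
    if ways & (ways - 1) == 0:           # power of two: simple bit select
        return chunk & (ways - 1)
    return chunk % ways                  # 3-, 6-, 12-way: modulo decode

# 6-way interleave at 256 B granularity: consecutive chunks rotate through
# the six targets and wrap around.
print([interleave_target(a, ways=6, granularity=256) for a in range(0, 2048, 256)])
```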
Q: After removing a device with Global Persistent Flush (GPF), will you get the removed device’s info from the OS automatically?
GPF is a hardware mechanism that flushes all non-persistent data to a persistent destination in the same CXL domain. That destination can be the same device or another device in the same coherency domain. The use cases and mechanisms by which hosts are notified and coordinated across the CXL Root Ports are outside the scope of the CXL 2.0 specification, but we can look at a few common scenarios.
For a storage expansion unit, GPF would be used to ensure that all in-flight data, whether newly written or already reported as written (while still in cache or memory), either reaches the persistent destination or finds its way to another persistent location along the way.
For cache or memory, GPF is seamless, as it amounts to a flush of any cache and internal memory to a local or remote persistent destination.
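The flush described above proceeds in two phases in CXL 2.0: caches are flushed first, then buffered writes are committed to persistent media, so no cached line is stranded when buffers drain. The sketch below models that ordering; the class and method names are illustrative, not a real GPF API.

```python
# Illustrative model of GPF's two-phase flow: Phase 1 flushes volatile
# caches into the write pipeline, Phase 2 commits buffered writes to the
# persistent destination. All names here are hypothetical.

class GpfDevice:
    def __init__(self, name):
        self.name = name
        self.cache = []          # volatile cached writes
        self.write_buffer = []   # writes acknowledged but not yet persistent
        self.persistent = []     # durable destination

    def phase1_flush_caches(self):
        # Move cached lines into the device's write pipeline.
        self.write_buffer.extend(self.cache)
        self.cache.clear()

    def phase2_commit(self):
        # Drain buffered writes to the persistent destination.
        self.persistent.extend(self.write_buffer)
        self.write_buffer.clear()

def global_persistent_flush(devices):
    # Every device completes Phase 1 before any device starts Phase 2,
    # so no cached data is left behind when buffers are committed.
    for d in devices:
        d.phase1_flush_caches()
    for d in devices:
        d.phase2_commit()

dev = GpfDevice("mem0")
dev.cache = ["line A"]          # dirty in cache
dev.write_buffer = ["line B"]   # already acknowledged to the host
global_persistent_flush([dev])
print(dev.persistent)           # both writes are now durable
```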
Compute Express Link® and CXL® Consortium are trademarks of the Compute Express Link Consortium.