CXL 3.0 Webinar Q&A Recap

Answering questions from the “CXL 3.0: Enabling composable systems with expanded fabric capabilities” webinar

In October, CXL Consortium Technical Task Force Co-Chair Dr. Debendra Das Sharma and MWG Contributor Danny Moore presented a webinar on CXL 3.0 exploring the new features and usage models in the latest specification.

If you missed the live webinar, the recording is available on BrightTALK and YouTube, and the presentation is available for download on the CXL Consortium website here. We received great questions from the audience during the live Q&A but couldn’t address them all on air, so we have answered the remaining questions below.

Q: Can existing shared memory programs use CXL memory to work across hosts?

Yes. Hosts that have a coherent proprietary interface between CPUs can access the CXL-attached memory of a far host in a coherent manner.
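
As a loose illustration (not part of the webinar answer): if the operating system exposes the CXL-attached memory as a device-DAX node, an existing shared-memory program can map it much like any other shared region. The device path and mapping size below are hypothetical.

```c
/* Illustrative sketch only: assumes the OS exposes CXL-attached memory
 * as a device-DAX node (path is hypothetical) that the host can map. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical device-DAX node backed by CXL-attached memory. */
    int fd = open("/dev/dax0.0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Map a 2 MiB region; a far host with a coherent path to the same
     * device sees this memory coherently, per the answer above. */
    size_t len = 2UL << 20;
    void *shared = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (shared == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return EXIT_FAILURE;
    }

    /* Existing shared-memory code can use this region as usual. */
    ((volatile unsigned long *)shared)[0] = 42;

    munmap(shared, len);
    close(fd);
    return EXIT_SUCCESS;
}
```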

Q: Can you elaborate on the CXL 2.0 limitation on a switch/host connecting to more than one device type?

In CXL 2.0, only a single Type 1 or Type 2 device can be supported per virtual hierarchy. With CXL 3.0, we increased that limit to 16.

Q: For the new 256B Flits with two halves, how does FEC come into play?

The FEC is a capability added in PCIe Gen 6. The same FEC applies to the entire Flit across both halves (250B of information + 6B FEC), in the same 3-way interleaved fashion as the PCIe 256B and CXL standard 256B Flit Mode.
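
For reference, a minimal sketch of the byte budget described above; the constant names are our own, and the internal split of the 250B of information is not broken out.

```c
/* Byte budget of the 256B Flit as described above: 250B of information
 * plus 6B of FEC applied across both halves. Names are illustrative. */
#include <assert.h>

enum {
    FLIT_BYTES      = 256,
    FLIT_INFO_BYTES = 250,            /* information covered by the FEC   */
    FLIT_FEC_BYTES  = 6,              /* 3-way interleaved FEC            */
    FLIT_HALF_BYTES = FLIT_BYTES / 2, /* each of the two halves is 128B   */
};

static_assert(FLIT_INFO_BYTES + FLIT_FEC_BYTES == FLIT_BYTES,
              "information + FEC must fill the 256B Flit");
```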

Q: Can you share the expected latency of a fabric infrastructure running at PAM4? What is the expected length and type of cable (copper vs. optical) used for CXL 3.0 switching across racks?

CXL 3.0 uses the PCIe Gen6 electricals, and therefore can use the same lengths and cable types as any other PCIe Gen6-compliant device. With latency-optimized 256B Flits, the latency is expected to be the same as in prior generations.

Q: For a Global Fabric Attached Memory (GFAM) Type 3 device, would the whole directory/snoop filter be only in the GFAM device, or can part of the directory/snoop filter be included in the switches?

The GFAM device on its own will be able to back-invalidate host caches; this functionality must reside in the device itself and is not shared with the CXL 3.0 switch.

Q: Is GFAM replacing LD-FAM?

No, both GFAM and LD-FAM devices exist as part of the CXL 3.0 specification. There are many similarities between GFAM and LD-FAM devices, such as support for up to 8 Dynamic Capacity regions, related parameters in the CDAT, and similar mailbox commands, though with considerably different access methods. Some differences: a GFAM device has a single DPA space, whereas an LD-FAM device has a separate DPA space per host, and an LD-FAM device can support up to 16 hosts, whereas a GFAM device can scale up to 4096 hosts.
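
As a loose sketch of that DPA-space difference (the structures below are invented for illustration and are not the specification’s data model): an LD-FAM device tracks a separate DPA space per attached host, while a GFAM device exposes a single DPA space to all hosts.

```c
/* Illustrative only: invented structures contrasting the DPA-space
 * organization described above; not the specification's actual layout. */
#include <stdint.h>

#define LD_FAM_MAX_HOSTS 16    /* per the answer above */
#define GFAM_MAX_HOSTS   4096  /* per the answer above */

struct dpa_space {
    uint64_t base;             /* device physical address base */
    uint64_t size;             /* size of the DPA range        */
};

/* LD-FAM: one DPA space per attached host. */
struct ld_fam_device {
    struct dpa_space per_host_dpa[LD_FAM_MAX_HOSTS];
};

/* GFAM: a single DPA space shared by every attached host. */
struct gfam_device {
    struct dpa_space dpa;
    uint16_t attached_hosts;   /* up to GFAM_MAX_HOSTS */
};
```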

Q: How is communication achieved between multiple hosts and fabric manager to share Type 1/2 compute devices and Type 3 memory sharing during runtime?

There is no mechanism for FMs and hosts to communicate specifically in-band; this is expected to happen side-band. With respect to Type 3 memory sharing, for dynamic allocation of resources managed by the FM, the communication about changing memory allocation happens between the DCD-capable Type 3 device and the host itself.
