We recently presented the “Memory Challenges and CXL Solutions” webinar, where we explored the current trends and challenges of memory and shared how CXL can address the challenges that designers are facing with emerging applications. We received many great questions during the Q&A portion, but we ran out of time to address them all. We have provided answers to the questions below.
If you were unable to attend the live webinar, the full recording is available on BrightTALK and YouTube. Also, the presentation is available for download here. If you would like information on future webinars, please register on the CXL Consortium BrightTALK Channel, and follow us on Twitter and LinkedIn for updates.
Q: Will storage devices such as SSDs require “native” CXL controller support or can they use the PCIe® interface?
A: Since SSDs are block devices (and not random-access load/store devices), they don’t need CXL controllers. PCIe will continue to be used in the same way as today.
Q: Does CXL address memory interleaving administrative controls or is that the choice of CXL device vendors?
A: Memory interleaving choices and implementations are outside the scope of the CXL specification.
Q: How will applications deal with different latencies for different memory types?
A: Most likely, applications will not be aware of what memory they are using and therefore the different latencies. The OS/kernel will have the responsibility to allocate the correct memory type to an application.
Q: How do you see applications evolving with CXL behaving like a far NUMA node?
A: Similar to the previous question, applications will generally not be latency aware. In theory, it is possible to create a malloc-like function that specifies whether a higher-latency memory pool is acceptable, with the operating system servicing the request accordingly. You could apply existing NUMA approaches with CXL as well.
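The latency-aware malloc idea above can be sketched as follows. This is purely illustrative: `malloc_tiered` and the `mem_tier` names are hypothetical, not part of any standard or CXL-defined API. In this sketch the hint is ignored and the request falls through to `malloc`, since actual placement would be done by the OS (for example via Linux NUMA policies, with CXL-attached memory exposed as a separate NUMA node).

```c
#include <stdlib.h>

/* Hypothetical memory tiers: "near" direct-attached DDR versus a
 * "far" (e.g. CXL-attached) pool the caller is willing to accept.
 * These names are illustrative only. */
enum mem_tier { TIER_NEAR_ONLY, TIER_FAR_OK };

/* Sketch of a latency-aware allocator: the caller declares that a
 * higher-latency pool is acceptable. A real implementation would
 * translate the hint into a NUMA node mask (e.g. via set_mempolicy
 * or libnuma); here we simply fall back to ordinary malloc. */
void *malloc_tiered(size_t size, enum mem_tier tier)
{
    (void)tier; /* placement hint only in this sketch */
    return malloc(size);
}
```

With libnuma on Linux, the equivalent effect could be approximated by allocating on the NUMA node that the kernel assigns to the CXL memory range, e.g. `numa_alloc_onnode(size, cxl_node)`.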
Q: How are atomics supported over CXL?
A: Since CXL memory is cache-coherent, this should be the same as for CPU/direct-attached memory.
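In practice, this means ordinary language-level atomics need no changes. A minimal C11 sketch, assuming the counter happens to live in CXL-attached (but still coherent) memory; nothing in the code would differ from the direct-attached case, because the coherence protocol resolves ownership of the cache line either way:

```c
#include <stdatomic.h>
#include <stdint.h>

/* An ordinary atomic counter. Whether the backing memory is DDR or
 * CXL-attached is invisible at this level: cache coherence makes the
 * read-modify-write semantics identical. */
static _Atomic uint64_t counter = 0;

/* Atomically increment and return the previous value.
 * atomic_fetch_add is sequentially consistent by default. */
uint64_t bump(void)
{
    return atomic_fetch_add(&counter, 1);
}
```

The same holds for compare-and-swap and other read-modify-write operations: no CXL-specific API is involved.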
Q: Is PCI Express® 5.0 technology the “transport” for all things CXL? Or will there be CXL connectivity between devices that do not require PCI Express?
A: In CXL 1.1 the transport is PCIe 5.0. Other transports might be developed/specified in the future.
Q: Can we expect to see CXL 1.1 memory expanders using non-volatile memory, or do we have to wait for CXL 2.0?
A: CXL 1.1 can support memory expansion devices but might need special software/driver support for persistence, RAS, and other features.
Q: What are your thoughts on the first adoption of CXL – will it be directly attached memory in a single system first, or pools of memory used by many systems?
A: CXL 1.1 does not support pooled memory, so first adoption will be directly attached memory; memory pooling is introduced in CXL 2.0.
Q: Are there any CXL memory expansion devices in development and if so, when do you expect to see servers being built with the new topology, of course requiring CPUs with CXL IOs as well?
A: Vendors are working on memory expansion solutions. For specific product plans and roadmaps, you’ll need to talk directly with member companies.
Q: Will CXL eventually replace DDR due to higher bandwidth per pin?
A: The latency that CXL adds is a concern, but it is certainly possible that CXL could replace DDR in the future.
Q: Does the CXL specification plan to incorporate Gen-Z fabric extension?
A: The CXL Consortium and Gen-Z Consortium workgroups are defining bridging between the protocols while leveraging the strength of both technologies. Follow CXL Consortium and Gen-Z Consortium on Twitter for updates.
Q: Do you have any timelines regarding CPUs supporting CXL? Are there specifications on latency and HA, SA, RA support for the first CPUs that will arrive?
A: We expect to see first products come to market in 2021. For specific product plans and roadmaps, you’ll need to talk directly with member companies.
Q: Will the slides be available for download?
A: The presentation is available for download in CXL’s Resource Library.
Q: Where can I download the evaluation copy of the CXL 1.1 specification?
A: You can download the CXL 1.1 specification here.