CXL™ Consortium member company Rambus participated in a recent Q&A session to discuss CXL’s impact on the evolution of the data center, Rambus’ expertise in CXL interface subsystems, and ideal use cases for CXL technology. Find the full Q&A session with Rambus below.
Can you share a brief introduction of Rambus?
Rambus makes industry-leading chips and IP that advance data center connectivity and solve the bottleneck between memory and processing. The ongoing shift to the cloud, along with the widespread advancement of AI across data center, 5G, automotive and IoT, has led to an exponential growth in data usage and tremendous demands on data infrastructure. Creating fast and safe connections, both in and across systems, remains one of the most mission-critical design challenges limiting performance in advanced hardware.
Rambus is ideally positioned to address this challenge as an industry pioneer with over 30 years of advanced semiconductor interconnect experience moving and protecting data. We are a leader in high-performance memory subsystems, providing data center chips, IP and innovations that maximize the performance and security in data-intensive systems.
Whether in the cloud, at the edge or in your hand, real-time and immersive applications depend on data transfer speed and trust. Rambus products and innovations deliver the increased bandwidth, capacity and security required to usher in a new era of data center architectures and drive ever-greater end-user experiences. Rambus is headquartered in San Jose, CA and has offices around the world.
Why did Rambus decide to join the CXL™ Consortium?
The recent development and introduction of the Compute Express Link™ (CXL™) standard, an advanced interconnect for processors, memory and accelerators, is a critical enabler for memory expansion and resource pooling. CXL builds on the widely adopted PCI Express® (PCIe®) infrastructure to offer features that can increase the performance of many workloads compared to existing architectures, while maintaining compatibility with current hardware.
The improvement in low-latency connectivity and memory coherency directly leads to better computing performance, greater efficiency and lower cost of operation across the data center. CXL memory expansion provides additional capacity and bandwidth above and beyond the direct-attached main memory in servers today. Many emerging compute-intensive workloads depend on large memory capacities, with AI training being a prime example and OLTP (Online Transaction Processing) another widespread one. Considering the focused investment businesses and cloud service providers are making to tackle these kinds of workloads, the advantages of CXL are clear.
Rambus has launched the CXL Memory Interconnect Initiative, focused on developing chips and IP that improve performance, efficiency and TCO for a new era of data center architecture.
What expertise does Rambus bring to the consortium?
Rambus brings decades of interface and security IP experience, offering PCIe interface subsystems (PHY, controller and IDE), CXL interconnect, memory subsystems (PHY, controller and RAS/ECC), as well as Root of Trust and inline encryption engines. As a leading chip supplier, we’re also able to integrate these state-of-the-art IP solutions to offer CXL memory interconnect chips.
What is the biggest advantage of CXL Consortium membership? How does Rambus participate?
The biggest advantage of membership is being on the ground floor to help define the open standards that will be the backbone of future data center architectures.
What use cases will be ideal targets for CXL technology? Which market segments will benefit from CXL?
The biggest use case is the data center – specifically in the cloud – as it relates to memory expansion and memory pooling. Over the last 10 years, as the cloud has made it possible to rent computing power on demand, a broad range of workloads has emerged, and that diversity grows every year. Those workloads vary widely in the memory capacity and bandwidth they need. CXL technology makes it possible to start with a base server configuration and scale it on demand, adding memory bandwidth and capacity as a workload requires.
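As a concrete illustration of how software can tap that expanded capacity: on current Linux systems, a CXL memory expander typically appears as a CPU-less NUMA node, and an application can steer allocations to it with standard NUMA APIs. The sketch below is not Rambus-specific; it assumes node 1 is the CXL-backed node, which is a placeholder (the real node number depends on the platform and can be checked with `numactl --hardware`).

```c
/*
 * Minimal sketch: allocating a buffer from CXL-attached memory on Linux,
 * assuming the expander is exposed as a CPU-less NUMA node.
 * Node 1 is a placeholder; verify the topology with `numactl --hardware`.
 * Build with: gcc cxl_alloc.c -o cxl_alloc -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int    cxl_node = 1;          /* assumed CXL-backed NUMA node */
    size_t size     = 1UL << 30;  /* 1 GiB */

    /* Bind the allocation to the CXL memory node. */
    void *buf = numa_alloc_onnode(size, cxl_node);
    if (!buf) {
        fprintf(stderr, "allocation on node %d failed\n", cxl_node);
        return 1;
    }

    memset(buf, 0, size);         /* touch the pages so they are placed */
    printf("1 GiB allocated on NUMA node %d\n", cxl_node);

    numa_free(buf, size);
    return 0;
}
```

The same effect can be achieved without code changes by launching an existing application under a policy such as `numactl --membind=1`, which is one reason the expansion use case fits so naturally into today's server software stacks.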
As we look further out in the evolution of the data center, CXL interconnect technology can support disaggregation of computing resources, with pools of processing, memory and storage that can be composed on demand to meet the needs of any workload. When resources are no longer needed, they can be released back to the pool. That leads to much higher utilization of computing resources, analogous to how a car sees much higher utilization when used for ridesharing.
What does Rambus see as CXL’s impact within your industry?
At the end of the day, what we face as an industry is an insatiable demand for greater performance, and we need to deliver that increase at an economically viable cost. CXL does both. It delivers new levels of performance through greater memory bandwidth and capacity. At the same time, it makes possible new use models, like memory pooling, that enable higher utilization of computing resources. As such, our belief is that CXL will become a mission-critical part of future data center architectures.
Enabling the memory expansion and pooling use cases, as well as those needed for server disaggregation and composability, will require the combination of several critical chip technologies. The Rambus CXL Memory Interconnect Initiative brings together expertise in memory and SerDes subsystems, semiconductor and network security, high-volume memory interface chips and compute system architectures, among many other areas, to develop breakthrough solutions for the future data center.
Keep up with the CXL Blog to see upcoming member spotlight posts from other CXL Consortium member companies. Interested in participating? Send a message to press@computeexpresslink.org for details.