Want your server to access more than 100,000 DIMM slots in one go? This Korean startup claims its CXL 3.1-based technology can help you scale to more than 100PB of RAM — but it could cost nearly $5 billion

The possibilities are endless, but the costs might be a bridge too far for even the largest enterprises.

Ever imagined drawing on up to 100 petabytes of RAM? Well, this startup could be the key to unlocking groundbreaking memory capabilities.

Korean fabless startup Panmnesia unveiled what it described as the world’s first CXL-enabled AI cluster featuring CXL 3.1 switches during the recent 2024 OCP Global Summit.

The solution, according to Panmnesia, has the potential to markedly improve the cost-effectiveness of AI data centers by harnessing Compute Express Link (CXL) technology.

Scalable – but costly

In an announcement, the startup revealed the CXL-enabled AI cluster will be built around its main products, the CXL 3.1 switch and CXL 3.1 IP, both of which support connections between the CXL memory nodes and GPU nodes responsible for storing large data sets and accelerating machine learning.

Essentially, this will enable enterprises to expand memory capacity by adding extra memory and CXL devices without having to purchase costly server components.

The cluster can also be scaled to data center levels, the company said, thereby reducing overall costs. The solution also supports connectivity between different types of CXL devices and is able to connect hundreds of devices within a single system.

The cost of such an endeavor could be untenable

While drawing upon 100PB of RAM may seem like overkill, in the age of increasingly cumbersome AI workloads, it’s not exactly out of the question.

In 2023, Samsung revealed it planned to use its 32GB DDR5 DRAM memory die to create a whopping 1TB DRAM module. The motivation behind this move was to help contend with increasingly large AI workloads.

While Samsung has yet to provide a development update, we do know the largest RAM modules Samsung has previously produced were 512GB in size.

First unveiled in 2021, these were aimed at next-generation servers powered by top-of-the-range CPUs (at least by 2021 standards), including AMD EPYC ‘Genoa’ CPUs and Intel Xeon Scalable ‘Sapphire Rapids’ processors.

This is where cost could be a major inhibiting factor with the Panmnesia cluster, however. Pricing on comparable products, such as the Dell 370-AHHL memory modules at 512GB, currently stands at just under $2,400.

That would require significant investment from an enterprise by any standards. If one were to harness Samsung’s top-end 1TB DRAM module, the costs would simply skyrocket, given its expected price last year stood at around $15,000.
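To put the memory modules alone in perspective, here is a rough back-of-envelope sketch in Python using the two prices quoted above. The binary unit convention (1PB = 1,024TB) is an assumption, and these figures cover list-price DRAM only; a real deployment would add CXL switches, fabric, GPU nodes, and servers on top, so the total cost of such a cluster would be far higher.

```python
# Back-of-envelope cost of 100PB of RAM from the module prices cited above.
# Assumptions (not from Panmnesia): binary units (1 PB = 1,024 TB),
# list prices only, no interconnect, switch, or server overhead.

TARGET_GB = 100 * 1024 * 1024  # 100 PB expressed in GB

def cluster_cost(module_gb: int, module_price_usd: int) -> tuple[int, int]:
    """Return (module count, total USD) needed to reach TARGET_GB of RAM."""
    modules = TARGET_GB // module_gb
    return modules, modules * module_price_usd

# 512GB modules at ~$2,400 (the Dell 370-AHHL pricing cited above)
mods_512, cost_512 = cluster_cost(512, 2_400)

# Samsung's planned 1TB modules at an expected ~$15,000 each
mods_1tb, cost_1tb = cluster_cost(1024, 15_000)

print(f"512GB modules: {mods_512:,} units, ${cost_512 / 1e9:.2f}B")
print(f"1TB modules:   {mods_1tb:,} units, ${cost_1tb / 1e9:.2f}B")
```

Even at these DRAM-only prices, the bill runs to hundreds of millions or billions of dollars before a single switch or server is counted.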

More from TechRadar Pro

Want to have access to 96TB (yes, terabytes) of RAM? This CXL expansion box shows what the future of memory looks like
With AMD’s fastest mobile CPU, 64GB RAM and a pair of OLED screens, GPD Duo may be the best mobile workstation ever
We’ve rounded up the best mini PC choices around
