What is ZeroPoint Technology?
Our research shows that up to 70% of memory content is redundant: the way data is stored is inefficient. Imagine if there were a fast, transparent solution that could transform this inefficiency into a more compact representation. This would significantly increase memory capacity and bandwidth.
The Memory Challenge
We are all acutely aware of the challenges computers, servers, and data centers face today. There simply isn't sufficient memory capacity and bandwidth to keep processors fully utilized. And it's not just that memory is expensive. There are two additional hard challenges: Firstly, xPUs are running out of memory interfaces, and there's no additional space to fit more DIMMs. Secondly, even if there were interfaces and space, the server would need to fit the additional DIMMs within an already highly constrained power budget. The first challenge is tough, the second is a non-starter.
In a previous blog, I talked about how the proprietary ZeroPoint Technology algorithms can deliver high compression ratios at nanosecond latency. These characteristics open up the possibility to introduce data compression in areas where it hasn't been possible before, like in the on-chip cache (SRAM) or in the near/far connected main memory (DRAM). This is groundbreaking, almost like magic, and it will enable us to fully utilize the potential of the memory hierarchy.
The Three Pillars of ZeroPoint Technology
So far, I've only talked about compression, and that is important, but it's only one part of ZeroPoint Technology. Here are the three important pillars:
1. Ultra-Fast Compression: It's crucial to have an ultra-fast compression algorithm that can compress small pieces of information extremely quickly. By small pieces, I mean a cache line: 64 bytes (512 bits) of ones and zeros, the typical granularity of data that a microprocessor transfers. We've developed a set of very efficient algorithms that can compress a cache line 2-4x, resulting in a string of bits that is half to a quarter of the original size. We achieve this across a wide set of data, losslessly.
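To make the idea concrete, here is a toy sketch in Python of lossless compression at cache-line granularity. This is not ZeroPoint's proprietary algorithm (which runs in hardware at nanosecond latency); it simply run-length encodes zero bytes, one common source of redundancy, to show how a 64-byte line can shrink and be restored bit-exactly.

```python
# Toy illustration only: run-length encode zero bytes in a 64-byte cache line.
# The real algorithms are proprietary and implemented in hardware.

def compress(line: bytes) -> bytes:
    assert len(line) == 64  # one cache line
    out = bytearray()
    i = 0
    while i < len(line):
        if line[i] == 0:
            run = 1
            while i + run < len(line) and line[i + run] == 0 and run < 255:
                run += 1
            out += bytes([0, run])   # zero marker followed by run length
            i += run
        else:
            out.append(line[i])      # non-zero bytes pass through as literals
            i += 1
    return bytes(out)

def decompress(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0:
            out += bytes(data[i + 1])  # expand a run of zero bytes
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)
```

A mostly-zero line compresses dramatically, while decompression always recovers the original bytes exactly; real workloads see the 2-4x ratios mentioned above across much more varied data patterns.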
2. Real-Time Compaction: Once we have these new, smaller representations of the original data, we fit as many as possible into a 64-byte array, the same size as the cache line mentioned earlier. This is what we call real-time memory compaction. It's like Tetris: fitting different-sized blocks on each row, rewarded when you manage to fit them without gaps.
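The Tetris analogy can be sketched as a simple first-fit packing of variable-sized compressed lines into fixed 64-byte sectors. This is a hypothetical software model, not the hardware's actual placement policy, but it shows how two half-sized lines can share one physical slot.

```python
# Hypothetical compaction sketch: first-fit packing of compressed cache
# lines (id, size in bytes) into fixed 64-byte sectors.
SECTOR = 64

def compact(compressed_lines):
    free = []        # remaining free bytes in each sector
    placement = {}   # line_id -> (sector_index, byte_offset)
    for line_id, size in compressed_lines:
        for s, room in enumerate(free):
            if size <= room:                      # fits in an existing sector
                placement[line_id] = (s, SECTOR - room)
                free[s] -= size
                break
        else:                                     # open a new sector
            free.append(SECTOR - size)
            placement[line_id] = (len(free) - 1, 0)
    return placement, len(free)
```

For example, four lines compressed to 32, 32, 16, and 48 bytes pack into just two 64-byte sectors, which is exactly where the capacity gain comes from.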
3. Transparent Memory Management: After compressing and compacting the data, we need to store the resulting new cache line in memory. Since we've rearranged the data, the operating system or application won't know where the data resides. We need to keep track of it. This is what we call transparent memory management. The solution tracks and translates memory requests on the fly to store and retrieve the requested data. The operating system and applications don't need to know what's happening; they simply enjoy the benefit of larger and higher bandwidth memory.
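A minimal way to picture this bookkeeping is a lookup table that maps each original cache-line address to its compacted location. The class and field names below are illustrative assumptions; in the real product this tracking and translation happens in hardware, invisibly to software.

```python
# Hypothetical sketch of transparent address translation: software issues a
# read for a cache-line address, and a (hardware-managed) table redirects it
# to where the compressed data actually lives.
class TranslationTable:
    def __init__(self):
        self._map = {}  # line address -> (sector, offset, compressed size)

    def record(self, line_addr, sector, offset, size):
        # Called when a compressed line is placed during compaction.
        self._map[line_addr] = (sector, offset, size)

    def translate(self, line_addr):
        # On the real device this lookup runs in hardware at ns latency.
        return self._map[line_addr]
```

The operating system keeps using the original address; only the translation layer knows the data was moved, which is what keeps the whole scheme transparent.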
The Magic of ZeroPoint Technology
ZeroPoint Technology combines ultra-fast compression, real-time data compaction, and transparent memory management. All these components execute in hardware with nanosecond latency. This is how we create magic, delivering 2-4 times more memory capacity with unmatched power efficiency.