A lot of folks believe that when Intel’s Optane is gone there will be nothing left but the story of its rise and fall. That is far from the truth. Optane has created a legacy of developments that will find use in computing for quite some time.
In this three-part series The SSD Guy blog reviews Optane’s lasting legacy to reveal six changes that it has brought to computing architecture in its short lifetime.
Intel’s Optane, the new memory/storage technology that was designed to fill the growing gap between DRAM main memory and NAND flash SSDs, required computer designers to tackle new problems that they had never addressed before. Because of this, Intel’s Optane experiment created six new benefits to computing architecture:
Each of the three parts covers two of these new developments:
- New programming paradigm
- New processor instructions
- New approach to 2-speed memory
- Improved latency handling
- New approach to memory expansion
- New thinking about security concerns with persistent memory
This first post will cover the new programming paradigm and Intel’s new processor instructions.
New Programming Paradigm
The beauty of persistent memory is that data can be made immune to power failures without having to take the time to do an I/O transfer to an SSD or (even worse) to an HDD. This is extremely important in transaction systems like banking, where data loss simply cannot be tolerated, even if there’s a power failure.
Although you might imagine that someone could simply add persistent memory to the memory bus and a handful of software tweaks would solve the problem, that is not the case. A universal solution is required so that packaged software can operate in any environment no matter how much, or how little, persistent memory it has. Furthermore, there must be a uniform way of addressing both persistent memory and volatile DRAM to allow the same software to operate across a number of OEMs’ machines.
Intel, SNIA (the Storage Networking Industry Association), and a number of other organizations worked together to create the SNIA Nonvolatile Memory Programming Model, which takes all of these issues into account and provides a means to develop standard software that will operate across all of these environments. It is blocked out in the diagram below.
Readers who want to understand this model should visit SNIA’s Nonvolatile Memory Programming Model pages.
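The heart of the model is that software maps persistent memory into its address space and updates it with plain loads and stores, then explicitly makes those updates durable. The minimal sketch below (not part of the SNIA specification or the original post) illustrates that flow using portable POSIX calls: on a real persistent-memory system, PMDK's libpmem (pmem_map_file / pmem_persist) would replace msync() with user-space cache flushes, but plain mmap()/msync() lets the same three steps run anywhere. The file path and record contents are illustrative only.

```c
/* Sketch of the programming model's memory-mapped persistence path:
 * map a file, update it with ordinary stores, then sync before
 * treating the data as durable. Assumes a POSIX system. */
#include <fcntl.h>
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Returns 0 once msg has been stored into the mapped file and synced. */
int persist_record(const char *path, const char *msg)
{
    size_t len = 4096;
    int rc = -1;
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, (off_t)len) == 0) {
        /* Step 1: map the file so it is accessed with loads and stores */
        char *pm = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (pm != MAP_FAILED) {
            /* Step 2: an ordinary store, not a read()/write() I/O call */
            strncpy(pm, msg, len - 1);
            /* Step 3: make the update durable before calling it persisted */
            rc = msync(pm, len, MS_SYNC);
            munmap(pm, len);
        }
    }
    close(fd);
    return rc;
}
```

The design point worth noticing is step 3: persistence is an explicit action the application requests, not a side effect of the store, which is exactly the behavior the new processor instructions discussed next were created to support.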
New Processor Instructions
Intel added two new instructions to its IA (Intel Architecture) instruction set to help processors benefit from persistent memory. These instructions allow software to ensure that dirty cache lines are flushed to persistent memory before a power failure.
Those readers who are unfamiliar with cache memory operation may not understand that last paragraph. The SSD Guy wrote a book on cache memory design some time ago, so let me explain it in simple terms.
Cache memories fool a processor into thinking that it’s communicating with main memory, while it’s really reading and writing a significantly faster cache memory right on the processor chip itself. When the processor performs a fast write to cache, the cache line holds the current data, while the main memory address that it pretends to be still holds stale data. If power is lost, the cached data is lost, and the stale data is all that remains, and then only if it’s in persistent memory. (If it’s in DRAM then that’s lost too!)
The new instructions, CLWB (Cache Line Write Back) and CLFLUSHOPT (Flush Cache Line Optimized), allow the programmer to force individual updated cache lines to be written to persistent main memory on command. This gives the programmer a solid understanding of when data is persistent and when it may not be, so code can be written that assures the data will not be deemed “persisted” until that is really the case.
Before persistent memory existed, instructions like these weren’t useful: persisting data to an SSD or HDD took so long that the entire cache could be flushed without noticeably impacting the performance of the persist operation, which was a very slow write to SSD or HDD storage.
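In practice, compilers expose these instructions as the intrinsics _mm_clwb() and _mm_clflushopt() in immintrin.h, but both require CPU support. The sketch below therefore uses the older, universally available _mm_clflush() (SSE2) as a stand-in to show the flush-then-fence pattern; note that CLFLUSH and CLFLUSHOPT evict the line from the cache, while CLWB writes it back and keeps it cached, which is why CLWB is preferred for data that will be read again soon. The helper name and cache-line size are assumptions for illustration.

```c
/* Flush a range of dirty cache lines back toward (persistent) memory.
 * A persistent-memory build would call _mm_clwb() per line instead of
 * _mm_clflush(); the structure of the loop is the same. */
#include <emmintrin.h>  /* _mm_clflush, _mm_sfence (SSE2) */
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64   /* typical x86 cache-line size */

void flush_range(const void *dst, size_t len)
{
    /* Round down to the start of the first cache line in the range */
    uintptr_t p   = (uintptr_t)dst & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end = (uintptr_t)dst + len;

    for (; p < end; p += CACHE_LINE)
        _mm_clflush((const void *)p);   /* write this line back */

    /* Fence so the flushes complete before any "data is now
     * persistent" flag is set by subsequent code. */
    _mm_sfence();
}
```

The fence at the end is the crucial detail: CLFLUSHOPT and CLWB are weakly ordered for speed, so without SFENCE the program could mark data as persisted while some lines are still in flight.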
Coming Up: Handling 2-Speed Memory & Longer Latency
In the next part of this series we will discuss how the DDR bus, which was tuned for speed but not for memories of differing speeds, has been made to work with Optane’s reads, which are slower than DRAM’s, and its writes, which are slower still.
Keep in mind that Objective Analysis is an SSD and semiconductor market research firm. We go the extra mile in understanding the technologies we cover, often in greater depth than our clients. This means that we know how and why new markets are likely to develop around these technologies. You can benefit from this knowledge too. Contact us to explore ways that we can work with your firm to help it create a winning strategy.