A lot of folks believe that when Intel’s Optane is gone there will be nothing left but the story of its rise and fall. That is far from the truth. Optane has created a legacy of developments that will find use in computing for quite some time.
In this three-part series The SSD Guy blog reviews Optane’s lasting legacy to reveal six changes that it has brought to computing architecture in its short lifetime.
Each of the three parts covers two new developments:
- New programming paradigm & instructions
- New approach to 2-speed memory & latency handling
- New approach to memory expansion & security
This third and final post will cover Optane’s new approach to memory expansion and the new security measures that persistent memory requires.
New Approach to Memory Expansion
Processor chips are confronted with a dilemma. The increasing speed of the memory channel has made it impractical to support more than two DIMMs per channel, since the capacitance of an additional DIMM would reduce the channel's maximum clock rate. This severely limits the largest memory that a processor can support.
You might respond that the number of memory channels per processor has been increasing over time. That's true: memory bandwidth is insufficient to keep the processors fed, so designers have had to add more channels to keep up. But each channel consumes more pins and more power; the extra pins push the die size up, causing a like increase in cost, while the added power limits the budget that can be allotted to the processor cores.
Optane provided one way to increase memory size. Intel's Optane DIMMs were intended to be very attractive to server users who needed larger memories than DRAM DIMMs could support. Since Optane sold for roughly half the price of DRAM, it made sense for systems to use larger memories and to use Optane for the bulk of that memory.
There's a problem, though. As we saw in the second post in this series, Optane needed a special version of the memory channel that would support its different read and write speeds. Intel developed DDR-T to add Optane capabilities to the DDR4 bus, but once the industry moved to DDR5, Intel would need to develop a DDR5 version of DDR-T, and after that it would need to do the same for every future DDR generation.
The challenge of supporting large memories has led to the development of a number of Far Memory approaches, namely CXL, OpenCAPI, Gen-Z, and CCIX, all of which support larger memory pools than could be tied directly to the processor chip. These presented Intel with a more attractive alternative, one that wouldn't require redesigning DDR-T for every new DDR generation.
All of these standards now appear to be coalescing into a single CXL standard. This new interface allows enormous memories to be attached to a single processor through a channel that doesn't add as much latency as an I/O channel (remember what I said about context switches in the second post). Not only that, but the newer version of CXL, version 3.0, also supports configurations that allow multiple servers to share a memory space, so that very large messages can be passed between processors considerably faster than was possible by passing them through shared storage.
The figure below depicts eight hosts connected to each other and to two memory pools (MC on the right) via four switches.
I need not even mention that the engineers most interested in such configurations are those with hyperscale computing systems like the major Internet data centers and mainframe computing centers used for modeling scientific information, weather patterns, and airfoils.
This advance in large memory pools and shared memory was an offshoot of the large memories supported by Optane. Optane got people thinking about ways to expand memory that are NOT limited by capacitive loading and pin count, as memory was prior to CXL. Although the switching interface slows down accesses to some degree, system architects believe that the enormous size of the memory CXL supports will more than offset any added latency.
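From software's point of view, a CXL memory expander typically appears under Linux as a memory-only NUMA node with no CPUs attached, so an application reaches it with ordinary loads and stores. Here is a minimal sketch using libnuma; the node number, buffer size, and everything else in it are assumptions for illustration only, not a description of any particular product.

```c
/* Sketch of allocating from a CXL-attached memory pool.  Under Linux a
 * CXL Type 3 memory expander is typically exposed as a CPU-less NUMA
 * node; node 1 is assumed here purely for illustration.
 * Build with: gcc cxl_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 1                     /* assumed ID of the far-memory node */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support not available on this system\n");
        return 1;
    }
    if (CXL_NODE > numa_max_node()) {
        fprintf(stderr, "no memory node %d on this system\n", CXL_NODE);
        return 1;
    }

    size_t size = 1UL << 30;           /* ask for 1 GiB of far memory */
    void *buf = numa_alloc_onnode(size, CXL_NODE);
    if (!buf) {
        fprintf(stderr, "allocation from node %d failed\n", CXL_NODE);
        return 1;
    }

    /* The buffer is ordinary, cacheable memory: loads and stores work
     * exactly as they do for local DRAM, just with added latency. */
    memset(buf, 0, size);

    numa_free(buf, size);
    return 0;
}
```

Because the buffer is accessed with plain loads and stores rather than block I/O requests, there is no context switch of the sort described in the second post; the CXL fabric's extra latency is the only penalty.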
New Thinking about Security Concerns with Persistent Memory
Persistence causes security issues. Decommissioned equipment might still hold sensitive data that could be recovered by others, perhaps to the detriment of the data’s actual owners. This is not an issue with volatile memory like DRAM since the contents of the DRAM are lost as soon as the power is removed.
In the past, persistence has been a security concern for SSDs and HDDs, where data was destroyed in any of a number of ways:
- Physical destruction, where the SSD was shredded or crushed.
- Secure erase software, which resided either in the system or within the drive:
  - If it was in the system, it would typically overwrite the disk a number of times (a sketch of this approach follows this list).
  - Internal erases are triggered by an external command that causes the drive to erase and overwrite each track repeatedly for as long as power remains available. This feature is supported on military SSDs.
- AES encryption, in which the contents of the drive can only be deciphered by someone who holds the key. A key had to be kept in both the host and the drive; if either was lost, the data became unintelligible. An external command could very rapidly erase the key in the drive, after which the drive's data could never again be accessed.
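To make the host-side overwrite concrete, here is a minimal sketch rather than a production tool: it assumes a Linux block device at a hypothetical path (/dev/sdX), three passes, and random fill drawn from /dev/urandom.

```c
/* Hypothetical sketch of a host-side multi-pass overwrite, the first
 * secure-erase approach described above.  The device path and pass
 * count are illustrative only. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#define PASSES 3              /* number of overwrite passes (illustrative) */
#define CHUNK  (1 << 20)      /* write in 1 MiB chunks */

int main(void)
{
    const char *dev = "/dev/sdX";            /* hypothetical target drive */
    int fd = open(dev, O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }

    off_t size = lseek(fd, 0, SEEK_END);     /* total device size in bytes */
    unsigned char *buf = malloc(CHUNK);
    int urandom = open("/dev/urandom", O_RDONLY);
    if (size < 0 || !buf || urandom < 0) { perror("setup"); return 1; }

    for (int pass = 0; pass < PASSES; pass++) {
        lseek(fd, 0, SEEK_SET);
        for (off_t done = 0; done < size; done += CHUNK) {
            size_t n = (size - done < CHUNK) ? (size_t)(size - done) : CHUNK;
            if (read(urandom, buf, n) != (ssize_t)n) { perror("read"); return 1; }
            if (write(fd, buf, n) != (ssize_t)n)     { perror("write"); return 1; }
        }
        fsync(fd);                           /* force the pass out to the media */
    }
    free(buf);
    close(urandom);
    close(fd);
    return 0;
}
```

One caveat worth remembering: an SSD's controller remaps blocks behind the host's back, so a host-side overwrite can miss spare and retired blocks. That is one reason drive-internal erase commands and crypto erase became the preferred approaches.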
The Optane DIMM approached security by becoming the first-ever encrypted DIMM.
Optane handles AES encryption in special ways, depending on the mode in which it is used:
- When Optane is used in “Memory Mode” it just looks like an extremely large DRAM, so users don’t expect persistence. Since that’s the case, Intel’s drivers simply lose the AES key for Optane every time the power is lost. Just like DRAM, Optane comes up with random contents.
- If Optane is being used in “App Direct Mode”, in which applications take advantage of its persistence, the data must become available again once power is restored. Optane does this by storing the key on the module itself, and requiring a passcode for the host to read the key. This way the module’s contents cannot be read unless the reader has the passcode.
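The principle behind discarding the key is easy to demonstrate. The sketch below uses OpenSSL's EVP AES-256-CTR interface purely as a stand-in for the DIMM's internal encryption engine, whose details Intel has not published: once the key is destroyed, the stored ciphertext is indistinguishable from random data, which is exactly how Memory Mode makes Optane come up looking like uninitialized DRAM.

```c
/* Illustrative crypto-erase: encrypt data, then destroy the key.
 * OpenSSL's EVP interface stands in for the DIMM's internal AES engine.
 * Build with: gcc crypto_erase.c -lcrypto */
#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <stdio.h>

int main(void)
{
    unsigned char key[32], iv[16];            /* AES-256 key and counter IV */
    if (RAND_bytes(key, sizeof key) != 1 || RAND_bytes(iv, sizeof iv) != 1)
        return 1;

    unsigned char plain[] = "sensitive persistent-memory contents";
    unsigned char cipher[sizeof plain];
    int outlen = 0, tmplen = 0;

    /* Encrypt everything "stored on the module". */
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_ctr(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, cipher, &outlen, plain, sizeof plain);
    EVP_EncryptFinal_ex(ctx, cipher + outlen, &tmplen);
    EVP_CIPHER_CTX_free(ctx);

    /* "Power loss" in Memory Mode: the key is simply discarded.  After
     * this the ciphertext can never be decrypted again, so the module
     * effectively comes up with random contents. */
    OPENSSL_cleanse(key, sizeof key);

    printf("ciphertext now reads as noise: ");
    for (size_t i = 0; i < sizeof cipher; i++)
        printf("%02x", (unsigned)cipher[i]);
    printf("\n");
    return 0;
}
```

In App Direct Mode the same math works in the other direction: as long as the key survives on the module, protected by the passcode, the data can be decrypted again after a power cycle.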
If that’s not enough, Optane DIMMs also support an internal overwrite mechanism like the one described above: upon command they erase and overwrite every address.
Whenever persistent memory comes back into the market, which is certain to occur eventually, the issue of security will already have been considered and solved, and this should accelerate its adoption.
Summary
So there we have all of the technologies that were developed as a side benefit of Intel’s Optane experiment, many of which are likely to become mainstream over the course of time:
- New programming paradigm
- New processor instructions
- New approach to 2-speed memory
- Improved latency handling
- New approach to memory expansion
- New thinking about security concerns with persistent memory
Optane left the computing world with a number of tools that may not have been developed without it.
Keep in mind that Objective Analysis is an SSD and semiconductor market research firm. We go the extra mile in understanding the technologies we cover, often in greater depth than our clients. This means that we know how and why new markets are likely to develop around these technologies. You can benefit from this knowledge too. Contact us to explore ways that we can work with your firm to help it create a winning strategy.
More Info
This series is excerpted from a presentation called Persistent Memories: Without Optane, Where Would We Be? which SNIA has shared in an online video.
The SSD Guy planned to co-present this at SNIA’s Storage Developers Conference (SDC, 12-15 Sept, 2022) but COVID concerns got in the way. Big thanks to Tom Coughlin for making the presentation without me.