Micron presented something quite interesting during the company’s Investor Day Conference last week, but it didn’t seem to get any press coverage. The company naturally repeated its plan to become a more important supplier of data center SSDs, but what The SSD Guy found most interesting were a few comments it made about its reasons for choosing to make vertically-integrated SSDs. Micron now makes not only the NAND and the DRAM internal to its SSDs, but also the controller.
Why would a company Continue reading “Smarter NAND for Better SSDs”
It seems not so long ago that there were frequent press releases, and showings at trade shows, of “Hero” SSDs. These demonstration models (which weren’t always released as products) always had some unique and impressive attribute. They may have had a higher capacity than any SSD known to humankind, or perhaps they had phenomenal endurance. Some broke the IOPS barrier.
The SSD Guy doesn’t remember anyone Continue reading “Whatever Happened to “Hero” SSDs?”
Micron recently briefed The SSD Guy on its new 7450 SSD series, a range of high-capacity data center SSDs offered in an impressive number of capacities and form factors spanning M.2, U.3 and E1.S. The 7450 is a mainstream drive targeted at a wide variety of data center applications, including common, mixed, and random workloads.
The 7450 series is an evolution of Micron’s 7400 series, which was first introduced at 96 layers and was based on Continue reading “Using 176-Layer NAND for High-Capacity Data Center SSDs”
For a long time The SSD Guy has meant to write something about the budding use of AI in SSDs. It’s an interesting approach whose time has come.
If you’re not conversant with AI, and maybe find the whole subject to be daunting, don’t worry. AI comes in many forms, and some are very simple. When major Internet firms like Google and Facebook use AI to Continue reading “Using AI to Manage Internal SSD Parameters”
There have been numerous changes to SSDs since they moved into the mainstream 15 years ago, with controllers providing increasing, then decreasing endurance levels, and offering greater, then lesser levels of autonomy. What has been missing is any ability for the system to determine the level of performance that the SSD provides.
Recently Kioxia, the company formerly known as Toshiba Memory, announced Continue reading “What’s Software-Enabled Flash?”
At its October Insight Conference Micron Technology finally revealed its 3D XPoint SSD, dubbed the X100.
While the company didn’t disclose too much about the device, it did brag about its speed, claiming that the X100 is the world’s fastest SSD, running three times faster than the fastest NAND flash SSDs and almost three times the speed of other XPoint SSDs. The product is said to Continue reading “Micron’s New XPoint SSD Finally Arrives”
It recently dawned on me that one of the charts that I most frequently use in my presentations has never been explained in The SSD Guy blog. This is a serious oversight that I will correct with this post.
The Memory/Storage Hierarchy (also called the Storage/Memory Hierarchy, depending on your perspective) is a very simple way to Continue reading “The Memory/Storage Hierarchy”
I was recently reminded of a presentation made by GoDaddy way back at the 2013 Flash Memory Summit, in which I first heard the statement: “Failure is not an option — it is a requirement!” That certainly got my attention! It just sounded wrong.
In fact, this expression was used to describe a very pragmatic approach the company’s storage team had devised to determine the exact maximum load that could be supported by any piece of its storage system.
This is key since, at the time, GoDaddy claimed to be the world’s largest web hosting service, with 11 million users, 54 million registered domains, and over 5 million hosting accounts, all backed by a 99.9% uptime guarantee (although the internal goal was 99.999% – five nines!)
The presenters outlined four stages of how validation processes had Continue reading “Failure is Not an Option — It’s a Requirement!”
Although the Trim command has been defined for nearly a decade, for some reason I have never written a post to explain it. It’s time for that to change.
Trim is something that was never required for HDDs, so it was a new command that was defined once SSDs became prevalent. The command is required because of one of those awkward encumbrances that NAND users must accommodate: Erase before write.
NAND flash bits cannot be altered the same way as bits on an HDD. On an HDD a bit that’s currently set to a “1” can be re-written to a “0” and vice versa, and writing a bit either way takes the same amount of time. In NAND flash a 1 can be written to a 0, but the opposite is not the case. Instead, the entire block (made up of many 4-16K byte pages) must be erased at once, after which all of its bits are set to 1. Once that has been done, 0s can be written into that block to store data. An erase is an excruciatingly slow operation, taking up to a half second to perform. Writes are faster, but they’re still slow.
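The erase-before-write rule above can be sketched in a few lines of code. This is a toy model, not how any real SSD firmware is written: a hypothetical `NandBlock` class that only lets program operations clear bits (1 → 0), so that turning a 0 back into a 1 forces a whole-block erase, just as the text describes.

```python
# Toy model of NAND's erase-before-write constraint (illustrative only).
# A block starts fully erased (all bits 1). Programming can only clear
# bits (1 -> 0); restoring any bit to 1 requires erasing the whole block.

class NandBlock:
    def __init__(self, size_bytes=4096):
        self.size = size_bytes
        self.data = bytearray(b"\xff" * size_bytes)  # erased state: all 1s

    def program(self, offset, new_bytes):
        """Write data; refuse any write that would flip a bit 0 -> 1."""
        for i, b in enumerate(new_bytes):
            cur = self.data[offset + i]
            if b & ~cur & 0xFF:
                raise ValueError("cannot set a 0 bit back to 1; erase first")
            self.data[offset + i] = cur & b  # clearing bits is allowed

    def erase(self):
        """Erase the whole block, setting every bit to 1 (slow on real NAND)."""
        self.data = bytearray(b"\xff" * self.size)

blk = NandBlock()
blk.program(0, b"\x0f")       # fine: only clears bits in erased cells
try:
    blk.program(0, b"\xf0")   # would need 0 -> 1 flips: rejected
except ValueError as e:
    print(e)
blk.erase()                   # erase the entire block...
blk.program(0, b"\xf0")       # ...and now the overwrite succeeds
```

Notice that rewriting a single byte forced an erase of the entire block; this is exactly the mismatch between small host writes and large erase units that Trim helps the SSD manage.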
Let’s say that a program needs to Continue reading “What is an SSD Trim Command?”
The Storage Developer Conference in September gave a rare glimpse into two very different directions that SSD architectures are pursuing. While some of the conference’s presentations touted SSDs with increasing processing power (Eideticom, NGD, Samsung, and ScaleFlux) other presentations advocated moving processing power out of the SSD and into the host server (Alibaba, CNEX, and Western Digital).
Why would either of these make sense?
A standard SSD has a very high internal bandwidth that encounters a bottleneck as data is forced through a narrower interface. It’s easy to see that an SSD with 20+ NAND chips, each with an 8-bit interface, could access all 160 bits simultaneously. Since there’s already a processor inside the SSD, why not open it to external programming so that it can perform certain tasks within the SSD itself and harness all of that bandwidth?
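The bandwidth mismatch described above is easy to see with a back-of-the-envelope calculation. The die count and 8-bit width come from the text; the per-die transfer rate and host-interface throughput below are illustrative assumptions, not specs for any particular drive.

```python
# Back-of-the-envelope comparison of an SSD's aggregate internal NAND
# bandwidth vs. its host interface. Per-die and PCIe figures are
# assumed round numbers for illustration.

nand_dies = 20          # NAND chips the controller can access in parallel
bus_width_bits = 8      # per-die interface width (from the post)
per_die_mb_s = 400      # assumed per-die transfer rate, MB/s
host_pcie_mb_s = 3500   # assumed PCIe Gen3 x4 practical throughput, MB/s

internal_mb_s = nand_dies * per_die_mb_s

print(f"parallel data bits:  {nand_dies * bus_width_bits}")
print(f"internal bandwidth:  {internal_mb_s} MB/s")
print(f"host interface:      {host_pcie_mb_s} MB/s")
print(f"internal/host ratio: {internal_mb_s / host_pcie_mb_s:.1f}x")
```

Under these assumptions the controller can move data internally at a multiple of what the host interface can absorb, which is the whole argument for doing some of the computation inside the SSD.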
Example tasks would include Continue reading “SSDs Need Controllers with More, NO! Less Power”