SSD watchers have expressed concern over the last few years that SSDs cannot be manufactured using advanced NAND flash process geometries. This is because these parts have lower endurance and more bit errors than NAND made on less advanced processes – the tighter the process, the shorter the flash’s life and the more errors it will have.
Fortunately these concerns seem to be … Continue reading
Over provisioning is one of the most common ways SSD designers can help ensure that an SSD lives longer than the flash’s endurance rating alone would support. If an SSD contains more flash than is presented at its interface, the controller can spread wear across a larger number of blocks while at the same time improving performance by moving slow operations like block erases out of the way of the SSD’s key functions.
Many people like to compare wear leveling to rotating a car’s tires. In this vein, think of over provisioning as having a bunch of spare … Continue reading
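The arithmetic behind that spare-tire analogy can be sketched in a few lines. All of the numbers below are invented for illustration (they are not the specs of any real drive), and the sketch assumes perfect wear leveling and a write amplification of 1:

```python
# Back-of-the-envelope view of how over provisioning spreads wear.
# Capacities and cycle counts here are assumptions, not real SSD specs.

def overprovisioning_ratio(usable_gb, physical_gb):
    """Spare flash expressed as a fraction of the advertised capacity."""
    return (physical_gb - usable_gb) / usable_gb

def full_drive_writes(usable_gb, physical_gb, pe_cycles):
    """How many times the *advertised* capacity can be overwritten,
    assuming perfect wear leveling and a write amplification of 1."""
    return physical_gb * pe_cycles / usable_gb

# A hypothetical 100 GB drive built from 128 GB of flash
# rated at 3,000 program/erase cycles:
print(overprovisioning_ratio(100, 128))   # 0.28
print(full_drive_writes(100, 128, 3000))  # 3840.0, vs. 3000 with no spares
```

The point of the second function is simply that every spare block adds its own budget of erase/write cycles to the pool the controller can draw on.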
On October 23, along with the highly anticipated announcement of the iPad 4, Apple rolled out new Macintosh computers that, for the first time in an Apple product, pair an SSD with a conventional HDD to get the best combination of capacity, speed, and price. The company calls this its Fusion Drive, not to be confused with Fusion-io’s highly regarded products.
The SSD Guy did not attend the announcement, and there is little on the Apple website. I contacted Apple, and they don’t have very much detail to share at this time. This is important to note, since … Continue reading
Write amplification plays a critical role in maximizing an SSD’s usable life. The lower the write amplification, the longer the SSD will last. SSD architects pay special attention to this aspect of controller design.
Unlike the other factors described in this series, this is not a technique that extends flash life beyond the 10,000 erase/write cycles one would normally expect to end in a failure, but it is very important to SSD longevity.
Write amplification is sufficiently complex that I won’t try to define it in this post, but … Continue reading
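Without defining write amplification fully, the basic arithmetic is easy to show. As a rough sketch (the traffic figures below are made up; only the 10,000-cycle rating comes from the post):

```python
# Illustrative arithmetic only.  The byte counts are invented; the
# 10,000 P/E cycle rating is the figure used in this series of posts.

def write_amplification(nand_bytes, host_bytes):
    """Bytes actually written to flash divided by bytes the host wrote."""
    return nand_bytes / host_bytes

def effective_endurance(rated_pe_cycles, waf):
    """P/E cycles' worth of *host* data the flash can absorb."""
    return rated_pe_cycles / waf

# Suppose the controller writes 30 TB to NAND to service 10 TB of
# host writes:
waf = write_amplification(nand_bytes=30e12, host_bytes=10e12)
print(waf)                               # 3.0
print(effective_endurance(10_000, waf))  # roughly 3,333 host-write cycles
```

This is why architects chase a low write amplification factor: a drive with a WAF of 3 wears out three times faster, from the host’s point of view, than one with a WAF of 1.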
At this week’s Storage Networking World (SNW) conference there was no shortage of SSD presentations, but none of the keynoters who shared their data center experiences had deployed any SSDs in their systems.
This seemed particularly odd to The SSD Guy, since the MySQL conference I have been attending for some time has fewer SSD presentations for a simple reason: almost everyone who attends that conference already uses SSDs.
Why is there such an odd disparity?
The simple reason is that … Continue reading
There are more advanced means than simple error correction to help remove bit errors in NAND flash, and those will be the subject of this post. The general term for this approach is “DSP,” although it seems to have very little to do with the kind of DSP algorithms used to perform filtering or build modem chips.
While ECC corrects errors without knowing how they arose, DSP helps to correct the more predictable errors caused by mechanisms inherent to the design of the chip. A prime example of such an error is adjacent cell disturb.
Here’s a brief explanation of … Continue reading
OCZ has drafted a replacement CEO from its board of directors. Ralph Schmitt, who until yesterday was CEO of PLX Technologies, will lead the company starting today.
In a conference call early this morning Mr. Schmitt laid out his plans:
Our actions will be based on innovation, quality, and profitability. Our focus will be to further penetrate OEMs and the enterprise market.
He comes to OCZ with a very strong background, after having … Continue reading
Error correction (ECC) can have a very big impact on the longevity of an SSD, although few understand how such a standard item can make much difference to an SSD’s life. The SSD Guy will try to explain it in relatively simple terms here.
All NAND flash requires ECC to correct random bit errors (“soft” errors). This is because the inside of a NAND chip is very noisy and the signal levels of bits passed through a NAND string are very weak. One of the ways that NAND has been able to become the cheapest of all memories is by requiring error correction external to the chip.
This same error correction also helps to correct bit errors due to wear. Wear can cause bits to become stuck in one state or the other (a “hard” error), and it can increase the frequency of soft errors.
Although it is not widely … Continue reading
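To make the idea concrete, here is a toy example of error correction: a Hamming(7,4) code that stores 4 data bits in 7 and can repair any single flipped bit. Real SSD controllers use far stronger BCH or LDPC codes, so this is only a minimal sketch of the principle, not how any actual controller works:

```python
# Toy Hamming(7,4) code: 4 data bits protected by 3 parity bits.
# A single flipped bit -- whether a random soft error or a stuck bit
# from wear -- can be located and corrected.

def hamming74_encode(d1, d2, d3, d4):
    """Return the 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Fix up to one bit error and return the 4 recovered data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # syndrome = 1-based position of the error
    if pos:
        c[pos - 1] ^= 1          # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(*word)
code[4] ^= 1                     # simulate one bit error in storage
assert hamming74_correct(code) == word
```

Note that this same machinery corrects a bit regardless of *why* it flipped, which is the point made above: the ECC that handles soft errors also papers over the hard errors that wear produces, at least until there are too many of them per codeword.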
IBM announced on Monday October 1 that it had finalized its acquisition of Texas Memory Systems (TMS). This transaction was first announced in mid-August and was analyzed at that time in an Alert sent to Objective Analysis clients.
Here are a few salient points from the Alert:
- TMS is the world’s oldest SSD maker, and has recently made an aggressive move from DRAM to NAND flash, providing very high performance PCIe SSDs and arrays.
- TMS and IBM play into the same market: storage for large-scale computing. Their technologies complement each other, since IBM’s current solid state storage offerings were lightweight compared to those of TMS.
- The acquisition meshes with IBM’s mantra of “Smarter Planet, Smarter Storage, and Smarter Computing.” SSDs improve storage speed while reducing power and space requirements.
Objective Analysis sees this as a good fit that will harness the synergies of two very similarly managed companies to produce very positive results.