One of the thorniest issues in SSD design is how to manage the erasure of blocks that are no longer in use. That is saying a lot, because NAND flash presents many difficult challenges: wear leveling, bad block management, error correction, and write amplification.
The difficulty stems from the fact that nearly all of today’s software was written for HDDs, which don’t behave like the flash in an SSD. An HDD can overwrite existing data with new data in place. In a flash SSD, a block must be erased before it can be rewritten, and an erase can take as long as half a second – a huge amount of time in the world of computing. Since the software doesn’t accommodate flash’s “erase-before-write” requirement, the controller inside the SSD must take care of this housekeeping: unused, unerased blocks are moved out of the way and erased in the background. This is called the “garbage collection” process.
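The erase-before-write constraint can be sketched in a few lines. This is a toy model, not any vendor’s actual flash translation layer; the class and method names are illustrative. The point it shows is that an “overwrite” on flash is really two steps: invalidate the old copy and program the new data into a still-erased page.

```python
# Toy model of flash "erase-before-write". A page can be programmed
# only once between erases, so an overwrite must go to a fresh page
# while the old copy is merely marked invalid.

class ToyFlashBlock:
    def __init__(self, pages=4):
        self.pages = [None] * pages      # None means erased (writable)
        self.valid = [False] * pages     # is this page's data current?

    def program(self, page, data):
        # Programming an already-written page is illegal on NAND:
        # the whole block must be erased first.
        if self.pages[page] is not None:
            raise RuntimeError("must erase block before rewriting page")
        self.pages[page] = data
        self.valid[page] = True

    def erase(self):
        # Erase works on the entire block at once, and it is slow.
        self.pages = [None] * len(self.pages)
        self.valid = [False] * len(self.valid)

block = ToyFlashBlock()
block.program(0, "old data")
block.valid[0] = False           # "overwrite" step 1: invalidate old copy
block.program(1, "new data")     # step 2: write the new data elsewhere
```

Garbage collection is then the background job that notices blocks full of invalidated pages and erases them so their pages become writable again.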
Most SSD controller makers carefully guard their garbage collection algorithms under a veil of secrecy; these techniques are an important part of the differentiation between suppliers. One thing that is common to nearly all SSDs is the use of over-provisioning to help with this process. More flash resides within the SSD than is available to the user – a 64GB SSD may actually contain 80GB of internal NAND, but only 64GB is visible to the user. The other 16GB provides an area that can be used for background processes.
The SSD controller moves unerased blocks that are no longer in use into this pool of reserve flash. A background task erases these blocks without getting in the way of normal disk operation. In most cases there is enough extra flash to ensure that the slow erases never leave the SSD without erased blocks to write into.
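A minimal sketch of that spare-pool mechanism, assuming the 64GB-visible/80GB-internal example above and treating each gigabyte as one block. Everything here is hypothetical scaffolding to show the mechanism: writes consume erased blocks, retired blocks queue up dirty, and a background erase refills the pool. When the pool empties, the write itself must wait for an erase – the stall case discussed below.

```python
# Sketch of over-provisioning: the drive contains more blocks than it
# exposes, and a background task erases retired ("dirty") blocks to
# keep a pool of ready-to-write erased blocks topped up.

from collections import deque

class ToySSD:
    def __init__(self, visible_blocks=64, spare_blocks=16):
        # All 80 blocks start erased; only 64 are visible to the host.
        self.erased = deque(range(visible_blocks + spare_blocks))
        self.dirty = deque()             # used, awaiting erase

    def write_block(self):
        if not self.erased:
            # Pool exhausted: the write stalls until an erase finishes.
            self.background_erase()
        blk = self.erased.popleft()
        self.dirty.append(blk)           # retire it after use
        return blk

    def background_erase(self, batch=1):
        # Normally runs off the critical path, refilling the pool.
        for _ in range(min(batch, len(self.dirty))):
            self.erased.append(self.dirty.popleft())

ssd = ToySSD()
for _ in range(100):                     # more writes than visible blocks
    ssd.write_block()
```

As long as `background_erase` keeps pace with incoming writes, the host never sees the erase latency; that is the entire purpose of the reserve.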
A difficulty arises when the pool of spares backs up. In certain very extreme cases the supply of erased blocks runs out, and the entire SSD is forced to wait until a block’s erase cycle completes.
Another post explains the ATA “Trim” command that helps SSDs identify which blocks can be erased.
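The idea behind Trim can be sketched as well. This is a toy illustration, not the real ATA command set: without Trim, the controller only learns a block is dead when the host overwrites it; with Trim, the OS reports deletions right away, so garbage collection can reclaim the block early instead of hoarding stale data.

```python
# Hypothetical sketch of what Trim tells the controller. The "trim"
# method here is a stand-in for the ATA command, not its real interface.

class TrimAwareSSD:
    def __init__(self):
        self.in_use = set()       # logical blocks the host still needs
        self.erasable = set()     # blocks GC may reclaim in the background

    def write(self, lba):
        self.in_use.add(lba)

    def trim(self, lba):
        # The OS deleted the file holding this block: the controller
        # may now erase it in the background rather than waiting for
        # an eventual overwrite.
        self.in_use.discard(lba)
        self.erasable.add(lba)

ssd = TrimAwareSSD()
ssd.write(42)
ssd.trim(42)      # block 42 is now fair game for garbage collection
```

The benefit is that Trim enlarges the effective spare pool: every trimmed block behaves like over-provisioned flash until it is written again.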
We expect to hear a lot more about this issue before it is resolved. Nonetheless, most current garbage collection algorithms perform well enough for today’s applications, and we do not anticipate its getting in the way of the 148% unit growth that Objective Analysis projects in our reports covering the enterprise SSD market.
5 thoughts on “SSD Garbage Collection”
Regarding Trim and GC, have you seen the O&O software that claims it can Trim any SSD in a better way? I’m not really sure what this software does, and their white paper on the solution didn’t tell me much.
But it seems like this software fills up the SSD with files, which it then deletes again. Well – that can help in one way, but doesn’t it also increase wear? Or can the Trim/GC process itself increase wear a lot?
Good of you to point me to O&O. I didn’t know of them before. Others may want to visit the O&O site at http://www.OO-Software.com.
The white paper is not all that confusing to me, but I am pretty deeply involved in this technology. It looks like you may have misunderstood – the software does not fill the SSD with files to delete them again. I will explain in a moment.
One key point I would first like to highlight in this white paper is the statement:
“Defragmentation would not cause any measurable increase in acceleration. On the contrary, due to the many short write accesses, it would only produce unnecessary delete-write cycles. Defragmentation of SSDs is not only unnecessary, if done frequently it may even significantly reduce their life span!”
This is accurate. Windows versions earlier than Windows 7 defragment SSDs just as they would HDDs, and this wears out the SSD. O&O says that its O&O Defrag software prevents this.
The Trim command is more complicated; I will explain it in another post. With Trim (which is in Windows 7 and Windows 8) the operating system gives the SSD information that helps it run faster. O&O says that its O&O Defrag software adds Trim support to Windows XP and Vista.
If this software works the way O&O says it does then it looks like it should be a good product!