A couple of specifications for SSD endurance are in common use today: Terabytes Written (TBW) and Drive Writes Per Day (DWPD). Both are different ways to express the same thing. It seems that one vendor will specify endurance using TBW, while another will specify DWPD. How do you compare the two?
First, some definitions. “Terabytes Written” is the total amount of data that can be written to an SSD before it is likely to fail. “Drive Writes Per Day” tells how many times you can overwrite the entire capacity of the SSD every single day of its warranty period without failure. Since both of these are guaranteed specifications, your drive is most likely to last a lot longer than the number given by the SSD’s maker.
To convert between the two you must know the disk’s capacity and the warranty period. If the drive maker gives you DWPD but you want to know TBW, you would approach it this way:
TBW = DWPD * Warranty * 365 * Capacity/1,024
The constants are simply to convert years to days (365) and gigabytes to terabytes (1,024). Some might argue that this number should be 1,000, and that may be correct, but the difference between the two is only 2.4%, and The SSD Guy highly doubts that you are planning resources so tightly that this will matter.
If you want to go the other way, and convert TBW to DWPD, you would use this formula:
DWPD = TBW * 1024/(Capacity * Warranty * 365)
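For anyone who would rather let a script do the arithmetic, the two formulas translate directly into Python (capacity in gigabytes, warranty in years; the function names are just for illustration):

```python
def tbw_from_dwpd(dwpd, capacity_gb, warranty_years):
    # Terabytes Written = DWPD * warranty * 365 * capacity / 1,024
    return dwpd * warranty_years * 365 * capacity_gb / 1024

def dwpd_from_tbw(tbw, capacity_gb, warranty_years):
    # Drive Writes Per Day = TBW * 1,024 / (capacity * warranty * 365)
    return tbw * 1024 / (capacity_gb * warranty_years * 365)

# Example: a 480GB drive rated at 1 DWPD over a 5-year warranty
print(round(tbw_from_dwpd(1, 480, 5), 1))   # 855.5 TBW
```

Swap 1,024 for 1,000 if you prefer decimal terabytes; as noted above, the difference is only about 2.4%.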
Why are there two different specifications? The TBW specification doesn’t really specify how long the drive will last in years. An SSD with a TBW specification will fail either when it has exceeded its TBW goal or after its warranty period ends, whichever comes first. The DWPD specification intertwines the number of writes with the warranty period in a way that should cause both to occur at the same time. All in all, it’s just a matter of preference. There is no one standard way that endurance is specified.
Before going to all this trouble, though, I would suggest that you review the SMART attributes on an SSD that has been used in this application, or a similar one, for a number of months. You are likely to find that the wear is so much smaller than the drive’s specification that you will never come close to exceeding the TBW or DWPD limits. If that’s the case you need not worry much about the SSD you select for this application. On the other hand, if you are close to either limit, then you would do well to choose an SSD that can handle your write requirements with room to spare.
This is not correct. Intel specifies 45TBW on its ARK page for the 80GB DC S3500. In their brochure “Why Choose a Data Center Class SSD” they state that this SSD is rated for 24.6 GB host writes per day (DWPD).
45 * 1000 / 365 / 5 = 24.6575
45 * 1024 / 365 / 5 = 25.2493 (so it’s not 1,024; “…Some might argue that this number should be 1,000…”)
Nowhere in this calculation does the capacity come into play, as it is already accounted for in the specified TBW/DWPD.
Daniel, you make a very interesting point – that Intel has come up with a third way of specifying the same thing.
While most SSD makers will specify TBW (as Intel does in its DC S3500 datasheet*) or DWPD (as Intel does in its DC S3700 datasheet**), the calculation in the Intel brochure*** you mention calculates GB of writes per day, rather than “Drive” writes per day (DWPD).
Most SSD makers that don’t specify TBW will specify DWPD, which means the entire drive, in which case the capacity is a key part of the specification.
You are also correct to say that Intel uses 1,000 in its calculation, though some companies choose 1,024. HDD makers gravitate toward 1,000, and many SSD makers use 1,024, but this appears not to be the case with Intel, at least in the brochure you cited.
Thanks again,
Jim
* Intel DC S3500 Datasheet: http://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/data-center-class-solid-state-drive-brief.pdf
**Intel DC S3700 Datasheet: https://www-ssl.intel.com/content/dam/www/public/us/en/documents/product-specifications/ssd-dc-s3700-spec.pdf
*** Intel brochure: Why Choose a Data Center Class SSD?: http://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/data-center-class-solid-state-drive-brief.pdf
Wow, this is an old post, but it comes up first on Google. Congratulations!
I started googling as I was checking out warranties on drives, and the Samsung 750 EVO has a 3-year warranty or 70TBW, whichever comes first.
70TBW, not per day. So I decided to calculate its maximum life, that is, if I kept writing at 560MB/s, the fastest write speed of the drive.
How soon would I write 70TB?
My calculations showed 1.5 days!!!!
Is my calculation wrong?
I took 70TBW and converted it to 75,161,928 megabytes. I divided that by 560, which gave me how many seconds the drive would last; divided by 60 to get minutes, by 60 again to get hours, and by 24 to get the number of days, and I landed at about 1.5 days!
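The calculation above, sketched in Python (1,024-based units; the exact megabyte figure differs slightly from the one above depending on the TB convention, but the answer is still about a day and a half):

```python
tbw_mb = 70 * 1024 * 1024      # 70TB expressed in megabytes (1,024-based)
speed_mb_s = 560               # maximum sustained write speed, MB/s
seconds = tbw_mb / speed_mb_s  # 131,072 seconds of non-stop writing
days = seconds / 60 / 60 / 24
print(round(days, 2))          # 1.52 days
```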
Is it true? I know it takes only about 7 minutes to fill a 240GB drive, and yes, using it as an operating system disk might not write that much data per day.
But I just wanted to know the actual life. And it seems that SSDs can never be used for incremental backup of huge volumes of data, nor as a download disk for an application like uTorrent.
But 1.5 days is far too little.
So if the drive is not writing 560MB/s for every second of the day, how much do you think it would write?
And how little do we have to use it for the drive even to last its 3-year warranty?
I think it would be an interesting article for you to write. Please do so!
Please let me know your thoughts on this in the comments first.
Thanks
Rawraj, Thanks for the congratulations and for a very insightful comment. Yes, the SSD would only last a day and a half at 560MB/s of constant writes, but it’s not as bad as this makes it seem.
The 750 EVO SSD is not intended for high-write workloads like the one you mention. It’s for general-purpose PC use. I have been surprised to hear from PC users who review their SMART attributes that, over the course of a full year, their SSDs average fewer than ten writes per block. Since these blocks have endurance ratings of 300-3,000 erase/write cycles, just about ANY SSD should provide satisfactory endurance in a PC.
It would take four complete overwrites of a 250GB SSD to make a terabyte, so the 70TB of endurance is 4*70 = 280 full-SSD writes. This is very close to the endurance rating that I gave above of 300 erase/writes, so the numbers agree.
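As a quick check of the arithmetic above (treating a 250GB drive as a quarter of a terabyte):

```python
capacity_tb = 0.25             # 250GB drive, about a quarter terabyte
rated_tbw = 70                 # Samsung's rated endurance, terabytes written
full_drive_writes = rated_tbw / capacity_tb
print(full_drive_writes)       # 280.0, close to the ~300 erase/write cycle estimate
```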
I would not expect for anyone to want to use a client SSD this way. Still, it’s an interesting way to look at the numbers.
Jim Handy says:
December 29, 2016 at 5:12 pm
Rawraj, Thanks for the congratulations and for a very insightful comment. You are correct in your concern, but your math is in error.
One terabyte is 1,000 gigabytes to some people, and 1,024 to others. Let’s not worry about small differences like that. The highest write bandwidth of 560GB/s is then about one terabyte every two seconds. Since the drive is rated at 70TBW, and since it only takes two seconds to write a terabyte, then you should reach the guaranteed endurance in 70*2 = 140 seconds. (I double-checked the Samsung site which showed 520MB/s, but that’s not much different. The 70TBW specification was for the 250GB drive. See http://www.samsung.com/semiconductor/minisite/ssd/product/consumer/750evo.html).
As far as I can tell there is a 1,000x mistake! The highest write bandwidth of 560GB/s is actually 560MB/s (or 520MB/s). That means in 2 seconds you can write only around 1GB.
Paul.
Paul,
Thanks for pointing out my embarrassing mistake! Rawaj’s math was correct after all!
I have corrected my comment to reflect that.
Jim
VERY FEW users write more than 5GB per day, and an Intel developer forum presentation indicates the top 1% of users perform about 50GB of writes in a day. 5GB per day written is just under 2TB written per year. Burst data may peak at 500+ MB/s, but that is atypical and non-sustained in typical HDD or SSD use. It is doubtful that a typical user could exceed 150TBW in 10 years (the Samsung V-NAND warranty), and if you hit this level you have likely deployed the drive in a Hyper-V virtualization server. In a recent look at SMART data on one of the busiest alarm-collection servers at a Mexican Claro telco, I observed the equivalent of 10TBW in all of 2015.
J Rad.
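A quick sanity check of J Rad’s figures (using the 5GB/day rate and the 150TBW V-NAND warranty number from the comment above, with 1,000-based units):

```python
gb_per_day = 5                           # heavy but realistic client write load
tb_per_year = gb_per_day * 365 / 1000    # 1.825 TB written per year
years_to_warranty = 150 / tb_per_year    # time to reach a 150TBW rating
print(round(years_to_warranty, 1))       # 82.2 years at that rate
```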
You are very right. PC users I have spoken with have looked at their SMART attributes and found that their writes are far below what they expected.
About three years ago everyone was obsessing about the maximum number of drive writes. Today it appears that more sophisticated users understand their workloads enough to purchase SSDs with lower DWPD since they know that they don’t need the highest number.
Jim
I think we should preface this with regard to VM users. In my experience with VM PC clients on Macs (i.e., the Unix Darwin kernel) running OS X as the host, hard drives and SSDs are hit fairly hard. If you have a mission-critical application and might lose a lot of money with downtime, it’s better to go with the more reliable SSDs and hard drives in situations like that. For the prosumer, that generally means Samsung Pro series drives are the best way to go.
Similarly, if you are going to use an SSD as a constantly and heavily used backup/archiving drive, you want one rated waaay better than 70TBW.
By RAID 5, 5+1, or 6-ing SSDs, you can increase throughput while lessening the chance of a sudden loss of operations. Thus, in a RAID 5 configuration of 3 SSDs, each with 70TBW, the effective endurance is approximately 200TBW. Note also that this is still less than a single SSD with a 300TBW rating (e.g., modern Samsung Pros in the larger sizes). Thus, you could set up a RAID 5+1 configuration and have a fairly reliable system and great performance. (This assumes you weren’t trying to always max out writes to the drives per day, since, yeah, even this setup would only give you approximately 5 days of reliable drive use under constant maximum writes if the TBWs are accurate, as per Rawraj’s theory above!)
Timbo
Thanks for a very useful comment from “Down in the trenches”. I am sure that a lot of folks will find it extremely useful.
Since I don’t have experience with VM PC clients I find this to be more than puzzling. I would imagine that the client would have extremely little local write traffic, with everything going to the server. Since PCs without virtualization already have very light write loads it would make sense for virtualized PCs to have even less!
Can you shed any light on why the opposite is true?
Thanks again,
Jim
I would suggest you go RAID 1 using two Micron 5100 MAX drives. They are not cheap; a 960GB SATA drive runs around $550. However, the endurance rating is 8,300 TBW, or 8.3 PBW. These are true enterprise drives with a 5-year warranty.
Mfg. part: MTFDDAK960TCC-1AR1ZABYY. CDW sells them, and so does Amazon. You could also probably use the 5100 PRO, which is the mixed-use version, or the 5100 ECO, the read-intensive version, which is rated at only 800 TBW.
There are three general categories of enterprise drives: read-intensive, mixed-use, and write-intensive.
Thanks, Bryan, for yet more good info “From the Trenches”.
I’m sure you’re helping out more people than you know!
Jim
Hi,
Does this measure “TBW” translate into more understandable single bit reads/writes?
Giedrius,
If you’re looking for a translation of TBW into single bits or bytes, then simply look at this as the trillions of bytes written to the SSD. Some SSD makers use 10^12 as one trillion, or 1,000,000,000,000 (twelve zeros). Others use 2^40, which is about 10% larger: 1,099,511,627,776.
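The difference between the two conventions is easy to verify:

```python
decimal_tb = 10**12             # 1,000,000,000,000 bytes
binary_tb = 2**40               # 1,099,511,627,776 bytes
print(binary_tb / decimal_tb)   # about 1.0995, i.e. roughly 10% larger
```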
Either way, it’s a huge number!
Jim
Is the above translation applicable to TBR as well?
Not at all.
SSDs can be read an infinite number of times. It’s only writes that cause any problems.
Best,
Jim
Thanks Jim for the quick response. So in order to verify TBR, can I use the calculation below?
TBR = number of read commands × transfer size (in TB)
I guess you could do that, but I can’t understand why you would care to.
If the drive can be safely read an infinite number of times, then TBR and DRPD (drive reads per day) would be infinite as well.
Perhaps you have some other motive. Mind explaining?
Jim
Yes. But when I want to calculate the Read Error Rate, I want to use the Read Error Count for every ‘n’ TBR.
Normally Read Error Rate is measured in parts per million (PPM). If you start to use a term like TBR you are likely to confuse people into thinking it’s similar to TBW, which is the lifetime of the drive before it is expected to fail.
To avoid confusion, I would highly recommend staying with standard nomenclature.
Jim
Hi Jim Handy.
Normally, vendors do not account for the page file used by the operating system. I have 32GB of system memory, and right after installing the system, 32GB of data had already been written to the disk. Only one month later I am already thinking of buying a new SSD, because about 3-4TB has already been written.
Conclusion: I could use information from vendors broken out like this: for home use, for workstations (like my machine), for servers…
XeonForever,
I would suggest that you have a close look at the SSD’s SMART attributes. I suspect that there’s a lot less write traffic to your SSD than you think there is.
The traffic between your 32GB memory and the SSD involves software (which is read from the SSD but not written to it) and two kinds of data: some that is read-only and some that is read/write. The read/write data is the only data that would cause wear. In many cases it is a very small part of the total.
See what your SMART attributes say this week, and then check them next week. From that you can estimate how many weeks your SSD should last.
I bet that you will be delightfully surprised at the result.
Jim
hi, i want to ask about TBW, im using my PC for gaming and netflix only, and i cant find info about how many GB/day windows 10 will write my SSD if im using it for game n netflix, can you give me aprox number for GB/day ?,
and for 2nd question i use 120GB for cache using primocache to speedup my HDD, do you know how many GB/day if SSD use for cache. thank you very much, sry for my bad english.
Karel,
Your English is great! There’s no reason to apologize.
Let me suggest that you read the SMART attributes for your cache SSD and then, a week or a month later, read them again and compare the two sets of attributes. I am sure that you will find that the wear on your SSD is far less than you imagine. I know people who have done this and found that their SSDs should last 10 or more years with their current usage patterns.
A cache SSD will receive more traffic than a larger SSD that is used for main storage. Once you have found that your cache should last for longer than you plan to own it (and I am sure that this is the case) then you can rest assured that an SSD used for main storage will last far longer.
Applications like Netflix and video games perform very few writes to an SSD. Business software like transaction processing systems, databases, and virtualized systems write a lot to the SSD, and these are the only applications that really need to be watched.
Check the SMART attributes and then relax!
Jim
Hi All,
In the formula to calculate TBW = P/E cycles * Capacity / WAF, can someone help me understand whether the “capacity” variable is USER capacity or RAW capacity? To be specific, I have a 128GB drive that is over-provisioned from a 256GB raw drive (100% OP). Should 128GB or 256GB go into the formula? I see many articles telling me that it should be RAW capacity. However, since I use 100% OP, my WAF is better than it would be with 10% or 28% OP. If I take the raw 256GB into my calculation with the optimized 100%-OP WAF, will that be a fair calculation?
Thanks,
-Vincent
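For what it’s worth, Vincent’s formula is easy to experiment with in code. The P/E cycle count and WAF values below are purely illustrative, and which capacity belongs in the formula is exactly the open question:

```python
def tbw_estimate(pe_cycles, capacity_gb, waf):
    # TBW = P/E cycles * capacity / WAF (capacity in GB, result in TB, 1,000-based)
    return pe_cycles * capacity_gb / waf / 1000

# Hypothetical 3,000-cycle NAND; compare user vs. raw capacity at the same WAF
print(tbw_estimate(3000, 128, 1.5))   # user capacity: 256.0 TBW
print(tbw_estimate(3000, 256, 1.5))   # raw capacity:  512.0 TBW
```

Note that in practice the WAF itself depends on the over-provisioning level, so the two rows above would not really share the same WAF.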
Vincent,
This isn’t something that I have ever attempted. I would suspect that you need to know more than just the WAF, and I can’t imagine that an SSD company would share that number with you anyway.
The drives that I have seen with variable O/P specify the DWPD or TBW for certain fixed levels of O/P and leave it at that.
Perhaps another reader can help answer your question directly, though. Any takers?
Jim
Hello Jim Handy,
Thanks for all the hard work you have been doing writing these articles over the years, but after reading a lot I still have a question that I want to ask you as I can’t find any information on this peculiar case of mine.
I am still quite young, and I am thinking of buying an M.2 form-factor SSD, be it SATA-3 or NVMe. I don’t plan on writing a lot of data to it, but I just want it to last 30-50 years. Is that even possible?
I know that it probably won’t, but are there SSDs out there that I can just plug in and forget?
I know cloud storage would be a wiser solution, but what if I just want to dump 500GB of photos and leave it at that?
Would the durability change depending on whether I left the drive “spinning,” or should I just put the photos on it and take the M.2 drive out?
Nagas,
Thanks for asking. I see two problems:
The biggest problem is that it is unlikely that there will be support 50 years from now for any of today’s interfaces. If you really did keep a SATA or NVMe SSD for 50 years you might not be able to read the data.
The second problem is that Flash is only guaranteed to retain its data for 10 years or less. The only exception to this was a USB drive that SanDisk used to sell that was supposed to retain its data for 100 years.
You are correct to think that the data would be refreshed if you continued to apply power – some SSDs automatically refresh data that has been ignored for a long time. But if you can’t find a computer that will read it then what difference would that make?
Your best approach would be to store your photos on an SSD, or a USB drive, or the cloud and to copy them to a new device once every year. That way your data will never be more than a year old. If you keep the old device then you will have a 2-year-old backup, and that’s still pretty safe.
If you take this approach and then interfaces change you can move your data from an SSD with the old interface to an SSD with the new one a few years before that interface becomes obsolete.
I hope that helps.
Jim
“You are correct to think that the data would be refreshed if you continued to apply power – some SSDs automatically refresh data that has been ignored for a long time.” So this means it would be better to keep the SSD plugged into a computer that’s being used, even if you never really use the SSD for anything else but storage?
“Your best approach would be to store your photos on an SSD, or a USB drive, or the cloud and to copy them to a new device once every year. That way your data will never be more than a year old. If you keep the old device then you will have a 2-year-old backup, and that’s still pretty safe.” But that requires having multiple devices, or buying quite a few new ones. If that’s the case, is it worth saving up more for SSD storage on the theory that it will last longer than conventional HDD storage? If so, can you think of any SSDs (preferably M.2) that were designed with durability in mind? I know the most durable SSDs are SLC-based, but they are hardly available anymore, and MLC- and even TLC-based drives surpass them in TBW. Or is TBW irrelevant in this case, when we are talking about just putting data on a drive for long-term storage? Or maybe setting up a RAID or NAS solution would be better, even if technically more challenging?
Thank you for answering my questions, hope I am not taking too much of your time.
Hello Mr. Handy,
I intend to buy an SSD.
I am planning to buy a 500GB or 1TB model (if it fits my budget).
Can you please guide me on the following?
1) Will it be a good idea to keep only the OS (Win 7 Pro) and my daily primary data (50GB) on the SSD, and keep all other non-essential data on a slave HDD?
2) Does TBW mean the total terabytes of data that can be written (and/or read?) on the SSD?
3) How do I check the SMART attributes for my daily work?
Thanks in advance
Vishal, Thanks for the comment.
You don’t need to worry so much. SSD wear is really only an issue for server workloads. It’s nearly impossible to wear out an SSD in a PC.
You can put all of your files on the SSD. The items that have the greatest activity are temporary files from the O/S that are invisible to you and are out of your control. Even though they cause more writes than your personal files do, the wear is not very high.
For #2 you are right: TBW is the amount of data that can be written onto the SSD. The SSD should withstand infinite reads. After the SSD has reached its TBW limit the manufacturer no longer guarantees it, but it may work for a very long time after that.
If you want to read the SMART attributes you will need to read the SSD’s technical specifications. I would expect you to read the attributes only once before deciding that you didn’t really need to read them at all. Someone I knew used his PC a LOT, but when he read his SMART attributes after a full year of use he found that it would take him 135 years to wear his SSD out. I don’t need to say that he didn’t plan to keep his PC that long!
Hope that helps.
Jim
Hello from Romania !
I just found this article and read a lot of interesting things. I am about to buy a S-ATA SSD and was looking at Enterprise-class drives because, even if I do not need it that much, I want one with the BEST endurance rating… What can I say, I just love overbuilt things…
So, being limited to the S-ATA interface, as of today (and excepting NVMe SSDs or Intel’s Optane technology), the best-in-class in terms of endurance would be:
1) Micron 5200 MAX series (with a 2.2 PBW rating, or 5 DWPD) [also stated with a 3 million hour MTBF!];
2) Intel D3-S4610 series (with a 1.4 PBW rating, or 3 DWPD);
3) Kingston DC500M series (with a 1.3 PBW rating);
4) Samsung SM883 series (with a 0.8 PBW rating).
My question is… am I missing some other manufacturer? Is the list I made accurate?
P.S.: I cannot afford an ATP Velocity SII Pro (SLC-based) S-ATA SSD, as their price is… enormous.
Mihai, Thanks for the note.
Another company you might consider is Western Digital Corp. (WDC). In 2016 WDC acquired SanDisk, which had acquired SMART Storage Systems in 2013; that company used to produce the highest-endurance SSDs in the market, at over 25 DWPD!
In those days the data center obsessed about wear and was willing to pay a lot of money for super-high endurance ratings. Today data centers have become much more sophisticated: where they used to worry but not measure their write loads, they now measure the write loads and don’t worry. This has subsequently driven the development of super-low-DWPD SSDs, below 1 DWPD. Data centers find that they don’t need any more than that.
In short, there may not still be any 25 DWPD SSDs, but I’ll leave you to look that up for yourself.
I should explain that the MTBF figure has nothing to do with write workload. It is more related to solder joints and other mechanical issues that could corrode or physically fall apart. A 3 million hour MTBF means that if you have a large number of SSDs and power them all up for 3 million hours, then half or fewer of them will have failed, whether or not you write to them. That’s 342 years!
Good Luck!
Jim