I have been receiving questions lately from people who are puzzled when companies use different parameters than their competitors use to specify the endurance of their SSDs. How do you compare one against the other? Some companies even switch from one parameter to another to define the endurance of different SSDs within their product line.
I have found that Intel uses three different endurance measures for its products: DWPD (drive writes per day), TBW (terabytes written), and GB/day.
There’s no real difference between these measures – each one is a way of stating how many times each of the SSD’s locations can be overwritten before the drive has gone past its warrantied life.
The relationships between these three measures are illustrated in this post’s graphic. You can click on it to see an expanded version. It’s all pretty simple. We’ll spell out the relationships in detail below, but in brief, if you want to compare two SSDs that are specified using two different measures, all you need is one of those measures (TBW, DWPD, or GB/day), the drive’s capacity, and the warranty period or lifetime of the SSD. Since the lifetime is usually expressed in years, and since DWPD and GB/day are measured in days, you will also have to multiply the warranty period by 365 to convert it to days.
You may also need to convert from GB to TB or PB or back. For some reason Intel converts these using decimal 1,000s instead of the computer scientist’s way of using the binary number 2^10, or 1,024.
Regular readers of The SSD Guy will have already seen a post that explains how to convert terabytes written (TBW) to drive writes per day (DWPD). Intel sometimes uses a third measure for the same thing, and that’s GB written per day (GB/Day). These can all be calculated from each other as long as you know the SSD’s capacity (I’ll use GB here instead of TB) and its warranty period (measured in years).
- DWPD = (GB/Day)/Capacity –OR– DWPD = (10^3 * TBW)/(Capacity * Warranty * 365)
- TBW = (Capacity * DWPD * Warranty * 365)/10^3 –OR– TBW = ((GB/Day) * Warranty * 365)/10^3
- GB/Day = (10^3 * TBW)/(Warranty * 365) –OR– GB/Day = DWPD * Capacity
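For readers who would rather compute than cross-multiply, the three relationships can be sketched in a few lines of Python. This is just a restatement of the formulas above (capacity in GB, warranty in years, TBW in TB, using Intel’s decimal conversion of 1,000 GB per TB); the 1,000GB drive in the example is hypothetical:

```python
# Endurance-measure conversions, using decimal units (1 TB = 1,000 GB)
# as Intel does. Capacity is in GB, warranty in years, TBW in TB.
DAYS_PER_YEAR = 365

def dwpd_from_tbw(tbw, capacity_gb, warranty_years):
    """Drive writes per day implied by a TBW rating."""
    return (1_000 * tbw) / (capacity_gb * warranty_years * DAYS_PER_YEAR)

def tbw_from_dwpd(dwpd, capacity_gb, warranty_years):
    """Terabytes written implied by a DWPD rating."""
    return (capacity_gb * dwpd * warranty_years * DAYS_PER_YEAR) / 1_000

def gb_per_day_from_dwpd(dwpd, capacity_gb):
    """GB written per day implied by a DWPD rating."""
    return dwpd * capacity_gb

# Example: a hypothetical 1,000GB SSD rated at 3 DWPD over a 5-year warranty
print(tbw_from_dwpd(3, 1_000, 5))      # 5475.0 TBW
print(gb_per_day_from_dwpd(3, 1_000))  # 3000 GB/day
```

Going the other way, `dwpd_from_tbw(5475, 1_000, 5)` returns the original 3 DWPD, which is a handy consistency check.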
To give an example of how that can be used, let’s pick some Intel SSDs at random and calculate the missing parameters. In the table below (copied from an Excel spreadsheet) the white cells represent specifications provided by Intel and the yellow cells are calculated from these numbers.
To make the comparisons relatively similar I have chosen SSDs that all offer close to 1TB of capacity.
Most of these SSDs provide only one of the three endurance measures, but the SSD 750 shows two. The SSD 750’s TBW and GB/day figures are consistent: each can be calculated from the other.
I happen to be most comfortable with DWPD, so let’s talk about that column. A few years ago SSD users, particularly those using SSDs for enterprise applications, focused a lot of attention on this figure and always wanted the largest number the SSD maker could provide. Higher numbers increased the cost of the SSD because one of the tricks used to increase endurance was to increase the overprovisioning of the SSD.
As the market matured, users began to realize that certain of their workloads didn’t need much endurance, and many started to ask for lower prices on SSDs with low endurance ratings. One such application is the PC, which has very low write requirements, especially compared with the needs of real-time database applications like on-line transaction processing (OLTP). Note that the lowest endurance figure is the SSD 750’s 0.06 DWPD, a number that meshes well with the needs of most PC applications.
At the other extreme is the DC P3700, Intel’s top-of-the-line NAND SSD, which supports 17 DWPD. This product is designed for the highest write-load enterprise applications.
Other SSDs fall at various points between these extremes, reflecting Intel’s efforts to provide the right mix of specifications (price, performance, and endurance) to match the needs of several different user types.
For this example I chose SSDs all of similar capacities. Let’s change our perspective to explore a single SSD with a range of capacities.
Intel’s DC P3600 is available in five different capacities. The table below illustrates how its static 3 DWPD translates to varying TBW and GB/Day values as a function of the capacity.
The relationship is simple: As the SSD’s capacity triples, from 400GB to 1,200GB, its TBW and GB/Day specifications triple as well, even though the DWPD doesn’t change.
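That linear scaling can be sketched directly. The loop below assumes the DC P3600’s 3 DWPD rating and a 5-year warranty (an assumption; check Intel’s datasheet for the actual term), and the capacities shown are illustrative:

```python
# TBW and GB/day scale linearly with capacity at a fixed DWPD.
# Assumes 3 DWPD (the DC P3600's rating) and a 5-year warranty.
DWPD = 3
WARRANTY_DAYS = 5 * 365

for capacity_gb in (400, 800, 1_200):
    gb_per_day = DWPD * capacity_gb           # GB written per day
    tbw = gb_per_day * WARRANTY_DAYS / 1_000  # terabytes written over the warranty
    print(f"{capacity_gb:>5} GB: {gb_per_day:>5} GB/day, {tbw:>7.0f} TBW")
```

Tripling the capacity from 400GB (1,200 GB/day, 2,190 TBW) to 1,200GB triples both derived figures, while the 3 DWPD stays fixed.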
Those who have been paying a lot of attention to Intel’s recent Optane SSD announcements may be interested to see how those SSDs stack up. Two versions have recently been introduced: The DC P4800X, which is aimed at enterprise applications, and the m.2 Optane Memory SSD, which is for PC applications. The enterprise SSD’s endurance was disclosed using DWPD while the endurance of the m.2 SSD is expressed in GB/Day.
Intel’s Optane SSD DC P4800X enterprise SSD is specified to last three years at 30 Drive Writes per Day (similar to the specifications used for Intel’s NAND flash based SSD DC P3700) while its consumer counterpart, the 16-32GB Optane Memory SSD is specified to last for 5 years at 100GB of writes per day.
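To put the two Optane drives on a common footing, here is a quick sketch of the conversion. Note that the DC P4800X’s 375GB capacity is my assumption, based on its launch configuration, rather than a figure from this post:

```python
# Convert each Optane endurance spec into the other's terms.
def dwpd_to_tbw(dwpd, capacity_gb, warranty_years):
    return capacity_gb * dwpd * warranty_years * 365 / 1_000

def gb_per_day_to_dwpd(gb_per_day, capacity_gb):
    return gb_per_day / capacity_gb

# Enterprise: DC P4800X, 30 DWPD over a 3-year warranty (375GB assumed)
print(dwpd_to_tbw(30, 375, 3))  # 12318.75 TBW

# Consumer: Optane Memory, 100 GB/day over a 5-year warranty
for capacity_gb in (16, 32):
    print(capacity_gb, "GB:", gb_per_day_to_dwpd(100, capacity_gb), "DWPD")
```

The 100 GB/day spec works out to 6.25 DWPD for the 16GB model and 3.125 DWPD for the 32GB model, which is why the table shows two different DWPD numbers for the same product line.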
Although I do not know why Intel chose to specify drive lifetime differently for the two Optane SSDs, it’s simple enough to calculate one from the other, allowing us to compare the two, not only against each other, but also against their NAND-based counterparts. Here’s how the first table appears with the addition of the Optane SSDs:
Since there are two capacities of the Optane Memory SSD, and since Intel chose to use GB/Day as its endurance specification, the DWPD numbers differ between the two – very different from what we saw with the DC P3600 in the second table. This product’s endurance is higher than that of half of the NAND-based SSDs in the table, giving credence to Intel’s early announcement that 3D XPoint Memory has higher endurance than NAND flash.
The Optane enterprise SSD beats that figure by a good margin, and is nearly double that of the next-higher candidate – the DC P3700 – offering 30DWPD compared to the DC P3700’s 17. Articles in the trade press indicate that the DC P4800X has about 19% overprovisioning, and this probably helps it to achieve this level of endurance. Since this Optane SSD’s capacity is low, though, the TBW number is proportionally smaller.
It is unclear whether the Optane Memory PC SSD uses any overprovisioning at all. This could explain why its DWPD is only 1/5th that of its enterprise counterpart.
Note, too, that the DC P4800X has a shorter warranty period than most Intel SSDs at 3 years instead of 5. This reduces its TBW figure to 3/5ths (60%) that of a similar drive with a 5-year warranty, but has no impact on the DWPD or GB/Day figures.
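The 60% figure follows directly from the TBW formula, since warranty length enters it linearly while DWPD and GB/day don’t depend on it at all. A quick check (reusing the 375GB capacity and 30 DWPD assumed for the P4800X above):

```python
# TBW scales with warranty length; DWPD and GB/day do not depend on it.
def tbw(dwpd, capacity_gb, warranty_years):
    return capacity_gb * dwpd * warranty_years * 365 / 1_000

ratio = tbw(30, 375, 3) / tbw(30, 375, 5)
print(ratio)  # 0.6 -- a 3-year warranty yields 60% of the 5-year TBW
```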
The SSD Guy hopes that this post will help clear some of the confusion surrounding these three measures of endurance, and will help purchasers understand how one SSD’s endurance compares to that of another.
16 thoughts on “Comparing Wear Figures on SSDs”
Jim this is a great piece and – as you can imagine – is something we have a keen interest in at NVMdurance. With 3D flash the endurance game is changing quite a bit, though, and it is going to be interesting to see what emerges with DWPD specs for SSDs based on 3D flash. The big difference is that with 2D each flash chip had a specified endurance that was typically a good bit less than the intrinsic endurance of the device. So there was a lot of scope for people to use a variety of techniques to apparently extend the endurance, and it was not easy to determine the SSD’s DWPD performance based on the specified endurance of the flash components. With 3D devices, not only is there no headroom above the specified endurance, but unless the SSD maker pays great attention to the functioning of the LDPC error correction at each stage of life, the flash components will not even reach the specified endurance and/or will have awful latency in later life.
Thanks, Pearse, for the comment. I wasn’t aware of the headroom difference between planar & 3D NAND.
The SSD business is getting increasingly sophisticated over time. The first SSDs weren’t designed for endurance, and they caused issues. Then SSD designers went to great lengths to maximize endurance, and some drives achieved ratings of 25+ DWPD. Recently many end users have started to pay attention to the cost-vs.-wear trade-off, and they have found that certain systems don’t need high DWPD specifications, allowing them to purchase less costly SSDs.
I suspect that this increasing end-user sophistication will allow 3D NAND SSDs to continue to enjoy increasing acceptance despite the issues you mentioned.
You said that it’s: “not easy to determine the SSD’s DWPD performance based on the specified endurance of the flash components.” I agree with that, but will counter that it would seem that a medium with 1,000 times the endurance of NAND flash should have a drive life that reflects that advantage to some degree.
Perhaps, as Optane SSDs ship in volume, we will see improvements in their wear specifications.
Let me start by saying thank you for this straightforward explanation. I have a couple of questions related to this:
1. Is it possible to short-stroke SSDs? If so, what is the benefit?
2. If the answer to Q.1 is yes, how does it affect the DWPD?
Glad to help. Short stroking only improves the performance of a mechanical HDD. It can’t be used on an SSD. If you tried using short stroking software on an SSD you would find that it made no difference to the system’s performance, but that it reduced the capacity of the SSD, just as it reduces the capacity of an HDD.
The DWPD would remain the same.
Thank you Jim!!
This provides some great information, but in my field we work with MDVRs (DVRs in trucks), which often run for 10-14 hours a day and, depending on the type of system, can write up to 360MB/minute, day in, day out. SSDs are attractive in this area as they don’t have the same issues as HDDs, which tend to fail due to excessive vibration – some last 1 year, some last 5 years. Of course any drive failure causes data loss, and data recovery is not usually considered unless it is critical and the client is sure that the failure occurred after the event they want to recover.
What I am most interested in understanding better is whether a higher end drive with a greater TBW value is really the way to go over a consumer drive with about 25% the TBW.
I was comparing a Samsung 860 Pro 1TB, which has 1,200 TBW, against a Crucial MX500, which shows 360TBW, but the Crucial is nearly half the price.
Price is a big factor for us as our customers are not in the IT industry; they are transport companies and don’t see the value unless it is clearly presented in figures showing a cost benefit over time.
Unfortunately, giving you advice about this is kind of like giving advice about insurance. How much does the trucking company have to lose if the data is lost? What kind of price are they able to justify?
You might want to work with someone with an accounting background to figure out how to frame the options, but I would suggest coming to your trucking customers with 2-3 alternatives: Low cost, less reliable vs. higher cost, higher reliability.
It might also give you some leverage against your competition, showing that you understand the problem better than they do.
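As a rough aid when building that cost-benefit case, here is a sketch of rated-write lifetime under the workload described above. It assumes a sustained 360MB/minute for 12 hours/day, which is my assumption; actual duty cycles (10-14 hours, variable bitrates) will shift the numbers:

```python
# Rough rated-write lifetime under the MDVR workload described above.
# Assumes a sustained 360 MB/minute for 12 hours/day (an assumption;
# real duty cycles of 10-14 hours and variable bitrates will differ).
MB_PER_MINUTE = 360
HOURS_PER_DAY = 12

gb_per_day = MB_PER_MINUTE * 60 * HOURS_PER_DAY / 1_000  # 259.2 GB/day

for name, tbw in (("Samsung 860 Pro 1TB", 1_200), ("Crucial MX500 1TB", 360)):
    years = tbw * 1_000 / gb_per_day / 365
    print(f"{name}: ~{years:.1f} years of rated writes")  # ~12.7 and ~3.8 years
```

Numbers like these, translated into replacement intervals, are exactly the kind of figures a transport company can weigh against the price difference.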
Hi Jim and many thanks for this post! I was looking for an easy way to translate endurance values. Intel isn’t the only one that mixes these up – it’s still a bit of a mess across the entire industry, it seems, though most consumer-oriented PR now puts more emphasis on TBW.
What baffles me somewhat is that many manufacturers, including Intel and Samsung, insist on listing an MTBF value with their SSDs. Why is this? I have no engineering background, so perhaps some useful statistical parameter eludes me here, but I fail to see what it might be?
As I understand, MTBF has to do with random failures rather than wear. It might be the likelihood of a solder joint failure or something equally unpredictable.
The MTBF specification would be the manufacturer’s guarantee that the SSD won’t simply stop working before it has reached a certain point.
Thank you for this, I’ve just discovered your blog in my research into SSD behaviour.
A brief comment, which I’d be more than willing to get into in more depth, is that I don’t think one should view any of these valuations as “endurance” or reliability. That is, if one uses these terms in the way that engineers would use them in any rigorous manner.
Rather, they should be viewed in the sense of how a casino states the odds for a bet at its tables, or how an insurance company sets its premiums based on what its actuaries tell it about the risks of the accidents it covers.
They are just there to be used for warranty purposes, and can be adjusted higher or lower for either marketing reasons, or to protect the bottom line. But they are not anything that one should place any great trust in, as far as the risk of a given drive failing, leading to loss of data.
Why do I say this? For several reasons:
(1) They have no published research supporting them, and they do not come equipped with the assumptions that were used to produce them.
Any statement of reliability made by an engineer, would tell you this, as well as the confidence that one should place in it, from a statistical point of view.
And these have none of that.
(2) The earlier generation of planar NAND had data sheets, and some published research, that one could use to make one’s own estimates of reliability, but the newer 3D generations have very little of that. It’s all cloaked in secrecy and hidden behind NDAs.
(3) From what is known about NAND flash in general, and 3D flash in particular, it’s obvious that any engineering-based statement of endurance or reliability is a strong function of how the flash is used. Factors such as:
– Thermal history
– Past demands on it (things like type of writes, wear levelling)
– How it is configured (OP levels, cache etc)
All that these values are good for is to provide a fixed bound at which the manufacturer must replace an SSD that has failed before the end of its warranty period.
And unlike a car with a similar type of warranty based on miles or years, a broken-down or leaking SSD cannot be taken in by the dealer and repaired, whether the repairs are covered by warranty or paid for by the customer. If it fails due to NAND end-of-life, then the data on that NAND is lost.
It’s like the joke where the nightclub stand-up comic bets a guest with a fancy silk tie $20 that he can magically cut the tie in half and then fix it. The guest, egged on by his table partner, agrees; then “snip”… and “oops, sorry, didn’t work that time, here’s your $20”.
$20 was the price of failure for the one gag, and in return, the nightclub gets a packed house.
To belabour what is now hopefully very obvious, these large, unjustified, unqualified DWPD and TBW values are based on what Micron, Samsung etc. are willing to risk if a hundred thousand drives begin to develop failures before the end of their warranties. And it’s exactly like any clever casino’s odds: if they have to pay out on a few drive failures, or even quite a large number, after having your money for a few years, they can afford to give you a replacement of what is by then outmoded tech.
Their actual costs will be in processing the claims, not so much in the cost of the drive, because in many cases you will have bought the drive from a third party, who must replace the drive itself as a whole. All that the flash maker must replace, or perhaps repay, is the cost of the bad NAND they sold – without any complex and expensive customer-care department needed.
It would be interesting to see if the agreement between the SSD manufacturer and the NAND manufacturer for warranty failure is for refund or replacement. I bet it’s replacement. In which case, “The House cannot lose”.
I’m glad that you found the post useful.
You remind me of an old quote, attributed to Benjamin Disraeli: “There are three kinds of liars: Liars, Damned Liars, and Statisticians!”
You appear to view SSD specs in light of the third category.
Whether or not this is the case, businesspeople do worry about the damage they might suffer from a tarnished reputation, so they aren’t quite as cavalier as the nightclub comedian. In fact, there are analysts who try to assign a monetary value to the harm done to a manufacturer by product failures. I don’t know how they accomplish that, but the fact that they try shows how much people worry about it.
In the end, I wouldn’t worry very much that any SSD manufacturers are playing this game. It would be a short-sighted approach, and I doubt that anyone in the SSD business has such a short-term outlook. I trust that the DWPD or TBW specs that the manufacturers publish, while possibly the best of several specs they could have published, are reasonably accurate.
For once, short comment….:-}
Yes, that (supposed) Disraeli comment was what Seagate was referring to in their criticism of most benchmarking efforts: “Lies, Damn Lies and SSD Benchmark Tests”
However, I do NOT see the SSD Manufacturer’s as being “Liars” because they USE statistics.
But rather, as with my point comparing things to how NASA killed the Challenger astronauts, it’s that they do NOT use statistics.
Or at least ALLOW others to use them.
Because they are so secretive.
No offence, but you have bought into what they are trying to sell us: “Trust me, this is awesome stuff that is so over-designed it will last forever…”.
The problem is, EXACTLY as happened with NASA, that not only did they fool other people, but they fooled themselves.
I’m a materials engineer, I have a very good idea of what can happen in a very complex, and small scale process as used to make NAND.
You cannot build such a robust process that you don’t see failure.
You NEED the statistics to get a handle on things.
And you need to be open with it.
To allow others to do independent checks.
N.B. It probably wasn’t Disraeli that said it, but someone after he was dead.
In addition, I want to make it clear that I don’t see this as a minor point.
Just think about where we’re going with the use of SSD’s, in a very short period.
You’d know the exact figures better than me, but consider how many computers have gone from using virtually no SSD’s, to likely close to 100%.
And what is the fraction of data centre storage on SSD’s now?
Covering everything from industrial, to financial, military, government….
I’m sure the percentage in all cases is rising, as is the total amount stored.
And yet, how much information is available to reliably predict the ACTUAL reliability of this storage?
There isn’t actually an industry standard on how to calculate the various terms in a consistent manner, as your earlier blog post showed, where some manufacturers use 1,000 to convert from GB to TB, and others use 1,024.
There is NOT a CHOICE, there’s only ONE way. The SAME way.
Even if it’s a mistake, EVERYONE should be doing it the SAME way.
Doesn’t matter if it’s 1024, 1,000 or 999. As long as it’s the SAME number.
And so what about “Capacity”? Do people use the advertised/nominal capacity? Or the formatted capacity? The total of formatted capacity plus overprovision? Raw capacity using all the NAND?
And I’m not going to mention the point of converting from days to years and using 365, or using 365.25.
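To put a number on how much those conventions actually move the result, here is a quick comparison for a hypothetical 1,000GB drive at 1 DWPD over 5 years, contrasting the decimal/365-day convention with a binary/365.25-day one:

```python
# How far apart do the unit and calendar conventions put a TBW figure?
# Hypothetical 1,000GB drive, 1 DWPD, 5-year warranty.
capacity_gb, dwpd, years = 1_000, 1, 5

tbw_decimal = capacity_gb * dwpd * years * 365 / 1_000     # 1,000 GB/TB, 365 days
tbw_binary = capacity_gb * dwpd * years * 365.25 / 1_024   # 1,024 GB/TB, 365.25 days

gap_pct = (tbw_decimal - tbw_binary) / tbw_decimal * 100
print(tbw_decimal, round(tbw_binary, 1), round(gap_pct, 1))  # 1825.0 1783.4 2.3
```

A spread of a couple of percent between two “correct” answers, purely from convention.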
So to connect this very visible point about the fuzzy thinking about how to calculate these numbers, with my first post about lack of an accurate value for reliability, surely it makes you a bit nervous?
Just as the Captain of the Titanic said when he was watching where they were going….”Hah, that’s pretty small, just a bit of floating ice. Nothing my big ship needs to worry about….”
After all, if the SSD industry can screw up such a basic calculation as TBW/DWPD, then what have they done to come up with the actual numbers that this is supposed to be based on?
Is this DWPD just the tip of the iceberg? I strongly suspect it is.
And THEN, take it one step further. These NAND are made by only a few foundries, using a very complex and secretive series of processes.
As was observed after a fire at Kioxia in January: just one foundry, but it could influence 3-4% of total world NAND production.
So how does anyone know that what is being produced, and used in just about every single facet of modern life is actually conforming to spec, when you don’t even KNOW the spec, and thus cannot test it independently?
Like what happens if an instrument used for QA goes off the rails, and it’s not caught for a month or two…
Too simplistic an example?
Well, I dunno, it wasn’t anything more complex than this that caused the Challenger space shuttle to go boom.
Seriously, it’s worth a re-read of the Challenger investigations. It’s the same incompetent disregard for quantitative risk assessment in both cases.
This is a nice easy to follow summary:
“NASA’s “management methodology” for collection of data and determination of risk was laid out in NASA’s 1985 safety analysis for Galileo. The Johnson space center authors explained: “Early in the program it was decided not to use reliability (or probability) numbers in the design of the Shuttle” because the magnitude of testing required to statistically verify the numerical predictions “is not considered practical.” Furthermore, they noted, “experience has shown that with the safety, reliability, and quality assurance requirements imposed on manned spaceflight contractors, standard failure rate data are pessimistic.”
You can see it’s a corporate, or industry culture thing.
For various reasons, the NAND industry has decided it doesn’t need to release any actual data that can be used to do reliability analysis. And nobody is forcing them to do it.
Just imagine if at some point a very large fraction of the entire infrastructure is all running on the same type of NAND flash, and there’s a serious flaw in quality control that wasn’t caught at one of the small number of foundries supplying it, affecting a large fraction of that foundry’s production for a period and causing premature failures, without any prior warning.
So what do you replace these WITH? And how long does it take? I don’t think anyone knows.
Look at what happened when a few guys were sent off to fly planes into the Twin Towers. Pretty easy to plan, cost very little to carry out (except for the willingness of the terrorists to die), and now it’s a few decades later, and trillions and trillions of dollars spent in the aftermath, and probably close to a million deaths directly related to it. And we’re STILL not done.
Look at COVID. A super-flu virus, in one remote part of China, and a few months later the entire WORLD is in chaos.
My point is, it’s a very connected world now, and very easy to cause havoc.
So I think it’s completely irresponsible to have some half-a$$ed procedure for looking after the supposed reliability of a very essential part of the world’s infrastructure now.
The circus is being run by the clowns…..
While you raise a number of good points, I wouldn’t view this so negatively. There are people who worry about the difference between 1,000 & 1,024 and use the terms kibibytes, mebibytes, etc. to describe numbers based on 1,024, and there are others who choose to overlook the discrepancy. I suspect that you would be more comfortable with companies who specify things the way you want to see them.
As for NAND specs, they are very conservatively rated, especially in those parameters that are difficult to test. I once helped author a SNIA document that included a ton of reliability data that Fusion-io measured on standard NAND flash from companies like Toshiba and Samsung. (That document can be found at https://www.SNIA.org/sites/default/files/SSSI_NAND_Reliability_White_Paper_0.pdf.) Fusion-io’s finding was that flash suppliers tended to be amazingly conservative on their wear figures. You are correct to say, though, that detailed specs aren’t disclosed to just anybody. NAND vendors tend to ask for an NDA before handing them over.
TechReport did a wonderful experiment that hammered on client SSDs from a range of vendors and found that they, too, were conservatively specified. (https://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead/) There are other larger and more scientific studies on the web with similar results but for enterprise SSDs.
The 2.4% difference between 1,000 and 1,024 shouldn’t cause issues, though, since the discrepancy works in the end user’s favor – more data, more wear, etc. That said, if the user is planning usage to within a 2.4% margin then they are looking for trouble, since software can arbitrarily cause changes that are larger than that.
In the end I am perfectly content with the idea that this blog is hosted on a system that uses SSDs. They do a good job of accelerating the performance of the servers, and hosting services seem to have figured out how to bypass any problematic SSDs in a way that prevents me from suffering from the occasional failure.
Thanks for the comment!
Thanks for the feedback.
And also thanks for the reliability data: I’ll read that.
And I’m not saying that the confusion over using 1,024/1,000 is in itself a major issue, just that I see it as a lack of consistency and thoroughness.
However, I looked at a few more manufacturers (Kingston and Seagate, and Viking, a redistributor) and they all used 1,000 as the conversion factor, and were careful to state whether they were using GB or GiB.
And Intel does a similar thing on its specs, although they do use EXACTLY the same values for their newest Persistent Memory.
In one case the SAME values are reported to be GB (and VERY clearly NOT GiB), and in another, it’s GiB.
So they’ve got THEMSELVES mixed up here.
OK, maybe it’s the Marketing Department that messed it up, and not the Engineering one, but my point was they are not very open with providing Engineering data.
In addition, the data-centre SSDs often specify that they conform to the JESD219 specs, so that means they should be consistent.
But I still think there’s an underlying issue with having too many eggs in one basket, with respect to a very critical part of the infrastructure of modern society relying on components from a limited number of manufacturers.
Look what happens when there’s a loophole in Windows, and it immediately affects MILLIONS of users.
Also look at the issues with Huawei now, and the Russian interference with the West’s elections, referendums (Brexit), and even health issues like COVID.
I think there needs to be a LOT more oversight on critical components like SSD’s/NAND flash and related.
Where you have an independent lab testing all the items to ensure that the design conforms to consistent standards.
I’m an engineer, with a European focus.
In the EU, we follow standards much more rigorously for safety critical items than the USA or China does.
The system is homogeneous all over the EC (not just the EU, but the European Community, so Switzerland, and now the UK, are still included).
And safety-critical items must be tested by an independent lab to show they satisfy the standards.
By contrast, the US and China do not have anywhere near the same focus on producing to a consistent standard.
Three is the charm….
Your last comment was:
“Note, too, that the DC P4800X has a shorter warranty period than most Intel SSDs at 3 years instead of 5. This reduces its TBW figure to 3/5ths (60%) that of a similar drive with a 5-year warranty, but has no impact on the DWPD or GB/Day figures.”
This is not your fault, but it’s another example of how these figures are complete rubbish, and are only meant to mislead and beguile users.
This approach is backwards, with regards to quoting wear rates as GB/day.
And even DWPD is poor.
What is actually governing reliability, and thus endurance, is the actual mechanism that CAUSES a NAND cell to no longer be able to store data reliably.
And that’s mostly going to be the degradation over time of the oxide layer that prevents charge from leaking away.
Because the more you cycle it, the more that oxide is degraded and, in effect, “wears”.
It’s like you’ve got a water bucket, made of wire mesh, and you’ve stopped it from leaking with a layer of waterproofing. And every time you use the bucket, some of the waterproofing gets chipped, and starts a small leak.
And the bucket has a hose coming out part way down that waters a very valuable orchid. If you stop watering the orchid, it dies.
In order to work, the bucket has to stay above the level of the hose that waters the orchid, and as the wear increases, the leaks increase, and you have to keep on refilling it to counter the water loss from the leaks.
And at some point, you don’t do that often enough, the water level goes too low, the orchid is not watered, and it dies.
So with the NAND, you can measure at what point it suffers enough wear to drain in a noticeable period.
But that’s not so easy to do accurately: you need a lot of cycles to get enough “leak” to measure.
You cannot do this on a daily, or even monthly basis, because the effects are not easy to measure accurately in small amounts.
In the end your actual measured data is how many writes, under what conditions, were needed to cause a measurable decline in the NAND’s ability to store data.
And then, if you are an ethical engineer, you do it a few times to see how much the results vary.
And produce a figure that states after how many writes, you get a given amount of damage, to within stated variations.
But the point is that it’s a LONG-term measure of TOTAL cumulative damage, caused by total writes.
It’s got NOTHING to do with time.
Nothing to do with 1 year, or 3 years, or 5 years.
It’s a function of the amount of write cycles.
Regardless of whether or not you’re giving a 3 or 5 year warranty, the actual NUMBER of writes you can make, without causing an unacceptable amount of damage doesn’t change.
So it’s very misleading to justify any wear rates in terms of GB/day.
The two values, time period and TBW should be quoted.
Perhaps, as an AID to visualising things, they could convert it to GB/day as well and give some references for average consumer use as a guideline to the average user’s writes per day and per year; that’s fine.
But the actual warranty should be for TBW and years.
It’s the same sort of flim-flam and snake-oil salesmanship that forced the lending industry to state very clearly what the actual interest rates are for loans. They didn’t do that to be nice guys: they were forced to after a lot of trusting customers got sandbagged.