New Interest in Monster-SSDs

While listening to the most recent quarterly earnings calls The SSD Guy learned that there’s suddenly a lot of interest in 60-100TB SSDs for the data center.  What I didn’t hear was a strong understanding of why that is happening, although four of the five leading flash companies say that it is driven by AI.  (Kioxia hasn’t reported yet.)  Some say these SSDs are being used for training applications, while others say that they’re being used for inference.

What’s pretty unusual is that these SSDs have significantly higher capacity than HDDs.  The highest-capacity HDDs that ship today are a “mere” 30TB.  The average capacity of HDDs has been growing steadily over the past decade as SSDs take over the lower-capacity applications.  The HDD market is narrowing to archival storage, an application that requires enormous inexpensive capacity but doesn’t need a lot of speed.  This trend to higher capacities is pretty clearly illustrated in the chart below which plots Western Digital’s average HDD capacity since 2017.

[Chart: Western Digital’s average HDD capacity by quarter, from the first quarter of 2017 through the present.  The line moves up and to the right, reaching 10TB by the first quarter of 2024, though it is fairly jagged.]

But this blog post is not about HDDs, it’s about high-capacity SSDs, so let’s get back to that.

Let’s see what management said about ultra-high-capacity SSDs in the four NAND flash makers’ earnings calls.  I’ll present them in chronological order.


Micron

In its March 20 earnings call, Micron management said:

We achieved record revenue share in the data center SSD market in calendar 2023.  During the quarter, we grew our revenue by over 50% sequentially for our 232-layer-based 6500 30TB SSDs, which offer best-in-class performance, reliability and endurance for AI data lake applications.

So Micron’s selling a lot of 30TB models to realize a big revenue boost.

SK hynix

This is from the SSD Guy’s own notes from the April 25 SK hynix earnings call, so they’re rough.

Buyers are mostly on-premises datacenter customers.  AI is shifting from training to inference, shifting storage to on-premises for security and customization.  This has led to growing demand for faster SSDs due to unstructured data and the lower power of new enterprise SSDs.  It’s a positive development for NAND vendors and may be a sign of structural change.

Longer term AI growth is leading to increasing demand for NAND vs. slower storage.  Also there is demand to address datacenter space constraints, which are driving demand for higher densities from 36TB to 128TB.  These high capacities require QLC instead of today’s TLC, and SK hynix will be introducing a 60TB eSSD unique to Solidigm.

In brief, SK hynix sees demand coming from on-premises data centers for inference.
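In plain terms, the TLC-to-QLC shift mentioned above is about bits per cell: TLC stores three bits per cell and QLC stores four, so the same physical cell array yields a third more capacity.  A minimal sketch (the cell count used below is a made-up illustrative figure, not from the call):

```python
# Capacity gain from moving TLC (3 bits/cell) to QLC (4 bits/cell)
# for the same number of physical cells.
def capacity_tb(cells_billions: float, bits_per_cell: int) -> float:
    """User-visible capacity in decimal TB, ignoring overprovisioning."""
    bits = cells_billions * 1e9 * bits_per_cell
    return bits / 8 / 1e12   # bits -> bytes -> TB

tlc = capacity_tb(128_000, 3)   # hypothetical cell count, for illustration
qlc = capacity_tb(128_000, 4)
print(f"QLC holds {qlc / tlc:.2f}x the TLC capacity")  # 1.33x
```

That one-third density step is why the jump from 36TB-class drives toward 128TB leans on QLC rather than today’s TLC.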

Western Digital

Western Digital also held its earnings call on April 25, and once again, the following is taken from my notes, rather than a formal transcript:

Enterprise SSD demand has returned.  Dramatically higher capacities will ship in the second half of calendar 2024.

AI is coming into focus so we don’t know much yet.  Customers are looking for 30-60TB.  The PCIe Gen5 BiCS6 SSDs are getting good feedback from customers.  They have been sampled for qualification at hyperscalers, and demand is strong in enterprise as well.

They are being used for training applications.

WDC’s viewpoint is very different from SK hynix’s, since WDC sees demand coming from training rather than inference, and it sees that demand both at hyperscalers and in on-premises (enterprise) applications.


Samsung

Finally comes Samsung’s April 30 call.  This has been excerpted from a transcript as well as from my notes:

We plan to expand SSD sales for servers and respond in a timely way to AI demand by developing and providing samples of an ultra-high-density 64-terabyte SSD during the second quarter.

As AI models increase, the size of training data becomes proportionately bigger, leading to higher data storage needs. So we’re seeing a lot of incoming requests from the customers for 8TB & 16TB solutions.

For inference, vast amounts of database storage are required, so we’re seeing an increase in customer inquiries for 64-terabyte and 128-terabyte ultra-high-density SSD solutions.

Our server SSD shipments this year are expected to grow 80% Y/Y and sales volume for QLC server SSD is expected to surge 3X in the second half versus the first half of the year.

In summary, Samsung expects a big boost from high-capacity SSDs for both training and inference, with 8-16TB SSDs going into training and 64-128TB SSDs going into inference applications.

So from these four manufacturers we don’t see full agreement about why the market is suddenly demanding ultra-high-capacity SSDs, but they all agree that there’s a rapidly growing market for these products.

Of course these NAND flash makers are all pleased that this is happening because a 60TB SSD will use more than 480 1Tb NAND flash chips.  That’s a lot of silicon for a single sale, consuming about half of an entire wafer.
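That chip count is straightforward arithmetic: a 1Tb (terabit) die holds 125GB in decimal terms, so 60TB of user capacity divides out to 480 dice before any overprovisioning.  A quick sketch (the 7% overprovisioning figure below is my assumption for illustration; real drives vary):

```python
import math

# Back-of-envelope NAND die count for a high-capacity SSD.
# Assumes 1Tb (terabit) dice; the 7% overprovisioning factor is
# illustrative, not a figure from any manufacturer.
def nand_chips_needed(user_capacity_tb: float,
                      die_density_tbits: float = 1.0,
                      overprovision: float = 0.07) -> int:
    """Return the number of NAND dice for a given user capacity."""
    die_gb = die_density_tbits * 1000 / 8           # 1Tb die = 125GB decimal
    raw_gb = user_capacity_tb * 1000 * (1 + overprovision)
    return math.ceil(raw_gb / die_gb)

print(nand_chips_needed(60))   # comfortably more than 480 dice
```

With overprovisioning set to zero the function returns exactly 480, which is why the drive “will use more than 480” dice once spare capacity is added.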

This is certainly a trend that my firm Objective Analysis will be keeping a close watch on, as we feed our clients the latest information about the events that drive memory market changes.  Please contact us if you believe, as we do, that information like this can help your company develop and manage a winning strategy in today’s markets.


2 thoughts on “New Interest in Monster-SSDs”

  1. Jim, it’s interesting that we’ve had 30TB SSDs since 2018, yet the HDD market has only just started to reach that equivalent capacity.  However, enterprise SSDs have a DRAM problem, or need to compromise on indirection unit size, with Solidigm suggesting tiering/caching as the solution.  Pure Storage & IBM have pioneered their own flash solutions, which far exceed the enterprise SSDs on the market (expect 150TB from Pure in the summer); FCM3/4 from IBM delivers around 88TB compressed.

    I believe that enterprise storage systems vendors will have a problem deploying drives >30TB in their systems, as the capacity per system will be driven by architecture. If a vendor can’t add incremental capacity at the single drive level, (but needs to add a shelf or RAID stripe), then the costs will be significant for the customer & vendor. In addition, if a single drive tops $10K (which the first 16TB drives did), then the warranty process needs to be watertight (both system vendor and customer won’t tolerate discarding in-warranty drives, they will insist on replacement).

    The DRAM/power issue will also be a problem. So, will vendors need to engineer specific “enterprise array SSDs” and is there a big enough market? What about the hyperscalers? When will they start engineering their own SSDs (if they are not doing so already), in the way they’ve done for custom Arm chips? Will they bother to work with SSD vendors or just buy the NAND and develop custom controllers?

    There’s lots to think about in this market, it’s far from clear how it will evolve!


    1. Chris,
      Thanks for a very thoughtful reply from one of the best storage analysts in the industry.

      Your arguments are all sound. The industry has a bad habit of saddling itself with inflexible things like fixed sector sizes that hamper the very predictable growth of storage capacities without the use of enormous amounts of DRAM within the SSD. I expect to see a lot of ingenious work-arounds to this. Key-value storage should be a help here.

      As for price, NAND flash chips today sell for around 10 cents/GB, so a 100TB SSD would have well over $10K worth of NAND chips when you account for overprovisioning. I would expect the SSD to command a price of about twice that, or $20K. Kind of daunting!
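      That back-of-envelope cost is easy to reproduce.  A minimal sketch, taking the 10 cents/GB figure above and assuming an illustrative 7% overprovisioning allowance:

```python
# Rough bill-of-materials for the NAND in a 100TB SSD at ~10 cents/GB.
# The 7% overprovisioning allowance is an illustrative assumption.
price_per_gb = 0.10            # dollars/GB, from the discussion above
user_gb = 100 * 1000           # 100TB expressed in decimal GB
overprovision = 1.07           # assumed raw-to-user capacity ratio
nand_cost = user_gb * overprovision * price_per_gb
print(f"${nand_cost:,.0f}")    # about $10,700 of NAND alone
```

      At a typical 2x markup over the silicon, that lands right around the $20K drive price mentioned above.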

      Unfortunately, the mixed messages from the NAND chip makers make it hard to tell whether these 100TB SSDs are being purchased by hyperscalers or on-prem users. If it’s hyperscalers, then you’re right – they could build them internally, as they have done with standard SSDs for many years. They could also use proprietary work-arounds. It’s more complicated if it’s on-prem, partly because they would normally use off-the-shelf solutions for both hardware and software. I’ll be digging into this question.

      Thanks for chiming in.

