Western Digital launched a storage fabric architecture in August that aims to address certain issues with existing storage systems. WDC is also taking a number of steps to promote this architecture’s adoption into mainstream storage. It’s an interesting approach that The SSD Guy thought was worth discussing.
The company is working to address the problem of Continue reading “WDC’s Top-to-Bottom Storage Fabric Approach”
I was recently reminded of a presentation made by GoDaddy way back at the 2013 Flash Memory Summit, in which I first heard the statement: “Failure is not an option — it is a requirement!” That certainly got my attention! It just sounded wrong.
In fact, this expression was used to describe a very pragmatic approach the company’s storage team had devised to determine the exact maximum load that could be supported by any piece of its storage system.
This is key, since, at the time, GoDaddy claimed to be the world’s largest web hosting service, with 11 million users, 54 million registered domains, over 5 million hosting accounts, and a 99.9% uptime guarantee (although the internal goal was 99.999% – five nines!).
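Those uptime percentages translate into surprisingly different downtime budgets. A quick sanity check of the "three nines" guarantee versus the "five nines" internal goal:

```python
# Allowed annual downtime implied by an uptime guarantee.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(uptime_pct: float) -> float:
    """Minutes of downtime per year permitted by a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

print(f"99.9%   -> {downtime_minutes(99.9):7.1f} minutes/year")  # about 8.8 hours
print(f"99.999% -> {downtime_minutes(99.999):7.1f} minutes/year")  # about 5.3 minutes
```

In other words, moving from the guaranteed three nines to the internal five-nines goal shrinks the annual downtime budget from roughly 8.8 hours to about 5 minutes, which helps explain why the team needed to know the exact failure point of every piece of its storage system.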
The presenters outlined four stages of how validation processes had Continue reading “Failure is Not an Option — It’s a Requirement!”
Data centers that use centralized storage, SANs or NAS, sometimes use servers to cache stored data and thus accelerate the average speed of storage. These caching servers sit on the network between the compute servers and the storage, using a program called memcached to replicate a portion of the data held in the data center’s centralized storage. Under this scheme, more-frequently-used data is served faster because it has been copied into a very large DRAM in the memcached server.
Such systems have been displaced over the past five or more years thanks to the growing availability of high-speed enterprise SSDs at affordable prices. Often direct-attached storage (DAS) in the form of an SSD within each server can be used to accelerate throughput. This can provide a considerable cost/performance benefit over the memcached approach, since DRAM costs about 20 times as much as the flash in an SSD. Even though the DRAM chips within the memcached server run about three orders of magnitude faster than a flash SSD, most of that speed advantage is lost because the DRAM communicates over a slow LAN, so the DAS SSD’s performance is comparable to that of the memcached appliance.
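The caching pattern memcached implements is the classic "look-aside" cache: check fast memory first, and fall back to slower centralized storage only on a miss. Here is a minimal sketch of that pattern; a plain dict stands in for the memcached server's DRAM and another for the SAN/NAS, and the names are illustrative rather than the real memcached client API:

```python
# Look-aside cache sketch. `slow_storage` stands in for centralized SAN/NAS
# storage reached over the LAN; `cache` stands in for the memcached DRAM.
slow_storage = {"user:42": "profile data"}  # the authoritative copy
cache = {}                                  # the fast, partial copy

def get(key):
    if key in cache:                  # cache hit: served from fast DRAM
        return cache[key]
    value = slow_storage.get(key)     # cache miss: fetch from slow storage
    if value is not None:
        cache[key] = value            # populate the cache for next time
    return value

get("user:42")   # first access: miss, fetched from storage and cached
get("user:42")   # second access: hit, served from the cache
```

The point of the DAS-SSD alternative is that this same look-aside logic can run against a local SSD instead of a remote DRAM appliance, trading raw media speed for the absence of a LAN round trip.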
There’s a catch to this approach, since the DAS SSD must be Continue reading “A New Spin on Memcache”
Kaminario recently decided to adopt a “software-centric” business model, rather than sell all-flash arrays as the company has done since its inception. The company says that this will allow it “to streamline operations, while focusing its resources on continued software innovation,” acknowledging that the change: “represents a strategic business model shift for Kaminario.”
Hardware support for existing and future Kaminario customers will be provided by Tech Data, which Kaminario’s release tells us is the world-leading end-to-end distributor of technology products, services, and solutions.
Jay Kramer of Network Storage Advisors, a friend of The SSD Guy, recently provided me with some valuable insights on Kaminario’s restructuring and has allowed me to share them here. Jay is a recognized technology consultant specializing in the network storage industry.
Here’s what Jay has to say Continue reading “Kaminario Adopts Software-Only Business Model”
Sometimes it’s enlightening to compare several viewpoints on similar data. At yesterday’s SNIA Persistent Memory Summit a number of presentations provided interesting overlapping views on certain subjects.
One of particular interest to The SSD Guy was latency vs. IOPS. Tom Coughlin of Coughlin Associates and I presented the findings from our recently-published IOPS survey report, and in Slide 19 displayed the basic chart behind this post’s graphic (click to enlarge, or, better yet, right-click to open in a new tab). This chart compares the number of IOPS our respondents said they need for the storage in their most important application against the latency they require from that storage. For comparison’s sake we added a reference column on the left to roughly illustrate the latency of various standard forms of storage and memory.
You can see that we received a great variety of inputs spanning a very wide range of IOPS and latency needs, and that these didn’t all line up as neatly as we had anticipated. One failing of this chart format is that it doesn’t account for multiple replies at the same IOPS/latency combination: had we been able to include that, the chart would have shown a clearer trendline running from the top left to the lower right. Instead we have a band that broadly follows that upper-left-to-lower-right trend.
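The missing step is a simple frequency count over the replies, so each plotted point could be weighted by how many respondents chose it. A sketch of that aggregation, using made-up survey data for illustration:

```python
# Count duplicate (IOPS, latency) survey replies so that a chart could
# size or weight each point by its frequency. The replies are invented.
from collections import Counter

replies = [
    (100_000, "100us"), (100_000, "100us"), (100_000, "100us"),
    (10_000, "1ms"), (1_000_000, "10us"),
]

counts = Counter(replies)
for (iops, latency), n in counts.most_common():
    print(f"{n} respondent(s) asked for {iops:>9,} IOPS at {latency} latency")
```

A bubble chart driven by these counts would make the upper-left-to-lower-right trend visible at a glance.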
Two other speakers presented the IOPS and latency that could be Continue reading “Latency, IOPS & NVDIMMs”
On January 12 IBM announced some very serious upgrades to its DS8000 series of storage arrays. Until this announcement only the top-of-the-line model, the IBM System Storage DS8888, was all-flash while the less expensive DS8886 and DS8884 sported a hybrid flash + HDD approach. The new models of the DS8886 and DS8884 are now also all-flash.
But that’s not all: Every model in this product family has been upgraded.
The original DS8000 systems used a module called the High Performance Flash Enclosure (HPFE) for any flash they included, while these newer models are all based on HPFE Gen 2. While the original HPFE was limited to a maximum capacity of 24TB in a 1U space, the larger 4U HPFE Gen 2 can be configured with as much as 153.6TB, for more than six times the storage of the previous generation. By making this change, and by optimizing the data path, the Gen 2 nearly doubles read IOPS to 500K and more than triples read bandwidth to 14GB/s. Write IOPS in the Gen 2 have been increased 50% to 300K, while write bandwidth has been increased by nearly 4x to 10.5GB/s.
This kind of performance opens new Continue reading “IBM Upgrades DS8000 Series: All Models are now All-Flash”
Yesterday IBM unveiled a sweeping update of its existing flash storage products. These updates cover a range of products, including IBM Storwize All Flash arrays: V7000F, V7000 Gen2+, and V5030F, the FlashSystem V9000, the IBM SAN Volume Controller (SVC), and IBM’s Spectrum Virtualize Software.
The company referred to this effort as part of a “drumbeat of flash storage announcements.” IBM has a stated goal of providing its clients with “the right flash for the right performance at the right price.”
IBM’s representatives explained that the updates were made possible by the fact that the prices of flash components have been dropping at a rapid pace while reliability is on the rise. The SSD Guy couldn’t agree more.
Here’s what IBM announced:
Starting from the low end and moving up, the V5030F entry-level/midrange array is an Continue reading “IBM Refreshes Broad Swath of Flash Offerings”
In a joint press release, SanDisk and IBM announced support for each other’s products. The IBM Spectrum Scale filesystem will support SanDisk’s InfiniFlash all-flash array to provide a high-capacity high-speed software-defined storage system.
At first glance this may seem a little odd, since IBM sells its own all-flash array, the FlashSystem, which became an IBM product line when the company acquired Texas Memory Systems (TMS) back in 2012. It makes sense, though: IBM has been validating that its Spectrum Storage products will work with just about any storage type its customers may want to use. Rather than narrowing this software’s support to IBM storage systems alone, IBM is showing that Spectrum Scale is flexible enough to work with a multitude of solutions, supporting InfiniFlash the same as it does other internal server capacity and other external storage in the form of JBODs (“Just a Bunch of Disks”) or JBOFs (“Just a Bunch of Flash”).
In this case IBM has worked with SanDisk to validate that its Spectrum Scale storage management software works with InfiniFlash, just as it does with those many other storage solutions.
In the announcement IBM explains that Continue reading “IBM Software + SanDisk Hardware”
A recent conversation with some fellow analysts revealed a puzzling set of claims. EMC, at its EMC World conference (May 3-7) claimed to be the leader in flash array shipments. The very next week, in the same Las Vegas hotel, IBM also claimed leadership in flash.
Who do you believe?
Well, a friend of The SSD Guy, marketing consultant Jay Kramer of Network Storage Advisors Inc., tallied up all of the leadership claims he could find and provided this list:
- EMC is counting XtremIO Arrays as units shipped and according to Gartner Group held the #1 market share position with a 31.1% share, which is over a ten percentage point share lead
- IBM is counting capacity of PBs shipped with all of their flash storage solutions: The FlashSystem 840, 900, V840, V9000, DS8000, plus the XIV systems, Storwize V7000, IBM Flash DAS, and IBM PCIe Adapters
- NetApp is the leader if you count total flash systems shipped (NetApp-branded plus privately-branded systems) spanning multiple years as their SANtricity operating system and E-Series platforms have sold over 750,000 units
- Pure Storage uses its 700% growth to show that it’s the #1 fastest-growing flash storage company
- Then, if you want to compare any vendor’s total all flash array (AFA) systems sold this past year against hybrid storage arrays, Nimble Storage beats any of the AFA vendors.
Continue reading “Who’s #1 in Flash Arrays?”
The following post is an excerpt of an article Objective Analysis submitted to the Pund-IT Weekly Review for 11 March, 2015.
With a webcast in the style of the big system makers like EMC and Oracle, SanDisk announced its InfiniFlash flash appliance. InfiniFlash is a box that crams a whopping 500 terabytes into only 3U of rack space.
How big is 500 terabytes? It’s more bytes than SanDisk’s entire flash output for 2001.
SanDisk boasts that InfiniFlash is a “category-defining product,” pointing to the fact that IDC, which provided support for the roll-out, created a new “Big Data Flash” storage product category for this device.
The system boasts performance of one million random-read IOPS, which is impressive, but doesn’t give much indication of how it performs in standard enterprise dataflow, which is generally assumed to consist of a 70/30 split of reads and writes. (I should mention here that Objective Analysis published a survey of users’ IOPS and latency needs which can be purchased on our website.)
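A rough way to translate a read-only rating into a 70/30 mixed-workload estimate is a weighted harmonic mean of the read and write rates, since the time per operation is the blend of read and write service times. SanDisk quoted only random reads, so the write figure below is a hypothetical placeholder, not a published InfiniFlash number:

```python
# Back-of-the-envelope mixed-workload IOPS estimate from separate read and
# write ratings, assuming the 70/30 read/write split mentioned above.
def mixed_iops(read_iops: float, write_iops: float,
               read_fraction: float = 0.7) -> float:
    """Weighted harmonic mean: average time per op blends read and write
    service times, so rates combine reciprocally, not linearly."""
    write_fraction = 1 - read_fraction
    return 1 / (read_fraction / read_iops + write_fraction / write_iops)

# 1M random-read IOPS (quoted) with an assumed 200K random-write IOPS:
print(f"{mixed_iops(1_000_000, 200_000):,.0f} IOPS at 70/30")
```

Note how strongly the (slower) write rate dominates the blended figure; this is why a read-only IOPS number says little about performance under a standard enterprise mix.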
Price is a major focus for this product. SanDisk says that it will sell systems bundled with software at less than Continue reading “SanDisk Rolls Out InfiniFlash”