IT professionals find it difficult to determine which SSD or flash array to buy, or even whether standard HDDs can deliver the speed they need. The extraordinarily wide range of IOPS (from hundreds to millions), latencies, and capacities can be confusing. A new Objective Analysis report, How Many IOPS Do You Really Need?, draws on a survey of IT managers and other end users to profile the performance needs of various applications, including IOPS, latency, and capacity.
This report answers questions that have never previously Continue reading “How Many IOPS Do You Really Need?”
Objective Analysis is pleased to announce availability of a new report: Enterprise SSDs: Technologies & Markets.
The report’s key finding: The stunning growth of SSDs in enterprise servers and storage systems is only going to get stronger. Objective Analysis finds that the enterprise SSD market is likely to approach $4 billion in revenues by 2016, nearly six times that of 2011, while unit shipments will increase by ten times during that period to almost 4 million units.
This 104-page report is the third update of Objective Analysis’ cornerstone enterprise SSD report. The new report reviews Continue reading “Enterprise SSDs to Grow Over 10x by 2016”
In Big Data circles there is a saying that it might be easier to move the application program to the data, rather than to move the data to the server where the application is working. There’s a lot of wisdom in that. The application is small and can move rapidly. Big data takes time to move.
In that spirit, at least one Violin Memory customer has decided to move its applications into the servers that reside within one of Violin's 6000 Series Flash Memory Arrays. These are the two green boards running Continue reading “Big Data? Move the App to the Data”
Last week The SSD Guy was at a conference for users of the open source MySQL database program. This is a gathering of forward-thinking mavericks who try new technologies ahead of many others. This group has been deeply involved with SSDs for at least the past four years.
Vadim Tkachenko, co-founder of Percona (the show’s sponsor) shared a lot of significant new research that he has performed over the past year on SSDs. I thought the chart in this post’s graphic Continue reading “Another Look at SSD Performance”
A topic The SSD Guy often brings up in presentations is the fact that SSDs can be used in enterprise applications to reduce server count, a phenomenon often called “server consolidation.” This is a confusing issue, so it bears some explanation.
There are lots of ways to accelerate an I/O-bound application. The most direct one is to speed up the I/O. In the past this has involved some pretty elaborate ways of using HDDs in arrays with striping and short stroking. Many of these arrays cost a half million dollars or more.
Another is to hide the slow I/O speed by Continue reading “SSDs and Server Consolidation”
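To see why those half-million-dollar striped, short-stroked HDD arrays were such an expensive way to buy IOPS, a back-of-envelope drive-count comparison helps. All of the figures below are illustrative assumptions of my own, not numbers from the post:

```python
# Back-of-envelope: how many short-stroked HDDs it takes to match SSD IOPS.
# Every number below is an illustrative assumption, not a measured figure.

HDD_IOPS = 200          # assumed random IOPS for a short-stroked 15K RPM drive
SSD_IOPS = 50_000       # assumed random IOPS for one enterprise SSD
TARGET_IOPS = 100_000   # hypothetical I/O-bound application requirement

# Striping spreads I/O across drives, so aggregate IOPS scales roughly
# with drive count (ceiling division to get whole drives).
hdds_needed = -(-TARGET_IOPS // HDD_IOPS)
ssds_needed = -(-TARGET_IOPS // SSD_IOPS)

print(f"HDDs needed (striped): {hdds_needed}")   # hundreds of spindles
print(f"SSDs needed:           {ssds_needed}")   # a handful of devices
```

Under these assumed numbers the HDD array needs hundreds of spindles (plus enclosures, power, and cooling) to deliver what a couple of SSDs can, which is exactly the gap that drives server and storage consolidation.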
The SSD Guy attended TechTarget‘s Storage Decisions Conference last week in San Francisco. Dennis Martin of Demartek gave a very good presentation called “Making the Case for Solid-State Storage.”
Demartek tests a lot of systems based on various forms of storage.
I really liked an expression that Mr. Martin shared to compare SSDs to HDDs. He said that SSDs cost dollars per gigabyte and pennies per IOPS, while HDDs cost pennies per gigabyte and dollars per IOPS. This is a really good way to think about the strengths and weaknesses of these two technologies. There is every reason to use a mix of both. Continue reading “Sometimes SSDs Don’t Improve System Speed”
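Mr. Martin's rule of thumb is easy to check with arithmetic. Here is a minimal sketch; the prices, capacities, and IOPS figures are hypothetical round numbers I chose for illustration, not Demartek data:

```python
# Rule of thumb: SSDs cost dollars per GB and pennies per IOPS,
# while HDDs cost pennies per GB and dollars per IOPS.
# All prices and performance figures are illustrative assumptions.

def cost_metrics(price_usd, capacity_gb, iops):
    """Return ($ per GB, $ per IOPS) for a drive."""
    return price_usd / capacity_gb, price_usd / iops

ssd_per_gb, ssd_per_iops = cost_metrics(price_usd=800, capacity_gb=400, iops=50_000)
hdd_per_gb, hdd_per_iops = cost_metrics(price_usd=150, capacity_gb=2_000, iops=150)

print(f"SSD: ${ssd_per_gb:.2f}/GB, ${ssd_per_iops:.4f}/IOPS")  # dollars/GB, pennies/IOPS
print(f"HDD: ${hdd_per_gb:.2f}/GB, ${hdd_per_iops:.2f}/IOPS")  # pennies/GB, dollars/IOPS
```

With these assumed numbers the SSD works out to about $2/GB but under two cents per IOPS, while the HDD is under a dime per GB but a dollar per IOPS: buy HDDs for capacity, SSDs for performance, and mix the two accordingly.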
A colleague – Isilon’s Rob Peglar – pointed out an interesting paper written by researchers at the University of Toronto in collaboration with Microsoft. The paper makes a case for using an HDD to cache writes to an SSD to improve storage system performance.
“Wait a minute!” you say. “An HDD as a cache for an SSD? This can’t be possible!” Continue reading “An HDD Cache for an SSD?”
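The intuition, as I read it, is that an HDD handles long sequential writes quite quickly, so buffering writes in an append-only log on the HDD and only periodically destaging the latest version of each block to the SSD can cut SSD write traffic (and wear). The toy sketch below illustrates that coalescing effect; the class and method names are my own invention for illustration, not anything from the paper:

```python
# Toy illustration of using an HDD as a write cache in front of an SSD:
# writes land in a sequential log (fast on an HDD), and destaging pushes
# only the newest copy of each block to the SSD, reducing SSD writes.
# All names here are hypothetical, invented for this sketch.

class HybridStore:
    def __init__(self, destage_threshold=4):
        self.hdd_log = []                 # append-only log: sequential HDD writes
        self.ssd = {}                     # block address -> data (the real store)
        self.destage_threshold = destage_threshold

    def write(self, addr, data):
        self.hdd_log.append((addr, data))  # sequential append, not a random I/O
        if len(self.hdd_log) >= self.destage_threshold:
            self.destage()

    def read(self, addr):
        # The newest copy may still be in the log; scan it newest-first.
        for a, d in reversed(self.hdd_log):
            if a == addr:
                return d
        return self.ssd.get(addr)

    def destage(self):
        # Coalesce repeated writes to the same block: only the last version
        # reaches the SSD, so hot blocks generate far fewer SSD writes.
        latest = {}
        for addr, data in self.hdd_log:
            latest[addr] = data
        self.ssd.update(latest)
        self.hdd_log.clear()
```

In this sketch, three overwrites of the same block before a destage cost the SSD only one write, which is where the endurance savings would come from.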