A New Spin on Memcache

Data centers that use centralized storage (SANs or NAS) sometimes deploy caching servers to accelerate average storage performance. These caching servers sit on the network between the compute servers and the storage, running a program called memcached to replicate a portion of the data held in the data center's centralized storage. Under this arrangement, more frequently used data is served faster because it has been copied into the very large DRAM of the memcached server.
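The caching behavior described above is the classic cache-aside pattern. The sketch below illustrates it in Python, with plain dicts standing in for the memcached client and the centralized storage back end (both are illustrative stand-ins, not real APIs):

```python
# Cache-aside sketch: a dict stands in for the memcached DRAM tier,
# another for the slow, authoritative SAN/NAS back end. The key name
# "block:42" and the payload are made up for illustration.

backing_store = {"block:42": b"payload from the SAN"}  # slow, authoritative
cache = {}                                             # fast DRAM tier

def read(key):
    """Return data for key, populating the DRAM cache on a miss."""
    if key in cache:             # hit: served from DRAM
        return cache[key]
    value = backing_store[key]   # miss: fetch from centralized storage
    cache[key] = value           # copy into the memcached tier
    return value

first = read("block:42")   # miss: goes to storage, fills the cache
second = read("block:42")  # hit: served from the cache copy
```

A real deployment would replace the `cache` dict with calls to a memcached client and give entries an expiration time, but the hit/miss/populate flow is the same.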

Such systems have been losing ground over the past five or more years thanks to the growing availability of high-speed enterprise SSDs at affordable prices. Often, direct-attached storage (DAS) in the form of an SSD within each server can be used to accelerate throughput instead. This can provide a considerable cost/performance advantage over the memcached approach, since DRAM costs about 20 times as much as the flash in an SSD. And although the DRAM chips in a memcached server run about three orders of magnitude faster than a flash SSD, most of that speed is lost because the DRAM is reached over a comparatively slow LAN, so the DAS SSD's performance ends up comparable to that of the memcached appliance.

There’s a catch to this approach, since the DAS SSD must be…

SSDs and Server Consolidation

A topic The SSD Guy often brings up in presentations is the fact that SSDs can be used in enterprise applications to reduce server count, a phenomenon often called “server consolidation.”  This is a confusing issue, so it bears some explanation.

There are lots of ways to accelerate an I/O-bound application.  The most direct one is to speed up the I/O.  In the past this has involved some pretty elaborate ways of using HDDs in arrays with striping and short stroking.  Many of these arrays cost a half million dollars or more.
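A quick calculation illustrates why those arrays got so large and so expensive: random IOPS scale roughly with spindle count, so hitting a demanding target meant striping across many drives. The per-drive and target figures below are illustrative assumptions:

```python
import math

# Illustrative assumptions, not vendor specs:
HDD_RANDOM_IOPS = 150      # a fast 15K RPM drive, roughly
TARGET_IOPS     = 30_000   # a demanding I/O-bound application

# Striping spreads requests across spindles, so IOPS add up
# approximately linearly with drive count.
drives_needed = math.ceil(TARGET_IOPS / HDD_RANDOM_IOPS)

print(f"Drives needed to reach {TARGET_IOPS} IOPS: {drives_needed}")
```

Short stroking made this worse: by using only the outer tracks of each drive to cut seek times, it sacrificed most of each drive's capacity, so the array bought IOPS while wasting terabytes.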

Another is to hide the slow I/O speed by…