The votes have been counted, and the decision is in: 2014 is the year of the all-flash data center. The signs have been pointing in this direction for the past few years, and 2014 is the year when the market factors have finally aligned to make it a reality. Storage Switzerland, an analyst firm, recently noted that an all-flash data center is now not only desirable on performance merits but has finally emerged at an increasingly attractive price point. Because of this shift, major enterprises and service providers are now considering building their data center architectures entirely on flash – also known as solid-state drives (SSDs) – an option that previously wasn’t even feasible. Such a massive swing raises the question: Why flash, and why now? To answer it, it is instructive to examine traditional “spinning disk” approaches to storage, including why spinning disks rose to prominence and the drawbacks that have cast flash in such a good light.
What the Future Brings
Disk storage was once the next big thing, touted for its lower cost and greater efficiency than the tape storage it replaced. When flash was introduced, spinning disks marched on as the server storage standard. Why? Despite flash’s substantially higher performance, it was simply too expensive to consider as a full-fledged alternative to spinning disks. Flash also offered smaller capacities, holding far less data per dollar than spinning disks.
Yet recent advances have whittled away at flash’s thorniest shortcoming: its price is now quite reasonable. As the price of flash has dropped, its signature benefits – higher throughput and lower latency – have improved dramatically at the same time. Flash also excels at energy efficiency, consuming only a small fraction of the electricity needed to power a spinning disk – sometimes as little as one-sixteenth. And even though flash drives still wear out faster than spinning disks do, these improvements have made flash an increasingly feasible – and desirable – option, even in high-volume environments such as the data center.
The Flashy Data Center
The modern data center. One imagines a football field – or many football fields – covered by enormous server farms loaded with hundreds and hundreds of servers. Surely they couldn’t all be running on flash storage?
Today, those data centers could very well be all flash. Tomorrow, it’s all but certain that they will be. For major industries whose profit margins depend on the speed and availability they can offer their customers, flash storage’s stratospheric performance is becoming less of a tempting option and more of a straightforward business need. Telcos and service providers – whose business models are extremely sensitive to latency and downtime – are particularly interested in an all-flash data center model.
As attractive as the concept of an all-flash data center model is, other industries are biding their time until the cost of flash drops even further. For example, a file hosting service that delivers free web storage for consumers isn’t likely to be as concerned about performance as it is with securing massive quantities of cheap storage. Today, the tradeoff is between price and capacity. But when the price of flash catches up to that of spinning disks, there will suddenly be no reason to choose spinning disks. Just as spinning disks displaced tape storage by offering better performance and value, so too will the market shift to adopt flash as the storage standard.
Software-defined Storage and the All-flash Data Center
Software-defined storage is another storage trend rising in popularity in parallel with flash. While it might be too soon to call the two trends linked, it’s undeniable that a software-defined approach to storage infrastructure gives organizations the flexibility they need to adopt an all-flash data center strategy relatively quickly and easily.
Software-defined storage works by taking features typically found in hardware and moving them to the software layer, removing the dependence on built-in, inefficient redundancies designed to solve problems at the hardware layer. Hardware will fail – that is a fact of life, regardless of whether it is expensive or inexpensive, flash or spinning disk. Flash storage, in particular, currently has a shorter time to failure than spinning-disk options. In traditional storage setups without RAID cards, the failure of a disk typically triggers an error that affects the end user’s experience. The usual fix is to hide those errors behind RAID cards, which can be pricey. With the right software-defined approach, these failures are absorbed and remain invisible to the user. In addition, software-defined storage is hardware-agnostic and can run on any hardware setup.
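The idea of absorbing disk failures in software rather than in a RAID controller can be illustrated with a toy replication scheme. The sketch below is a minimal illustration under assumed node names and a replication factor of three; it is not Compuverde’s actual implementation, merely one common way software can mask a failed disk or node from the reader.

```python
import random

class SoftwareReplicatedStore:
    """Toy sketch: each object is written to several nodes in software,
    so the failure of one disk or node stays invisible to the reader."""

    def __init__(self, nodes, replicas=3):
        self.nodes = {name: {} for name in nodes}  # node name -> key/value map
        self.failed = set()
        self.replicas = replicas

    def put(self, key, value):
        # Write the object to `replicas` distinct nodes.
        targets = random.sample(sorted(self.nodes), self.replicas)
        for name in targets:
            self.nodes[name][key] = value

    def get(self, key):
        # Read from any healthy node holding a replica; failures are masked.
        for name, store in self.nodes.items():
            if name not in self.failed and key in store:
                return store[key]
        raise KeyError(key)

    def fail_node(self, name):
        # Simulate a hardware failure; no RAID card is involved.
        self.failed.add(name)
```

With four nodes and three replicas, any single node failure (and even any two) still leaves at least one live copy, so reads continue without the user ever seeing an error.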
With a software-defined approach to storage architecture, an organization can still use a single namespace spanning all of its storage nodes. It can also run applications on the storage nodes themselves, turning them into “compustorage” nodes. As a result, the storage hardware itself doesn’t need to be large or expensive, yet still delivers very high performance and speed. Instead of building a big, expensive, traditional installation, organizations can start with a small number of inexpensive servers and, if needed, scale linearly from there while retaining a high-performance data center.
An all-flash data center running software-defined storage technology delivers further benefits across performance, cost and scalability.
We are not far removed from the days when spinning disks were the fresh and exciting new technology, phasing out tape storage and offering new possibilities for innovators. Today, flash storage – with its unparalleled performance, low energy usage and rapidly declining cost – is preparing to take up the mantle. Software-defined storage delivers a flexible, efficient and powerful framework for organizations seeking to maximize the output of an all-flash data center.
About the Author:
Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions designed to be cost-effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.