infoTECH Feature

April 25, 2013

Make Scaling a Snap: How to Add Servers, Cut Costs and Boost Performance in the Data Center

A wide range of online services are beginning to see the light, and it’s coming from the Cloud. With so many services now hosting their servers in the Cloud, service providers are analyzing their Big Data storage to find new ways to cut spending and raise performance.

In the recent past, it was fairly easy to find free cloud services from a variety of providers. That is no longer the case: free or inexpensive cloud services are becoming scarcer, consumers are facing storage restrictions, and service providers are battling higher prices and rising energy consumption. To ease these pressures, service providers are looking for cost-effective ways to scale and improve performance while still serving customers accustomed to low-cost options.

Best Use of Investment?

One tactic service providers use is centralizing data in a single location and making it accessible via the Internet from anywhere. Consolidating all the equipment in one place keeps costs down, and a single, large data center also offers better performance, higher reliability and improved Internet connectivity. The downside is that scaling becomes more difficult and expensive: raising performance in this model means purchasing specialized, high-performance equipment, which drives up energy consumption and creates costs that are hard to control at scale.

It’s Different in the Cloud

Cloud providers must manage far more users and greater performance demands than enterprises, which makes performance problems such as data bottlenecks a major concern. While the typical enterprise user demands high performance, enterprise systems generally serve fewer users, who access their files directly over the network. Those users also tend to access, save and send relatively small files such as spreadsheets and documents, which lightens the performance load and consumes less storage capacity.

However, it is a different story for a cloud user. An order of magnitude more users are accessing the system simultaneously over the Internet, which itself becomes a performance bottleneck. The cloud provider’s storage system not only has to scale to each additional user, but must also uphold performance across the aggregate of all users. The average cloud user is accessing and storing far larger files – photo, video and music files – than the average enterprise user. 
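To make the contrast concrete, here is a back-of-envelope comparison in Python. Every figure is a hypothetical assumption, chosen only to illustrate the order-of-magnitude gap in aggregate load between the two environments.

# Back-of-envelope comparison of aggregate storage load.
# Every figure below is a hypothetical assumption, for illustration only.

enterprise_users = 1000           # direct network users
enterprise_file_mb = 0.5          # a typical spreadsheet or document
enterprise_requests = 40          # opens/saves per user per day

cloud_users = 100000              # an order of magnitude (or two) more users
cloud_file_mb = 50                # photos, music and video files
cloud_requests = 20               # accesses per user per day

def daily_load_gb(users, file_mb, requests):
    return users * file_mb * requests / 1024.0

print(f"Enterprise: {daily_load_gb(enterprise_users, enterprise_file_mb, enterprise_requests):>10,.0f} GB/day")
print(f"Cloud:      {daily_load_gb(cloud_users, cloud_file_mb, cloud_requests):>10,.0f} GB/day")

Under these assumed figures, the cloud system must sustain thousands of times the daily data volume of the enterprise system.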

Economical Storage Scaling

The business ramifications of these storage demands are extensive. Service providers must be able to scale rapidly to accommodate the expanding demand for more data storage. Users are accustomed to free online services, and are not timid about leaving providers that put up paywalls. To be economical, service providers need extremely inexpensive storage that performs well and scales easily.

The Perfect Combination

Below are three best practices for service providers seeking the perfect combination of cost effectiveness, performance and scalability. 

1. Use a distributed storage system.

Although the data center trend has been moving toward centralization, distributed storage provides the best foundation for building at scale. Software-level techniques can now deliver performance that offsets the benefits of a centralized data storage approach; the sketch below illustrates one such technique.
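One widely used software-level technique is consistent hashing, which spreads data evenly across nodes and lets the cluster grow without reshuffling everything. The Python sketch below is a minimal illustration of the general idea, not any particular vendor’s implementation; the node names and virtual-node count are assumptions.

import bisect
import hashlib

# Minimal consistent-hash ring: maps each object key to a storage node.
# Node names and the virtual-node count are assumptions for illustration.

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = []                       # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):          # virtual nodes smooth the spread
                bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # The first ring entry clockwise from the key's hash owns the key.
        idx = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user42/photo.jpg"))     # deterministic placement, no index
# Adding a fourth node would move only about a quarter of the keys.

Because placement is computed from the key itself, no central index is needed, and adding a node relocates only a fraction of the data.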

2. Make use of commodity components.

Low-energy hardware can make good business sense. Servers built from commodity components are not only less expensive to buy, they also consume far less energy, reducing both setup and operating costs in one move. The back-of-envelope sketch below puts rough numbers on the idea.
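As with the earlier comparison, every price, wattage and electricity rate below is a hypothetical assumption used only to show the shape of the savings, not a measured or quoted figure.

# Hypothetical back-of-envelope: one specialized server vs. commodity nodes.
# Prices, wattages and the electricity rate are assumptions, not quotes.

KWH_PRICE = 0.10                    # USD per kWh (assumed)
HOURS_PER_YEAR = 24 * 365

def yearly_power_cost(watts):
    return watts / 1000.0 * HOURS_PER_YEAR * KWH_PRICE

specialized_capex, specialized_watts = 20000, 800   # high-end storage server
commodity_capex, commodity_watts = 2500, 150        # low-energy commodity node
n = 4   # assume four commodity nodes match one specialized server

print(f"specialized:  {specialized_capex} USD up front, "
      f"{yearly_power_cost(specialized_watts):.0f} USD/yr in power")
print(f"{n}x commodity: {n * commodity_capex} USD up front, "
      f"{n * yearly_power_cost(commodity_watts):.0f} USD/yr in power")

Under these assumptions the commodity cluster halves the up-front cost and cuts the power bill by roughly a quarter; the exact numbers matter less than the fact that both cost lines fall at once.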

3. Avoid the troubles that come with a single point of entry.

A single point of entry can become a single point of failure, especially under the demands cloud computing places on Big Data storage. It also easily becomes a performance bottleneck. Adding caches to mitigate the bottleneck, as most service providers currently do, quickly adds cost and complexity to a system. Conversely, a horizontally scalable system that distributes data across all nodes makes it possible to choose less expensive, lower-energy hardware; the sketch below shows how any node can act as the entry point.
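In the Python sketch below, a client may contact any node, and each node uses the same placement function to serve the request locally or forward it to the owner. The node names and the simple modulo-hash placement are assumptions for illustration, standing in for a full consistent-hash ring.

import hashlib
import random

NODES = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical cluster

def owner(key):
    # Every node runs the same placement function, so any node can
    # locate a key without consulting a central gateway.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

def handle_request(entry_node, key):
    target = owner(key)
    if target == entry_node:
        return f"{entry_node}: served {key} locally"
    return f"{entry_node}: forwarded {key} to {target}"

# The client can contact any node, e.g. one picked by round-robin DNS;
# losing a node removes one entry point rather than the only one.
entry = random.choice(NODES)
print(handle_request(entry, "user42/photo.jpg"))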

Conclusion

Today’s Big Data storage landscape is dominated by expensive, high-performance, vertically scaled storage systems whose architectures can only scale to about a petabyte. These systems are neither economical nor sustainable in the long term. By transitioning to a horizontally scaled data storage model that distributes data evenly across low-energy hardware, costs can be decreased and performance improved in the cloud. Following these best practices, cloud service providers can greatly improve the scalability, efficiency and performance of their data storage centers.

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions designed to be cost-effective for storing huge data sets. From 2004 to 2010 Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture for several projects at Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.




Edited by Stefania Viscusi