infoTECH Feature

September 08, 2011

Cooling the Cloud

While the basic concept of cloud computing is fairly clear to everyone, the structure of cloud-computing architectures and the effect of cloud computing on IT and facilities infrastructure are less well understood.

According to the National Institute of Standards and Technology (an agency of the U.S. Department of Commerce), cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

The key point here is that cloud computing still requires a set of computing resources. Despite being highly configurable, extensible and available through the shared infrastructure of the Internet, the fact remains that cloud computing infrastructure comprises, at core, electrical components that consume electricity and produce heat.

This cloud-computing hardware must be cooled by something, regardless of the hardware’s location or owner. In the end, the sophistication of the required cooling infrastructure depends on the computing resources’ power density and layout.

Data centers have traditionally been air-cooled, and most still are. Air cooling has been relatively inexpensive and simple to scale to fit room needs. However, as power densities have risen dramatically in the past decade, some data center managers have chosen other media for heat removal (e.g., liquid cooling), while others have looked for more efficient ways to use existing air-cooling infrastructure.

For all but the highest power density racks (up to roughly 15 kW), air cooling is not only economical but also reliable, because air can be sourced from multiple cooling units provisioned within the room. Because air’s capacity to carry heat (its density and specific heat) is essentially constant throughout a typical data center’s temperature range, a given volume of air removes a fixed amount of heat from equipment for a given rise in air temperature. In other words, to make hot equipment cooler, you can either move more air across it or supply colder air.
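To make that relationship concrete, here is a minimal Python sketch of the sensible-heat arithmetic behind that statement. The density and specific-heat values, and the roughly 11 °C (20 °F) supply-to-return rise, are illustrative assumptions rather than figures from this article.

```python
# Minimal sensible-heat sketch (illustrative assumptions, not article figures):
# how much air a rack needs to shed its heat at a given supply-to-return rise.

AIR_DENSITY_KG_M3 = 1.2            # typical data-center air density
AIR_SPECIFIC_HEAT_KJ_KG_K = 1.006  # specific heat of air

def required_airflow_m3s(heat_load_kw: float, delta_t_c: float) -> float:
    """Airflow (m^3/s) needed to remove heat_load_kw with a delta_t_c rise."""
    return heat_load_kw / (AIR_DENSITY_KG_M3 * AIR_SPECIFIC_HEAT_KJ_KG_K * delta_t_c)

if __name__ == "__main__":
    for load_kw in (5, 15, 25):
        flow = required_airflow_m3s(load_kw, delta_t_c=11.0)  # roughly a 20 degF rise
        print(f"{load_kw:>2} kW rack -> {flow:4.2f} m^3/s (~{flow * 2118.88:,.0f} CFM)")
```

The takeaway matches the point above: for a fixed temperature rise, heat removal scales linearly with airflow, so moving more air and supplying colder air are the only two levers available.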

Realistically, though, air-supply temperatures are limited by equipment manufacturers, and those limits become “law” for economic reasons (i.e., warranty coverage), even if server and storage equipment could in practice tolerate temperatures above the specified range without failing. In addition, moving large volumes of air can be very expensive for very-high-density racks. As a result, liquid cooling can become economical for racks above 15-20 kW, although the cost of liquid piping varies and must be carefully researched. And, as with air cooling, a liquid-cooled design must include redundancy in the cooling-medium infrastructure (piping and pumps) or provide some form of non-liquid backup cooling.
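The cost of “moving more air” grows much faster than the airflow itself, which is one reason the economics tip toward liquid at higher densities. The sketch below is a rough illustration only: the baseline fan power is an assumed number, and the cubic fan-affinity relationship is an idealization.

```python
# Fan-affinity sketch: fan power scales roughly with the cube of airflow.
# Baseline fan power is an assumed, illustrative figure.

BASE_FLOW_M3S = 1.13   # airflow for a ~15 kW rack at an ~11 C rise (see above)
BASE_FAN_KW = 0.8      # assumed fan power at that baseline flow

def fan_power_kw(flow_m3s: float) -> float:
    """Idealized affinity law: power grows with the cube of the flow ratio."""
    return BASE_FAN_KW * (flow_m3s / BASE_FLOW_M3S) ** 3

for rack_kw in (15, 20, 30):
    flow = rack_kw / (1.2 * 1.006 * 11.0)  # same sensible-heat relation as above
    print(f"{rack_kw} kW rack: ~{flow:.2f} m^3/s, fan power ~{fan_power_kw(flow):.1f} kW")
```

Under this idealization, doubling rack density roughly octuples fan energy, which is why air cooling loses its cost advantage somewhere around the 15-20 kW range cited above.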

Many cutting-edge companies are building new data centers that rely solely on ventilation air to cool heat-generating equipment. This type of cooling uses natural wind currents to drive outdoor air, which is usually within allowable temperature and humidity ranges, through the data center. Companies pursuing this strategy are often large enough buyers to persuade server manufacturers to support their hardware under the environmental conditions that ventilation-air cooling produces.

Cloud-computing equipment is often located in mixed-use, co-location-style space, where many of the surrounding cages and racks may have much lower power density. This can be particularly challenging for air-cooling infrastructure, because pressure dynamics beneath a raised floor can disrupt proper airflow patterns: either high-density racks receive too little air (creating hot spots), or low-density racks receive too much (creating inefficient mixing). Traditional raised-floor air systems usually err on the side of over-cooling the entire room in order to satisfy the hottest rack. The most reliable remedy is a variable-regulation system that directs cold air where it is needed most while keeping excess cold air out of low-density areas, as sketched below.
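One way to picture such a variable-regulation system is a simple per-zone loop that opens airflow dampers only where rack inlet temperatures run hot. The sketch below is purely illustrative; the zone names, setpoint, and proportional-control logic are assumptions, not a description of any particular product.

```python
# Illustrative per-zone damper control: send cold air where inlet temps run hot,
# throttle it back where they do not. Setpoints and gains are assumed values.

SETPOINT_C = 24.0   # target rack inlet temperature
GAIN = 0.15         # proportional gain: damper fraction per degree of error

def damper_position(inlet_temp_c: float) -> float:
    """Return a damper opening between 0 (closed) and 1 (fully open)."""
    error = inlet_temp_c - SETPOINT_C
    return min(1.0, max(0.0, 0.2 + GAIN * error))  # 0.2 is a minimum trickle flow

zones = {"high-density row A": 29.5, "low-density cage B": 22.0, "mixed row C": 25.0}
for zone, temp in zones.items():
    print(f"{zone}: inlet {temp:.1f} C -> damper {damper_position(temp):.0%}")
```

In this toy example, the hot high-density row gets a fully open damper while the cool low-density cage is throttled back to a trickle, which is the behavior the paragraph above describes: cold air goes where it is needed instead of flooding the whole room.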

Coy Stine is the Director of Data Center Services at Bluestone Energy Services LLC, an OpTerra Energy Company. Bluestone Energy provides professional engineering and project development services for utilities and corporate clients, and has secured millions of dollars in utility incentives for its clients by developing and implementing hundreds of HVAC, lighting and data center conservation projects.




Edited by Jennifer Russell