infoTECH Feature

February 25, 2013

Big Data and Data Center Cooling

By TMCnet Special Guest
Richard Jenkins, VP Marketing, RF Code

The most overused resource in the data center is the cooling system. Why? Because data center operators are paranoid that overheating, or under-cooling, will impact the performance of their facility.

In reality the opposite is true, and right-sizing cooling is becoming the differentiator between cost-effective, high-performing facilities and energy-guzzling, unstable IT environments.

Power consumption in data centers has soared to the point where the cost to power a server over its lifecycle is now on par with the initial cost of that server. Multiple levels of redundant “just in case” servers sit online and powered, awaiting any potential failure, while most CPUs run at only 10 to 20 percent utilization. Cooling has become a safety net for organizations that either don’t know how to deploy an efficient infrastructure or aren’t willing to invest in doing so.
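As a rough illustration of that lifecycle-cost claim, the back-of-the-envelope Python sketch below estimates the lifetime electricity bill for a single server. The draw, PUE, electricity price and lifespan figures are assumptions chosen for illustration, not data from this article or from RF Code.

# Illustrative sketch only: estimates lifetime electricity cost for one server
# using assumed figures, to show how power cost can rival purchase price.
AVG_DRAW_KW = 0.4          # assumed average server draw, kW
PUE = 1.8                  # assumed facility overhead (cooling, power delivery)
PRICE_PER_KWH = 0.12       # assumed electricity price, USD
LIFESPAN_YEARS = 4         # assumed refresh cycle
HOURS_PER_YEAR = 24 * 365

lifetime_kwh = AVG_DRAW_KW * PUE * HOURS_PER_YEAR * LIFESPAN_YEARS
lifetime_cost = lifetime_kwh * PRICE_PER_KWH

print(f"Estimated lifetime energy: {lifetime_kwh:,.0f} kWh")
print(f"Estimated lifetime power cost: ${lifetime_cost:,.0f}")
# Roughly 25,000 kWh and $3,000 with these assumptions -- in the same range
# as the purchase price of a typical commodity server.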

Big data is increasing the focus on infrastructure as the critical foundation for analysis: underperforming infrastructure undermines the ability to provide a highly optimized data analysis environment. Data security has also become a regulatory issue under close scrutiny from consumer, fiscal and government groups. Both of these demands require an investment in power-hungry IT hardware and the infrastructure to keep it “live” and available.

Using the “right” amount of power to cool a data center is an interesting dilemma. How do you know when it is “right”? How far can you push it before it isn’t? Is the objective to save power or to maximize availability? Are the two mutually exclusive?

Many companies, predominantly in the IT industry, have proven that a well-optimized data center saves money, improves profitability and strengthens the brand through the positive PR it generates. Using less power is not only a financially astute strategy; it attracts customers too.

While “managing” the facility is important, “planning” capacity is more strategic and, ultimately, is the difference between efficiency and disaster. More data needs more hardware, more hardware needs more space, more hardware and space need more power, and so on. Space, power and cooling systems, however, have limits, and many data center owners are reaching theirs. With growth inevitable, consolidating infrastructure is another strategic discipline, one that demands information about how assets and environments behave under fully optimized conditions.
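As a simple illustration of that planning problem, the Python sketch below projects how many months of headroom remain before an assumed facility power limit is reached under steady demand growth. All figures are hypothetical.

# Hypothetical capacity-planning sketch: months until the facility's power
# limit is reached, given current draw and a steady monthly growth rate.
import math

POWER_LIMIT_KW = 1200      # assumed facility power capacity
CURRENT_DRAW_KW = 800      # assumed current IT + cooling draw
MONTHLY_GROWTH = 0.03      # assumed 3% compound monthly growth in demand

# Solve CURRENT_DRAW_KW * (1 + g)^m >= POWER_LIMIT_KW for m.
months_left = math.log(POWER_LIMIT_KW / CURRENT_DRAW_KW) / math.log(1 + MONTHLY_GROWTH)
print(f"Power capacity exhausted in roughly {months_left:.0f} months")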

It is impossible to reduce power, raise temperatures or manage energy utilization across a large IT facility without the ability to monitor and measure the environment in real time. Thermal fluctuations, airflow irregularities and inconsistent server workloads make it impossible to “set it and leave it.” Automated sensors provide accurate data that can be correlated into real-time visualization of the environmental performance of a data center. Full lifecycle asset tracking and management ensures that power and cooling provisioning is directly tied to the workload and location requirements of those assets. With those traceability metrics in place, both live and predictive environmental management can be accurately carried out. The archive of intelligence and the automated nature of wire-free environmental sensors mean an ever-improving facility is able to provide higher performance to the business.
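As an illustration only (this is not RF Code’s product or API), the short Python sketch below shows the kind of correlation such sensor data enables: simulated readings are grouped per rack and any rack whose average inlet temperature drifts above an assumed threshold is flagged.

# Illustrative sketch, not vendor code: aggregate simulated rack-inlet
# temperature readings and flag racks that drift above a chosen threshold.
from collections import defaultdict
from statistics import mean

ALERT_THRESHOLD_C = 27.0   # assumed alert point near the ASHRAE-recommended upper inlet limit

# (rack_id, inlet_temp_C) pairs as they might arrive from wire-free sensors
readings = [
    ("rack-01", 23.5), ("rack-02", 26.1), ("rack-01", 24.0),
    ("rack-03", 28.4), ("rack-02", 26.7), ("rack-03", 29.1),
]

by_rack = defaultdict(list)
for rack_id, temp in readings:
    by_rack[rack_id].append(temp)

for rack_id, temps in sorted(by_rack.items()):
    avg = mean(temps)
    status = "ALERT" if avg > ALERT_THRESHOLD_C else "ok"
    print(f"{rack_id}: avg inlet {avg:.1f} C -> {status}")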



RICHARD JENKINS, Vice President of Marketing, RF Code: Richard Jenkins joined RF Code in 2012 and brings over 21 years of management and international marketing experience with small and large IT, media and investment organizations.




Edited by Brooke Neuman