Telecom datacenter planners are usually the most guilty (and least repentant) about location. You'd almost think they were all ex-realtors. They naturally gravitate towards "carrier hotels" like 1 Wilshire in Los Angeles or 60 Hudson in New York. Why, you ask? I almost expect a mountaineering answer from them, "Because it's there," but telecom datacenter folks inevitably respond with Austin Powers-like zeal, "Because that's where the connectivity and cheap cross connects are -- yah baby yah!" Their reasons are valid, but to me it's some serious tunnel vision. It's not the complete picture.
True datacenter architects have to master the obvious -- and the not so obvious. I've been to one too many datacenters where the obvious -- flood, fire, tornado, earthquake, or tsunami -- did find us, and too often it was a downtown high-rise datacenter several floors up, strapped to 400,000 gallons of fuel either in a basement (which gets flooded) or on the roof (lovely when it's shaking in a major earthquake). Weather and climate conditions will factor in, and I've got a feeling that worldwide we're just getting started. I'm not saying to grab the FEMA map of the zones least likely to have any natural disasters (word has it we'd all be in Phoenix or Las Vegas). However, you need to plan accordingly for your geography. You'd think it goes without saying NOT to plan a datacenter in a known flood, tsunami, hurricane, tornado, earthquake, or fire zone, but I've seen plenty of people who didn't give it a second thought. More important, and often overlooked, are the secondary effects these events bring -- that's where the real disaster lies.
You don't know what you don't know, but you need to get good at anticipating the "what ifs." When Hurricane Sandy hit, naturally, the power went out (we expected that much, right?), but the biggest problems came later. First, most fuel reserve tanks were in basements, and those quickly got flooded. Second, when the generators got to the bottom of their regular tanks, they sucked in debris that clogged the filters. Fixing the generators required technicians and parts that were rarer than hen's teeth -- and that was before anyone even got to the debacle of pumping the water out of the basement and finding additional fuel. The maintenance routine at too many datacenters is merely to run the generator for a few hours a month and top off the tank. Who knew the primary tanks would guzzle their way to the bottom and that the reserve tanks wouldn't be available? Who knew the technician you contracted with had 500 clients, now similarly situated, who all needed help? Who knew parts wouldn't be in stock for simple things like filters? Who knew the maintenance routine was wrong? Adding insult to injury, try refueling when you're downtown and chaos erupts: fuel trucks can't get there and helicopters can't land. It's right about then that you wish you'd picked the datacenter closer to a major highway or airport. So before you cook your goose, take a gander at some other location that meets all your criteria, not just the ones most obvious to your industry segment.
What Would Dr. Cloud Do?
Running a cloud provider, we live for these moments. Here’s some quick advice on how to pick datacenters:
The San Francisco earthquake, Hurricane Sandy in New York and New Jersey, Japan's tsunami (and the Fukushima nuclear disaster), the San Diego fires, Chicago caught in the "polar vortex," and much more to come -- this is just the world we live in. Act accordingly.
Mike L. Chase is the EVP/Chief Technology Officer for dinCloud, a cloud service provider and transformation company that helps businesses and organizations rapidly migrate to the cloud through the hosting of servers, desktops, storage, and other cloud services via its strong channel base of VARs and MSPs. Visit dinCloud on LinkedIn: www.linkedin.com/company/dincloud.