infoTECH Spotlight Magazine


Security

August 01, 2011

How Virtualization Changes Your Disaster Recovery Plan

By TMCnet Special Guest
Mike Reynolds, senior manager of product marketing, Symantec

This article originally appeared in the July 2011 issue of infoTECH Spotlight.

The adoption of cloud computing and virtualization is increasing quickly. According to Symantec’s “Enterprise IT Journey to the Cloud” survey, more than 75 percent of enterprises are at least discussing virtualization, and a significant number have already implemented some form of virtualization in their infrastructure. But the adoption of these new technologies raises concerns regarding the reliability of data and applications accessed through the cloud. In particular, organizations are concerned about high availability and disaster recovery.

According to Symantec’s 2010 Disaster Recovery Survey, 85 percent of enterprises have reevaluated their disaster recovery (DR) plan following server virtualization. This may seem like a monumental task, especially with DR budgets expected to shrink in the coming year. But with a single hour of Web server downtime costing a large enterprise more than $62,000, it’s easy to see how critical uptime is – not only for revenue and employee productivity, but also for harder-to-quantify factors such as brand reputation.

System upgrades alone caused nearly 51 hours of downtime on average over the past 12 months, according to IT professionals responding to the survey. When something as routine as a regular upgrade can cause that much downtime, it is vital for organizations to implement failover capabilities to minimize the risk. These solutions should recover applications seamlessly by moving them to a functioning server, keeping downtime to a minimum.
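
To make the failover idea concrete, here is a minimal sketch in Python (not any particular vendor's product). It polls a hypothetical primary host and, if the application stops answering, starts it on a standby host. The host names, port, service name and SSH-based start command are all assumptions for illustration; a production high-availability product would add fencing, storage handover and automatic client redirection.

```python
# Minimal failover sketch: poll the primary server's application port and,
# if it stops responding, start the application on a standby server.
# Host names, the port, and the remote start command are illustrative.
import socket
import subprocess
import time

PRIMARY = "app-primary.example.com"   # hypothetical primary host
STANDBY = "app-standby.example.com"   # hypothetical standby host
APP_PORT = 8080                       # hypothetical application port
CHECK_INTERVAL = 10                   # seconds between health checks
FAILURES_BEFORE_FAILOVER = 3          # tolerate brief network blips

def port_is_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def fail_over_to_standby():
    """Start the application on the standby host over SSH (illustrative command)."""
    subprocess.run(["ssh", STANDBY, "sudo systemctl start myapp"], check=True)
    print(f"Application started on {STANDBY}; repoint the load balancer or DNS.")

def main():
    consecutive_failures = 0
    while True:
        if port_is_open(PRIMARY, APP_PORT):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURES_BEFORE_FAILOVER:
                fail_over_to_standby()
                break
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    main()
```

Requiring several consecutive failed checks before failing over is one simple way to avoid reacting to a momentary network blip.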

Despite the differences in technology, the principles of DR remain largely the same whether you are dealing with a physical or a virtual environment. By implementing high-availability solutions as part of a comprehensive disaster recovery plan, you can weather an outage and keep your uptime as high as possible.

Data Backup

Data backup is an integral part of a successful disaster recovery plan. According to the Disaster Recovery Survey, 56 percent of virtualized data is currently backed up, and only 20 percent of data is protected by replication. A focus group discussing virtualization found that data replication in particular often suffers as organizations look to reduce expenses. One data manager said, “Here, we lack replication. It was something that we were going to implement, and due to company changes, our replication plan fell by the wayside.” Survey respondents reported that the largest challenge in backing up virtual systems is resource constraints.


Protecting Mission-Critical Applications

Another critical aspect of a successful disaster recovery plan is the protection of mission-critical applications in a virtualized environment. While half of organizations have begun to use the cloud for mission-critical applications, management is often hesitant to move applications into the cloud because of perceived security risks and a lack of control. The largest challenge IT professionals cite in protecting mission-critical applications in a virtual environment is the lack of monitoring tools comparable to those available for physical environments, followed closely by a lack of scalability. Vendors need to develop solutions that address these concerns for virtualization to reach its full potential.

Network Monitoring and Automated Recovery

Organizations can avoid many outages through proactive monitoring of applications and IT services, identifying problems before they escalate. An ideal DR solution will let you monitor applications and their components, including the virtual machine, network components, storage components and the physical server.
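
As a rough illustration of that kind of layered monitoring, the sketch below checks each of those components in turn. The host names, port, mount point and Linux-style ping flags are assumptions for illustration only; a full monitoring suite would track far more signals and feed alerts into an escalation process.

```python
# Minimal layered health-check sketch covering the components named above:
# the virtual machine, the application's network port, storage and the
# physical server. Host names, the port and the mount point are illustrative.
import os
import socket
import subprocess

def host_responds(host):
    """Send one ping to the host; return True if it answers (Linux ping flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def port_is_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Each entry maps a component to a check; all names here are hypothetical.
CHECKS = {
    "virtual machine":  lambda: host_responds("vm01.example.com"),
    "application port": lambda: port_is_open("vm01.example.com", 443),
    "storage mount":    lambda: os.path.ismount("/mnt/appdata"),
    "physical server":  lambda: host_responds("esx-host01.example.com"),
}

def run_checks():
    """Run every check and flag components that need attention."""
    for component, check in CHECKS.items():
        print(f"{component}: {'OK' if check() else 'FAILED'}")

if __name__ == "__main__":
    run_checks()
```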

Currently, 26 percent of an IT department’s budget is designated for disaster recovery initiatives such as clustering, spare servers and data replication, yet 43 percent of organizations report that their disaster recovery budgets will shrink within the next 12 months. With IT departments perpetually overworked, automated processes are ideal for improving recovery time with minimal staffing. When outages inevitably happen, an automated recovery solution will let you restart applications and reconnect users without staff intervention.
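
A bare-bones version of that hands-off recovery loop might look like the following sketch, which restarts a stopped service without waiting for an operator. The systemd unit name is a placeholder, and a real solution would also alert staff and escalate if restarts keep failing.

```python
# Minimal automated-recovery sketch: if the application service has stopped,
# restart it without waiting for an operator. The systemd unit name is an
# illustrative assumption; real HA tools add alerting, escalation and fencing.
import subprocess
import time

SERVICE = "myapp.service"   # hypothetical unit for the mission-critical application
CHECK_INTERVAL = 30         # seconds between checks

def service_is_active(name):
    """Return True if systemd reports the unit as active."""
    return subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0

def restart_service(name):
    """Restart the unit and log the action for later review."""
    subprocess.run(["systemctl", "restart", name], check=True)
    print(f"{time.ctime()}: restarted {name} automatically")

if __name__ == "__main__":
    while True:
        if not service_is_active(SERVICE):
            restart_service(SERVICE)
        time.sleep(CHECK_INTERVAL)
```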

Testing

One of the most important things IT can do to evaluate preparedness is to perform regular disaster recovery tests. Fifty-one percent of enterprises carry out full-scenario testing of their DR plans every six months, and 31 percent test more often. These regular assessments show where improvements are needed, and the best way to run them without compromising service is to implement non-disruptive testing tools.
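
One rough way to picture such a non-disruptive test: bring the application up on an isolated test host, measure how long recovery takes and confirm that it responds, all without touching production. The sketch below assumes a hypothetical test host, port and start command purely for illustration.

```python
# Minimal non-disruptive DR test sketch: rehearse recovery on an isolated test
# host, time how long it takes, and verify the application responds there.
# Production systems are never touched. Host, port and command are illustrative.
import socket
import subprocess
import time

TEST_HOST = "dr-test01.example.com"   # hypothetical isolated recovery host
APP_PORT = 8080                       # hypothetical application port

def app_responds(host, port, timeout=5):
    """Return True if a TCP connection to the recovered application succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    started = time.time()
    # Bring the application up on the isolated host (illustrative SSH command).
    subprocess.run(["ssh", TEST_HOST, "sudo systemctl start myapp"], check=True)

    # Wait up to five minutes for the application to answer.
    while not app_responds(TEST_HOST, APP_PORT) and time.time() - started < 300:
        time.sleep(10)

    if app_responds(TEST_HOST, APP_PORT):
        print(f"DR test passed: recovered in {time.time() - started:.0f} seconds")
    else:
        print("DR test failed: application did not come up within five minutes")
```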

Conclusion

Cloud computing and virtualization can offer increased flexibility to your IT services through on-demand provisioning and lower operational expenses. But to reap the benefits, enterprises need to address the challenges inherent in the technology by revising their DR plans. An effective plan includes backup and protection of sensitive data, proactive monitoring with automated recovery, and regular testing to mitigate availability issues and ensure maximum uptime.

Mike Reynolds is senior manager of product marketing for Symantec’s Storage and Availability Management Group. 



Edited by Stefania Viscusi