infoTECH Feature

August 04, 2010

Top 5 Mistakes in Virtualized DCs

Server virtualization is a hot trend, and many enterprises today are rushing to adopt it. However, virtualizing the data center is not always trivial, and IT managers may overlook key issues that must be addressed to reach the “Holy Grail”: complete data center virtualization. We have identified five common mistakes that occur when virtualizing a data center’s business applications, along with ways to effectively overcome them.

1. Consolidating servers without guaranteeing fault isolation on the ADC layer

One benefit of data center virtualization is the ability to consolidate multiple physical servers into a smaller number of physical servers, each running multiple virtual application instances, with each instance enjoying full fault tolerance and isolated computing resources. While server consolidation may be a relatively standard procedure, it becomes more complex when multiple Application Delivery Controllers (ADCs) exist throughout the data center, each providing application delivery services to different applications. As such, consolidating the server infrastructure also involves consolidating the ADC layer.

While the way to consolidate the ADC layer may seem straightforward (reduce the number of ADCs, with each remaining ADC serving more applications), there is an inherent issue: assuring fault isolation in the ADC layer. Without complete fault isolation, a problem in a single ADC may cause a loss of business continuity for multiple applications.

Therefore, when building an ADC consolidation project, it is recommended to choose an ADC solution that guarantees complete fault isolation and resource reservation within the ADC layer.
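To illustrate the principle, the short Python sketch below models a consolidation plan in which each virtual ADC instance receives a hard reservation of CPU, memory, and throughput, and the plan is accepted only if the reservations fit within the physical device. The instance names, capacity figures, and the consolidation_is_safe check are hypothetical illustrations, not any vendor’s API.

```python
# Minimal sketch (not a vendor API): modeling per-instance resource
# reservation when consolidating several ADCs onto one physical device.
# All names and capacity figures are hypothetical examples.
from dataclasses import dataclass

@dataclass
class AdcInstance:
    name: str
    cpu_cores: int          # cores reserved exclusively for this instance
    memory_gb: int          # memory reserved exclusively for this instance
    throughput_gbps: float  # guaranteed traffic capacity

PHYSICAL_CAPACITY = {"cpu_cores": 16, "memory_gb": 64, "throughput_gbps": 40.0}

instances = [
    AdcInstance("crm-adc", 4, 16, 10.0),
    AdcInstance("erp-adc", 4, 16, 10.0),
    AdcInstance("web-adc", 6, 24, 15.0),
]

def consolidation_is_safe(instances, capacity):
    """True only if hard reservations fit inside the physical device,
    so a fault or load spike in one instance cannot starve another."""
    return (
        sum(i.cpu_cores for i in instances) <= capacity["cpu_cores"]
        and sum(i.memory_gb for i in instances) <= capacity["memory_gb"]
        and sum(i.throughput_gbps for i in instances) <= capacity["throughput_gbps"]
    )

print(consolidation_is_safe(instances, PHYSICAL_CAPACITY))  # True: the plan fits
```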

2. Virtualizing business-critical applications while disregarding the effects of a shared environment

When virtualizing an organization’s business applications, some IT managers may choose to treat critical and non-critical business applications the same way, i.e., run both on the same virtualization infrastructure. While this approach may be easier to manage, with all applications residing on the same infrastructure, it creates potential issues for critical applications, chiefly degraded performance. Migrating an application from a dedicated physical server to a shared virtual infrastructure may degrade the application’s performance, since multiple applications must “fight” over the resources of a single physical server. This may be tolerable for non-critical business applications, but not for critical ones.

Therefore, to prevent this degradation, IT managers should ensure that an ADC solution is installed in front of all virtualized applications. By using the ADC’s application acceleration capabilities (e.g., compression, caching, TCP multiplexing), it is possible to ensure the applications’ performance, thus providing end users with the required level of service.
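As a rough illustration of two of these acceleration techniques, the Python sketch below caches a backend response and serves it compressed, so repeated requests neither hit the origin server nor consume full bandwidth. It uses only Python’s standard library; a real ADC performs this offload in the network, in front of the servers, and origin_fetch is a hypothetical stand-in for a backend call.

```python
# Minimal sketch of two ADC acceleration ideas named above: response
# caching and compression. Pure illustration using Python's stdlib.
import gzip
import time

cache = {}  # url -> (compressed_body, expiry_timestamp)

def origin_fetch(url):
    """Hypothetical stand-in for an expensive backend call."""
    return ("<html>" + "x" * 10_000 + "</html>").encode()

def adc_get(url, ttl=30):
    now = time.time()
    body, expiry = cache.get(url, (None, 0))
    if expiry <= now:                             # miss or stale: hit the origin once
        body = gzip.compress(origin_fetch(url))   # compress once, serve many times
        cache[url] = (body, now + ttl)
    return body                                   # compressed, cache-served response

resp = adc_get("/catalog")
print(len(resp), "bytes on the wire vs", len(gzip.decompress(resp)), "uncompressed")
```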

3. Automating the virtual infrastructure without aligning the traffic distribution logic within the data center

Server virtualization creates a single, consolidated infrastructure that enables deployment of multiple resources on the fly. As a result, adding or removing VMs becomes easy and painless: provisioning a new server in a traditional environment typically takes several weeks, while the same server in a virtual environment can be provisioned within minutes. An ADC solution guarantees the availability and performance of virtual applications, yet network administrators must continuously adapt its configuration to changes in the virtualization layer, such as adding a new VM to, or removing one from, a virtual application cluster. Failing to keep an ADC’s configuration up to date can significantly degrade the virtual application’s availability and performance.

To ensure ongoing alignment between the virtual infrastructure (VI) and the ADC, IT managers should implement an ADC solution that monitors both the virtual infrastructure and the ADC, automatically synchronizing the ADC’s configuration whenever the virtual infrastructure changes.

This ensures the ADC continues to distribute traffic to applications even as more servers are provisioned. Additionally, the ADC should grow on demand with the applications: if more application servers are provisioned to support more end-user traffic, the ADC must follow.
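A minimal sketch of such a synchronization loop appears below. It assumes a hypothetical list_vm_ips query against the virtualization manager and placeholder adc_add/adc_remove calls for the ADC configuration; no real vendor API is implied.

```python
# Minimal sketch of the VI-to-ADC synchronization loop described above.
adc_pool = {"10.0.0.11", "10.0.0.12"}          # members the ADC load-balances

def list_vm_ips():
    """Hypothetical VI query; returns IPs of running application VMs."""
    return {"10.0.0.11", "10.0.0.12", "10.0.0.13"}   # a VM was just provisioned

def adc_add(ip):    adc_pool.add(ip)       # placeholder for an ADC config call
def adc_remove(ip): adc_pool.discard(ip)   # placeholder for an ADC config call

def sync_once():
    live = list_vm_ips()
    for ip in live - adc_pool:       # newly provisioned VMs start taking traffic
        adc_add(ip)
    for ip in adc_pool - live:       # decommissioned VMs stop receiving traffic
        adc_remove(ip)

sync_once()
print(sorted(adc_pool))              # pool now matches the virtual infrastructure
# In production this would run continuously, e.g. in a loop with a short sleep,
# or be driven by events from the virtualization manager.
```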

4. Planning a multi-data center solution without benefiting from a true GSLB solution

Moving to server virtualization gives IT managers an easier means of designing data center disaster recovery, deploying applications across multiple data centers, or moving applications between data centers: there is no need to install new physical servers, as the existing server infrastructure can host more applications. For applications deployed in multiple data centers, one critical requirement is guaranteeing that each transaction is executed to completion. To accomplish this, a user’s transaction must always be directed to an available site that knows the user’s information and the status of the transaction.

By using an ADC that supports global server load balancing (GSLB), issues like business fluctuations (bursts and peaks) and potential application or network failures can be averted, as users are directed to the site that delivers the best experience. When choosing an ADC, IT managers must verify that it delivers both transaction completion and fast response times by fully optimizing globally distributed server resources across multiple data centers, based on application/transaction persistence, content availability, load, and proximity.
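The Python sketch below illustrates the kind of decision a GSLB-capable ADC makes: keep a user’s transaction on the site that already holds its session while that site is up, and otherwise fail over to the best available site by load and proximity. The site data, scoring weights, and persistence table are hypothetical examples.

```python
# Minimal sketch of a GSLB site-selection decision: persistence first,
# then health, load, and proximity. All figures are hypothetical.
sites = {
    "us-east": {"up": True, "load": 0.80, "rtt_ms": 20},
    "eu-west": {"up": True, "load": 0.30, "rtt_ms": 90},
}
persistence = {"user-42": "us-east"}   # site already holding this user's session

def pick_site(user):
    # 1) Transaction persistence: stay on the site that knows the user...
    sticky = persistence.get(user)
    if sticky and sites[sticky]["up"]:
        return sticky
    # 2) ...otherwise pick the healthiest available site by load and proximity
    candidates = [(name, d) for name, d in sites.items() if d["up"]]
    return min(candidates, key=lambda nd: nd[1]["load"] * 100 + nd[1]["rtt_ms"])[0]

print(pick_site("user-42"))    # us-east: persistence wins while the site is up
sites["us-east"]["up"] = False
print(pick_site("user-42"))    # eu-west: failover keeps the service available
```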

5. Architecting a virtualized environment while overlooking the potential security risks

As discussed above, one of the main benefits of server virtualization is that VMs share a common physical infrastructure. However, this is also a weakness, since any issue affecting the physical infrastructure affects all hosted applications (VMs). One such issue is a DoS (Denial of Service) attack, which may target a physical server’s network card, preventing it from passing legitimate traffic to hosted applications and causing application downtime. Additionally, if the virtualized infrastructure supports auto-scaling, a DoS attack targeting an application may cause it to continuously scale up to handle the growing “bogus” traffic, increasing the cost of operation without any real business benefit.

To prevent such scenarios, IT managers should implement a real-time network attack prevention device that fully protects the virtualized infrastructure against known and emerging network security threats. It should be able to detect and mitigate emerging network attacks in real time, i.e., zero-minute attacks, DoS/DDoS attacks, and application misuse attacks, all without human intervention and without blocking legitimate user traffic.
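As one simplified example of such behavioral detection, the Python sketch below flags a source that exceeds a request-rate threshold within a sliding time window. Real attack-prevention devices combine many signals beyond a single rate counter; the threshold and window used here are arbitrary illustrations.

```python
# Minimal sketch of one behavioral DoS signal: flag a source whose
# request rate exceeds a threshold inside a sliding window.
import time
from collections import defaultdict, deque

WINDOW_S, MAX_REQS = 1.0, 100          # >100 requests/second => suspicious
recent = defaultdict(deque)            # src_ip -> timestamps of recent requests

def is_attack(src_ip, now=None):
    now = now or time.time()
    q = recent[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_S:  # drop timestamps outside the window
        q.popleft()
    return len(q) > MAX_REQS            # over the rate threshold: mitigate

t = time.time()
# 150 requests from one source within 150 ms trip the detector:
print(any(is_attack("198.51.100.7", t + i / 1000) for i in range(150)))  # True
```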

To summarize, when designing a virtual server architecture, IT managers should make sure they do not overlook the issues of availability, performance, alignment, and security that derive from the virtualization of applications and the use of a shared physical environment.


Amir Peles is Chief Technology Officer at Radware. To read more of his articles, please visit his columnist page.

Edited by Stefania Viscusi