infoTECH Feature

April 14, 2015

The Hybrid IT Enterprise Demands an End to Network Guessing Games

Although enterprises are increasingly adopting cloud computing and SaaS-based applications, the bell has not tolled for on-premises IT. As a result, enterprises are building hybrid infrastructure stacks that present a new set of challenges for IT, including an increased risk of costly data breaches and architectural “collisions,” where design patterns for on-premises development and deployment translate poorly (or not at all) to the cloud. To make matters worse, the typical environment is a collection of isolated systems with no application-aware network performance monitoring, so there is no single, complete view of network performance. That limits IT’s ability to anticipate and prevent issues before they occur, or at least to minimize the damage. Yet the data on which applications are being delivered successfully, which are not, which personnel are using them, and over which network paths is likely already available. If IT can collect and analyze that data, it can identify the causes of performance issues and address them more quickly and efficiently.

You may, in fact, already be working to improve your network visibility after a year in which some of the world’s largest companies across retail, financial services, and healthcare suffered significant data breaches. That is a good thing, because the once tried-and-true approach of hardening the network perimeter with endpoint security software and mobile device management is no longer enough to keep attackers from getting in to steal data or sabotage systems. Visibility across all applications, networks, and devices is the first critical step toward a stronger overall security posture.

As visibility, control, and optimization are brought to hybrid networks, it will become increasingly important to build an analytics-driven infrastructure that can take action when problems occur anywhere in the network. We are already seeing more IT organizations instrument their network architectures with predictive analytics to create self-correcting, self-generating networks that respond to business needs and intent.

Well-instrumented infrastructures provide the foundation for introducing automation. Such automation helps infrastructures react to changing demands without manual intervention, and it also reduces the errors that can creep in whenever humans touch technology. Visibility tools can help discover and map dependencies in application workloads, a prerequisite for true workload portability. Furthermore, rich analytics supports the recommended shift in security techniques toward detection and response.
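To make the dependency-mapping idea concrete, here is a minimal sketch in Python, assuming a monitoring layer already exports flow records with source, destination, port, and application tags; the host names and records are invented for illustration. It simply rolls observed connections up into a per-application dependency map, the kind of output a visibility tool produces before workloads are moved.

from collections import defaultdict

# Hypothetical flow records as a network monitoring tool might export them:
# each record names the client, the server, the server port, and the
# application tag assigned to that traffic.
flows = [
    {"src": "web-01", "dst": "app-01",   "port": 8080, "app": "order-portal"},
    {"src": "app-01", "dst": "db-01",    "port": 5432, "app": "order-portal"},
    {"src": "app-01", "dst": "cache-01", "port": 6379, "app": "order-portal"},
]

def build_dependency_map(flow_records):
    """Group observed connections into a per-application dependency map."""
    deps = defaultdict(set)
    for f in flow_records:
        deps[f["app"]].add((f["src"], f["dst"], f["port"]))
    return deps

if __name__ == "__main__":
    for app, edges in build_dependency_map(flows).items():
        print(app)
        for src, dst, port in sorted(edges):
            print(f"  {src} -> {dst}:{port}")

A map like this tells the team which servers, databases, and caches must move together (or stay reachable) before an application can be ported to a new environment.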

In addition to implementing these monitoring tools, companies are starting to establish partitions, also known as “network segmentation,” to prevent attackers from moving freely across the entire network while still allowing authorized personnel to access the systems and information they need.

One common way to implement this segmentation model is to use virtual local area networks (VLANs) to break the network into several smaller, isolated networks. However, VLANs simply isolate network traffic; they cannot enforce reliable control over sensitive or confidential business information. And like other traditional security measures, such as antivirus software and internal firewalls, VLANs can become a point of failure, both in providing adequate security and in ease of management.
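Segmentation also has to be verified, not just configured. The sketch below is a hypothetical example, assuming invented segment assignments, an allow-list of permitted segment-to-segment flows, and flow records from a monitoring tool; it flags observed traffic that crosses boundaries the policy does not allow.

# Hypothetical segment assignments and an allow-list of permitted
# segment-to-segment flows; a real deployment would pull both from an
# inventory system and the segmentation policy.
SEGMENT_OF = {
    "hr-laptop-7": "corporate",
    "pos-term-12": "payments",
    "card-db-01":  "payments",
}
ALLOWED = {("payments", "payments"), ("corporate", "corporate")}

observed_flows = [
    ("pos-term-12", "card-db-01"),  # payments -> payments: allowed
    ("hr-laptop-7", "card-db-01"),  # corporate -> payments: violation
]

def segmentation_violations(flows):
    """Yield flows that cross segment boundaries the policy does not allow."""
    for src, dst in flows:
        pair = (SEGMENT_OF.get(src, "unknown"), SEGMENT_OF.get(dst, "unknown"))
        if pair not in ALLOWED:
            yield src, dst, pair

if __name__ == "__main__":
    for src, dst, (s_seg, d_seg) in segmentation_violations(observed_flows):
        print(f"policy violation: {src} ({s_seg}) -> {dst} ({d_seg})")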

Network Functions Virtualization (NFV) offers a more capable, easier-to-manage, software-based approach to virtualizing the entire network architecture. Service providers typically operate the applications and services, relieving the CIO and IT team of the burden of installing and maintaining hardware on site.

Whether a company deploys network and application management tools locally or as Software as a Service, the objective is the same: automate the collection, visualization, and analysis of performance data to deliver streamlined insight into root causes. Through a single dashboard, CIOs can access the intelligence needed to pinpoint problems and drill down to resolve them, from the data center to the desktop, before they reach the helpdesk. This visibility is essential not only to ongoing management and operations, but also to planning. Organizations gearing up for large-scale cloud application rollouts or data center consolidation must navigate workloads being hosted by partners in new environments, with data often traveling farther across a variety of network paths.
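The sketch below illustrates, in simplified form, the kind of analysis such a dashboard automates; the path names, latency samples, and the 75-millisecond threshold are all hypothetical.

from statistics import mean

# Hypothetical latency samples (in milliseconds) keyed by network path,
# as probes or agents might report them to a monitoring dashboard.
samples = {
    "datacenter -> branch-east": [38, 41, 39, 44, 40],
    "datacenter -> branch-west": [35, 36, 120, 130, 125],
    "saas-crm -> branch-west":   [55, 57, 54, 58, 56],
}

def flag_degraded_paths(latency_by_path, threshold_ms=75):
    """Return paths whose average latency exceeds the threshold, worst first."""
    averages = {path: mean(vals) for path, vals in latency_by_path.items()}
    degraded = {p: a for p, a in averages.items() if a > threshold_ms}
    return sorted(degraded.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for path, avg in flag_degraded_paths(samples):
        print(f"investigate {path}: avg latency {avg:.0f} ms")

Even this crude rollup narrows the search: of the three paths above, only the data-center-to-branch-west path is flagged, pointing the team at one link rather than the whole network.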

The best way to know whether a network and its applications are up to the challenge is to model future performance needs. Modeling helps the CIO identify and proactively address risks related to capacity, latency, quality, and other common performance constraints within complex, federated architectures.
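A performance model does not have to be elaborate to be useful. The following is a deliberately simple, hypothetical sketch: it projects bandwidth demand on a few invented links under an assumed 5 percent monthly growth rate and reports when each link is expected to cross an 80 percent utilization ceiling.

# Hypothetical link inventory: rated capacity and current peak demand in Mbps.
links = {
    "hq-uplink":          {"capacity_mbps": 1000, "current_mbps": 620},
    "branch-east":        {"capacity_mbps": 200,  "current_mbps": 90},
    "cloud-interconnect": {"capacity_mbps": 500,  "current_mbps": 430},
}

def months_until_saturation(current, capacity, monthly_growth=0.05, ceiling=0.8):
    """Return months until projected demand exceeds ceiling * capacity (None if not within 5 years)."""
    demand, months = current, 0
    limit = ceiling * capacity
    while demand <= limit and months < 60:
        demand *= 1 + monthly_growth
        months += 1
    return months if demand > limit else None

if __name__ == "__main__":
    for name, link in links.items():
        m = months_until_saturation(link["current_mbps"], link["capacity_mbps"])
        status = f"~{m} months of headroom" if m is not None else "no saturation projected within 5 years"
        print(f"{name}: {status}")

A projection like this is what turns visibility data into a planning decision: it shows which links need upgrading before a cloud migration, and which do not, instead of guessing.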

Developing an understanding of the demands IT systems and applications place on the network, and how they affect performance, enables a CIO to create and implement an optimization strategy that maximizes the company’s existing resources, avoids unnecessary and costly bandwidth upgrades, and empowers employees to be as productive as possible. The good news is that the data needed on which applications are working and which personnel are using them is most likely already available. Just as more enterprises are proactively monitoring their networks and analyzing data to help prevent cyber-attacks, they must apply similar strategies to answer the question, “What’s going on across my network?”

 