infoTECH Feature

November 06, 2013

It's Time to Rethink Performance Monitoring

The meteoric rise of mobility is placing new – and often unpredictable – demands on the network. Today’s consumer expects constant data and connectivity. Yet the reality of today’s mobile computing platforms streaming local and social data to end-users in enormous quantities is taking a significant toll on network performance.

As network engineers work to deliver these massive data streams in real-time, performance monitoring is turning into a pressure cooker, with multiple usage crises dragging down network performance at any given time.

It’s time to re-evaluate network performance monitoring.

What’s driving the need for change? Three significant drivers are playing a part:

  • The need for application awareness
  • The emergence of unified communications, including “SoLoMo” services (Social, Local, Mobile)
  • The expansion of monitoring into new areas

New requirements for application awareness and unified communications support are challenging performance monitoring appliance vendors to reconsider their approach, with flexibility and scalability as key design goals.

Performance monitoring is changing. In fact, it has been gradually changing for some time. Leading vendors have anticipated its shift and reacted in time. Others will soon have to rethink how they approach performance monitoring.

The Winds of Change Can Be Brutal

Understanding the impact of these drivers is the first step in rethinking how performance monitoring appliances are designed. A new mindset and vision are required that focus on what is important for the end-user and where rationalization needs to occur.

In this article, we will take a closer look at the drivers affecting change and what OEM vendors and systems architects need to consider when designing performance monitoring appliances.

Designing for Application Awareness

Performance monitoring is essentially a troubleshooting tool. While it is possible to use this tool to proactively identify network issues, end-users – such as product line managers and CXOs – value performance monitoring as a means of identifying root-cause issues quickly and efficiently.

In the past, a network focus was sufficient, since most applications adhered to well-known TCP port designations, making them easily recognized and managed. However, with the proliferation of the Internet and web-based applications – as well as thick-client applications running on corporate wide area networks (WANs) – network information is no longer enough. End-users are now more concerned with which applications are being used and how many resources they are consuming. Indeed, for some enterprises, applications are now an integral part of the business process, so understanding how applications are performing is business-critical.

The challenge for OEM vendors focused on network performance monitoring is how to build application awareness into their products while at the same time assuring throughput performance in the face of growing data loads and network speeds.

Designing for Unified Communications

Not so long ago, corporate IP network bandwidth was almost exclusively consumed by email and web-browsing. Today, trends such as cloud computing, “bring your own device” (BYOD) and social networking are placing strains on corporate networks. In addition, Voice-over-IP (VoIP) and video teleconferencing are key corporate initiatives that require high quality of service.

It is in today’s unified communications context that performance monitoring needs to operate. OEM vendors, system integrators and CXOs have a clear opportunity to provide significant ROI to corporate enterprises by helping them better understand performance issues and network planning to support individual employee and corporate goals.

The challenge for OEM vendors is building the ability to support various unified communications components into their products while also providing metrics that produce meaningful correlations between application performance and network performance. At the same time, CXOs are looking for ways to monitor and report on performance issues so they can stay on top of their networks and accommodate ever-growing data streams.

Designing for New Opportunities

One major concern for enterprises is appliance spread. Appliances are necessary to ensure that all traffic can be captured and analyzed at high speeds with zero packet loss under all load conditions. Nevertheless, there are a plethora of appliance solutions addressing various concerns, such as network security, transaction monitoring, lawful intercept and policy enforcement, to name a few.

On the one hand, this can be a threat to appliance vendors who might face resistance in introducing “yet another box” into the network, even if it is passive. On the other hand, it is an opportunity to consolidate various functions into a common physical platform making their appliance more valuable. Already today, appliance probes that are deployed for performance monitoring in networks are also used to provide data to other tools focused on security, surveillance or network optimization.

Best practices suggest that it is time to rethink the design of appliances, allowing them to be more open to including or cooperating with functions that are not normally associated with performance monitoring.

Rethinking Appliance Design

Since performance monitoring is traditionally focused on the network, it is natural to consider competence in network hardware and software as being the core to success. However, with the increased need for application awareness and unified communications support, the core competence focus needs to shift toward understanding applications and transactions and how they relate to the network.

De-coupling the network from the application layer helps to realize this focus, while at the same time opening appliances to opportunities that support new functions not normally associated with these appliances.

It is time to rethink what is performed in hardware and what is performed in application software, with more network functions off-loaded to hardware, allowing application software to focus on application intelligence. The networks of today require intelligent network adapters that can identify well-known applications in hardware at line-speed by examining layer one to layer four header information.
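In software terms, this hardware classification step amounts to parsing the layer-2 through layer-4 headers of each frame and matching well-known ports. The following Python sketch is illustrative only – the port map and frame layout are generic assumptions, not any vendor's API – but it shows the kind of logic an intelligent adapter would execute at line speed:

```python
import struct

# Illustrative map of well-known TCP/UDP ports to application labels.
WELL_KNOWN_PORTS = {80: "HTTP", 443: "HTTPS", 25: "SMTP", 53: "DNS", 5060: "SIP"}

def classify(frame: bytes):
    """Extract the 5-tuple from an Ethernet/IPv4 frame and label the application."""
    # Layer 2: 14-byte Ethernet header; EtherType 0x0800 means IPv4.
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:
        return None
    ip = frame[14:]
    ihl = (ip[0] & 0x0F) * 4          # IPv4 header length in bytes
    proto = ip[9]                     # 6 = TCP, 17 = UDP
    src_ip, dst_ip = ip[12:16], ip[16:20]
    l4 = ip[ihl:]                     # layer-4 header starts after the IP header
    src_port, dst_port = struct.unpack("!HH", l4[:4])
    app = WELL_KNOWN_PORTS.get(dst_port, WELL_KNOWN_PORTS.get(src_port, "UNKNOWN"))
    return (src_ip, dst_ip, src_port, dst_port, proto, app)
```

Performing this lookup in adapter hardware, rather than in software, is what frees host CPU cycles for application intelligence.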

In addition, hardware that provides this information can be used to identify flows and distribute them to up to 32 server CPU cores, allowing massively parallel processing of data. All of this should be provided with low CPU utilization.
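The flow-distribution idea can be sketched in a few lines: hash each flow's endpoints and use the result to pick a core, so all packets of a conversation land on the same CPU. This is a software illustration of what the adapter does in hardware; the hash choice and core count are assumptions for the example:

```python
import zlib

NUM_CORES = 32  # matches the distribution fan-out described above

def core_for_flow(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Map a flow's endpoints to one of NUM_CORES processing queues.

    Sorting the two endpoints makes the hash symmetric, so both directions
    of the same TCP conversation are delivered to the same core.
    """
    endpoints = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = repr(endpoints).encode()
    return zlib.crc32(key) % NUM_CORES
```

Because the mapping is deterministic and direction-independent, per-flow state (for example, response-time measurement) never needs to be shared between cores.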

Appliance designers should look for features that preserve as much processing power and memory as possible for the applications that require memory-intensive packet payload processing.

Built-In Flexibility and Scalability

Networks today must be flexible and scalable. With the introduction of 40 Gbps Ethernet, there is a need for tools that support a wide variety of network speeds. However, since the end-user is focused on application performance, there is an expectation that performance monitors will provide the same features and support, no matter the line-rate.

De-coupling application intelligence from network line-speed is therefore essential. Look for network adapters that provide the same features and support from 1G to 10G to 40G that can be accessed via a single Application Programming Interface (API). This technology enables appliance designers to develop and test application software once, safe in the knowledge that it will perform in the same way no matter the hardware configuration.
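The "develop once, run at any line-rate" idea can be expressed as a simple interface contract. The sketch below is hypothetical – the class and method names are inventions for illustration, not a real adapter API – but it shows how application logic written against an abstract capture interface stays unchanged whether the underlying port runs at 1G, 10G or 40G:

```python
from abc import ABC, abstractmethod

class CaptureAdapter(ABC):
    """Abstract capture API: applications target this, never the hardware."""
    @abstractmethod
    def next_packet(self) -> bytes:
        """Return the next captured packet; raise StopIteration when done."""

class MockAdapter(CaptureAdapter):
    """Stand-in for a hardware driver; the line rate is informational only."""
    def __init__(self, line_rate_gbps: int, packets):
        self.line_rate_gbps = line_rate_gbps
        self._packets = iter(packets)
    def next_packet(self) -> bytes:
        return next(self._packets)

def count_packets(adapter: CaptureAdapter) -> int:
    """Application logic written once, reused unchanged across line rates."""
    n = 0
    try:
        while True:
            adapter.next_packet()
            n += 1
    except StopIteration:
        return n
```

The same `count_packets` function services a 1G mock and a 40G mock identically, which is the practical meaning of decoupling application intelligence from line speed.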

It’s also important to look for the capability to merge traffic from multiple ports on multiple adapters into a single analysis stream, thereby completely abstracting the hardware configuration from the application software programmer. No matter the number or type of ports configured, the programmer should only see one “virtual adapter” with many ports.
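Merging ports into a single analysis stream is, at its core, a time-ordered merge of per-port packet streams. A minimal software sketch, assuming each port already delivers its packets in timestamp order:

```python
import heapq

def merged_stream(*port_streams):
    """Merge per-port streams of (timestamp, port_id, packet) tuples into one
    time-ordered stream -- the 'virtual adapter' view seen by the programmer."""
    # heapq.merge compares tuples element-wise, so ordering follows timestamps.
    return heapq.merge(*port_streams)
```

For example, merging `[(1, 0, b"p0a"), (4, 0, b"p0b")]` from port 0 with `[(2, 1, b"p1a"), (3, 1, b"p1b")]` from port 1 yields packets in timestamp order 1, 2, 3, 4, regardless of which physical port captured them.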

Additionally, such adapters provide extensive statistics for each port, offering essential information for deep-dive analysis of root-cause issues. Transfer of statistics from multiple ports is time-synchronized to allow accurate correlation with time-stamped packet data.

Future-proofing the Network

Specialized network applications can be incredibly expensive, making scaling to meet demand a costly proposition for telcos, carriers, cloud providers and enterprises alike. Even worse, if the market shifts toward adoption of novel network hardware, these organizations must bear the cost of updating their infrastructure in order to stay competitive.

By de-coupling network and application data processing and building-in flexibility and scalability into the design, appliance designers now have the ability to introduce a powerful, high-speed platform into the network that is capable of capturing data with zero packet loss at speeds up to 40 Gbps.

The analysis stream provided by the hardware platform can support multiple applications, not just performance monitoring. Investigate solutions that provide the capability to share captured data between multiple applications without the need for costly replication. Multiple applications running on multiple cores can be executed on the same physical server with software that ensures that each application can access the same data stream as it is captured.
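Sharing captured data without replication means every application reads the same buffer rather than receiving its own copy. In Python this can be sketched with `memoryview`, which exposes a buffer to multiple readers with zero copying; the buffer contents and application functions here are invented for illustration:

```python
# One capture buffer, shared zero-copy between several analysis functions,
# assuming each application only reads the stream.
capture_buffer = bytearray(b"\x01\x02\x03\x04" * 1024)

def monitor_app(view: memoryview) -> int:
    """Performance monitoring: measure the stream without copying it."""
    return len(view)

def security_app(view: memoryview) -> int:
    """Security inspection: scan the very same bytes, again without a copy."""
    return sum(1 for byte in view if byte == 0x04)

shared = memoryview(capture_buffer)  # no packet data is duplicated here
```

Both functions operate on `shared`, so adding a third consumer (surveillance, optimization) costs no extra memory bandwidth for data replication – the economic argument for consolidating functions on one platform.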

This transforms the performance monitor into a universal appliance for any application requiring a reliable packet capture data stream. With this capability, it is possible to incorporate more functions in the same physical server increasing the value of the appliance.

Conclusion

As network traffic continues to escalate and Ethernet connectivity speeds increase, standard network monitoring systems are no longer viable. The proliferation of mobile data usage, app development and BYOD is putting extreme demands on networks everywhere. To stay ahead of the curve, CXOs, OEMs, cloud providers and telcos require new technologies that deliver the accuracy, reliability and analysis capabilities needed to match the demand.

About the Author
Daniel Joseph Barry is VP of Marketing at Napatech and has over 20 years' experience in the IT and Telecom industry. Prior to joining Napatech in 2009, Dan Joe was Marketing Director at TPACK, a leading supplier of transport chip solutions to the Telecom sector. From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now Ignis Photonyx), following various positions in product development, business development and product management at Ericsson. Dan Joe joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc degree in Electronic Engineering from Trinity College Dublin.


Edited by Ryan Sartor