infoTECH Feature

July 21, 2014

Faster than the Future in the 100G Era

As we enter the 100 Gbps era, producers of network management and security applications will need to address numerous analysis challenges in order to stay ahead of the data growth curve. Staying faster than the future will require next-generation solutions that not only scale, but also guarantee data delivery, make that data understandable and accelerate application performance.

Data delivered in the right way creates insights that enable actions. Being able to understand all the data within networks ensures that apps run quickly, videos stream smoothly and end-user data is secure. Yet, as the volume and complexity of data increase, processing it all becomes increasingly difficult.

A Daunting Onslaught of Data

Financial, telecom, corporate and government networks are experiencing exponential growth in data volume, variety and velocity. By 2017, Cisco predicts global IP traffic will reach 1.4 zettabytes a year, or 120.6 exabytes per month. This relentless increase in data is being driven by consumers using mobile devices with increasingly data-heavy applications, such as streaming video and peer-to-peer file sharing. Traffic from wireless and mobile devices will exceed traffic from wired devices by 2016.

Cisco also projects that global mobile traffic will increase nearly 11-fold between 2013 and 2018 and mobile network connection speeds will increase two-fold during the same period. Over two-thirds of the world’s mobile data traffic will be video by 2018.

As a result of this rapid growth in data traffic, connectivity speeds will also need to increase. High-bandwidth applications such as video on demand and high-performance computing, as well as server virtualization and high-speed applications in data centers, will continue to drive adoption of 40 Gbps and 100 Gbps connections.

Best Practices for the 100G Era

Network equipment manufacturers must find a way to reliably increase performance at connections up to 100 Gbps while reducing risk and time-to-market. They must also effectively manage and secure networks while handling a varied portfolio of 1, 10, 40 or even 100 Gbps products. Network services are agnostic to connection speeds, so analysis will have to be performed at the same level across speeds ranging from 1 Mbps to 100 Gbps. Below is a list of best practices to ensure the network of today can move successfully into the 100G era.

Guaranteed Data Delivery

High-speed solutions must be able to capture network traffic at full line rate, with almost no CPU load on the host server, for all frame sizes. Full line-rate packet capture with zero packet loss, frame buffering and optimal configuration of host buffer sizes removes the bottlenecks that can cause packet loss. It also reliably delivers the analysis data that network management and security solutions demand. Zero-loss packet capture is critical for applications that need to analyze all the network traffic in real time.

Frame buffering is a feature that absorbs data bursts, ensuring that no data is lost, and it removes application limitations by allowing frames to be transferred once the burst has passed. PCI interfaces provide a fixed bandwidth for data transfer, which can limit the amount of data that can be moved from the network to the application. This makes frame buffering a critical feature for high-speed network analysis.
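The burst-absorbing role of a frame buffer can be sketched as a bounded queue that fills at line rate and drains at the fixed rate of the host interface. The capacity, burst size and drain budget below are arbitrary illustrative numbers, not figures from any real capture card:

```python
from collections import deque

class FrameBuffer:
    """Bounded FIFO that absorbs traffic bursts (illustrative sketch;
    a real capture card buffers frames in on-board memory before the
    fixed-bandwidth PCI transfer). Capacity is counted in frames."""

    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, frame) -> bool:
        """Called at line rate; fails only when the buffer is exhausted."""
        if len(self.buf) >= self.capacity:
            self.dropped += 1
            return False
        self.buf.append(frame)
        return True

    def drain(self, budget) -> list:
        """Transfer up to `budget` frames across the host interface."""
        out = []
        while self.buf and len(out) < budget:
            out.append(self.buf.popleft())
        return out

# A 100-frame burst against a 64-frame buffer:
fb = FrameBuffer(capacity=64)
accepted = sum(fb.enqueue(i) for i in range(100))
transferred = fb.drain(40)
```

The sizing question this models is the one in the text: if the buffer is too small for the burst (here, 36 frames are dropped), no amount of later drain bandwidth recovers the loss.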

Understanding Frame Processing

Next-generation network analysis requires understanding and insight. Frame classification provides details on the network protocols in use. For users who want to monitor network traffic in the most efficient way, it is important to recognize as many protocols as possible and to extract information from layer 2-4 network traffic. Header information for the various protocols transported over Ethernet, including encapsulation and tunneling protocols, must be made available for analysis.
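A minimal sketch of layer 2-4 classification, reading the EtherType, IP protocol number and UDP ports straight out of a raw frame (the sample frame is a hypothetical, hand-built packet; real classifiers recognize far more protocols and run in hardware):

```python
import struct

def classify(frame: bytes) -> dict:
    """Classify a raw Ethernet frame up to layer 4 (minimal sketch)."""
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    if ethertype != 0x0800:                      # only IPv4 handled here
        return {"l3": hex(ethertype)}
    ihl = (frame[14] & 0x0F) * 4                 # IPv4 header length in bytes
    proto = frame[14 + 9]                        # protocol field
    info = {"l3": "IPv4", "proto": proto}
    if proto == 17:                              # UDP
        sport, dport = struct.unpack_from("!HH", frame, 14 + ihl)
        info.update(l4="UDP", sport=sport, dport=dport)
    return info

# Hypothetical frame: zeroed MACs, IPv4 (proto 17 = UDP), ports 5353 -> 53
sample = (bytes(12) + b"\x08\x00"                      # Ethernet header
          + b"\x45" + bytes(8) + b"\x11" + bytes(10)   # IPv4 header
          + struct.pack("!HH", 5353, 53) + bytes(4))   # UDP header
info = classify(sample)
```

The point of hardware classification is that these per-frame lookups happen before the frame ever reaches the application, which then receives the protocol details as metadata.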

Time Precision

Knowing when something happened, and how much delay the network introduced, is important for many high-speed analysis applications. Assuring the quality of time-sensitive services and transactions requires high precision. In 100 Gbps networks, nanosecond precision is necessary for reliable analysis: at 10 Gbps, an Ethernet frame can be received and transmitted every 67 nanoseconds; at 100 Gbps, this time shrinks to 6.7 nanoseconds.
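The 67 ns figure follows from the minimum frame spacing on the wire: a 64-byte minimum frame plus 8 bytes of preamble and a 12-byte inter-frame gap occupies 672 bit times. A quick back-of-the-envelope check:

```python
# Minimum spacing of back-to-back Ethernet frames:
# 64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 672 bits
WIRE_BITS = (64 + 8 + 12) * 8

ns_per_frame_10g = WIRE_BITS / 10     # bits / (Gbit/s) gives nanoseconds
ns_per_frame_100g = WIRE_BITS / 100

print(f"10 Gbps:  {ns_per_frame_10g:.1f} ns")   # 67.2 ns
print(f"100 Gbps: {ns_per_frame_100g:.2f} ns")  # 6.72 ns
```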

Nanosecond precision time-stamping is essential for uniquely identifying when a frame is received. Precise time-stamping of each Ethernet frame allows frames to be merged in the correct order. The result is a significant acceleration of performance as Ethernet frames can now be grouped and analyzed in an order that makes sense for the application and is not restricted by hardware implementations.
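The merge described above reduces to sorting on timestamps. A sketch with two capture ports (port names, frame IDs and timestamps are illustrative) shows why precise, unique per-frame timestamps are the prerequisite:

```python
import heapq

# Two capture ports, each delivering (timestamp_ns, frame_id) pairs in
# arrival order. Nanosecond timestamps are what make a single global
# ordering across ports possible.
port_a = [(100, "a1"), (167, "a2"), (301, "a3")]
port_b = [(133, "b1"), (220, "b2")]

merged = list(heapq.merge(port_a, port_b))   # one time-ordered stream
```

`heapq.merge` exploits that each port's stream is already ordered, so the combined stream is produced without a full sort, which is the same property a hardware merger relies on.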

Flow Identification

Analyzing individual Ethernet frames provides insight into activity at a single point in the network. Network applications must also be able to examine flows of frames transmitted between specific devices (identified by their IP addresses) or even between applications on specific devices (identified, for example, by the protocol and the UDP/TCP/SCTP port numbers used by the application).

In high-speed networks up to 100 Gbps, it is important to identify and analyze flows of data to gain an overview of what is happening across the network and to control the amount of bandwidth services are using. Flow identification also enables intelligent flow distribution, where frames are distributed to up to 32 CPU cores for massively parallel processing.
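Flow identification and distribution can be sketched as hashing a canonical 5-tuple to a core index. The addresses, ports and hash choice below are illustrative assumptions; real adapters compute an equivalent hash in hardware per frame:

```python
import hashlib

def flow_key(src_ip, dst_ip, proto, sport, dport):
    """Canonical 5-tuple: sorting the endpoints maps both directions
    of a conversation onto the same flow key."""
    a, b = sorted([(src_ip, sport), (dst_ip, dport)])
    return (a, b, proto)

def assign_core(key, n_cores=32):
    """Hash-based distribution of flows across up to 32 worker cores
    (sketch; the choice of hash function here is arbitrary)."""
    digest = hashlib.sha256(repr(key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_cores

# Both directions of one hypothetical TCP connection:
fwd = flow_key("10.0.0.1", "10.0.0.2", 6, 40000, 443)
rev = flow_key("10.0.0.2", "10.0.0.1", 6, 443, 40000)
```

Because both directions hash to the same core, each worker sees complete conversations and no cross-core state sharing is needed.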

Packet Capture Acceleration

High-speed solutions must provide guaranteed delivery of real-time data with information that allows quick and easy analysis. What will distinguish these solutions is their ability to accelerate analysis applications. One of the main challenges in analyzing real-time data in high-speed networks is the sheer volume of data, so reducing the amount of data to be analyzed ensures that applications are not overwhelmed and process only the frames that matter. This reduction can be accomplished through features such as frame and flow filtering, deduplication and slicing.
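Two of the named reduction features, deduplication and slicing, can be sketched in a few lines. The window size, slice length and frame contents are illustrative; real implementations typically deduplicate within a configurable time window using hardware hashing:

```python
import hashlib
from collections import deque

class Deduplicator:
    """Drop frames already seen within a sliding window of `window`
    frames (sketch of capture-side deduplication)."""

    def __init__(self, window=1024):
        self.order = deque()
        self.seen = set()
        self.window = window

    def accept(self, frame: bytes) -> bool:
        h = hashlib.sha256(frame).digest()
        if h in self.seen:
            return False                        # duplicate: discard
        if len(self.order) == self.window:
            self.seen.discard(self.order.popleft())
        self.order.append(h)
        self.seen.add(h)
        return True

def slice_frame(frame: bytes, keep: int = 64) -> bytes:
    """Header slicing: forward only the first `keep` bytes to analysis."""
    return frame[:keep]

# Three frames, one of them a duplicate:
dedup = Deduplicator(window=1024)
kept = [f for f in (b"pkt-A", b"pkt-B", b"pkt-A") if dedup.accept(f)]
```

Slicing pairs naturally with classification: once the layer 2-4 headers are known, an application that only needs headers can discard payloads before they ever cross the bus.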

Processing Acceleration

100 Gbps solutions must provide acceleration features that enable appliance vendors to maximize the performance of their analysis applications. These features must off-load data processing that is normally performed by the analysis application. Some examples of off-loading features are: intelligent multi-CPU distribution, cache pre-fetch optimization, coloring, filtering and checksum verification. These free up CPU cycles, allowing more analysis to be performed faster.
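Checksum verification is a concrete example of the per-frame arithmetic worth off-loading. A sketch of the IPv4 header checksum (RFC 791, one's-complement sum of 16-bit words), run against a sample header with the checksum field zeroed:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """IPv4 header checksum: one's-complement sum of 16-bit words.
    Doing this per frame in hardware frees CPU cycles for analysis."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total > 0xFFFF:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Sample 20-byte header with the checksum field (bytes 10-11) zeroed:
hdr = bytes.fromhex("45000073000040004011" + "0000" + "c0a80001c0a800c7")
csum = ipv4_checksum(hdr)                       # 0xB861
# Re-checksumming with the result filled in yields 0: verification passes.
verified = ipv4_checksum(hdr[:10] + struct.pack("!H", csum) + hdr[12:])
```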

Tunneling Support

Tunnels have long been used to transport information reliably and securely across networks that are often outside the sender's control. Tunneling poses challenges for analysis because the data to be analyzed is encapsulated in the tunnel payload and must first be extracted, an extra and costly processing step. By off-loading the recognition of tunnels and the extraction of tunneled data, high-speed solutions can significantly accelerate analysis applications.

This is especially true in mobile networks, where all subscriber Internet traffic passes through one point in the network: the GPRS Tunneling Protocol (GTP) tunnel between the serving and gateway support nodes (SGSN and GGSN). Monitoring this interface is crucial for assuring quality of service. Next-generation solutions will open up this interface, providing visibility and insight into the contents of GTP tunnels. Analysis applications can use this capability to test, secure and optimize mobile networks and services.
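The extraction step being off-loaded amounts to stepping over the GTP-U header to reach the subscriber's inner packet. A minimal sketch (mandatory 8-byte header only, plus the 4 optional bytes when any of the E/S/PN flag bits is set; extension headers are not walked, and the sample PDU is hand-built):

```python
import struct

def gtpu_decap(pdu: bytes):
    """Return (TEID, inner packet) from a GTP-U PDU (minimal sketch)."""
    flags, msg_type, length, teid = struct.unpack_from("!BBHI", pdu, 0)
    offset = 8                  # mandatory header
    if flags & 0x07:            # E, S or PN flag: optional fields present
        offset += 4
    return teid, pdu[offset:]

# Hypothetical G-PDU: flags 0x30 (version 1, protocol type GTP),
# message type 0xFF (G-PDU), TEID 0x1234, carrying a stub inner packet.
inner = b"\x45\x00\x00\x14" + bytes(16)         # stub inner IPv4 packet
pdu = struct.pack("!BBHI", 0x30, 0xFF, len(inner), 0x1234) + inner
teid, extracted = gtpu_decap(pdu)
```

With decapsulation done in hardware, the analysis application receives the inner packet plus the TEID as metadata, and the same classification, filtering and flow distribution described above can then be applied inside the tunnel.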

Faster than the Future Today

As we enter the 100 Gbps era, network equipment manufacturers will need to explore solutions that can help them stay one step ahead of the data growth curve brought on by the explosive growth in mobile data traffic, cloud computing, mobility and big data analysis.

Key considerations for accelerating the network to 100G:

  • Reliable hardware platforms for the development of 100 Gbps analysis products. A 100 Gbps accelerator, for example, can intelligently manage the data that is presented for analysis, providing extensive features for managing the type and amount of data. Slicing and filtering of frames and flows, even within GTP and IP-in-IP tunnels, significantly reduces the amount of data. Look for deduplication features that can be extended in analysis software to ensure that only the right data is being examined.
  • Software suites that provide data-sharing capabilities should also be considered, enabling multiple applications running on the same server to analyze the same data. When combined with intelligent multi-CPU distribution, this allows the right data to be presented to the right analysis application, thus sharing the load. Intelligent features for flow identification, filtering and distribution to up to 32 CPU cores accelerate application performance with extremely low CPU load.
  • PCI-SIG® compliant products that will fit into any commercial off-the-shelf server will allow organizations to focus their development efforts on the application, not the hardware.
  • A common Application Programming Interface (API) that allows applications to be developed once and used with a broad range of accelerators. This allows combinations of different accelerators with different port speeds to be installed in the same server.

Conclusion

New technologies are setting the stage to enable organizations to manage the ever-increasing data loads without compromise. By scaling with increasing connectivity speeds, as well as accelerating network management and security applications, enterprises can stay faster than the future.

About the Author:

Daniel Joseph Barry is VP of Marketing at Napatech and has over 20 years of experience in the IT and telecom industry. Prior to joining Napatech in 2009, Dan Joe was Marketing Director at TPACK, a leading supplier of transport chip solutions to the telecom sector. From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now Ignis Photonyx), following various positions in product development, business development and product management at Ericsson. Dan Joe joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc degree in Electronic Engineering from Trinity College Dublin.




Edited by Adam Brandt