This article originally appeared in the July 2011 issue of infoTECH Spotlight.
With the rampant increase in multimedia devices and applications, the demand for bandwidth is at an all-time high. Now, more than ever, networks need to be operating at peak efficiency to meet the growing demands for network capacity and network intelligence.
Whether they operate mobile, fixed or converged networks, companies are facing unprecedented demand from customers for an improved quality of experience. In a nutshell, network optimization ensures that companies are operating as efficiently and cost-effectively as possible while they prepare to transform to an all-IP next-generation network. The heart and soul of network optimization is simplification, efficiency and automation.
Maximizing that efficiency – first and foremost – is achieved through real-time visibility, according to Patrick Ancipink, vice president of product marketing, service assurance, at CA Technologies.
In an interview with infoTECH Spotlight, Ancipink explains that during his 17-year IT management career, spent almost exclusively on the vendor side, he has seen the convergence of performance metrics into an aggregate picture become pivotal to achieving true network optimization and maximum productivity.
Application performance and delivery has become an increasingly complex undertaking for companies in recent years, owing to the nature of the applications themselves, Ancipink explains.
“The nature of applications has changed. First it was client-server, and then enters the Internet, when the browser became this amazing entry point. And now we’re facing a new accumulation with more complex applications,” he says. On top of that, many applications are running at the same time. “IT is a mainframe – we never really throw anything out.”
CIOs today are not only responsible for the infrastructure of the organization and CAPEX, but also for the OPEX associated with maintaining and scaling the exploding number of applications that must be supported for various internal business processes, according to Umesh Kukreja, director of business services marketing for Alcatel-Lucent.
“As the CIO’s role becomes more strategic in the organization, it is evolving to a role that is more than just cost center of the organization,” says Kukreja. “The explosion of networked applications includes everything from classical applications such as email, web hosting to emerging mobile applications and integrations of social networking applications.”
Of particular concern, he explains, are mission-critical, network-based business applications such as Salesforce.com, HR applications and financial applications including SAP and Oracle. He cites new initiatives emerging in particular vertical markets, such as electronic medical records for the hospital and health care industry.
“Cloud computing architectures are being adopted to virtualize IT infrastructure and to realize the benefits of software, platform and infrastructure as a service. At the same time, real-time voice, multimedia and business-critical data applications are converging on the same unified communications infrastructure, and the increasingly collaborative nature of business is putting pressure on how business applications perform across the Wide Area Network,” Kukreja explains.
Even as enterprises are increasingly reliant on WAN/cloud applications for successful day-to-day operations, many IT departments have little or no visibility of how these applications are performing over the network, he adds. Yet, the complexities of applications today call for IT professionals to understand where performance problems might exist.
“The majority of IT CIOs are under pressure to maximize the value of existing resources and contain costs, even as the lack of application visibility has led to unpredictable and failed projects, and cost overruns. And even though the top issue for a majority of CIOs is achieving consistent end-to-end application performance, most don't know what applications are running on their WANs, making it difficult for them to address the issue,” says Kukreja. “Maintaining visibility of business-critical applications to ensure optimized performance and to detect application issues is often a huge challenge for resource-strapped IT departments.”
Capturing and analyzing data from applications, devices, and the network itself provides an accurate and comprehensive understanding of how well an organization is supporting application delivery.
“In this composite of applications, we look at reusable pieces of an application rather than it being custom created from the beginning,” says CA Technologies’ Ancipink. “The end user is often accessing it through a mobile device….they could be accessing credit card info, or they could be getting a movie ticket – those pull together scores and scores of components. It might be within your firewall – or it might be out in the cloud,” he explains.
Armed with this information, IT organizations can ensure problems are resolved quickly, mitigate risk from planned and unplanned changes, deliver consistent application performance, take measured steps to optimize application delivery, and reduce and avoid costs.
“It’s not enough to just know application code – you have to know server tiers, virtualization, and visibility – outside your firewall, and tie that into the performance of your network,” says Ancipink.
With the migration to virtualization and services in the cloud, no transaction uses infrastructure the same way twice. And with traffic models changing from voice- to data-centric services, it is leading to considerable strains on networks and budgets. Over the past few years, network optimization has been accomplished primarily through appliances that are managed by the enterprise or part of a managed service from a service provider.
“As applications grow and explode, and some of the mission-critical ones move to the cloud, there is an increasing trend for enterprises to explore how end-to-end application optimization can be outsourced to network service providers,” says Kukreja.
He cites recent market research conducted by Alcatel-Lucent in conjunction with Nemertes, which shows a preference for network-based application monitoring and optimization by CIOs of enterprises (see chart).
Alcatel-Lucent recently rolled out its Wireless Network Optimization service, which is designed to help service providers continually upgrade and optimize their networks to keep download times short and broadband coverage wide without sacrificing call quality or reliability.
The new network optimization solution includes tool sets such as RF design audits, remote network monitoring, a Long Term Evolution (LTE) and TIA services suite, and reverse engineering to improve network capacity and utilization and to increase quality of experience for mobile data subscribers.
Efficiency as a Market Differentiator
Network optimization is also a key market differentiator, since it greatly impacts both network performance and customer experience. New cloud services, multi-screen experiences and multimedia applications that enhance your customers’ experience are also driving network optimization as a key priority for CIOs.
As end user expectations evolve, bandwidth demands increase and Web 2.0 innovations change the way the world communicates, service providers are being forced to adapt their business to address a number of new challenges: growing revenues, introducing new services quickly, managing growth in capacity, reducing operational costs, supporting new business models and improving eco-sustainability.
Alcatel-Lucent’s High Leverage Network makes that possible through a converged, scalable and efficient all-IP architecture, enabling a fundamental shift from keeping value in the network to extracting it from the network, and allowing operators to transform their business to take advantage of emerging opportunities.
Traffic patterns from one application to another have to be assessed in real time, says CA Technologies’ Ancipink. “You have to ask: Can they ride along the same link? Does Monday morning traffic look the same as it does on Friday afternoon?”
The key drivers of network optimization are multimedia applications, smartphones taking the Internet mobile – and a trend among enterprises to a reliance on cloud-based applications.
“Unfortunately you still need more bandwidth, but you can’t just get bigger servers and throw infrastructure at the problem. Sometimes it’s a network configuration problem, sometimes it is something else,” Ancipink says.
Adding hardware is by far the most inefficient way to optimize a network; the answer instead lies in making existing networks more efficient. A growing arsenal of equipment and techniques from a variety of vendors aims to optimize different parts of the network and base station, according to ABI Research.
Ancipink emphasizes the significance of full visibility into the network in real time, rather than piecemeal methods that don’t address the entire problem.
“Often times we’ll be talking to a customer – and they are using packet sniffing, where they are watching one portion of the problem. The limitation there is that the problem may never occur again the way it did, because of all that variability,” he explains. “It’s like trying to capture lightning in a bottle. What you need is a baseline of performance for key applications; when there is some meaningful deviation from normal, you inspect packet data and provide links to actual packets,” Ancipink adds. “The re-creation of problems is an old-school method that is becoming more and more obsolete.”
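The baseline-and-deviation approach Ancipink describes can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: a rolling window of response-time samples establishes "normal," and only a statistically meaningful deviation would trigger deeper packet inspection. The class name, window size and threshold are all hypothetical choices.

```python
from collections import deque

class ResponseTimeBaseline:
    """Rolling baseline of an application's response times; flags
    meaningful deviations so packet capture is triggered only when
    behavior departs from normal (a hypothetical sketch)."""

    def __init__(self, window=500, threshold_sigma=3.0):
        self.samples = deque(maxlen=window)       # recent history only
        self.threshold_sigma = threshold_sigma    # "meaningful deviation"

    def observe(self, response_ms):
        """Record a sample; return True if it deviates from the baseline."""
        deviated = False
        if len(self.samples) >= 30:  # need enough history to judge normal
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = var ** 0.5 or 1e-9  # guard against zero variance
            deviated = abs(response_ms - mean) / std > self.threshold_sigma
        self.samples.append(response_ms)
        return deviated

baseline = ResponseTimeBaseline()
for ms in [100, 102, 98, 101, 99] * 10:   # normal traffic: no alerts
    assert not baseline.observe(ms)
assert baseline.observe(500)              # a large spike stands out
```

In a real deployment the `deviated` branch would kick off packet capture and link the captured packets to the alert, rather than relying on reproducing the problem later.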
CA Technologies works with its enterprise customers – approximately 4,000 of them – by looking at their critical applications, drilling down to performance problems and looking at upgrades as a whole, including not just more memory and more bandwidth but also application delivery.
“Modern applications are not responding the way older applications once did,” Ancipink says. “Let’s say the application they wanted to optimize broke? Going in with full visibility is key to making the right investment.”
CIOs have addressed the CAPEX issue with consolidating the servers that support the multitude of applications to a few central locations, says Alcatel-Lucent’s Kukreja.
“The first step in resolving the complexities beyond the implementation of the applications is actually having the visibility to the dashboard on how the applications are performing across the corporate network that may span many locations, big and small,” he explains. “In the past few years, IT teams have tried to get visibility to the applications themselves via probes and various appliances to monitor the traffic. As the number of applications explode, this is an opportunity for CIOs to explore partnerships with network service providers to get visibility to the application performance across the wide area network. Many are starting to leverage extensions to VPN services that allow enterprises to monitor and address application availability and performance via a custom portal.”
Ancipink details a kind of war-room scenario that occurs in many organizations, where everyone on the IT side is charged with going off in their own direction to see what is causing inefficiencies – a long, expensive process.
“It’s just troubleshooting, it’s just firefighting,” Ancipink says. “Instead of spray-and-pray guessing, what are often overlooked are the cost of lost productivity and the cost of downtime, which are directly related to employee productivity.”
Alcatel-Lucent’s Kukreja says it is evident that the sheer number of applications being used in the enterprise is huge, that some of them are mission-critical, and that they often vie for the same network resources.
“These applications often compete for network resources with applications that are not business-oriented, such as Internet access, YouTube, Slingbox, Pandora and social networking applications,” he explains. “So in this case, analyzing data from applications, devices and networks helps CIOs to assess the true nature of network resource consumption. Next, once the characterization of the applications has been established, CIOs are able to develop strategies for assigning priorities to the application traffic over the wide area network.”
The traditional method of assigning the lowest priority to data applications now needs to accommodate the fact that business applications may often also use Internet access bandwidth, and therefore may need different treatment, he says.
“Finally, a comprehensive understanding of the applications can also help develop the right levels of budget for capital, operational and human resources to meet the strategic initiatives of the corporate organizations,” adds Kukreja.
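The prioritization strategy Kukreja outlines – characterize each application, then assign it a traffic class – can be illustrated with a small sketch. The application names and class assignments below are hypothetical examples; the DSCP values follow common RFC 4594 guidance for DiffServ marking.

```python
# Hypothetical application inventory, as it might look after traffic
# analysis has characterized each application on the WAN.
APP_CLASSES = {
    "sap": "business-critical",
    "salesforce": "business-critical",
    "voip": "real-time",
    "video-conference": "real-time",
    "email": "standard",
    "youtube": "best-effort",
    "pandora": "best-effort",
}

# DSCP values commonly associated with each class (per RFC 4594 guidance).
CLASS_TO_DSCP = {
    "real-time": 46,          # Expedited Forwarding (EF)
    "business-critical": 26,  # Assured Forwarding (AF31)
    "standard": 0,            # default forwarding
    "best-effort": 8,         # CS1 / lower-effort
}

def dscp_for(app_name: str) -> int:
    """Return the DSCP marking for an application; unknown apps get default."""
    cls = APP_CLASSES.get(app_name.lower(), "standard")
    return CLASS_TO_DSCP[cls]

assert dscp_for("SAP") == 26       # business app outranks entertainment
assert dscp_for("youtube") == 8    # recreational traffic marked lower-effort
assert dscp_for("unknown-app") == 0
```

The point of the sketch is the separation Kukreja describes: characterization first (the inventory), policy second (the marking), so that a business application running over Internet bandwidth is not lumped in with YouTube.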
Ancipink differentiates network optimization across three maturity levels, from the very aggressive companies that crank out mobile apps all the time to those who work on a smaller scale and take a less aggressive approach.
“There are those who are on an accelerated pace, who have a slow infrastructure view – they’ve gone past what the norm is. They started proactively looking for things. This is the ahead of the curve group who are more likely to use cloud services.”
Then there is what he calls the “middle group” – those who are using composite applications and looking to upgrade from the last generation of technology. “They look at their processes and their cultures – or moving toward better application delivery. They are on a natural progression.”
Finally, there are what Ancipink calls the “laggards – the most penalized group.” These are the companies that are looking when there is a highly visible application crash and immediately management takes action to change. “There are symptoms before the major crash. They need to hedge their bets and move from one provider to another.”
Regardless of which category you fall into, the punch line to all of this rests with what IT is responsible for – and that, quite simply, is providing a service.
“When you move to cloud out there, and you have infrastructure as a service, software as a service, or any kind of service in the cloud, what is IT responsible for? The end user experience, making sure the transaction is complete. We have to make that the center of gravity,” Ancipink says.
This lack of real-time visibility is precisely the challenge for CIOs, who cannot anticipate savings in an environment where poor visibility into how applications perform over the WAN requires a large investment in DPI appliances at branches, regional sites and data centers before any savings can be realized.
“The ability to buy application visibility and control over the WAN-as-a-service provides a clear view of problem areas and anticipated savings while significantly raising the ROI of cost-saving IT projects,” says Kukreja.
Automating the monitoring, management and provisioning of common tasks can greatly reduce the additional workload caused by virtual environments, according to a Force10 Networks white paper, “Open Network Automation is Critical to the Virtual Data Center,” written by Zeus Kerravala, senior vice president of Yankee Group.
Automation can improve data center operations in many ways, the white paper says, including instantly adjusting to changes in data flows without manual reconfiguration, optimizing application performance, delivering an “always on” data center fabric, and providing on-demand resource allocation through automated network reconfiguration.
Without automation, data center managers have to manually re-provision and optimize server, storage and network resources every time the smallest change in the environment occurs. New computing architectures significantly complicate many activities, such as application and server provisioning, and change management.
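The contrast between manual re-provisioning and automation can be made concrete with a small rule-driven sketch. This is an illustration only – the link names, thresholds and action labels are all hypothetical, and a real system would execute the planned actions through a vendor's provisioning API rather than return them as a list.

```python
def plan_actions(link_utilization: dict, high=0.80, low=0.20):
    """Map observed link utilization (0..1) to provisioning actions,
    the way an automated policy would, instead of a manual change ticket."""
    actions = []
    for link, util in sorted(link_utilization.items()):
        if util > high:
            actions.append((link, "add-capacity"))     # congested link
        elif util < low:
            actions.append((link, "reclaim-capacity")) # stranded capacity
        else:
            actions.append((link, "no-change"))
    return actions

# Hypothetical snapshot of utilization across three sites.
observed = {"branch-nyc": 0.91, "branch-sfo": 0.45, "dc-core": 0.12}
planned = plan_actions(observed)
assert planned == [
    ("branch-nyc", "add-capacity"),
    ("branch-sfo", "no-change"),
    ("dc-core", "reclaim-capacity"),
]
```

Run on every monitoring interval, a loop like this is what lets the environment absorb "the smallest change" the white paper mentions without a human re-provisioning servers, storage and network by hand.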
With this in mind, CA Technologies stresses the significant role automation plays, encouraging companies to gather relevant information not just from the performance side but also from the project management side – and to link them to the overall optimization picture.
In October 2010, CA Technologies debuted its Automation Suite, which takes a “next-generation” approach because it incorporates virtualization and cloud as a critical technology backdrop.
The CA Automation Suite addresses automation requirements on three levels:
Business Service Automation – includes private and hybrid cloud services, process and workload automation, app discovery, configuration and compliance – including visibility and accountability for business service planners and executives.
Virtualization Management – targeted at life cycle optimization, change management and capacity management and security.
Infrastructure Automation – for cross-domain automation and management of the integrated physical and virtual infrastructure.
“The taking on of automation is necessary given all the complexity today,” says CA Technologies’ Ancipink. “More CIOs are embracing automation, but they need to have confidence to make automation really work. In the last year, we are starting to see a breakdown more … there’s definitely less digging in of the heels and less of the attitude that automation is too much of a culture change.”
In fact, Ancipink says that the economic recession of 2008-2009 forced CIOs into automation, whether they were truly on board or not.
“IT professionals are embracing automation more than they did three to four years ago. IT budgets were slashed two years ago, and they still haven’t come back. The economy has been a huge part in motivating this kind of behavior,” he says.
Return on Investment
With optimization, companies can expect OPEX reductions of 5 percent to 10 percent per year, improved QoE, improved capacity management and bandwidth scaling, according to Alcatel-Lucent’s figures.
Alcatel-Lucent undertook an RF optimization project to help an operator increase the capacity and performance of its network and improve customer satisfaction. The project resulted in cost savings of over $14 million over five years, with an ROI of 340 percent. The operator said they saw significant savings beginning in the first year.
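The two figures above imply a project cost that the article does not state. Assuming the common definition ROI = (savings − cost) / cost, the implied cost can be backed out as a quick sanity check; this is arithmetic on the reported numbers only, not an Alcatel-Lucent figure.

```python
# Back-of-the-envelope check of the reported figures, assuming
# ROI = (savings - cost) / cost. The project cost is NOT stated in
# the article; this only derives what the two numbers would imply.
savings = 14_000_000   # five-year savings reported
roi = 3.40             # 340 percent, expressed as a ratio

# savings / cost = roi + 1  =>  cost = savings / (roi + 1)
implied_cost = savings / (roi + 1)   # roughly $3.18 million

assert int(implied_cost) == 3_181_818
```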
Optimizing Application Performance
Alcatel-Lucent typically advises enterprise customers looking to optimize application performance over their networks to follow these steps:
Identify issues via an end-to-end application monitoring service from their VPN provider. This provides instant access to reporting and analysis, per-application/flow performance and bandwidth consumption, and per-application thresholds and a “Green Wall” to test against assumed internal SLAs.
Use the same service to mitigate problems via application-level policies and control. Techniques can include application traffic shaping, QoS remarking, and protocol optimizations within the VPN network. Other popular techniques include per-application time-of-day restrictions (e.g., YouTube) and bandwidth on demand provided to each application as required (e.g., critical video conferences).
Evaluate more comprehensive solutions for targeted sites. These can include WAN optimization controllers and virtualized appliances, specialized application/content network services that include caching and optimal routing through multiple ISPs, and hosted application SLAs provided through integrated application hosting and VPN services offered by service providers.
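The policy techniques in step two can be sketched as a small rule table. Everything here is hypothetical – the policy fields, the apps and the numbers – since real VPN services expose these controls through a provider portal, but it shows the shape of per-application time-of-day restrictions and bandwidth on demand.

```python
# Hypothetical per-application policies, in the spirit of step two above:
# a time-of-day restriction for recreational traffic and a bandwidth-on-
# demand boost for a critical application.
POLICIES = {
    "youtube": {"blocked_hours": range(9, 17)},    # restricted in business hours
    "video-conference": {"on_demand_kbps": 4000},  # boosted when active
}

def allowed(app: str, hour: int) -> bool:
    """Is this application permitted at this hour of day (0-23)?"""
    blocked = POLICIES.get(app, {}).get("blocked_hours", ())
    return hour not in blocked

def bandwidth_kbps(app: str, baseline_kbps: int = 512) -> int:
    """Baseline allocation, raised on demand for flagged applications."""
    return POLICIES.get(app, {}).get("on_demand_kbps", baseline_kbps)

assert not allowed("youtube", 10)              # blocked during business hours
assert allowed("youtube", 20)                  # fine in the evening
assert bandwidth_kbps("video-conference") == 4000
assert bandwidth_kbps("email") == 512          # unlisted apps get the baseline
```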