These workloads include both critical and secondary applications—such as email, collaboration, ERP, document sharing and CRM. While moving workloads to the cloud can lower your costs and make you more agile, it also raises a number of visibility and security concerns.
If these workloads aren't secure, available and resilient, your business will suffer.
How can you take advantage of the cloud while maintaining visibility into your workloads and ensuring that your applications are performing?
The Old Way No Longer Works
The same methods you use to manage your applications on premises won’t work in the cloud.
Traditionally, you would use span ports on switches to capture all of your packets to disk. By analyzing stored flows or packets, you could see how all of your transactions proceeded throughout the network and understand how well (or how poorly) applications performed.
However, capturing packets isn’t always useful or even possible in the cloud. You need a new technique—that’s different from how you manage your data center and branch office data—to gain visibility into your cloud applications.
Four Keys to Measuring and Improving Application Performance in the Cloud
According to the survey, the No. 1 reason enterprises move to a hybrid model is to optimize application performance. However, if you have applications scattered across on-premises and cloud environments, it can be hard to measure their performance.
Here are four keys to building an infrastructure strategy that allows you to deploy applications in the cloud while guaranteeing a high-quality experience:
Develop for the cloud. In the past, network administrators monitored your applications at the network layer. The cloud, by contrast, operates at the application layer, which means developers must now play a key role in this process.
In the cloud, location is a very coarse-grained attribute. That is, you might know only that you're using services somewhere in the eastern United States or Western Europe. Except for specialized services, fine-grained placement isn't possible. In other words, you can't base your designs on assumptions about latency and availability.
Cloud providers offer services that help you work around traditional assumptions and achieve high resiliency. For example, you can situate message queues or other inter-process communications tools between elements in a workload. These will improve overall performance and availability and allow workloads to process greater quantities of information.
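The decoupling idea above can be sketched in plain Python. This is a minimal illustration, not any specific provider's queue service: a queue sits between the component that produces work and the one that consumes it, so a slow or restarting consumer never blocks the producer.

```python
import queue
import threading

# Illustrative sketch: a message queue decouples two elements of a workload.
# In the cloud, the queue would be a managed service and the consumer could
# run on a separate instance; here both run as threads in one process.
work_queue = queue.Queue()
results = []

def producer():
    # Enqueue ten jobs and return immediately; the producer never waits
    # on the consumer.
    for job_id in range(10):
        work_queue.put(job_id)

def consumer():
    # Drain the queue at its own pace, processing each job as it arrives.
    for _ in range(10):
        job_id = work_queue.get()
        results.append(job_id * 2)  # stand-in for real processing
        work_queue.task_done()

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(results)
```

Because the producer only touches the queue, either side can fail, restart or scale out independently—the property that makes queued workloads more resilient and able to absorb larger volumes of work.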
In the cloud, it’s not a matter of “if” something will fail; it’s a matter of “when” it will fail. In fact, the same is also true for on-premises deployments. But because cloud infrastructure resides outside of your organization, and thus outside of your control, failures feel more problematic.
It doesn't have to be this way, though. There are steps you can take to make your applications resilient. Design backwards from potential points of failure. Take advantage of provider features that let you run multiple instances of your workloads across different locations. These steps keep your applications functioning during local outages or when the underlying physical hardware is removed or replaced.
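"Designing backwards from failure" often reduces to a simple pattern: try each replica of a workload in turn and fall back to the next location when one fails. The sketch below is a hedged illustration; the region names and the simulated request function are assumptions for demonstration, not a real provider API.

```python
def call_with_failover(endpoints, request_fn):
    """Try each endpoint in order; return the first success, raise if all fail."""
    last_error = None
    for endpoint in endpoints:
        try:
            return request_fn(endpoint)
        except ConnectionError as err:
            last_error = err  # record the failure and move to the next location
    raise RuntimeError("all endpoints failed") from last_error

# Simulate one failed region and one healthy one (hypothetical names).
def fake_request(endpoint):
    if endpoint == "us-east":
        raise ConnectionError("region outage")
    return f"served by {endpoint}"

result = call_with_failover(["us-east", "eu-west"], fake_request)
print(result)
```

The application keeps serving users even though its primary location is down—the behavior the paragraph above describes for workloads spread across locations.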
Ensure performance across multiple locations. SaaS apps—such as Office 365 and Salesforce.com—move data between your branch offices, data centers and SaaS providers. User experience can vary greatly, depending on where a user is in relation to where the data resides. Performance optimization techniques designed specifically for SaaS applications can ensure high-quality performance across locations and clouds, regardless of where a user might happen to be.
Moving your workloads to the cloud can help you be more agile, cut costs and improve the customer experience. However, if you don’t gain visibility into your cloud applications, you’ll put your organization at risk.
Start experimenting with application performance management (APM) tools. Many offer full visibility into your applications, along with detailed reporting and recommendations. APM tools can also automate your processes, so you can focus on high-value projects instead of babysitting your applications.
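At its core, what an APM agent does is instrument your application code and record per-request latency. The sketch below shows that idea in miniature; real APM tools layer distributed tracing, reporting and alerting on top of it.

```python
import time
from statistics import mean

# Illustrative sketch of application-layer instrumentation: a decorator that
# records each call's latency, similar in spirit to what an APM agent injects.
latencies_ms = []

def timed(fn):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            latencies_ms.append((time.perf_counter() - start) * 1000.0)
    return wrapper

@timed
def handle_request(n):
    # Stand-in for real application work.
    return sum(range(n))

for _ in range(5):
    handle_request(10_000)

print(f"calls={len(latencies_ms)} avg={mean(latencies_ms):.2f} ms")
```

Once latency is recorded per call, you can spot where performance problems exist now—or watch trends to see where they might appear—before applying the optimization techniques discussed above.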
Once you understand where performance problems exist now or might exist in the future, implement optimization techniques that provide a consistent and predictable user experience. After all, happy users are productive users.
Steve Riley is Technical Director in the Office of the CTO at Riverbed Technology. Steve actively works to raise awareness of the technical and business benefits of Riverbed's performance optimization solutions, particularly as they relate to accelerating the enterprise adoption of cloud computing. His specialties include information security, compliance, privacy, and policy. Steve has spoken at hundreds of events around the world, including RSA, SANS, Black Hat Windows, InfoSec US, (ISC)2, SIIA, IANS, TechEd, DevConnections, The Experts Conference, Cloud Expo, Cloud Connect, CloudCamp, and Interop. He is co-author of Protect Your Windows Network, contributed a chapter to Auditing Cloud Computing, has published numerous articles, and conducted technical reviews of several data networking and telecommunications books. Before Steve joined Riverbed, he was the cloud security evangelist at Amazon Web Services and a security consultant and advisor at Microsoft. Steve is a global moderator of Kubuntu Forums, a support community for Ubuntu's KDE-based distribution. Besides lurking in the Internet's dark alleys and secret passages, he enjoys freely sharing his opinions about the intersection of technology and culture. Contact him at firstname.lastname@example.org; check out his occasional writings at http://blog.riverbed.com.