But which flavor is right for you?
So, your organization has taken the plunge and is leveraging a SaaS solution for email, collaboration, document storage, or another critical business app – or maybe all of the above. Now you have the responsibility of migrating users to the new cloud-based solution(s) and then making sure those users don’t experience application outages or poor performance.
If you’ve been reading our previous blog articles, you probably already know that your legacy tools aren’t going to be much help. At the same time, most APM solutions aren’t for you either; they’re built for the DevOps teams that develop and operate apps like the SaaS apps you are consuming.
Approaches to Application Monitoring
It’s clear you need to monitor these cloud-based apps. The question is how. Generally speaking, there are three common approaches to application monitoring:
- Log, SNMP, or management API monitoring
- Passive / Real-user monitoring
- Active / Synthetic monitoring
You are probably already familiar with log, SNMP, or management API monitoring. This is the most common method for monitoring the health and availability of the applications you’ve been operating on your own servers. There are a lot of tools, such as Microsoft System Center Operations Manager (SCOM) and Splunk, that can consume, visualize, and alert on these data sources. Unfortunately, you don’t have access to log files or SNMP messages from SaaS applications. They are completely black box, so this approach won’t work.
How about passive or real user monitoring? These solutions work by inserting something between the users and the application code. Real User Monitoring (RUM) solutions do this through additional code inserted into the application to capture user experience and performance data. Many “DevOps”-focused APM tools use this approach. It’s great if you are an application developer/operator because it gives you very fine-grained user performance data, down to the code-segment level. But as the app buyer (the “business operations” team), you don’t have access to the code or web servers to do this, and even if you did, you don’t need that level of granularity for a specific application tier; you need overall performance across multiple tiers, networks, and supporting infrastructure.
Other passive solutions often leverage some form of network tapping appliance (a.k.a. a “wall wart”) or software deployed to user desktops that intercepts network application transactions, interrogates them, and then surfaces data about the application network traffic. These solutions can be challenging to deploy and maintain (who wants to add yet more network gear or desktop apps to manage?), and they often don’t provide the application context awareness needed to pinpoint problems caused by downstream network or infrastructure faults.
Why Synthetic Monitoring for SaaS?
As a SaaS app admin, you need to find the most efficient and effective way to answer these three questions:
- Is my app online and accessible?
- Are core app functions performing as expected?
- If not, where is the problem (the app provider, the ISP networks, or local)?
You don’t need to know if a specific library of application code is 10 msec slower than last week, and you don’t want to take on a lot of added management burden for more monitoring infrastructure.
Synthetic monitoring strikes a good balance between effectiveness and efficiency. With synthetic monitoring, you run software agents that interact with the SaaS app the same way your users do. You can deploy these agents to as few or as many locations as you want, and the tests can run at whatever frequency you choose.
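To make the idea concrete, here is a minimal sketch of what one synthetic probe cycle looks like. The function and threshold names below are illustrative assumptions, not any vendor’s API: time a single scripted transaction, then classify the result against a latency budget.

```python
# Minimal synthetic-probe sketch (illustrative; names and thresholds are
# assumptions, not any particular product's API). A real probe would log in
# and exercise end-to-end app transactions; here we just time one step.
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProbeResult:
    ok: bool            # did the transaction complete successfully?
    latency_ms: float   # how long the transaction took
    status: str         # "healthy", "degraded", or "down"

def run_probe(transaction: Callable[[], bool],
              degraded_ms: float = 2000.0) -> ProbeResult:
    """Execute one synthetic transaction and classify the outcome."""
    start = time.perf_counter()
    try:
        ok = transaction()
    except Exception:
        ok = False  # any error counts as a failed transaction
    latency_ms = (time.perf_counter() - start) * 1000.0
    if not ok:
        status = "down"
    elif latency_ms > degraded_ms:
        status = "degraded"
    else:
        status = "healthy"
    return ProbeResult(ok, latency_ms, status)

# Stand-in transaction; a real one might load your SaaS app's login page.
result = run_probe(lambda: True)
print(result.status)
```

In practice you would run probes like this on a schedule from several locations and alert when consecutive results come back “degraded” or “down”.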
However, not all synthetic monitoring solutions are the same, and it’s important to pick a model that is right for you. Here are three approaches to synthetic monitoring, along with the trade-offs to consider for each:
Roll your own
Organizations that do a lot of scripting and develop their own applications sometimes choose to build their own synthetic testing tools. This can make sense for internally built applications or for teams that already have a robust testing framework in place. Building your own gives you a high degree of control over the specific scenarios and configurations you test. But that flexibility comes at considerable cost in complexity and maintenance: researching, coding, debugging, and updating your own synthetic scripts for third-party apps isn’t viable for most organizations and doesn’t scale well as you add more apps to test.
Train and maintain
There are a number of synthetic testing tools that let you “record” a user application session and then replay that recording periodically to test that execution path. This requires less programming skill than a home-grown implementation yet still lets you test very specific user scenarios. However, these recordings are not maintenance-free. SaaS app user interfaces and APIs change frequently, so even with these tools you’re signing up for an ongoing maintenance commitment.
There is also a difference between tools designed for application QA or diagnostics and those designed to run continuously for monitoring. A tool that requires you to be there to run it and look at the results isn’t going to help you get ahead of issues before they impact your users. In addition, while tools designed to assist with application debugging may provide some form of waterfall display of each segment of the transaction, this is often not enough information to pinpoint problems stemming from network or infrastructure issues. A problem at the provider will present itself much like a problem in the ISP or your local network, so you may need to pull in several other tools or teams to find the source, all while your users are complaining.
Ideally you want a solution that a) is pre-trained for the apps you use, b) is maintained for you, c) is designed for continuous monitoring and alerting, and d) augments synthetic transaction data with other internal and external reference data to help you “triangulate” and pinpoint problem root causes.
Those are some of the core requirements we had in mind when building CloudReady. We build and maintain application-specific sensors that only require you to configure the credentials, run frequency, and locations where you want them to run. In minutes you can fully configure and deploy sensors across hundreds of locations. Once deployed, you never have to worry about updating them for the latest version of the apps they monitor. We do that for you, and it happens seamlessly and continuously.
Additionally, CloudReady sensors don’t just record application transaction information; they simultaneously collect end-to-end network health metrics from your network to the SaaS provider’s network, as well as application and network performance trends measured at other customer locations (a.k.a. “the crowd”). This combination of local + crowd, application + network data is correlated and visualized in a way that helps you find and fix problems fast, regardless of whether they are inside or outside your network – so you can spend time on initiatives that are truly meaningful to your business instead of fiddling with a bunch of synthetic testing scripts.
Want to see how easy it can be to cover your SaaS? It only takes 5 minutes to get started.