A Comprehensive Approach to Application Management

In this post, I explore the discipline of application management: an IT management process, supported by tools, focused on the availability, performance and manageability of applications. Applications whose files, databases and other resources are distributed over multiple OS images can be a challenge to manage. When some part of the application suddenly stops working, what's wrong?

Applications Management as Part of a Comprehensive Approach

The diagram in Figure 1 represents a model of a complete, closed-ended approach to managing an application. It's used here to explain some key aspects of application management terminology and the basic approach.

Figure 1: Applications management as part of a comprehensive approach
Software Monitoring

The model shown in Figure 1 includes the dimensions of software monitoring and web application sampling—two different methods that work together to help monitor and manage an application. Software monitoring (left side of the figure) has three parts that build upon one another:

  1. Basic monitoring and control
  2. Additional component monitoring
  3. Application instrumentation

Parts 1 and 2 are commonly used today, but the third, application instrumentation, is often neglected. Let me explain.

The focus of basic monitoring and control is to detect problems on the server hosting the application component. Basic monitoring can be active, probing system resources periodically to confirm that they are available and responding. It can also be passive, simply waiting for specific error messages that indicate a problem and then taking a recovery action.
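To make the active/passive distinction concrete, here is a minimal sketch of a basic monitor in Python. The host, port and log path are illustrative assumptions rather than any particular product's configuration, and a real monitor would report to a management console instead of printing to the screen.

```python
"""Minimal sketch of basic monitoring: an active probe plus a passive log scan.
The server name, port and log path are hypothetical examples."""
import socket
import time

APP_HOST, APP_PORT = "app01.example.com", 8080   # hypothetical application server
APP_LOG = "/var/log/myapp/app.log"               # hypothetical application log

def active_probe() -> bool:
    """Actively check that the application port accepts connections."""
    try:
        with socket.create_connection((APP_HOST, APP_PORT), timeout=5):
            return True
    except OSError:
        return False

def passive_scan() -> list[str]:
    """Passively collect error messages already written to the application log."""
    try:
        with open(APP_LOG) as log:
            return [line.rstrip() for line in log if "ERROR" in line]
    except OSError:
        return []  # an unreadable log is itself worth flagging in a real monitor

if __name__ == "__main__":
    while True:
        if not active_probe():
            print("active probe failed: application port not responding")  # recovery action goes here
        for err in passive_scan():
            print(f"passive monitor saw: {err}")                            # recovery action goes here
        time.sleep(60)  # probe once a minute
```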

Although basic monitoring seems straightforward, some IT departments don't deploy basic server monitors because they lack the skills or resources to support monitoring and the software and hardware infrastructure it requires. More complete monitoring adds component monitoring, focused mainly on the middleware used by the application (e.g., databases, message queue managers). Comprehensive monitoring then adds application instrumentation, such as monitors that look at specific application resources or performance tools like Application Response Measurement. This is just one half of the overall approach to managing an application.
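As a rough illustration of application instrumentation, the sketch below times a business transaction in the spirit of Application Response Measurement. The transaction name, the stand-in workload and the two-second threshold are assumptions made for the example; a real deployment would use an ARM agent or monitoring library rather than a homemade decorator.

```python
"""Instrumentation sketch: time each business transaction and flag slow ones.
Names and thresholds are illustrative assumptions, not a real ARM implementation."""
import functools
import time

SLOW_THRESHOLD_SECONDS = 2.0  # assumed service-level target for this example

def instrumented(transaction_name):
    """Decorator that records elapsed time around a named transaction."""
    def wrapper(func):
        @functools.wraps(func)
        def timed(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                status = "SLOW" if elapsed > SLOW_THRESHOLD_SECONDS else "OK"
                # A real agent would forward this to a monitoring backend.
                print(f"{transaction_name}: {elapsed:.3f}s [{status}]")
        return timed
    return wrapper

@instrumented("place_order")
def place_order(order_id: str) -> None:
    """Stand-in for the real application work."""
    time.sleep(0.1)

if __name__ == "__main__":
    place_order("A-1001")
```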

Web Application Sampling

The model also includes web application sampling (right side of Figure 1) that has four parts that build upon one another:

  4. Manual execution of application tests
  5. Automated execution of application tests
  6. Integration with problem management
  7. Full systems management integration with other perspectives like change and performance

Manual execution of application tests involves having people (human application monitors) periodically use the application being monitored to make sure its functions are running properly. This might include ordering products and later returning them through a different function of the application.

Why would a company use people to do this and not use software? In some countries, labor is cheaper than software so people are used. Some believe that people are more flexible than software as they can change their processing on a moment’s notice—just give them a different checklist. Some contracts with end users may specify the use of human monitors, as confidence in the ability of software to detect problems may not be high in the user community.

Automated execution of application tests is used instead of having humans do the work. This approach can sometimes leverage test artifacts previously used for a different purpose. For example, test scripts designed for stress tests can be run in a kind of slow motion (e.g., every 15 minutes) to detect failures in application functionality. Integration with problem management means automatically generating problem records that require human intervention when application sampling fails. This is a kind of "management by exception" approach.
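Here is a hedged sketch of how parts 5 and 6 might fit together: a synthetic transaction runs every 15 minutes and, on failure, posts a problem record so that humans are engaged only by exception. Both URLs are hypothetical placeholders, not real endpoints.

```python
"""Sketch of automated application sampling with problem-management integration.
The sampling URL and ticketing API are hypothetical examples."""
import time
import urllib.error
import urllib.request

SAMPLE_URL = "https://shop.example.com/order/health"   # hypothetical test transaction
TICKET_API = "https://itsm.example.com/api/problems"   # hypothetical problem-management API
INTERVAL_SECONDS = 15 * 60                             # run the sampling test every 15 minutes

def run_application_test() -> bool:
    """Execute one synthetic transaction and report pass/fail."""
    try:
        with urllib.request.urlopen(SAMPLE_URL, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def open_problem_record(summary: str) -> None:
    """Raise a problem record so a human is engaged only on failure."""
    req = urllib.request.Request(TICKET_API, data=summary.encode("utf-8"), method="POST")
    try:
        urllib.request.urlopen(req, timeout=10)
    except urllib.error.URLError:
        print("could not reach problem management; escalate manually")

if __name__ == "__main__":
    while True:
        if not run_application_test():
            open_problem_record("Order-entry sampling transaction failed")
        time.sleep(INTERVAL_SECONDS)
```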

Finally, full systems management integration occurs when application sampling is united with other perspectives like performance, workload, configuration, operations, network, storage and security management. The degree of integration required is dictated by the IT organization and the level it finds necessary to manage its applications.

Does This Model Work?  

Models like this one are an effort to keep an application running using sampling as the main tactic. The results are only as good as the sample: if the basic monitors or automated monitoring scripts aren't carefully chosen, the results can be unpredictable.

This model is focused on the running application, but what else can be done to make the application manageable at other phases of its development? That is an interesting question that isn't addressed by this model but can be handled with some planning and basic activities. For example, during application design, key application resources or interfaces should be identified so that monitors (application instrumentation) can be created as the application is developed, allowing the management solution to be tested along with the application itself. How do you know that the monitors actually work unless you test them alongside the application?
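As one example of planning manageability into the design, the sketch below exposes a self-test over the key resources an application depends on. The SQLite database and upstream payment interface are assumptions made purely for illustration; the point is that because the hook exists from the start, the monitors that call it can be tested alongside the application.

```python
"""Sketch of a self-test hook identified at design time.
The database file and upstream interface are hypothetical dependencies."""
import socket
import sqlite3

DB_PATH = "orders.db"                         # hypothetical application database
UPSTREAM = ("payments.example.com", 443)      # hypothetical upstream interface

def check_database() -> bool:
    """Verify the application's database answers a trivial query."""
    try:
        with sqlite3.connect(DB_PATH, timeout=5) as conn:
            conn.execute("SELECT 1")
        return True
    except sqlite3.Error:
        return False

def check_upstream() -> bool:
    """Verify the upstream interface the application depends on accepts connections."""
    try:
        with socket.create_connection(UPSTREAM, timeout=5):
            return True
    except OSError:
        return False

def self_test() -> dict:
    """Expose the checks identified at design time so monitors can call them."""
    return {"database": check_database(), "upstream": check_upstream()}

if __name__ == "__main__":
    print(self_test())
```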

Next week, I’ll keep going with the management theme with a focus on cloud management. What is different with cloud as compared to system, network and application management?