Virtualization and High-Availability Strategic Planning

Virtualization is a critical part of today’s IT infrastructures. Globalization is radically changing how datacenters are managed: it raises users’ expectations for resource availability even as the quantity of data that must be stored explodes. Additionally, for many organizations, the service-level requirements that support supply chains and customer relationships are rising at an unparalleled rate. How is a company to meet these increasing demands?

Some organizations add new servers to bridge gaps in their resources. But each additional server adds complexity to the overall infrastructure, shrinking the window of time those organizations are protected from interruption (Figure 1). And even as companies respond to workload increases, governmental and industry regulations mandate that they develop comprehensive plans to protect their infrastructures from calamities and preserve critical transactions longer.

Meeting these demands requires a technology strategy that permits the computing infrastructure to expand continually while consolidating the complexity of its operations into more manageable and secure footprints, all while meeting high-availability (HA) requirements.

How Virtualization Technologies Assist Business-Continuity Planning

Virtualization combined with HA products can significantly help organizations achieve business-continuity planning (BCP) goals. HA solutions provide precisely the resilient, scalable and cost-effective means of fulfilling these inherently complex strategic requirements. Instead of adding hardware to meet demand, companies can configure existing infrastructure to deliver increased capacity and availability while protecting information in a defined manner.

Consider an organization running multiple Linux applications and Oracle file servers in conjunction with an array of i5/OS applications running DB2 databases.

The company’s BCP may require a backup schedule that achieves a targeted recovery time objective (RTO) for critical data within minutes or hours of an interruption. Yet with users, applications and databases scattered across an ever-expanding array of application and file servers, coordinating those backups may pull resources out of service for increasingly long periods.

In conflict with those goals and physical realities, the BCP may also require that these same computing resources remain available to meet the access requirements of users across diverse time zones. Additionally, it may dictate that the recovery point objective (RPO) be made more current, ensuring that a disruption of service doesn’t impair the accuracy of transactions.
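The tension between RTO and RPO can be made concrete with a little arithmetic. The following Python sketch, using a purely hypothetical nightly-backup scenario (the times, objectives and function names are illustrative assumptions, not drawn from the article), shows how the RPO exposure grows with the time since the last backup, while meeting the RTO depends only on how quickly a restore completes:

```python
from datetime import datetime, timedelta

def rpo_exposure(last_backup: datetime, failure_time: datetime) -> timedelta:
    """Worst-case data loss window: time elapsed since the last good backup."""
    return failure_time - last_backup

def meets_rto(restore_duration: timedelta, rto: timedelta) -> bool:
    """True if the restore completes within the recovery time objective."""
    return restore_duration <= rto

# Hypothetical scenario: nightly backup at 02:00, outage at 14:30 the same day.
last_backup = datetime(2019, 5, 1, 2, 0)
failure = datetime(2019, 5, 1, 14, 30)

# Up to 12.5 hours of transactions are at risk -- the effective RPO.
loss_window = rpo_exposure(last_backup, failure)
print(loss_window)  # 12:30:00

# A 4-hour RTO is met only if the restore itself takes no longer than that.
print(meets_rto(timedelta(hours=3), timedelta(hours=4)))  # True
```

The example illustrates why tightening the RPO (more frequent backups or replication) and tightening the RTO (faster failover) are separate engineering problems, even though a BCP typically constrains both at once.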

In other words, the critical computing infrastructure must be available for longer periods with fewer unplanned and scheduled disruptions (Figure 2).

Thomas M. Stockwell is an independent analyst and writer. He may be reached at
