High Availability on System p
Will Live Partition Mobility replace IBM's flagship availability product on the midrange, High Availability Cluster Multi-Processing (HACMP)? The answer is no.
HACMP is certainly not for everybody. While it’s a mature, proven product, even the most experienced system administrators will tell you that configuring and maintaining HACMP clusters isn’t for the faint of heart. While much cheaper than competing products such as VERITAS Cluster Server (also available for AIX and System p), HACMP still comes with a cost, much of which actually lies outside of the licensing and associated software costs. This cost includes the funds necessary to train IT staff in HACMP, probable consulting costs incurred during cluster installation and configuration, and other related maintenance costs.
Further, HACMP is only really necessary in environments that must have continuous availability. When deciding whether to use HACMP, you must weigh the cost of deploying it against the cost of having your systems down for four to eight hours while your hardware is being fixed. Other important considerations include the actual cost of failure to your environment, and which applications must be highly available. The bottom line is, if you can afford the downtime, then you don’t really need HACMP. If your application absolutely can’t afford to be down, then you may not be able to live without it. (We should also note that many applications, including Oracle, also provide availability solutions at the application layer.)
How does HACMP work?
While HACMP can support up to 32 nodes (8 for Linux), the vast majority of configurations are two-node clusters, in which one node functions as the failover node for the primary node. In HACMP lingo, that means one active and one standby node are running, both using the same shared disk. See Figure 1.
This figure illustrates a two-node IBM AIX HACMP environment running Oracle, consisting of an active and a standby server. Mutual takeover configurations, where both nodes run applications and back each other up, aren’t as common, though they certainly also work well. When configuring your cluster, you must account for and plan your applications, cluster topology, network connectivity, shared storage devices, shared LVM components, resource groups, cluster event processing and, ultimately, the clients themselves.
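To make the active/standby pattern concrete, here is a minimal sketch in Python of the failover logic a two-node cluster performs. This is not HACMP code; the `Node` and `Cluster` names are purely illustrative. In a real cluster, heartbeats travel over redundant networks, and the takeover step involves the standby acquiring the shared volume groups and service IP address before restarting the application.

```python
# Toy illustration of active/standby failover, as in a two-node HACMP
# cluster. All names here are hypothetical, not HACMP APIs.

class Node:
    def __init__(self, name):
        self.name = name
        self.alive = True  # in a real cluster, determined via heartbeats


class Cluster:
    def __init__(self, active, standby):
        self.active = active
        self.standby = standby
        # The resource group bundles the shared disk, service IP and
        # application; exactly one node owns it at a time.
        self.resource_group_owner = active

    def heartbeat(self):
        """Standby monitors the active node; on failure, it takes over."""
        if not self.active.alive:
            # Failover: the standby acquires the resource group.
            self.active, self.standby = self.standby, self.active
            self.resource_group_owner = self.active


cluster = Cluster(Node("nodeA"), Node("nodeB"))
cluster.heartbeat()
print(cluster.resource_group_owner.name)  # nodeA: no failure, no takeover

cluster.active.alive = False              # simulate a crash of the active node
cluster.heartbeat()
print(cluster.resource_group_owner.name)  # nodeB now owns the resource group
```

The key design point the sketch captures is that ownership of the shared resources moves as a unit: clients reconnect to the same service address, unaware that a different physical node is now serving them.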