
Achieving a Resilient Data Center

Former IBM exec Bob Angell shares his insights on improving data-center resiliency


I’ve witnessed the need for resilient data centers firsthand while working as a cross-platform systems-integration expert for the past 25 years, and I’ve seen the need for resiliency in my personal life as well, when a friend recently had his identity stolen. It’s taken him quite some time to bounce back from that horrible experience.

 

IBM understands the need for business resiliency and continues to innovate so customers can respond quickly to changing needs. IBM and the rest of the technology industry have been abuzz for the past several years about cloud computing, software as a service, virtualization, green computing, resiliency and other data-center initiatives. Many of these developments are designed to deliver high value to the enterprise data center.

Creating a resilient data center is one of the most critical means of minimizing “Maalox moments,” the heartburn and headaches of a data-center outage. Because so many of today’s business initiatives are intertwined, many people confuse resiliency with elasticity, virtualization and other efforts. As in my personal example, resiliency implies there’s some mechanism in place to help adjust to an adverse crisis, condition or environment. How do you, as an individual, best handle and bounce back from adversity? For many, the honest answer is, “It depends.” Some people have nightmares; some develop insecurities that subside with time; and others just bounce back, seemingly without much effort.

A resilient data center can be achieved without the loss of time, money or business when you better understand good data-center techniques, elasticity and virtualization—and how they fit together. When you employ these techniques in your data center, the question of resiliency can be answered without first experiencing a catastrophic failure.

Best Practices

Many data centers today have become too complex, and some ignore best practices. One best practice is actually quite simple: ensure your testing, development and production environments are all at the same levels. These straightforward synchronization efforts are sometimes tossed aside because of budgetary concerns. Cutting corners this way to save a few coins creates a very slippery slope; the side effects can add up to millions of dollars in mistakes, outages and lost revenue. How can you expect a critical business application developed in one environment to work seamlessly in a completely different one when the hardware mix and operating-system levels don’t even match? If your production, testing and development environments are at the appropriate levels, applications that are developed or tested can be moved into production easily. When you ignore data-center best practices, you’re essentially jumping over dollars to pick up pennies.
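To make the synchronization check concrete, here is a minimal sketch in Python. It is an illustration only, not from the article: the environment names, the inventory data and the find_drift helper are hypothetical placeholders for whatever version reports your own tooling produces. It simply flags any component whose level differs between development, test and production.

# Hypothetical inventory: environment -> component -> reported level
inventory = {
    "development": {"os": "7.4", "db": "11.5", "app": "2.3.1"},
    "test":        {"os": "7.4", "db": "11.5", "app": "2.3.1"},
    "production":  {"os": "7.3", "db": "11.1", "app": "2.3.0"},
}

def find_drift(envs):
    """Return components whose reported levels differ across environments."""
    drift = {}
    components = set().union(*(levels.keys() for levels in envs.values()))
    for component in sorted(components):
        seen = {name: levels.get(component, "missing") for name, levels in envs.items()}
        if len(set(seen.values())) > 1:
            drift[component] = seen
    return drift

for component, seen in find_drift(inventory).items():
    print(f"Level mismatch for {component}: {seen}")

Run against real inventory data, a check like this turns “are our environments in sync?” from a guess into a report you can review before an application ever moves toward production.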

Elasticity

Elasticity is defined as how well something stretches or responds to outside stimuli. A rubber band has elasticity: it can stretch and then return to its steady state quickly. In a data center, elasticity can be defined in terms of how your environment responds to unexpected high traffic on your Web site, natural disasters, dramatic business downturns and the like. Any of these events can instantaneously render your data center impotent if it can’t flex when needed. Elasticity helps a piece of hardware or a collection of systems respond to a spike in resource demand. IBM PowerHA (www.ibm.com/systems/power/software/availability) offers businesses both disaster recovery and high availability to expand elasticity and headroom. On larger hardware, the term headroom describes the computer’s elasticity and how well it might handle spikes in computing requests. To achieve a resilient data center, therefore, the hardware being used must have appropriate headroom and elasticity.
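Headroom becomes easy to reason about once you track peak utilization. The sketch below is a generic illustration, not an IBM tool; the sample figures and the 20 percent headroom target are assumed values you would replace with your own. It flags systems whose recent peaks leave too little room to absorb a spike.

# Recent peak CPU utilization (percent) for each partition or host (sample values)
peak_utilization = {
    "web01": 62.0,
    "db01": 88.5,
    "batch01": 45.0,
}

HEADROOM_TARGET = 20.0  # percent of capacity kept free for spikes (assumed policy)

def check_headroom(peaks, target=HEADROOM_TARGET):
    """Report systems whose free capacity falls below the target."""
    for name, peak in sorted(peaks.items()):
        headroom = 100.0 - peak
        status = "OK" if headroom >= target else "AT RISK"
        print(f"{name}: peak {peak:.1f}%, headroom {headroom:.1f}% -> {status}")

check_headroom(peak_utilization)

With these sample numbers, db01 would be flagged as at risk: an 88.5 percent peak leaves only 11.5 percent of the machine free to flex when demand spikes.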

Virtualization

IBM PowerVM (www.ibm.com/systems/power/software/virtualization) is yet another set of tools for creating and maintaining a resilient data center. PowerVM can help you manage computing workloads and regulate overall utilization. With current techniques, you can deploy entire computers, operating systems, applications or a combination thereof in a virtualized environment. For example, on my laptop I have several versions of Linux, OpenSolaris, Windows 7 Beta and FreeBSD installed, and I can use any or all of them concurrently depending on my needs and memory constraints. Each of these virtual machines can share data, network interfaces and other host-machine resources, and each can also function independently. The same strategy applies in a data center and lets you make better use of your investments; virtualization smooths out computing spikes and headroom issues.
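One way to see how virtualization cuts the cost of underutilized machines is to sketch the consolidation math. The example below is a generic first-fit packing illustration, not PowerVM itself; the workload names, demand figures and 80 percent per-host capacity cap are all assumptions.

# Average CPU demand of each standalone workload, as a fraction of one host (assumed values)
workloads = {"mail": 0.15, "intranet": 0.20, "reporting": 0.30,
             "test-lpar": 0.10, "web": 0.35}

HOST_CAPACITY = 0.80  # assumed cap: leave 20 percent headroom per host

def consolidate(demands, capacity=HOST_CAPACITY):
    """Pack workloads onto as few hosts as possible (first-fit, largest first)."""
    hosts = []  # each host is a dict: {"load": float, "guests": [names]}
    for name, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if host["load"] + demand <= capacity:
                host["load"] += demand
                host["guests"].append(name)
                break
        else:
            hosts.append({"load": demand, "guests": [name]})
    return hosts

placement = consolidate(workloads)
print(f"{len(workloads)} standalone servers fit on {len(placement)} virtualized hosts:")
for i, host in enumerate(placement, 1):
    print(f"  host {i}: {host['guests']} (load {host['load']:.2f})")

With these assumed numbers, five standalone servers fit on two virtualized hosts while each host still keeps at least 20 percent headroom for spikes.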

A resilient data center can be yours if, and only if, you design it not only to handle daily tasks appropriately but also to anticipate unforeseen tragedies or opportunities. Rebounding from identity theft takes time, but safeguarding your personal data with safe data practices and keeping a cash reserve for elasticity will help you become more resilient. A resilient data center is successful when you use good data-center practices, pay attention to the elasticity of the system and use virtualization to minimize the expense of underutilized machines on the data-center floor.

 

Bob Angell has more than 20 years of experience as an information scientist.

