Server Virtualization Means High Utilization and Lower Costs


Three executives from a New Jersey-based Fortune 1000 manufacturer arrived at the corporate hangar early one Monday. Steve, the CIO, wanted the company’s Gulfstream V, with luxury seating for 16, to take him to a vendor conference in Atlanta. His boss, CEO Randy, needed to be in Boca Raton by 10 a.m. and back home for dinner with the board. Jack, who manages the factories, wanted a ride to San Jose. The pilot wisely decided the jet was going to Boca and told Steve and Jack they were welcome to ride along, but that day, access to a private jet offered them no benefit over a commercial flight.

In rough math, 16 seats times 12 hours a day yields 192 available seat-hours per day. One passenger to Boca consumes approximately five seat-hours, meaning the jet, an asset with a current market value of $9 million, will see utilization that day of 2.6 percent. No multitasking, no virtualization and, by the numbers, terrible utilization.
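The same arithmetic applies to any fixed-capacity asset, a server’s CPU capacity included. Here is a minimal sketch of the calculation, using the figures above:

```python
# Seat-hour utilization from the jet example; the same math applies to
# any fixed-capacity asset, such as a server's CPU capacity.
seats = 16
hours_per_day = 12
capacity = seats * hours_per_day     # 192 seat-hours available per day

passengers = 1
flight_hours = 5                     # approximately five seat-hours to Boca
used = passengers * flight_hours

print(f"Utilization: {used / capacity:.1%}")   # -> Utilization: 2.6%
```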

Boosting Utilization

Now consider another corporate investment: IT infrastructure. Enterprise servers come in all shapes and sizes.

In commodity-server data centers, server utilization is often in the 20 to 30 percent range. Server sprawl happens for many reasons, but chief among them are application isolation (each application has its own server), line of business (LOB) isolation (each LOB has its own servers) and the mentality that hardware is cheap. Application and server isolation are often justified on security or cost-allocation grounds, but those justifications are at best political, with little real advantage accruing for technical, security or accounting reasons. And commodity servers aren’t cheap once you factor in the real estate, power and cooling, and especially the manpower.

IBM Power Systems* servers have emerged as best of breed in several TCO studies comparing them with Wintel and other UNIX* servers. Best practices dictate high equipment utilization and low manpower requirements, thus leveraging real estate, equipment and software investments.

With aggregate server utilization well below 100 percent, the business and technology challenge is to extract maximum resource utilization through virtualization. Think of virtualization as the fulcrum supporting the lever of large-scale, high-utilization servers, as compared to the same workloads deployed on low-cost commodity server technology. Those commodity deployments are inherently inefficient, unreliable and underutilized because the infrastructure can’t put all of its resources into productive work. Virtualization, by contrast, delivers workload isolation at high utilization. It has also proven superior in terms of security and uptime because, in effect, every workload down to the job level can run in its own VM (a concept, by the way, developed and initially deployed by IBM in the late ’60s).
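To see why consolidation lifts utilization, consider two workloads whose peaks don’t coincide. Isolated, each needs a dedicated server sized for its own peak; virtualized, both can share one server sized for their combined peak. A quick sketch, with hypothetical hour-by-hour demand figures:

```python
# Hypothetical hour-by-hour CPU demand (% of one server) for two
# workloads that peak at different times of day.
wl_a = [10, 15, 60, 80, 40, 15]   # daytime-heavy interactive workload
wl_b = [70, 55, 20, 10, 30, 65]   # overnight batch workload

# Isolated: each workload gets a dedicated server sized for its own peak.
total_demand = sum(wl_a) + sum(wl_b)
util_isolated = total_demand / (2 * 100 * len(wl_a))

# Virtualized: one server sized for the peak of the combined demand.
combined_peak = max(a + b for a, b in zip(wl_a, wl_b))   # 90%, fits on one
util_virtual = total_demand / (1 * 100 * len(wl_a))

print(f"Combined peak: {combined_peak}%")
print(f"Isolated: {util_isolated:.0%}, virtualized: {util_virtual:.0%}")
# -> Isolated: 39%, virtualized: 78%
```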

Deploying Virtualization

Server virtualization is not alchemy; nobody is claiming to make gold out of lead. But by dynamically controlling resource deployment (CPU cycles, memory, storage) among many workloads, virtualization can find the peaks and valleys in each workload’s resource needs (times when a task could use more CPU cycles than it’s configured for). And it can lower those peaks by temporarily borrowing capacity from a workload that has the resource available to share, without degrading either workload.
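The sketch below illustrates that reallocation logic in miniature. It is not PowerVM’s actual algorithm, just a simplified model of the idea: each partition has a configured entitlement, and cycles unused by partitions running below entitlement are lent, one interval at a time, to a partition demanding more than its share.

```python
def rebalance(entitled, demand):
    """One scheduling interval: lend spare cycles from partitions running
    below their entitlement to partitions peaking above theirs."""
    # Everyone first gets what they need, up to their entitlement.
    alloc = [min(e, d) for e, d in zip(entitled, demand)]
    spare = sum(e - a for e, a in zip(entitled, alloc))
    # Lend the spare cycles to whoever is peaking, until spare runs out.
    for i, (e, d) in enumerate(zip(entitled, demand)):
        if d > e and spare > 0:
            extra = min(d - e, spare)
            alloc[i] += extra
            spare -= extra
    return alloc

# Partition 2 peaks at double its entitlement; partitions 0 and 1 are
# idle enough to cover the peak without degradation.
print(rebalance(entitled=[2.0, 2.0, 2.0], demand=[0.5, 1.0, 4.0]))
# -> [0.5, 1.0, 4.0]
```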

The end result is not 101 percent server utilization, but an environment that approaches the theoretical maximum of 100 percent because the “smarts” built into the software allocate and reallocate resources automatically and dynamically. As a side benefit, for advocates of workload, application and LOB isolation, virtualization provides superior security, monitoring, reliability, availability and accountability through the very workload containerization it requires. You can’t show HR the machine dedicated to their department, but you’ll know a great deal more about their workload (its health, capacity, agility to reach for peaks, etc.) than you would in a server-farm infrastructure.

Jim Young is the VP of Sales for Midrange Performance Group, providers of performance management and capacity planning software and services for IBM Power* platforms.

