The System z Virtualization Environment Provides Far Lower TCA Than Its Intel Alternative
Editor’s note: Mainframe Evangelist Roberto Calderon and zChampion Jon Nolting contributed to this article.
Most IT centers recognize that virtualization improves hardware utilization and administrative efficiency, and lowers overall costs.
Choosing the right virtualization platform can lead to high consolidation ratios while maintaining acceptably performing guests. Conversely, choosing the wrong one can lead to either poorly utilized hardware or underperforming guests. It isn’t enough to cram as many guests onto a host as possible; those guests must also run well enough to meet their service-level agreements (SLAs). Business-critical or priority guests must be allocated the resources they need when they need them. Hypervisors that can efficiently divert resources from noncritical or donor guests to priority ones, when necessary, can provide high consolidation ratios without missing priority-guest SLAs.
In other words, a virtualization platform must efficiently differentiate between high-priority and low-priority workloads running together. The ability to mix workloads while maintaining SLAs enables maximum utilization of the hosting platform's computing resources. An inability to do so may force high- and low-priority workloads onto separate platforms, increasing cost and complexity.
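The donor-to-priority resource diversion described above is, in essence, proportional-share scheduling: each guest carries a weight, and capacity a donor guest does not use flows to guests that can use it. The sketch below is a hypothetical illustration of that idea, not the algorithm of any specific hypervisor; the function name and the weight values are assumptions for illustration.

```python
def allocate_cpu(total_cpu, demands, weights):
    """Proportional-share allocation (water-filling): each guest gets CPU
    in proportion to its weight, capped at its own demand; capacity left
    over by under-demanding (donor) guests is redistributed to the rest."""
    alloc = {g: 0.0 for g in demands}
    active = set(demands)
    remaining = float(total_cpu)
    while active and remaining > 1e-9:
        total_weight = sum(weights[g] for g in active)
        spare = 0.0
        for g in list(active):
            share = remaining * weights[g] / total_weight
            need = demands[g] - alloc[g]
            if share >= need:          # guest is satisfied; donate the excess
                alloc[g] += need
                spare += share - need
                active.remove(g)
            else:                      # guest consumes its full share
                alloc[g] += share
        remaining = spare              # redistribute donated capacity
    return alloc

# A lightly loaded donor frees capacity for the priority guest:
print(allocate_cpu(10, {"prio": 8, "donor": 1}, {"prio": 1, "donor": 1}))
```

Under contention (both guests demanding more than their share), the same function splits capacity by weight, so a priority guest with a higher weight keeps meeting its SLA while the donor is throttled.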
This article summarizes findings from IBM researchers comparing workload management mechanisms in virtualized environments.
Each hypervisor tested by IBM hosted multiple guests—some designated as higher priority and others as lower priority. Data was taken for three scenarios:
- Priority guests under load. No donor load.
- Donor guests under load. No priority guest load.
- Both priority and donor guests under load.
Priority Guest Workload
Priority guests ran a transactional workload. The test used a simple banking application based on IBM WebSphere* Application Server (WAS), backed by a DB2* database, to simulate the transactional workload. Each priority guest included its own WAS V8.5 and DB2 V10.1 instance. Data taken from guest testing included the number of transactions processed, maximum transaction rate, average response time, and response-time standard deviation.
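The four metrics listed above can all be derived from per-transaction records. The following is a minimal sketch of that derivation; the record layout (completion second, response time in milliseconds) and the function name are assumptions for illustration, not the format used by the test harness.

```python
import statistics

def summarize(transactions):
    """Derive the reported metrics from per-transaction records:
    transaction count, peak transaction rate, mean response time,
    and response-time standard deviation. Each record is a
    (completion_second, response_ms) pair."""
    per_second = {}
    for sec, _ in transactions:
        per_second[sec] = per_second.get(sec, 0) + 1
    response_times = [rt for _, rt in transactions]
    return {
        "transactions": len(transactions),
        "max_tps": max(per_second.values()),   # peak transactions/second
        "avg_response_ms": statistics.mean(response_times),
        "stdev_response_ms": statistics.stdev(response_times),
    }
```

The standard deviation matters alongside the average: two hypervisors can deliver the same mean response time while one shows far wider swings under donor-guest pressure.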
Rational* Performance Tester was used to drive transactions on the priority guests. Each test used one of two transaction profiles: the first consisted of short bursts of high transactional load separated by periods of low transactional load; the second was a heavy stress profile. These priority workload profiles resulted in differing peak-to-average CPU usage ratios. Each profile represented an hour in real time.
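The peak-to-average distinction between the two profiles can be made concrete with a one-line calculation over a CPU utilization trace. The traces below are illustrative, not the study's data: a bursty profile yields a high ratio, a sustained stress profile a ratio near 1.

```python
def peak_to_average(cpu_samples):
    """Peak-to-average ratio of a CPU utilization trace."""
    return max(cpu_samples) / (sum(cpu_samples) / len(cpu_samples))

# Illustrative traces (percent CPU busy per interval):
bursty = [90, 10, 10, 10, 90, 10, 10, 10]   # short peaks, long lulls
steady = [80, 82, 78, 80, 81, 79, 80, 80]   # heavy sustained stress

print(peak_to_average(bursty))   # high ratio: room to donate between peaks
print(peak_to_average(steady))   # ratio near 1: little slack to donate
```

The higher the peak-to-average ratio, the more idle capacity a hypervisor can harvest between a guest's peaks, which is exactly what makes the bursty profile a useful test of donor-to-priority resource diversion.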