The System z Stack Evolves to Deliver Business Value From Large Memory
Illustration by Douglas Smith
Editor’s Note: IBM Distinguished Engineer and IMS Chief Architect Betty Patterson, Senior Technical Staff Member Marcel Mitran, IBM Distinguished Engineer Chris Crone, IBM Distinguished Engineer John Campbell and Senior Technical Staff Member Akiko Hoshikawa co-authored this article.
As a real-time data and transaction processing host, the z/OS* software stack provides data for up-to-the-minute requests both inside and outside the enterprise. Response times are critical to productivity and meeting business goals.
With advanced mixed-workload capabilities, z/OS can host batch processing on both live and time-consistent views of the data served for transactions. Like transaction processing, batch processing is time sensitive. Large memory and z/OS can enable new business value by processing more data in shorter elapsed time at lower CPU cost. Shorter elapsed processing time could allow users to move from quarterly to monthly—or even daily—close cycles. Reduced CPU consumption could enable deeper analysis to detect new sales opportunities.
Data access from the coupling facility can be 10 to 50 times faster than getting the equivalent data from disk.
—IBM Information on Demand whitepaper, “Save CPU Using Memory”
Physical memory on the server helps determine both the response time of transactions, and the elapsed time and CPU cost of batch workloads. Today, System z* servers support up to 3 TB of real memory per server shared across all of its partitions. Each partition can use up to 1 TB of memory. Large z/OS partitions typically are configured with 100 to 300 GB of real memory—some even larger. Many z/OS stack users have completed the migration to 64-bit systems and are deploying the software stack needed to gain value from very large real memory.
Technology supports an increased role for large real memory. Industry roadmaps for dynamic RAM (DRAM), the current building block for memory, show scaling well past 20 TB per server despite current DRAM scaling challenges. Real memory per image in a System z server can approach the size of some databases, enabling in-memory database optimization.
Big insights, big data and the demand for real-time analytics, combined with low-latency transactions, place demands on data access that modern I/O subsystems find increasingly challenging to meet. The need to improve latency falls on application and middleware developers as well as system providers. The industry, therefore, is seeing a shift in application and middleware programming models, persistency systems, and application-development frameworks. At the same time, we see the evolution of in-memory databases and analytics, large-scale distributed caching systems such as WebSphere* Extreme Scale, and object-relational mapping libraries for persistency such as the Java* Persistence API.
As a result of this shift, modern Java Runtime Environments, such as the IBM J9 Virtual Machine, include incremental garbage collection (GC) technology, such as the balanced GC policy, to keep pause times manageable as heaps grow ever larger relative to the threads servicing them. Java users are reacting to these changes by configuring servers, including System z servers, with much larger amounts of real memory. Large memory is necessary to support the evolving Java environment: it avoids paging, for example, and enables CPU-saving technology such as 1 MB large pages.
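To make this concrete, the launch options below sketch how a large-heap Java server might be configured on an IBM JVM to select the balanced GC policy and request 1 MB large pages. The heap sizes and the application class `com.example.Server` are illustrative assumptions, and the exact `-Xlp` syntax varies by IBM SDK release, so treat this as a sketch rather than a definitive configuration.

```shell
# Sketch: launch options for a large-heap Java server on an IBM JVM.
# Heap sizes and com.example.Server are hypothetical; -Xlp syntax
# differs across IBM SDK releases and platforms.

JVM_OPTS="-Xms64g -Xmx64g"                        # fixed large heap; avoids resize churn
JVM_OPTS="$JVM_OPTS -Xgcpolicy:balanced"          # incremental GC policy for very large heaps
JVM_OPTS="$JVM_OPTS -Xlp:objectheap:pagesize=1M"  # back the object heap with 1 MB large pages

java $JVM_OPTS com.example.Server
```

Setting `-Xms` equal to `-Xmx` is a common companion choice on servers with ample real memory: it pins the heap at full size up front, which works only if the configured heap fits comfortably in real memory, reinforcing the article's point about paging avoidance.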
Research shows a 5 to 10 percent improvement in CPU performance in DB2 10 versus DB2 9 through better memory management for remote/local applications.