Modernizing on the Mainframe Lowers Total Costs

Learn about two common modernization techniques in the industry that show that modernizing on the mainframe results in a lower total cost

Modernizing applications often comes with the assumption that you need to move your processing off the mainframe, based on the expectation that moving to a distributed platform will lower costs. This article examines two common modernization techniques in the industry and shows the opposite: modernizing on the mainframe results in a lower total cost.

The two use cases covered are publishing data to a Kafka cluster and invoking a rules engine in Java. We demonstrate that staying on the mainframe not only yields cost savings, but also gives you a simpler architecture and better quality of service.

For each use case, we have two scenarios: one where the processing is deployed on z/TPF, and another where the processing is offloaded to Linux® on IBM Z®. CPU utilization is measured for both systems as CPU is the primary cost driver in these use cases.

For all scenarios we are exploring, we are using a z/TPF system running on two cores of an IBM z15™ and a Linux on IBM Z system running on a single core of the same z15. The z/TPF system is running natively on dedicated processors while the Linux on IBM Z system is running as a z/VM guest, also on a dedicated processor.

Publishing Data to a Kafka Cluster

For the data publishing use case, the decision point is where to run the Kafka producer component. In both scenarios, the z/TPF application produces a JSON document and places it on an MQ queue that resides on the z/TPF system. The queue is configured not to guarantee that messages are persisted.

For the z/TPF scenario, the Kafka producer is run on z/TPF using the Guaranteed Delivery for JVM support introduced with APAR PJ45923. For a visual, see Figure 1, below.

Figure 1

For the Linux on IBM Z scenario, a Java application uses an MQ client to read the document from the z/TPF queue and uses the Kafka producer APIs to publish the document. This is expected to have behavior similar to using a product such as Kafka Connect. For a visual, see Figure 2, below.

Figure 2

In this use case, because the Kafka broker was located on the same physical machine as the z/TPF system, we did not use SSL in either scenario to encrypt the communications.
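For reference, a minimal Kafka producer configuration for such an unencrypted, same-machine setup might look like the following. These are illustrative values only; the benchmark's actual settings, broker address, and tuning are not published.

```properties
# Hypothetical producer settings for the same-machine scenario described above.
bootstrap.servers=kafka-broker:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
# SSL was intentionally not enabled in either scenario.
security.protocol=PLAINTEXT
# Batching knobs (illustrative defaults, not the benchmark's values).
acks=1
linger.ms=5
batch.size=16384
```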

We ran multiple variations using an increasing message size from 1,000 bytes to 5,000 bytes to observe the effects of the message size on the relative utilization.
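As a reproducibility aid, here is one way fixed-size JSON test messages for such a sweep might be generated. This is a sketch; the benchmark's actual documents are not published, and the `PayloadBuilder` class and its padding scheme are our own illustration.

```java
// Builds a JSON document padded to an exact byte length (ASCII), so the
// message-size sweep (1,000 to 5,000 bytes) can be driven precisely.
public class PayloadBuilder {

    /** Return a JSON document that is exactly targetBytes long. */
    public static String build(int targetBytes) {
        String prefix = "{\"data\":\"";
        String suffix = "\"}";
        int padLen = targetBytes - prefix.length() - suffix.length();
        if (padLen < 0) {
            throw new IllegalArgumentException("target size too small for envelope");
        }
        StringBuilder sb = new StringBuilder(targetBytes);
        sb.append(prefix);
        for (int i = 0; i < padLen; i++) {
            sb.append('x');  // filler payload
        }
        sb.append(suffix);
        return sb.toString();
    }

    public static void main(String[] args) {
        for (int size = 1000; size <= 5000; size += 1000) {
            System.out.println(size + " -> " + build(size).length() + " bytes");
        }
    }
}
```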

CPU utilization on z/TPF is categorized into general processor (GP) and transformation engine (TE) utilizations, with the TE utilization charged at a significantly discounted rate for modernization workloads. You can see in Figure 3, below, that even for very small message sizes (1 KB) there was significant benefit to running the Kafka publish workload on z/TPF. While the Linux on IBM Z scenario showed a modest decrease in TE utilization, the increased GP cost and Linux utilization far outweigh that minor difference.

Figure 3

We also observed significant benefits from a queue health perspective. Using the z/TPF solution, even at higher message rates the queue remained relatively empty. When ramping up the Linux solution to similarly higher rates we observed the queue filling up and often hitting a maximum queue depth, resulting in potentially lost messages, greater resource consumption, or impact to application response time. 

Invoking a Java Rules Engine

In our second use case, the decision point is where to run the business rules engine. In both scenarios, we are using the Drools rules engine, configured with 200 rules to calculate or modify the price of an airline ticket. The z/TPF application builds random inputs, then uses the tpf_srvcInvoke() API to call the rules application.

For the z/TPF scenario, the business rules engine runs in a JVM on z/TPF. For a visual, see Figure 4, below.

Figure 4
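Drools evaluates declarative rules against inserted facts; as a rough illustration of what a pricing rule set does, here is a minimal plain-Java sketch that applies a list of price transformations in order. The `FarePricer` class and its three rules are hypothetical stand-ins and do not use the Drools API; the real workload runs 200 rules in the engine itself.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Illustrative stand-in for a pricing rule set: each rule inspects the
// current fare and either returns it unchanged or returns a modified price.
public class FarePricer {

    static final List<UnaryOperator<Double>> RULES = List.of(
        p -> p < 100.0 ? p + 25.0 : p,     // hypothetical minimum-fare surcharge
        p -> p * 1.075,                     // hypothetical tax rule
        p -> p > 1000.0 ? p * 0.95 : p      // hypothetical high-fare discount
    );

    /** Apply every rule in order to the base fare. */
    public static double price(double base) {
        double p = base;
        for (UnaryOperator<Double> rule : RULES) {
            p = rule.apply(p);
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println("Priced fare: " + price(80.0));
    }
}
```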

For the Linux on IBM Z scenario, the same business rules engine runs in a JVM on the Linux on IBM Z system using Apache Tomcat as the application server. For a visual, see Figure 5, below.

Figure 5

In this use case, both the request and the reply are small messages (under 1,000 bytes). SSL connections are not used for the REST requests because both server and client are on the same physical machine.

Our results are summarized in Figure 6 and Figure 7, below:

Figure 6


Figure 7

To put these numbers into practical perspective: to build a solution that can process 10,000 calls per second, the Linux on IBM Z solution would require 0.38 z/TPF GPs, 1.6 z/TPF TEs, and 1.8 Linux IFLs. The z/TPF solution would require 0.05 GPs (87% less), 1.15 TEs (29% less), and no Linux IFLs (100% less). In addition, the local rules engine significantly improves application response time.
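As a sanity check, the headline percentages can be recomputed from the per-engine requirements quoted above. The inputs are rounded to two digits, so the results may differ slightly from the published figures; the `Savings` class and `pctLess` method are our own illustrative names.

```java
// Recomputes "percent less" figures from the rounded per-engine
// requirements at 10,000 calls per second.
public class Savings {

    /** Percentage reduction going from the Linux solution to the z/TPF solution. */
    public static long pctLess(double linuxEngines, double ztpfEngines) {
        return Math.round((linuxEngines - ztpfEngines) / linuxEngines * 100.0);
    }

    public static void main(String[] args) {
        System.out.println("GP savings:  " + pctLess(0.38, 0.05) + "%");  // ~87%
        System.out.println("IFL savings: " + pctLess(1.80, 0.00) + "%");  // 100%
    }
}
```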


For both use cases, we modernized the z/TPF application by performing the processing in a Linux on IBM Z environment. While this does not move the processing off the mainframe, the experiment demonstrates that even moving the data off the system of record can have dramatic effects. If you are considering moving the processing off the mainframe altogether (for example, into the cloud or onto commodity hardware), you can expect even more pronounced differences because of the need to encrypt the data while it is in flight.

If you are considering modernizing your applications running on the mainframe, it is vital to consider all costs before assuming that it will be cheaper to run your workload elsewhere. For these two scenarios, the total cost can be significantly lower when you keep the processing local rather than moving it off platform.

Leveraging Java support on z/TPF, as well as the optimized connections provided by the JAM support and Guaranteed Delivery for JVM support, not only saves you money but also minimizes the impact on your application response time.