IBM DS8880 zHyperLink Technology Provides I/O Enhancements on the IBM z14

Lowering database transactional latency is critical to mitigate the impact of new data sources and transaction volumes on both heritage workloads and this new work.


The growth in cloud, analytics, mobile and social/secure workloads drives additional requirements on IBM Z to execute transactions with continuously improved service-level agreements (SLAs) and the enterprise-class qualities of service that the most demanding clients have come to expect, even as changes such as adding new data sources from the cloud to a workflow may increase elapsed times.

Mobile access to data on IBM Z may add unpredictable increases in the volume of transactions that occur, with the resulting contention affecting SLAs. It’s critical that middleware such as Db2 be able to scale to meet these demands. Lowering database transactional latency is critical to mitigating the impact of new data sources and transaction volumes on both heritage workloads and the new work driven directly by the above-mentioned growth.

Lowering I/O latency for IBM Z clients can provide additional business opportunities. In the financial sector, for example, clients may make a trivial transaction such as displaying an account balance or transferring money. Good response time is critical to providing an excellent user experience. With lower I/O latencies, it’s possible to include additional personalized offers to clients, such as showing how refinancing their mortgage could be advantageous. Thus, lower latency can grow the business opportunity.

Better I/O latency means a better user experience. Customers will be less likely to switch websites to make a purchase, or switch credit cards in a checkout line, because a transaction is taking too long. This preserves a business’s reputation and client satisfaction.

Improved I/O latency means that financial transactions are less likely to use stand-in processing instead of executing the fraud detection logic. This means that the business may lose less money due to fraud.

Improving I/O latency mitigates or delays the need to re-engineer applications to improve scale as transaction rates and the business grow. Additionally, better I/O latency allows clients to save money by continuing to grow without adding data sharing instances, which reduces hardware cost and complexity.

Most importantly, reducing I/O latency improves system availability. With improved latency, clients get more headroom for growth. Unpredictable workload spikes driven by mobile applications and workloads can be handled without degrading system performance or availability. If a hardware failure occurs and drives recovery processes to execute (e.g., reset event recovery or storage warm start), the work queues that build up can be worked off much more quickly.

VSAM applications that need to be restarted after an application or system failure typically invoke VSAM VERIFY processing before the application can resume. Lower I/O latency allows VSAM VERIFY processing to complete more quickly, thus improving availability without any application changes. Figure 1 shows the result of an experiment that demonstrates a two-thirds reduction in elapsed time for VSAM VERIFY processing.

Figure 1

I/O Latency Improvement Technologies

Prior to the latest IBM Z family of processors, the IBM z14, and the IBM DS8880 Storage System with zHyperLink technology, I/O latency was thought about primarily as I/O response time. This would include software queueing delays for an I/O request and the components of I/O service time. IBM Z FICON I/O technology provides the ability to parse the components of I/O service time, simplifying the chore of managing mainframe systems by making it possible to pinpoint potential problem areas, diagnose problems and provide a mechanism for chargeback and accounting.

The components of I/O response time fall into four major areas: IOS queue time, pending time, connect time and disconnect time. However, much of the total time for the application to get results from the I/O operation has never been included as part of I/O service time. This additional time includes the time for PR/SM to dispatch the LPAR after the I/O interrupt occurs, the time for z/OS to dispatch the application that made the I/O request and the time it takes for the processor L1/L2 cache to get repopulated for the application reference set after other work that ran on the CPU polluted the processor cache.

The IBM DS8000 Storage System, z/OS, IBM Z processors and middleware (e.g., Db2 for z/OS) have implemented many technologies over the last few years to deliver a cadence of incremental improvements in I/O latency (see Figure 2). These include techniques to improve the cache hit ratios in the DS8000 Storage Systems, comprehensive workload management algorithms with the DS8000 I/O Priority Manager to provide policy-based goal management for workloads, improved I/O parallelism with HyperPAV and IBM zHyperWrite, elimination of unnecessary synchronization points in I/O execution, faster I/O transport speeds with improved reliability, and I/O protocol enhancements for improved efficiencies.

Figure 2

However, the heritage z/OS and z/Architecture I/O model has additional delays that contribute to the latency seen by applications requesting I/O and that hadn’t been directly addressed, including:

  • I/O interrupt delay time: When waiting for an I/O operation to complete, it’s possible for the LPAR to stop executing. The time from when the I/O interrupt occurs until z/OS starts running and issues the instruction to retrieve the status is captured and reported by RMF as I/O interrupt delay time. I/O interrupt delay time is a measure of the virtualization overhead in the system. This time may be large because of contention for CPU resources.
  • z/OS dispatcher queueing time: After an I/O interrupt is processed by z/OS, the waiting middleware or application needs to be dispatched to process the results and continue execution. Because clients typically run their z/OS systems at very high CPU utilization, this re-dispatch can be delayed behind other work.
  • Processor L1/L2 cache reload time: The heritage asynchronous I/O model allows other applications and units of work to run on the processor, polluting the processor L1/L2 cache. When the application resumes processing after the I/O request completes, it incurs delays while the CPU reloads the cache.

zHyperLink technology changes the I/O programming model, eliminating most of the components of latency for applications issuing an I/O request to read or write data. The delays described are all now directly addressed by a new I/O paradigm for IBM Z processors and DS8000 storage.

With the faster link technologies available in the market today and the outstanding cache hit ratios available with IBM storage for z/OS on IBM Z, a new CPU synchronous I/O paradigm allows z/OS to significantly reduce I/O latency. All of the component times described above that occur for heritage I/O operations are either eliminated or significantly reduced. There are no longer queues of I/O requests, the links run 5x faster, there are no more I/O interrupts, applications no longer need to be dispatched for I/O completion and, very importantly, the processor L1/L2 cache no longer needs to be reloaded for the application to resume execution as the data remains in cache. Figure 3 shows that zHyperLink technology reduces the total elapsed time of I/O operations for a huge improvement in I/O latency.
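To make this comparison concrete, the following is a minimal sketch in Python that sums hypothetical latency components for a heritage asynchronous I/O request and for a synchronous zHyperLink request. The component names reflect the delays described above, but every microsecond value is an illustrative assumption for this sketch, not an IBM measurement.

# Minimal, illustrative model of I/O latency components (microseconds).
# All values are hypothetical placeholders, not IBM measurements.

HERITAGE_IO_US = {
    "ios_queue": 5,          # waiting for the I/O Supervisor to accept the request
    "pend": 10,              # channel subsystem / path / control unit contention
    "connect": 20,           # data and command transfer on the channel path
    "disconnect": 15,        # device busy but not transferring (e.g., cache miss)
    "interrupt_delay": 30,   # PR/SM re-dispatching the LPAR after the I/O interrupt
    "dispatch_queue": 40,    # z/OS re-dispatching the waiting application
    "cache_reload": 25,      # repopulating the processor L1/L2 cache
}

# Under the synchronous zHyperLink model the request completes while the CPU
# waits, so the queueing, interrupt, dispatch and cache-reload components
# disappear and only a (much shorter) link/transfer time remains.
ZHYPERLINK_IO_US = {
    "synchronous_transfer": 20,  # hypothetical point-to-point link service time
}

def total(components):
    """Sum a dictionary of latency components."""
    return sum(components.values())

if __name__ == "__main__":
    heritage = total(HERITAGE_IO_US)
    synchronous = total(ZHYPERLINK_IO_US)
    print(f"heritage asynchronous I/O : {heritage} us")
    print(f"synchronous zHyperLink I/O: {synchronous} us")
    print(f"reduction: {100 * (1 - synchronous / heritage):.0f}%")

With placeholder numbers of this shape, most of the total is spent outside the storage device itself, which is why removing the interrupt, dispatch and cache-reload components matters as much as the faster link.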

Figure 3

The zHyperLink design is able to meet its low latency requirements because of knowledge of the entire stack, from the point where an application performs an I/O operation until its completion. Understanding the requirements of Db2 and VSAM I/O, and their possible interactions with other I/O conditions, allowed the DS8880 implementation to be optimized to process zHyperLink I/O with minimum latency. Every aspect of I/O in the DS8880 has been examined in detail across the entire end-to-end stack to eliminate or greatly reduce processing requirements and, thereby, latency.

zHyperLink technology provides this low latency while still using traditional SAN-attached DS8000 Storage Systems, complementing them with a new short-distance link technology. Low latencies are provided for read and write operations for storage systems within the 150-meter distance requirement, using a point-to-point link from the z14 to the DS8880 I/O bay. Existing disaster recovery technologies will continue to work with the initial general availability of the zHyperLink read capability. IBM intends to have the zHyperLink write capability work with Metro Mirror and HyperSwap within 150-meter distances and with asynchronous replication at the supported distances.

Initial Client Value

zHyperLink is designed for up to 10x lower read latency than High Performance FICON. As described previously in this article, the improved read latency provides clients with the following value when middleware exploits zHyperLink, or when clients run software that exploits zHyperLink:

- Higher availability through faster recovery operations

- Higher availability through improved peak I/O capacity

- Basis for faster middleware with better scaling

- Lower latency and better throughput for FICON storage systems

zHyperLink Express improves application response time, cutting I/O-sensitive workload response time by up to 50 percent without requiring application changes. Faster transaction processing allows lines of business to add new functions that:

  • Have lower cost and less risk than application development aimed at lowering response times
  • Enable new business logic in existing applications at the same or lower response time

Technology Deliveries

At general availability of the DS8880 firmware in December 2017, enterprises will be able to exploit:

  • Db2 “synchronous” (blocking) reads (e.g., batch and transaction processing)
  • Middleware exploitation of Db2 (e.g., from IMS and CICS)

As part of the IBM Z and z/OS announcement on July 17, 2017, a statement of direction was issued for VSAM exploitation of zHyperLink. The VSAM exploitation will provide the following:

  • VSAM read support (batch and transactions)
  • IMS, CICS and other middleware and applications that use VSAM

Figure 4 shows the z/OS and Db2 supporting software versions. VSAM exploitation will deliver within the scope of the standard statement of direction window.

DS8880 Hardware/Firmware Characteristics and Function Availability

Initial zHyperLink connectivity supports two zHyperLink connections on each I/O bay, providing up to 16 physical connections in a fully configured system with eight I/O bays. At general availability in December 2017, zHyperLink support will be restricted to DS8886 configurations with 16 cores and 256 or 512 GB of memory, and only four active zHyperLink connections will be usable. These restrictions are intended to be lifted in the first half of 2018, enabling up to 12 zHyperLink connections to be used and providing support for all configurations of DS8886 systems.
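As an illustration, the sketch below simply restates these initial configuration limits as a check: two connections per I/O bay, eight bays, four active connections at general availability and 12 planned for the first half of 2018, with GA support limited to 16-core DS8886 systems with 256 or 512 GB of memory. The function and field names are hypothetical and are not a real DS8880 API.

# Illustrative check of a planned zHyperLink configuration against the limits
# described above. Function and field names are hypothetical, not a real API.

CONNECTIONS_PER_BAY = 2
MAX_BAYS = 8                      # 2 x 8 = 16 physical connections

def max_active_connections(ga_december_2017: bool) -> int:
    """Active-connection limit: 4 at December 2017 GA, 12 planned for 1H2018."""
    return 4 if ga_december_2017 else 12

def supported_at_ga(model: str, cores: int, memory_gb: int) -> bool:
    """At GA, support is restricted to DS8886 with 16 cores and 256/512 GB."""
    return model == "DS8886" and cores == 16 and memory_gb in (256, 512)

if __name__ == "__main__":
    physical = MAX_BAYS * CONNECTIONS_PER_BAY
    print(f"physical zHyperLink connections: {physical}")             # 16
    print(f"active at GA: {max_active_connections(True)}")            # 4
    print(f"active planned 1H2018: {max_active_connections(False)}")  # 12
    print(f"DS8886/16-core/512GB supported at GA: "
          f"{supported_at_ga('DS8886', 16, 512)}")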

IBM also intends to extend zHyperLink support to all DS8880 and DS8880F models in future releases, and to provide write support to improve active log throughput in a future release of the zHyperLink function.

Delivering Value

Low I/O latencies deliver value through improved workload elapsed times and faster transactional response times, and contribute to lower scaling costs. zHyperLink is a new CPU-synchronous SAN I/O approach that delivers a significant breakthrough in SAN link latency.

The DS8880 implementation of zHyperLink I/O delivers service times fast enough to enable a synchronous I/O model in high performance IBM Z servers. The combination of zHyperLink link latencies and the benefits of synchronous I/O operations delivers up to a 90 percent reduction in I/O service times compared to traditional SAN I/O operations, yielding a 50 percent reduction in elapsed times for I/O-sensitive workloads. zHyperLink is initially supported by Db2 for z/OS and VSAM, enabling exploitation by transactional and batch workloads.


Disclaimer: Projections of read I/O latency are based on I/O service times and CPU queueing delays from IBM internal measurements. The actual performance that any user will experience may vary.

Disclaimer: This response time estimate is based on IBM internal measurements and projections that assume 75 percent or more of the workload response time is associated with read DASD I/O and the storage system random read cache hit ratio is above 80 percent. The actual performance that any user will experience may vary.
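For illustration only, the sketch below combines the assumptions stated in these disclaimers (75 percent of workload response time in read DASD I/O, a random read cache hit ratio above 80 percent, and up to a 90 percent reduction in I/O service time for eligible reads) into a rough estimate of response-time improvement. The simple weighting model and its default parameters are assumptions made for this sketch, not IBM’s internal projection method.

# Rough, illustrative estimate of workload response-time improvement derived
# from the assumptions in the disclaimers above. Not IBM's internal model.

def estimated_reduction(io_fraction=0.75, cache_hit_ratio=0.80,
                        io_service_time_reduction=0.90):
    """Fraction of workload response time removed, assuming only cache-hit
    reads benefit from the faster synchronous I/O path."""
    # Remaining I/O time: cache hits get the reduced service time,
    # cache misses still take the full heritage service time.
    remaining_io = (cache_hit_ratio * (1 - io_service_time_reduction)
                    + (1 - cache_hit_ratio))
    remaining_total = (1 - io_fraction) + io_fraction * remaining_io
    return 1 - remaining_total

if __name__ == "__main__":
    print(f"estimated response-time reduction: {estimated_reduction():.0%}")
    # With these assumptions the estimate lands in the neighborhood of the
    # "up to 50 percent" figure quoted in the article (about 54% here).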

Components of I/O Latency for Applications

The IBM Z heritage I/O model reports on the following components of I/O response time:

IOS Queue Time: The I/O Supervisor (IOS) queue time represents the average time from when an application sends an I/O operation to the IOS component of z/OS until it’s accepted by the channel subsystem. High IOS queue time could result from a shortage of Parallel Access Volume (PAV) Alias devices, frequent use of reserve/release and various OS recovery processes running.

Pending Time: PEND time reflects the time between acceptance of the I/O request in the channel subsystem for the device (subchannel) and acceptance of the first command associated with the I/O request. This value includes the time waiting for an available channel path and control unit as well as the delay due to shared DASD contention. High PEND time can occur because of contention for shared resources in the channel subsystem, channel paths, SAN fabric, storage host adapters and the device itself. The Initial Command Response time (ICMR) is an additional measurement that further parses the PEND time. Additional instrumentation retrieved by the software from the DS8000 storage system provides additional information for understanding PEND time.

Connect Time: Connect time represents the average time that the device was connected to a channel path and actively transferring data and commands between the device and central storage. Typically, this value measures data transfer time but also includes the time introduced by multiplexing multiple FICON or IBM Z High Performance FICON operations to multiple devices on the same channel path.

Disconnect Time: Disconnect time for the I/O operation represents the average time not transferring data while processing the I/O request. Thus, this value reflects the time when the device was in use but not transferring data. It includes the overhead time when a device might be resolving a cache miss for a read operation or the time to synchronously replicate data for a write operation.
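These four components add up to the heritage I/O response time. The short sketch below models that breakdown; the field names mirror the RMF components described above, but the millisecond values are illustrative placeholders, not RMF output.

# Illustrative breakdown of heritage I/O response time into the four RMF
# components described above. Values are placeholders, not measurements.

from dataclasses import dataclass

@dataclass
class IOResponseTime:
    ios_queue: float    # waiting in the I/O Supervisor queue (e.g., PAV alias shortage)
    pend: float         # waiting for channel path / control unit / shared DASD
    connect: float      # connected to the channel path, transferring data and commands
    disconnect: float   # device in use but not transferring (e.g., read cache miss)

    @property
    def total(self) -> float:
        return self.ios_queue + self.pend + self.connect + self.disconnect

if __name__ == "__main__":
    sample = IOResponseTime(ios_queue=0.05, pend=0.10, connect=0.20, disconnect=0.15)  # ms
    for name in ("ios_queue", "pend", "connect", "disconnect"):
        print(f"{name:10s}: {getattr(sample, name):.2f} ms")
    print(f"{'total':10s}: {sample.total:.2f} ms")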

—H.Y.
