Optimize Memory to Enhance Speed of CICS Functions

This article focuses on CICS performance across all platforms. On the mainframe CICS is known as CICS Transaction Server (TS), but on other platforms it’s called TXSeries, with CICS Transaction Gateway providing standardized CICS interfaces and secure access. All products are developed and supported by CICS development, and interact with each other via facilities like CICS Intercommunication. While techniques, methodology, syntax, parameters and implementation specifics may vary, most functions, concepts and usage apply to all processing platforms. Thus, the term “CICS” in this article applies to both CICS TS and TXSeries unless otherwise stated.

Just as memory is a vital biochemical and neural component of the brain, preserving the information that underpins every aspect of life, memory is a necessity for a computer to perform its desired tasks. Both entities process data and direct activities, albeit using vastly different mechanisms, and in both, memory can be the limiting factor in how efficiently data is processed. If memory is defective, insufficient or slow, it can have a dramatic, negative effect on processor efficiency and throughput.

Both types of processor also use multiple forms of memory. The brain harbors short-term and long-term memory, which are processed differently and stored in separate locations, plus the high-speed reflex memory of the spine that triggers reactions such as the response to pain. Likewise, a computer processor usually has:

  • Main memory it directly accesses
  • Virtual storage on disk that mirrors main memory and is brought back into main memory when the processor references it, and
  • High-speed cache

Whether it’s a brain or a CPU, good performance is a byproduct of memory interacting efficiently with the processor.

Two Forms of Main Memory

Main memory (i.e., main storage) accessibility is vital to processor performance, but as applications outgrew the growth of main storage, an analog called virtual storage was devised to simulate it. Virtual storage provides a much greater logical memory footprint by dividing storage into pages (4,096-byte pieces of storage) that are written to disk when not in use and read back into real main storage when referenced. This allowed a computer to run a program larger than the available main storage, providing 16,777,216 logical bytes.

While virtual storage relieved real storage constraints, it also created a new performance issue known as paging. Paging refers to the I/O activity involved in writing a memory page out to disk when it hasn’t been used for a predetermined time (a process called aging), and in reading a page back into storage when a program needs it for continued execution. Processor overhead is incurred for this I/O in both directions, and when an executing program needs a page that’s on disk, the program goes into a wait state until the page-in operation completes.

Virtual storage was limited to 16 megabytes because a 24-bit addressing scheme was used to access storage. Twenty-four bits can address 16,777,216 distinct bytes and, while that’s a big number, it didn’t take long before new applications, along with OS and subsystem storage, were straining that limit. CICS in its various forms was one application causing strain, and for several years my time as an IBM CICS Regional Designated Specialist was dedicated to dealing with what were known as “virtual storage constraints.”
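
To make the arithmetic concrete, the short C sketch below (purely illustrative) computes how many bytes a 24-bit address can reference and how many 4,096-byte pages fit into that space.

#include <stdio.h>

#define PAGE_SIZE 4096u   /* 4,096-byte page, as described above */
#define ADDR_BITS 24u     /* 24-bit addressing scheme            */

int main(void)
{
    unsigned long addressable = 1ul << ADDR_BITS;    /* 2^24 bytes */
    unsigned long pages = addressable / PAGE_SIZE;

    printf("Addressable storage: %lu bytes (16MB)\n", addressable);
    printf("Pages in that space: %lu pages of %u bytes each\n", pages, PAGE_SIZE);
    return 0;
}

Running it prints 16,777,216 bytes and 4,096 pages, which is why the 16MB limit became painful once applications, the OS and subsystems all had to share that space.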

CICS Intercommunication

CICS Intercommunication is a tuning option that proves to be the most effective method of reducing the negative performance impact of real storage and, especially, virtual storage constraints. This mechanism facilitates resource sharing between CICS TS and TXSeries systems in a network via a parameter-based facility called Function Shipping (FS). Files, terminals, temporary storage queues, transient data destinations and databases can be shared between CICS systems. Using a facility called Transaction Routing (TR), transactions can be sent to other CICS systems for processing, with the resultant output returned and delivered to the originating terminal or system. File-, terminal- and application-owning CICS systems can be created with the sole function of managing and processing informational or entity-based applications, simplifying administration and sculpting them to optimize available hardware resources.
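
From the application’s point of view, function shipping is usually transparent: the file is defined to the local region as remote, and ordinary file commands are shipped to the owning region. The sketch below makes the routing explicit with the SYSID option, using the EXEC CICS API from C. The file name ACCTFILE, the file-owning region’s SYSID FOR1 and the record layout are hypothetical, and exact argument-passing conventions depend on the CICS translator for C.

/* Function-shipping sketch: read a record from a file owned by another   */
/* CICS region. With remote attributes on the FILE definition, the SYSID  */
/* option could be omitted and the routing would be transparent.          */

struct acct_rec {      /* hypothetical record layout */
    char number[8];
    char name[30];
    char balance[12];
};

void read_remote_account(char key[8])
{
    struct acct_rec rec;
    short len  = sizeof(rec);
    long  resp = 0;

    EXEC CICS READ FILE("ACCTFILE")    /* shipped to file-owning region FOR1 */
              INTO(&rec)
              LENGTH(&len)
              RIDFLD(key)
              KEYLENGTH(8)
              SYSID("FOR1")
              RESP(resp);

    if (resp != DFHRESP(NORMAL)) {
        /* handle NOTFND, SYSIDERR and similar conditions here */
    }
}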

CICS Intercommunication also provides Asynchronous Processing (AP), whereby a transaction can autonomously initiate a transaction in a remote system, replete with relevant data, enabling the secondary transaction to perform a wide variety of functions independently of the initiator and freeing up resources in the local processor. Additionally, Distributed Program Link (DPL) allows a program in one CICS system to LINK to a program in a different CICS system, spreading the processing of a single transaction among multiple CICS systems, possibly in multiple processors or locations, with full integrity. Distributed Transaction Processing (DTP) allows programs in multiple CICS systems to communicate directly via a send/receive interface, providing the most flexible, tailorable and tunable intercommunication mechanism, albeit one requiring programming effort.
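
DPL and asynchronous processing are easy to picture in application terms. The sketch below, again in CICS C, shows a DPL request via LINK and an asynchronous START shipped to a remote region; the program name ORDRPROC, transaction ID RPT1, SYSID AOR2 and the data layouts are hypothetical.

/* DPL and asynchronous-processing sketch. Names and layouts are          */
/* hypothetical; argument conventions depend on the CICS translator.      */

struct order_area {
    char order_no[10];
    char status[2];
};

void distribute_work(void)
{
    struct order_area commarea;
    char  report_parms[20];
    long  resp = 0;

    /* Distributed Program Link: run ORDRPROC in region AOR2; control     */
    /* returns here with the updated COMMAREA.                            */
    EXEC CICS LINK PROGRAM("ORDRPROC")
              COMMAREA(&commarea)
              LENGTH(sizeof(commarea))
              SYSID("AOR2")
              RESP(resp);

    /* Asynchronous processing: start transaction RPT1 in AOR2 with its   */
    /* own data; this task does not wait for it to complete.              */
    EXEC CICS START TRANSID("RPT1")
              FROM(report_parms)
              LENGTH(sizeof(report_parms))
              SYSID("AOR2")
              RESP(resp);
}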

By distributing the processing and data, performance becomes a distributed function. While this may add some complexity, it also provides the flexibility to match the right platform to the right process based on transaction construction and requirements. Tuning becomes more reactive, adding capacity becomes more incremental and platforms can be more effectively configured to the workload they will handle.

Virtual Storage Tuning

While CICS Intercommunication is the most powerful and versatile tuning facility regarding virtual and main storage, the time-honored discipline of configuring and tuning the paging subsystem can’t be ignored. Paging data set allocation and sizing, placement, implementation of dataspaces and swapping, and monitoring can improve paging subsystem performance. Tuning tips include:

  • Do diligent investigation and estimation when calculating page data set sizes
  • Allocate only one page data set per device
  • Over-specify space for all page data sets
  • If systems encounter auxiliary storage shortages, either over-allocate local page space or use IEFUSI Step Initiation Exit to limit address and data space sizes
  • Use Parallel Access Volume devices for page data sets
  • Use multiple local page data sets on separate devices to spread paging and improve throughput
  • Spread page data sets across different channels and control units
  • Dedicate channel paths, control units and devices to paging
  • Dedicate devices to page data sets
  • In a multiple-system environment, dedicate devices to individual systems

Additional paging tuning information can be found in “MVS Initialization and Tuning” and “Optimizing AIX Memory Performance.”

Main Storage Tuning

In addition to the previous steps, CICS offers many tuning parameters that can improve main storage efficiency (a sketch for surveying program usage follows the list):

  • Move CICS and/or user modules (programs) into the MVS Link Pack Area (LPA) and/or the MVS Extended Link Pack Area (ELPA). This allows multiple CICS systems to access the same modules instead of requiring copies of these modules in each separate address space.
  • Just as with LPA/ELPA, put programs used by multiple copies of CICS above the 16MB line so they can be shared between CICS systems
  • Unaligned maps (screen definitions) are packed more tightly in main storage than aligned maps, requiring less storage
  • Resident programs stay in storage even when unused, while nonresident or transient programs can be purged when unused. Nonresident/transient programs require less main storage, but entail more processor overhead because of reloading.
  • Transaction Isolation requires more main storage than normal transactions do, so minimizing or eliminating Transaction Isolation reduces main storage requirements
  • Reduce storage over-allocation for caching bufferpool in TXSeries
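
One way to decide which programs deserve residency, LPA/ELPA placement or above-the-line definition is to look at how heavily each is used. The sketch below browses installed programs with the EXEC CICS INQUIRE PROGRAM system programming commands and flags heavy hitters; it is a rough illustration only, the 1,000-use threshold is an arbitrary assumption, and the commands require system programming authority.

/* Survey installed programs for residency/LPA candidates. Illustrative   */
/* only; the threshold is arbitrary.                                      */

void survey_programs(void)
{
    char name[8];
    long usecount = 0;
    long length   = 0;
    long resp     = 0;

    EXEC CICS INQUIRE PROGRAM START;

    for (;;) {
        EXEC CICS INQUIRE PROGRAM(name) NEXT
                  USECOUNT(&usecount)
                  LENGTH(&length)
                  RESP(resp);
        if (resp != DFHRESP(NORMAL))
            break;                 /* END condition: browse is complete */

        if (usecount > 1000) {
            /* Heavily used program: candidate for RESIDENT(YES) or, if   */
            /* shared by several regions, for the LPA/ELPA.               */
        }
    }

    EXEC CICS INQUIRE PROGRAM END;
}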

Too Little, Too Bad

Processor speed has little meaning if a computer doesn’t have sufficient memory to feed a CPU’s appetite. Data is the fuel for the engine, and both real and virtual storage constitute the gas tank that feeds the motor. If the fuel line is too constricted, or if tank capacity is too small, the power plant can’t perform at maximum capacity and efficiency.

Performance tuning isn’t limited to optimizing any single IT component; it’s about honing the performance of the I/O subsystem, processor utilization, real storage, virtual storage and application logic. All the pieces need to mesh to provide the overall deliverable an enterprise needs.