
The Modern Mainframe: Then and Now


With the recent announcement of the latest evolution of the mainframe—the zEnterprise EC12 (zEC12)—it seems fitting to look back at the platform and where it’s been. The history shows that in a sense the modern mainframe has come full circle.

Most consider the mainframe’s birth to coincide with the April 7, 1964, announcement of the IBM System/360 line of computers. Prior to that, computers had offered limited flexibility in terms of the computing problems they could address. Furthermore, these previous computers were incompatible with each other from both a hardware and software perspective.

The Early Years

The early days of commercial computing had no single concept of “The Computer.” Instead, there was the data preparation processor, the application processor, the print processor and so on. And more often than not, each of these functions was performed on a unique and incompatible physical machine designed specifically for one purpose at a price point commensurate with the value of that function.

Application processors, for example, were designed for either numerically intensive or data-intensive work. No networks or physical interconnects existed between the various functional processors. The primary means of transferring data between them was magnetic tape and punched cards.

The System/360 changed all that by introducing the notion that a single, general-purpose computer could concurrently satisfy the needs of multiple disparate workloads. Furthermore, through software adherence to a common hardware architecture, applications and machines could be made compatible, interoperable and scalable. Thus was born the holistic view of “The Computer” in which even early application users regularly referred to the collective of data processing equipment housed in the “machine room” as “The Computer.”

Shattering the Holistic View of the Computer

This view prevailed for more than a decade after the mainframe’s introduction. It was reinforced by the ubiquitous deployment and use of online systems that occurred throughout the 1970s and early 1980s. The average user sitting at an interactive mainframe terminal rightfully had little or no awareness of the hardware and software elements that enabled a particular application. To that user, it was simply “The Computer,” as in: “The computer messed up!” or “The computer won’t let me do that!”

But this simple view quickly morphed into a much more complex network of interconnected and increasingly visible components. For all kinds of reasons—cost and application-owner control at the forefront—IT organizations began to distribute mainframe-hosted functions to non-mainframe platforms. Minicomputers were deployed as departmental and remote function/data servers. With the advent of PC technology, these distributed servers became pervasive and arguably unmanageable. A typical application’s transactions and data passed through multiple disparate hardware and software technologies, often resulting in one or more single points of failure.

Users were now forced to alter their view from one of a single computing entity to one of multiple interconnected components, any one of which could disrupt application availability. And when service disruptions occurred, the proverbial Help Desk might not know which component had failed or how long the outage would last. The bottom line is that the simple view of the computer as a single, application-hosting entity had been broken—if not completely shattered.

