Compute Trends and the Evolution of the IBM Mainframe
The past 40-plus years of commercial computing have been a story of three waves (acts) of increasing process integration, coupled with increasing scale of computing.
By Phil Weintraub | 07/18/2019
For many of the world’s largest enterprises, modern business was built around process automation using the mainframe. Introduced by IBM in the mid-1960s, these systems became the undisputed king of the data center. Human interfaces were limited to specially trained programmers. The mainframe enabled large-scale automation of online and back-end processing, delivering great efficiencies to large enterprises and government agencies around the world.
Act I: The PC Revolution
The personal business computer revolution arrived in the early 1980s, and many experts predicted the demise of the mainframe. In fact, other mainframe manufacturers—known as Plug Compatible Manufacturers (PCMs)—exited the mainframe business and shifted to the production of smaller distributed systems. The IBM mainframe, however, evolved and thrived for the following reasons:
- IBM shifted from expensive bipolar-based technology to lower-cost CMOS-based technology, keeping the platform price-competitive
- IBM embraced the open-standard TCP/IP networking protocol in place of its proprietary SNA
- Most professionals now had a personal computer workstation at their desk. The PC gave non-programmers an interface to initiate a large volume of transactions, driving workload to the data center mainframe. Distributed systems did not have the scale, resiliency, security and reliability that many mission-critical applications require.
Act II: The Internet Opens for Business
By the early 1990s the internet had arrived. This began a democratic revolution, bringing the power of the computer to consumers around the world. Today’s tech giants like Amazon, Facebook and Google are just a few of the startups that were born on the web. Brick-and-mortar businesses learned they needed a presence on the web if they were to survive. As large enterprises created web portals for their clients, the number of computer users initiating transactions grew exponentially. Every citizen with a personal computer could now transact business any time of day! No longer were business hours limited to 9 a.m.-5 p.m., Monday through Friday. For many of the world’s large enterprises, their mainframe was the only platform that could scale to handle the increase in transaction volumes.
Let’s review the lessons from Acts I and II to understand why the evolving mainframe continued to thrive:
- Continued price improvement to maintain competitiveness. For the mainframe this means: (1) lower cost of hardware and software per unit of work; (2) significantly less floor space, cooling and power; (3) fewer operations staff than distributed systems require
- Scale matters. Mainframes are shipped with many CPUs under the covers that can be enabled quickly. They have massive amounts of memory and I/O bandwidth.
- 99.999% availability, by architectural design
- Embracing open standards such as Java
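The 99.999% availability figure above translates into a concrete downtime budget, which is easy to compute. A quick sketch (plain Python, nothing platform-specific) shows how little downtime "five nines" actually permits:

```python
# Downtime budget implied by an availability target.
# "Five nines" (99.999%) allows only about five minutes
# of unplanned downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of allowed downtime per year for a given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability)

for target in (0.999, 0.9999, 0.99999):
    print(f"{target:.3%} availability -> "
          f"{downtime_minutes_per_year(target):.2f} min/year of downtime")
```

By comparison, three nines (99.9%) allows roughly 8.7 hours of downtime a year, which is why the extra nines matter for mission-critical workloads.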
Act III: The Rise of the Cloud
Cloud has been hailed as the future. Many people believe that all business workloads will move to the cloud. Will it? Should it?
There are two aspects of cloud:
- The Topology of Application Logic and Data
Lessons to Learn
This discussion leaves us with a few lessons to keep in mind:
- Beware marketing hyperbole. Plenty of companies want to change your IT more than necessary so they can make more money. Not all workloads need to move to the cloud, or be rewritten for it, if they do what you need them to do where they are, at a reasonable price.
- The mainframe is not going away. It continues to evolve. It is open. It scales. It runs Linux. Many cloud services run on the mainframe today, for the same reason that mission-critical applications are on the platform in the first place: scale, security, resiliency and reliability.
- Develop a plan to integrate your mainframe assets into your hybrid cloud. This starts with understanding your application and data topology. Cloud-enable applications as appropriate by refactoring to expose select mainframe assets as microservices through secure APIs.
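To make the last lesson concrete, here is a minimal sketch of the consuming side: a cloud application calling a REST API that fronts a mainframe asset (for example, a transaction exposed as a microservice through an API gateway). The host, path and token below are hypothetical placeholders, not a real endpoint; only the Python standard library is used.

```python
import json
import urllib.request

# Hypothetical gateway host and credential -- placeholders only.
API_BASE = "https://mainframe-gw.example.com"
TOKEN = "REPLACE_WITH_REAL_TOKEN"

def build_balance_request(account_id: str) -> urllib.request.Request:
    """Build an authenticated request for a (hypothetical) balance API
    that fronts a mainframe-hosted service."""
    return urllib.request.Request(
        f"{API_BASE}/accounts/{account_id}/balance",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/json",
        },
    )

def get_account_balance(account_id: str) -> dict:
    """Call the endpoint and decode the JSON reply."""
    with urllib.request.urlopen(build_balance_request(account_id),
                                timeout=10) as resp:
        return json.load(resp)
```

The point of the pattern is that the cloud application never touches the mainframe directly; it only sees a secured HTTP API, so the mainframe asset can evolve, or move, behind that contract.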
Phil Weintraub is the IBM Z vice president of Client IT Transformation in Silicon Valley, California.