Client Server Computing Helped Extend Enterprise Models


Client server (CS) computing is now an old-fashioned term that has largely been replaced by “distributed computing.” The term may be outdated, but CS architectures are still important today. In the 1980s, CS computing was made possible by new networking implementations, developments in computer hardware and supporting software. CS middleware was developed to make the architectural models practical.

While middleware had its roots in centralized computing systems, most books on the subject start with distributed systems. Initially, CS middleware was developed as a means to connect a large network of distributed computers by providing file transfer, remote printing, terminal flexibility and remote file access. As networking became a greater focus and offered increasing flexibility and functionality, the stage was set for other services that would support useful interactions between distributed and networked computers. I’ll now discuss four important CS building blocks.

Remote Procedure Calls

The remote procedure call (RPC) was an early CS building block. It’s simply an extension of the traditional procedure call (PC), in which one program calls another and the called program returns control after it completes its work. Whereas the PC link or transfer assumes the parties are on the same system, the RPC does not. It converts parameters, sends them over the network, and converts the results back. Figure 1 compares PC to RPC.
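To make the idea concrete, here is a minimal sketch using Java RMI, one well-known RPC mechanism. The Calculator service is hypothetical; the RMI runtime performs the parameter conversion and network transfer described above.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // The remote interface: the client calls this as if it were local.
    interface Calculator extends Remote {
        int add(int a, int b) throws RemoteException;
    }

    // Server-side implementation; the RMI runtime marshals parameters
    // and results across the network on the caller's behalf.
    class CalculatorImpl extends UnicastRemoteObject implements Calculator {
        CalculatorImpl() throws RemoteException { super(); }
        public int add(int a, int b) { return a + b; }
    }

    public class RpcServer {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("Calculator", new CalculatorImpl());
            // A client on another machine would do:
            //   Calculator c = (Calculator) LocateRegistry
            //       .getRegistry("serverhost", 1099).lookup("Calculator");
            //   int sum = c.add(2, 3);  // looks local, executes remotely
            System.out.println("Calculator service registered");
        }
    }

The client-side call looks like an ordinary local invocation; only the interface declaration (extending Remote, throwing RemoteException) hints that the work happens elsewhere.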

Remote Database Access

Another early building block was remote database access, which involves interacting with a database on a physical machine other than the one running the application. This interaction is implemented in several ways. For example, SQL text can be passed from client to server and then executed, all under programmer control. In a related approach, the database schema indicates where a table resides and the database management system ensures the database commands are executed on the proper machine. The IBM DB2 product uses the CATALOG command to store remote database location and access information.
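As an illustration of the first approach, here is a minimal Java sketch that passes SQL text from a client to a remote database server over JDBC. The hostname, port, database name, credentials and table are placeholders, and the URL assumes a Db2 server with its type-4 JDBC driver on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RemoteQuery {
        public static void main(String[] args) throws Exception {
            // Placeholder host, port, database and credentials.
            String url = "jdbc:db2://dbhost.example.com:50000/SAMPLE";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 // The SQL text travels to the server; only result rows come back.
                 ResultSet rs = stmt.executeQuery("SELECT empno, lastname FROM employee")) {
                while (rs.next()) {
                    System.out.println(rs.getString("empno") + " " + rs.getString("lastname"));
                }
            }
        }
    }

Note the key property of the architecture: the query executes on the server, so only the answer, not the whole table or file, crosses the network.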

Distributed Transaction Processing

Distributed transaction processing (DTP) was developed to meet the needs of CS applications that took on challenging processing responsibilities. DTP focuses on ensuring the successful and reliable completion of transactions that involve distributed systems. The difficulty becomes clear when you recall that a transaction is typically a unit of work that results in an update; in practice, transactions often involve updates from multiple applications and databases. How do you ensure that all of the updates that must happen together are actually done?

The solution lies in protocols such as SNA LU6.2 and X/Open DTP’s XA protocol. These mechanisms help ensure updates are handled properly. The protocols are available, but a skilled programmer is needed to make software like the X/Open DTP model work, with its subtle interaction of three interrelated components: the application program, the transaction manager and the resource managers.
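The heart of these protocols is two-phase commit. The sketch below is an illustrative Java rendering of that idea, not the actual X/Open XA API: the transaction manager first asks every resource manager to prepare, then commits everywhere only if all of them voted yes.

    import java.util.List;

    // Illustrative stand-in for an XA-style resource manager;
    // the interface and method names are hypothetical.
    interface ResourceManager {
        boolean prepare();  // phase 1: vote yes/no on committing
        void commit();      // phase 2: make the update durable
        void rollback();    // undo if any participant voted no
    }

    public class TwoPhaseCommit {
        // The transaction manager drives both phases across all participants.
        static void execute(List<ResourceManager> participants) {
            boolean allPrepared = true;
            for (ResourceManager rm : participants) {
                if (!rm.prepare()) { allPrepared = false; break; }
            }
            if (allPrepared) {
                for (ResourceManager rm : participants) rm.commit();
            } else {
                for (ResourceManager rm : participants) rm.rollback();
            }
        }
    }

Real implementations must also log decisions and recover from failures between the two phases, which is where much of the programming skill comes in.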

Message Queuing

Unlike the program-to-program building blocks above, message queuing connects a program to a message queue. It can be used in multiple ways in a distributed application. A simple use is as a place to store data that provides context to the different programs that make up a logical transaction. This is depicted in Figure 2. Another use of a message queue is to trigger the next transaction or application in a workflow.
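As an illustration of the context-store use, here is a minimal Java sketch using the JMS API, which many queuing products (including IBM MQ) implement. The JNDI names and message content are hypothetical, and a configured JMS provider is assumed.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class ContextSender {
        public static void main(String[] args) throws Exception {
            // Look up the provider-supplied factory and queue from JNDI;
            // the JNDI names here are placeholders for your configuration.
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/OrderContext");

            Connection conn = factory.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                // The message holds context that a later program in the same
                // logical transaction will read from the queue.
                TextMessage msg = session.createTextMessage("order=1234;step=payment-approved");
                producer.send(msg);
            } finally {
                conn.close();
            }
        }
    }

Because the sender and the eventual receiver interact only through the queue, neither needs to be running, or even on the same platform, when the other does its work.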

The message queue and its multi-platform implementations make it easier for programs running on different OSs to operate as one in support of a business application. The usefulness of such messaging middleware explains its success in the marketplace.

CS Architectures

The CS model emerged because of the limitations of the file-sharing architectures of the time. One innovation of this approach was the introduction of a database server to replace the file server. Using a relational database management system, the server could answer user queries directly. The CS architecture reduced network traffic by returning query responses rather than transferring entire files, and it improved multi-user updating through a graphical interface to a shared database.

The two-tier model was an early CS architecture in which the server tier featured all-in-one functionality that was low cost but also subject to availability limits. The three-tier architecture offered more scalability and distribution of resources. Both are shown in Figure 3.

Two- and three-tier architectures have continued to develop and change. Today, the N-tier CS architecture has emerged, in which the number of tiers varies with scaling, performance and other software considerations. N-tier models also allow for heightened security through the placement of firewalls between tiers. In addition, performance needs have produced further architectural elements, such as load balancers, which offer a variety of ways to distribute work in high-transaction environments. N-tier and server cluster examples are shown in Figure 4.
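As a simple illustration of one such distribution strategy, here is a minimal round-robin balancer sketch in Java. The server names are hypothetical, and real load balancers add health checks, weighting and session affinity on top of this basic rotation.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative round-robin balancer: each request is handed to the
    // next server in the pool, spreading work evenly across the tier.
    public class RoundRobinBalancer {
        private final List<String> servers;
        private final AtomicInteger next = new AtomicInteger(0);

        public RoundRobinBalancer(List<String> servers) {
            this.servers = servers;
        }

        public String pick() {
            // Math.floorMod keeps the index valid even if the counter wraps.
            int i = Math.floorMod(next.getAndIncrement(), servers.size());
            return servers.get(i);
        }

        public static void main(String[] args) {
            RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("app1.example.com", "app2.example.com", "app3.example.com"));
            for (int n = 0; n < 6; n++) {
                System.out.println("request " + n + " -> " + lb.pick());
            }
        }
    }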

Evolution of Enterprise Systems

As CICS and IMS continued to expand and fill out their enterprise models, they also provided support for client-server environments by embracing many application programming interfaces and evolving programming standards.

Since 1969, CICS has been continually developed to handle the latest industry technologies, starting with support for early 3270 display devices, virtual storage systems, and database and recovery/restart support in the 1970s. In the 1980s came client server support and support on multiple platforms, including OS/390, VSE, AIX, AS/400, OS/2 and NT. Next came support for the Internet and new Web browsers, composite CICS and WebSphere transactions, and Java and EJBs. Today, CICS has a component called TXSeries for Multiplatforms, used for integrating data and applications between distributed solutions and enterprise systems.

Like CICS, IMS has been growing and evolving for more than four decades. Today, the focus of IMS middleware support is integration technologies. The IMS Enterprise Suite is a set of components that supports open integration technologies to enable new application development and extend access to IMS transactions and data. Technologies embraced include SOAP, Java and C APIs, and the Java Message Service API.

What Happened Next?

CS computing evolved into more flexible distributed computing, with its broader array of architectures and constructs. Meanwhile, middleware grew beyond its initial role as a programmer-productivity tool. Cloud computing emerged as a solution to the cost and complexity challenges of distributed computing. It also promoted a new paradigm with undeniably attractive attributes. This will be the subject of the last article in this series.

Joseph Gulla is the general manager and IT leader of Alazar Press, a publisher of award-winning children’s books. Joe is a frequent contributor to IBM Destination z (the community where all things mainframe converge) and writes weekly for the IT Trendz blog where he explores a wide range of topics that interconnect with IBM Z.

