Application Testing: Giving Users What They Need
Having tried my hand at consulting, systems programming and application programming, I've found that the most difficult and time-consuming testing is in application development. Depending on application specifics, testing can entail significantly more effort than coding. The best application testing I've seen involves:
- Creation of an executable program and basic code validation,
- Verification of basic functions, error/exception handling and system integration and
- Extensive in-depth user testing.
As testing progresses, complexity and intensity increase dramatically.
I've seen many situations where testing was minimized in an attempt to reduce effort and speed the process. Almost always a price had to be paid; it's a whole lot less disruptive—and less costly—to identify and fix a defect during testing than in production. It's also far more effective to resolve a problem in an early phase of testing than a later one because, almost invariably, the number of people involved increases as testing progresses. While management may pressure programmers to push development as rapidly as possible, it's more productive to let the staff perform each phase carefully and thoroughly.
What follows is a blueprint of the best application testing approach I’ve seen.
The first step in establishing an effective, adaptable test environment is to install hardware and software, load data and programs, and establish procedures and processes that mirror production, albeit somewhat downsized. It’s usually impractical to duplicate production CPU capacity and disk space, but a key design point is that any production problem should be reproducible in test. This may require test system tailoring such as swapping full production application images in and out, but a test system needs to reflect production.
There’s one problem category—volume-related—that may not be reproducible in test, because the test system does not have the capacity or means of driving a peak production workload. But wherever possible, test should be symmetric with production. I’ve worked with systems where that wasn’t the case, and it severely restricted usability. Problems never detected in test quickly erupted in production because they couldn’t occur in test. The data or configuration that created them didn’t exist there.
A test system should have availability and response time/performance objectives very close to production, because it’s a production system for application development. A robust, available test system enables productivity and traps errors that would otherwise impact production availability.
Application Testing Tools
Testing tools and code analyzers greatly enhance programmer productivity and code quality. My primary experience is COBOL, although I've also dabbled in C#, Java, PL/I and assembler. Tools like Xpediter, InterTest, Veracode, etc., dramatically improve debugging and analysis of program flow. These tools display source code and storage area contents on a screen or window, and can execute programs instruction by instruction, allowing a programmer to watch instruction execution and data values. Compared to inserting DISPLAY statements in code, recompiling, reviewing output for specific storage values, then removing the statements and recompiling again, the productivity gains are immense.
Even better, debuggers allow dynamic change of storage areas (in COBOL, Working Storage, Linkage Section, File Section, etc.), making it possible to test different scenarios on the fly. This facility is manual, usually to change selected fields rather than full records or COPY books, so test files populated with records that generate different output, exceptions, error conditions, etc., are still needed. Functions, such as skipping instructions or branching to selected statements, further enhance testing.
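Those test files can be as simple as a handful of records, each crafted to drive a different code path. The following Python sketch illustrates the idea; the record layout, field names and edit rules are invented for illustration, not taken from any particular application:

```python
# Sketch: build a small test file whose records each drive a different
# code path (mainline, error and exception handling). The record layout
# and edit rules are hypothetical.
import csv
import io

def build_test_file():
    """One record per path: valid, bad customer number,
    out-of-range amount, and unparseable (garbage) data."""
    rows = [
        {"cust_no": "100234", "amount": "125.50"},   # mainline path
        {"cust_no": "ABC999", "amount": "10.00"},    # invalid customer number
        {"cust_no": "100777", "amount": "-5.00"},    # fails range check
        {"cust_no": "100888", "amount": "garbage"},  # unparseable data
    ]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["cust_no", "amount"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def classify(record):
    """Mimics the edits a program under test would perform."""
    if not record["cust_no"].isdigit():
        return "bad-customer"
    try:
        amount = float(record["amount"])
    except ValueError:
        return "garbage-data"
    if amount <= 0:
        return "out-of-range"
    return "ok"

def run(test_file_text):
    """Process every record and report which path each one took."""
    reader = csv.DictReader(io.StringIO(test_file_text))
    return [classify(r) for r in reader]

print(run(build_test_file()))
```

Each outcome appears exactly once in the result, confirming that every path in the routine is exercised by the file.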
While compiler output is mostly syntax checking, the warnings and informational messages—often ignored because they don't cause a failed compile—shouldn't be taken lightly. They may identify potential logic errors and other information useful to a programmer's testing.
Phase I Testing
Phase I testing is usually performed by the programmer writing or generating the code, and includes resolution of syntax errors followed by simple tests to validate basic logic. Compiler reports identify syntax errors and the line numbers where they appear; these must be corrected before an executable program can be created and testing can begin.
In the case of batch jobs, this can include creation and editing of Job Control Language (JCL) that allocates files, invokes the program, executes job steps based on condition codes, etc. In the case of online programs running under software such as CICS Transaction Server, the JCL function is provided by the software. Preliminary testing is often driven from production file extracts, especially if large files are the source. The data is often static, but that's OK, because at this point the programmer is only verifying mainline logic.
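For a batch job, that JCL might look something like the sketch below. The job, program and dataset names are hypothetical; the COND parameter on the second step illustrates conditional execution, bypassing the print step unless the first step ends with a return code below 8:

```jcl
//TESTJOB  JOB (ACCT),'PHASE I TEST',CLASS=A,MSGCLASS=X
//* Step 1: run the new program against a production extract
//STEP01   EXEC PGM=NEWPROG
//INFILE   DD DSN=TEST.PROD.EXTRACT,DISP=SHR
//OUTFILE  DD DSN=TEST.NEWPROG.OUT,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,1))
//SYSOUT   DD SYSOUT=*
//* Step 2: print the output, but only if STEP01's return code was under 8
//STEP02   EXEC PGM=IDCAMS,COND=(8,LE,STEP01)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  PRINT INDATASET(TEST.NEWPROG.OUT)
/*
```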
Phase II Testing
Phase II testing involves more than the assigned programmer(s), bringing in systems analysts and senior programmers with in-depth business process or functional knowledge. Testing becomes much more detailed and in-depth, often involving standard test cases developed for the business process. Error situations are extensively tested; e.g., if one or more files or databases are closed, is the situation handled properly in the program? What about all the others? If an invalid customer number is entered, is it detected and handled correctly? Are range-checking algorithms properly invoked? Is garbage data in a record detected and handled?
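Those error-path checks lend themselves to table-driven test cases: each case pairs an input with the handling the business process expects. A minimal sketch in Python, with the edit rules, field formats and case list invented for illustration:

```python
# Sketch of Phase II style test cases: each row pairs an input with the
# disposition the business process expects. Rules and names are hypothetical.

def edit_customer_number(cust_no, known_customers):
    """Return a disposition the way an edit routine might:
    accepted, or a specific rejection reason."""
    if not (cust_no.isdigit() and len(cust_no) == 6):
        return "reject: malformed number"
    if cust_no not in known_customers:
        return "reject: unknown customer"
    return "accepted"

KNOWN = {"100234", "100777"}

TEST_CASES = [
    ("valid customer",     "100234", "accepted"),
    ("unknown customer",   "999999", "reject: unknown customer"),
    ("alphabetic garbage", "ABC123", "reject: malformed number"),
    ("too short",          "1234",   "reject: malformed number"),
]

failures = [desc for desc, given, expected in TEST_CASES
            if edit_customer_number(given, KNOWN) != expected]
print("failures:", failures)   # an empty list means every case passed
```

Keeping the cases in a table makes it cheap for the senior staff to add a new variation or exception as soon as one is identified.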
Testing then expands from the freshly written or changed program(s) to all relevant modules through systems integration testing. Most applications consist of tens or hundreds of programs, many of which link or pass control and data between one another, creating and modifying data according to specifics that vary with the program. Some send out screens, others perform date calculations, others calculate installment payments, etc. Do new program modules work with existing programs? Predefined scripts or guidance from senior programmers and analysts drives this effort, and those staff often participate in validation.
Regression Testing/Quality Assurance
Quality assurance testing is primarily an end-user task—although IT remains responsible for problem resolution—because users have business knowledge of the variations and exceptions, which business components have issues, and where recurring errors or process idiosyncrasies occur. Essentially, quality assurance users employ their application knowledge to do everything they can to break the code.
Regression testing is used less frequently. It usually involves running automated scripts that validate the standard test cases; it's an automated way of shaking out common problems and identifying performance issues.
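A regression script can be little more than a loop that replays the standard cases and compares current output against saved, known-good output. A minimal sketch in Python; the routine under test and the baseline values are assumptions for illustration:

```python
# Sketch of a regression run: replay standard inputs through the current
# code and diff the results against a saved baseline. The function under
# test and the baseline contents are hypothetical.

def installment_payment(principal, periods):
    """Stand-in for a routine under regression test:
    equal installments, rounded to cents."""
    return round(principal / periods, 2)

# Known-good results captured from the last release.
BASELINE = {
    (1200.00, 12): 100.00,
    (999.99, 3): 333.33,
    (500.00, 7): 71.43,
}

def regression_run(baseline):
    """Return only the cases whose current output differs from baseline."""
    return {case: installment_payment(*case)
            for case, expected in baseline.items()
            if installment_payment(*case) != expected}

print("regressions:", regression_run(BASELINE))
```

An empty result means no behavior drifted; any entries pinpoint exactly which standard cases the latest change broke.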
Regression testing and quality assurance will be discussed further in another article.
Keeping errors confined to the test system (OK, a few always get through) keeps users happy, keeps management happy and keeps the business happy. Comprehensive testing gives users quality tools to do a better job.
More in the Series
This series is based on my more than 35 years of work in mainframe IT, reviewing mistakes I’ve seen or participated in and helping others to avoid them. Find previous stories in the series on my author bio page.