MAINFRAME > TRENDS > MODERNIZATION

DevOps Tricks: Integrate UrbanCode Deploy With zD&T for Software Deployments

DevOps

DevOps, a portmanteau of development and operations, is an approach to continuous software delivery through automation and collaboration. UrbanCode Deploy is an IBM product that simplifies and orchestrates software deployment to various environments (e.g., system test and pre-production) as the software delivery cycle demands. IBM z Systems Development and Test Environment (zD&T) simplifies infrastructure requirements for mainframe development and testing by providing a mainframe-like environment that runs on Linux on x86 architecture.

Typically, a mainframe shop needs multiple zD&T instances to support the development and test needs of different application teams. Such instances are usually cloned from a master, or golden copy, image and provisioned manually on a physical server or VM, or provisioned in a cloud environment. For UrbanCode Deploy to deploy successfully to a zD&T machine, the server must be able to communicate seamlessly with the UrbanCode Deploy agent running in each zD&T instance, and it must be able to uniquely identify each agent so that software packages reach the right instance. This can be very tricky when multiple instances are cloned from the same golden copy.

In this article, we will discuss how to handle this situation.

Common Problems

Issues that may crop up when the base Linux hosting zD&T is rebooted include:

  • Manual intervention may be needed to start zD&T and to respond to outstanding z/OS console messages when zD&T is started and z/OS is IPLed as part of Linux startup
  • iptables rules may get flushed and have to be recreated; otherwise, z/OS will have no network connectivity

These can be fixed by using automation scripts as part of zD&T startup.

There are certain issues that may surface when a zD&T instance is deployed from golden copy:

  • The Unique Identification Manager (UIM) server is a component of the zD&T license server that maintains a unique identification for each zD&T instance when multiple instances pick up licenses from the same license server. Each zD&T instance has a client component that synchronizes with the UIM server. The UIM server and client generate and maintain in a database a unique serial number for each zD&T instance, derived from the Linux machine serial number. The UIM client database is typically /usr/z1090/uim/uimclient.db in each zD&T instance. When cloned from a golden copy, zD&T can fail to start because uimclient.db already contains a serial number, which may be a duplicate.
  • The UrbanCode Deploy agent names of the golden copy and the deployed instance may be the same; as a result, a conflict arises on the server in uniquely identifying the different agents.

The above problems can be fixed in two ways: by adding a command to the zstartup.sh script that deletes the file uimclient.db every time z/OS is started (zD&T builds a new uimclient.db file when it starts), and by configuring the UrbanCode Deploy agent in the golden copy so that it identifies itself uniquely to the server each time it is cloned.

Automation of zD&T Startup and z/OS IPL

Automation scripts can be used to avoid manual intervention for responding to outstanding WTOR messages and to make sure the iptables rules are in place. Configure iptables (NAT) in Linux with port forwarding so that z/OS on zD&T is accessible from outside the LAN; a rule can also be added to establish the connection between the UrbanCode Deploy agent and server. The UIM issue can be addressed by removing uimclient.db before calling the IBM-supplied runzpdt script that starts zD&T and IPLs z/OS. Below is a sample script that performs the above-mentioned tasks:

#!/bin/bash
date
rm /z/ibmsys1/nohup.out
service firewalld stop
# Set up the NAT rules that forward traffic to z/OS
/usr/bin/sh /z/automation/ip_tables.sh
service iptables restart
# Capture this host's IP address
ping -c 1 `hostname` | grep PING | awk '{print $3}' | cut -d "(" -f 2 | cut -d ")" -f 1 > /tmp/i.txt
iptables -t nat -A PREROUTING -s 9.***.**.** -i eth0 -d `cat /tmp/i.txt` -j DNAT --to-destination 10.1.1.2
/usr/bin/sh /z/automation/kerparm.sh
# Extract the major RHEL version number
v=`cat /etc/redhat-release | sed 's/[^0-9]//g' | cut -c 1`
if [[ $v == '7' ]]; then
service firewalld stop
echo "1" > /proc/sys/net/ipv4/ip_forward
fi
# Remove the UIM client database so zD&T rebuilds it with a fresh serial number
rm -f /usr/z1090/uim/uimclient.db
# Start zD&T and IPL z/OS as user ibmsys1
su -c "cd; /z/automation/runzpdt" -l ibmsys1 > /z/ibmsys1/nohup.out
# Respond to outstanding WTOR messages
/usr/bin/sh /z/automation/ar_sysplex.sh
/usr/bin/sh /z/automation/ar_vary.sh
/usr/bin/sh /z/automation/ar.sh
cat /z/ibmsys1/nohup.out > /tmp/testz.txt
date
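The version check in the script above is meant to derive the major RHEL release number from /etc/redhat-release; for the first character of the result to be the version digit, the sed pattern must strip every non-digit character. A minimal standalone sketch of that extraction, using a sample release string (assumed here for illustration) in place of the real file:

```shell
# Sample /etc/redhat-release contents (assumed for illustration)
release="Red Hat Enterprise Linux Server release 7.4 (Maipo)"
# Drop every non-digit character, then keep the first digit: the major version
v=$(echo "$release" | sed 's/[^0-9]//g' | cut -c 1)
echo "$v"
```

On a RHEL 7 system this prints 7, which is what triggers the firewalld stop and the IP-forwarding setting in the startup script.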

In our setup, the automation scripts are placed in /z/automation and are invoked from zstartup.sh.

ip_tables.sh script

For z/OS on zD&T to be accessible from outside the LAN, certain iptables rules are required; ip_tables.sh sets them up:

       #!/bin/bash
       iptables -t nat -F
       iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 623 -j DNAT --to 10.1.1.2:623
       iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 7715 -j DNAT --to 10.1.1.2:7715
       iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 7918 -j DNAT --to 10.1.1.2:7918
       iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 43678 -j DNAT --to 10.1.1.2:43678
       iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 43688 -j DNAT --to 10.1.1.2:43688
       iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 121 -j DNAT --to 10.1.1.2:21
       service iptables save 

During IPL, z/OS may issue messages such as the initialization of the system into the sysplex, the vary console reply and the cold-start integrity lock. Responses to such WTORs are also handled by scripts, included here as ar_sysplex.sh, ar_vary.sh and ar.sh, which search for those messages in the nohup.out file and respond accordingly.

ar_sysplex.sh script

When z/OS in zD&T is IPLed the first time, it might issue message IXC420D, which requires a reply to initialize the sysplex; the script replies '00,I' to progress the IPL:

#!/bin/bash
while [[ `cat /usr/f` = 1 ]]
do
# IEA549I SYSTEM CONSOLE FUNCTIONS AVAILABLE
grep -i IEA549I /z/ibmsys1/nohup.out >> /dev/null
if [[ $? -eq 0 ]]
then
echo ''
break
else
#IXC420D REPLY I TO INITIALIZE SYSPLEX sysplex-name, OR R TO REINITIALIZE XCF.
grep -i IXC420D /z/ibmsys1/nohup.out >> /dev/null
if [[ $? -eq 0 ]]
then
su -c "oprmsg '00,I'" -l ibmsys1
break
fi
fi
# Wait before checking the log again (polling interval)
sleep 5
done
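The grep-and-reply pattern used by these scripts can be exercised offline against a sample log; the temporary file and the exact message text below are made up for illustration:

```shell
# Write a sample console log containing the sysplex-initialization WTOR (sample text)
log=$(mktemp)
echo "*00 IXC420D REPLY I TO INITIALIZE SYSPLEX PLEX1, OR R TO REINITIALIZE XCF." > "$log"
# Same detection ar_sysplex.sh uses: case-insensitive search for the message ID
if grep -qi IXC420D "$log"; then
    reply="00,I"
fi
echo "$reply"
rm -f "$log"
```

On the real system, the reply would be issued with oprmsg under the zD&T user rather than echoed.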

ar_vary.sh script

If message IEE389I is found in nohup.out, the script issues the vary console activate command 'v cn(*),act'.

#!/bin/bash
while :
do
#IEE712I VARY CN PROCESSING COMPLETE
grep -i IEE712I /z/ibmsys1/nohup.out >> /dev/null
if [[ $? -eq 0 ]]
then
echo ''
break
else
#IEE389I MVS COMMAND PROCESSING AVAILABLE
grep -i IEE389I /z/ibmsys1/nohup.out >> /dev/null
if [[ $? -eq 0 ]]
then
su -c "oprmsg 'v cn(*),act'" -l ibmsys1
break
fi
fi
# Wait before checking the log again (polling interval)
sleep 5
done

ar.sh script

If Linux is brought down abruptly without a proper shutdown of z/OS, the checkpoint data sets might be locked during the next IPL, and operator intervention is needed to progress the IPL. When message HASP454 is found in nohup.out, the script issues the reply '$rmn,Y' so that JES2 bypasses the multi-member integrity lock and z/OS starts normally.

#!/bin/bash
while [[ `cat /usr/f` = 1 ]]
do
# HASP493 JES2 COLD START IS IN PROGRESS - z22 MODE
grep -i HASP493 /z/ibmsys1/nohup.out >> /dev/null
if [[ $? -eq 0 ]]
then
echo ''
break
else
#HASP454 SHOULD JES2 BYPASS THE MULTI-MEMBER INTEGRITY LOCK? ('Y' OR 'N')
grep -i HASP454 /z/ibmsys1/nohup.out >> /dev/null
if [[ $? -eq 0 ]]
then
# Extract the reply ID for the outstanding WTOR
rmn=`cat /z/ibmsys1/nohup.out | grep -i HASP454 | awk '{print $2}' | tr --delete '*'`
su -c "oprmsg '$rmn,Y'" -l ibmsys1
break
fi
fi
# Wait before checking the log again (polling interval)
sleep 5
done
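The reply-number extraction in ar.sh takes the second whitespace-delimited field of the HASP454 line and strips the '*' outstanding-reply flag. A standalone sketch with a sample log line (the exact layout of the line in nohup.out is an assumption here):

```shell
# Hypothetical HASP454 line: system name, flagged reply ID, then the message text
line="S0W1  *02 HASP454 SHOULD JES2 BYPASS THE MULTI-MEMBER INTEGRITY LOCK? ('Y' OR 'N')"
# Same pipeline as ar.sh: take field 2 and strip the '*' outstanding-reply flag
rmn=$(echo "$line" | awk '{print $2}' | tr --delete '*')
# The reply the script would issue via oprmsg
echo "$rmn,Y"
```

If the console log on a given system formats the line differently, the awk field number must be adjusted to match.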

Configure UrbanCode Deploy Agent in the zD&T Golden Copy Before Provisioning Instance

The UrbanCode Deploy agent name for the golden copy and a deployed instance will be the same; as a result, a conflict arises on the server in uniquely identifying the different UCD agents. To overcome this problem, use the machine serial number generated for each zD&T instance, which is unique and comes from the UIM database. This unique machine serial number can be captured programmatically and used as the UrbanCode Deploy agent name.

The following sample REXX exec can be used to extract the machine serial number and update a system symbolic variable that the UCD agent picks up during startup. When the system is IPLed, this REXX exec can be run as a started task; it captures the CPU serial number and updates the system symbolic variable UAGENT.

/* REXX - capture the machine serial number and set symbol UAGENT */
"CONSPROF SOLDISPLAY(NO)"
"CONSOLE ACTIVATE"
	/* Issue D M=CPU and retrieve the solicited response */
	ADDRESS CONSOLE "D M=CPU"
	FC=GETMSG('LINE.','SOL',,,5)
	/* Parse the serial-number portion out of the response line */
	PARSE VAR LINE.4 MISC.1 4 MISC.2 28 PART.1 34
"CONSOLE DEACTIVATE"
"NEWSTACK"
	/* Build and submit a job that sets system symbol UAGENT via IEASYMU2 */
	QUEUE "//CHANGEIN JOB NOTIFY=&SYSUID,REGION=0M"
	QUEUE "//IEASYMU2 EXEC PGM=IEASYMU2,PARM='UAGENT="PART.1"'"
	QUEUE "$$"
	O = OUTTRAP("OUTPUT.",,"CONCAT")
	"SUBMIT * END($$)"
	O = OUTTRAP(OFF)
"DELSTACK"
EXIT

Figure 1 shows the output of console command "D M=CPU", which gives a feel for the machine serial number; a part of it can be extracted and used as the UrbanCode Deploy agent name. The installed.properties file in the agent directory contains the name of the UrbanCode Deploy agent, and we need to replace the name in this file with the value of the UAGENT variable. Because the agent runs as a z/OS UNIX System Services (USS) process, the system symbolic variable must be passed to z/OS USS through an environment variable. This can be done by adding the following to /etc/profile:

UAGENT=$(sysvar UAGENT)
export UAGENT

The next step is a shell script that replaces the name of the UrbanCode Deploy agent in the installed.properties file, with the necessary ASCII-to-EBCDIC conversion before the edit and EBCDIC-to-ASCII conversion after it.

# Convert from ASCII to EBCDIC for editing and save the converted file
iconv -f ISO8859-1 -t IBM-1047 /u/UCD/opt/conf/agent/installed.properties > /u/UCD/opt/conf/agent/installed.properties.ebcdic
# Use the 'sed' editor to search for the pattern "name=" and replace it with
# "name=$UAGENT". The value of $UAGENT is substituted because it is an environment variable.
sed "s/name=.*$/name=$UAGENT/" /u/UCD/opt/conf/agent/installed.properties.ebcdic > /u/UCD/opt/conf/agent/temp
mv /u/UCD/opt/conf/agent/temp /u/UCD/opt/conf/agent/installed.properties.ebcdic
# Convert the EBCDIC file back to ASCII, renaming it to the original name
iconv -f IBM-1047 -t ISO8859-1 /u/UCD/opt/conf/agent/installed.properties.ebcdic > /u/UCD/opt/conf/agent/installed.properties
# Clean up
rm /u/UCD/opt/conf/agent/installed.properties.ebcdic
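The sed substitution can be tried safely on a throwaway copy of the properties file; the property names and values below are invented for illustration:

```shell
# Build a tiny sample installed.properties (invented content, for illustration)
props=$(mktemp)
printf 'name=goldencopy-agent\nagent.home=/u/UCD/opt\n' > "$props"
UAGENT=0000112345
# Same substitution as the script above: rewrite the value of the name= property
result=$(sed "s/name=.*$/name=$UAGENT/" "$props")
echo "$result"
rm -f "$props"
```

Only the line starting with name= is rewritten; every other property passes through unchanged.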

The above script can be run as part of z/OS startup, and the UrbanCode Deploy agent can also be started as part of z/OS startup. Figure 2 shows the UrbanCode Deploy agent registering with a server under a name extracted from the z/OS machine serial number.

Conclusion

A combination of automation scripts and shell scripts in Linux and REXX-based execs in z/OS can be used to overcome most problems with zD&T startup and with uniquely identifying UCD agents running in multiple instances of zD&T when cloning instances from the same golden copy.

We want to thank the following people for their support and contribution to this mainframe DevOps article:

Arunkumaar Ramachandran, IT Architect – IBM Z, IBM India, for his technical guidance and for providing valuable suggestions in shaping this article.

Nikhi Revankar, Selva Suresh and Nitash Anklesaria, IT Specialists – IBM Z, IBM India, for their valuable technical help in putting this article together.

Gokila Bs is an IT specialist with IBM India. She has over seven years of experience with IBM Z. She has worked in areas such as mainframe database administration and mainframe DevOps. Her area of interest includes Db2 system programming and DevOps for mainframe.

Siva narayana Bella is a DevOps for mainframe specialist with IBM India. He has over 13 years of experience in designing and developing mainframe applications. His areas of expertise include COBOL, CICS, Db2 and UrbanCode Deploy. His area of interest is DevOps for mainframe.


