The Evolution of Compression: Db2 11

December 12, 2017

This is my fourth installment in a series detailing the history of Db2 compression. As with all previous releases, Db2 11 for z/OS takes advantage of the IBM z Systems hardware platform. With Db2 11 on zEC12 hardware, compression overhead was reduced by as much as 15 percent.

Prior to Db2 11, using change data capture (CDC) to replicate changes (inserts, updates and deletes) to a target system required a complete refresh if you executed a REORG or LOAD without the KEEPDICTIONARY option. This is because the compression dictionary used for the rows already written to the log was no longer available to decompress the changed data in the log.
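
To make the pre-Db2 11 workaround concrete, here's a minimal sketch of a REORG control statement that preserves the existing dictionary (the table space name DB1.TS1 is a placeholder):

    -- Keep the current compression dictionary so CDC can still
    -- decompress previously logged rows (DB1.TS1 is hypothetical)
    REORG TABLESPACE DB1.TS1 KEEPDICTIONARY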

Db2 11 provided relief for this situation. The change applies only to tables that are created with CHANGE DATA CAPTURE and COMPRESS YES. When you run LOAD or REORG against these objects, Db2 now saves the old compression dictionary: it externalizes the dictionary to the log and adds a record with ICTYPE 'J' to the SYSIBM.SYSCOPY table. The START_RBA column of this new record points to the RBA of the data sharing member's log to which the compression dictionary was externalized.
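
As a quick sanity check, a query along these lines (a sketch, not from the product documentation) lists the 'J' records that mark externalized dictionaries:

    -- List SYSCOPY entries that record an externalized compression dictionary
    SELECT DBNAME, TSNAME, DSNUM, START_RBA, TIMESTAMP
      FROM SYSIBM.SYSCOPY
     WHERE ICTYPE = 'J'
     ORDER BY TIMESTAMP DESC;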

Another area of improvement came with support for REORG REBALANCE. Prior to Db2 11, REORG built a compression dictionary for each data partition as data was unloaded during the UNLOAD phase. This blocked REORG REBALANCE when too few (or no) data records were unloaded from a partition to build its compression dictionary. Data records subsequently loaded into such a partition as a result of rebalancing wouldn't be compressed until another REORG was performed.

To address this issue, REORG REBALANCE in Db2 11 builds a single compression dictionary for all target partitions. This mimics the behavior of partition-by-growth (PBG) table spaces and provides relief for partitions that don't yet have compression dictionaries.
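
As a sketch (the object name and partition range are hypothetical), a rebalancing REORG on Db2 11 might look like this:

    -- Redistribute rows across partitions 1 through 4; on Db2 11 a single
    -- compression dictionary is built for all target partitions
    REORG TABLESPACE DB1.TS1 PART 1:4 REBALANCE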

In situations where batch jobs read large amounts of data and the filter predicate isn't on an index column, CPU overhead can be an issue. Db2 11 included a new decompression routine that sped up the expansion operation when compressed rows were read, providing a significant CPU reduction. The new routine was also compatible with the existing compression routine, so Db2 users didn't need to take any action to benefit from this performance feature.

I’ll conclude this series next week with a look at Db2 Version 12.
