Tuning a Perfect Note

A look at performance tuning and new AIX 5.3 commands.

Other Tuning Suggestions

It's recommended that you tune the amount of memory to keep free. The maxfree tunable specifies the number of frames on the free list at which page stealing stops. Maxfree must be at least eight greater than minfree, the minimum number of frames on the free list at which the VMM starts to steal pages to replenish it. The difference between minfree and maxfree should always be equal to or greater than maxpgahead. Minfree and maxfree used to default to 120 and 128 respectively, but recent OS levels calculate them instead, so it's quite common to see minfree=960 and maxfree=1088 or even higher. You should also look at maxpgahead (for JFS) and j2_maxPageReadAhead (for enhanced JFS) and set them accordingly for sequential read-ahead. Minfree and maxfree are set with vmo; the read-ahead tunables are set with ioo.
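For example, these tunables can be inspected and set with vmo and ioo. A minimal sketch, assuming AIX 5.3; the values shown are illustrative starting points, not recommendations for your workload:

    # Show the current free-list and read-ahead settings
    vmo -a | egrep "minfree|maxfree"
    ioo -a | egrep "maxpgahead|j2_maxPageReadAhead"

    # Set minfree/maxfree (-p applies the change now and across reboots)
    vmo -p -o minfree=960 -o maxfree=1088

    # Sequential read-ahead for JFS and enhanced JFS (JFS2)
    ioo -p -o maxpgahead=16 -o j2_maxPageReadAhead=128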

Additional performance benefits can be gained by tuning the VMM write-behind parameters. When pages are updated in memory, they're not immediately written out to disk. Instead, dirty pages accumulate in memory until one of the following occurs:

  • The syncd daemon runs (usually every 60 seconds)
  • The number of free pages gets down to minfree
  • Someone issues a sync command
  • A VMM write-behind threshold is reached

When the syncd daemon runs, it obtains a lock on the i-node and holds that lock until all the dirty pages have been written to disk. Anyone trying to access that file is blocked while the lock is held. On a busy system with a high I/O workload, this can cause a lot of I/O wait and dramatically affect performance. It can be dealt with in three general ways (example commands follow the list):

  1. Change syncd to run more often
  2. Set sync_release_ilock to 1 - this causes sync to flush all I/O to a file without holding the i-node lock, and it will then use the i-node lock when it does the commit (this can be dangerous!)
  3. Turn on random and/or sequential write-behind
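A sketch of the first two options (the write-behind tunables for the third are covered below). This assumes syncd is started from /sbin/rc.boot, as is typical, and uses the ioo tunable sync_release_ilock; adjust to your environment:

    # Option 1: run syncd more often. Edit the line that starts syncd
    # in /sbin/rc.boot, e.g. lower the interval from 60 to 10 seconds:
    #   nohup /usr/sbin/syncd 60 > /dev/null 2>&1 &

    # Option 2: flush without holding the i-node lock (use with care!)
    ioo -p -o sync_release_ilock=1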

My general approach is to customize random write-behind. I rarely, if ever, modify the syncd interval.

Numclust is used to control sequential write-behind. By default, files are partitioned into 16 KB clusters of four pages each. If all four pages in a cluster are dirty, then, when the first page in the next cluster is modified, the VMM schedules the four dirty pages to go to disk. The default is one cluster for JFS (numclust) and eight clusters for enhanced JFS (j2_nPagesPerWriteBehindCluster, expressed as 32 pages), and both can be increased to delay the writes. Note that j2_nRandomCluster, which defaults to 0, is not the sequential equivalent: it specifies the number of clusters apart two consecutive writes must be in order to be considered random.
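To delay sequential write-behind, the cluster thresholds can be raised with ioo; a sketch with illustrative values:

    # JFS: wait for two dirty 16 KB clusters before scheduling writes
    ioo -p -o numclust=2

    # Enhanced JFS: pages per write-behind cluster (default is 32)
    ioo -p -o j2_nPagesPerWriteBehindCluster=64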

Random write-behind can make a dramatic difference on a system with a great deal of random I/O. If many pages have been modified, you'll see large bursts of I/O when the syncd daemon runs, and this will affect the consistency of performance. Maxrandwrt provides a threshold beyond which dirty pages are written out to disk. I tend to start with 32 (the default is 0, meaning never). This means that, once 32 pages of a file are dirty, any subsequent dirty pages are written to disk; the initial 32 pages are written out when the syncd daemon runs. For enhanced JFS, the equivalent is j2_maxRandomWrite, and it defaults to 0.
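Following that approach, the random write-behind thresholds would be set like this (32 is my starting point; validate it against your own workload):

    # Start writing a file's dirty pages once 32 are outstanding
    ioo -p -o maxrandwrt=32            # JFS
    ioo -p -o j2_maxRandomWrite=32     # enhanced JFS (JFS2)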

Jaqui Lynch is an independent consultant, focusing on enterprise architecture, performance and delivery on Power Systems with AIX and Linux.

