Storage recommendations for AIX: Performance Improvements by Tuning Queue Depth

Tuning Queue Depth
In my last article, I discussed the major considerations regarding LUN size. In summary, much of that answer depends on the application's IOPS requirements and the storage settings, and we normally need more LUNs when more IOPS are required, in order to prevent a queue full condition on the hdisk. In this article, I'll discuss the considerations for the queue attribute on the hdisk device. Learning about this attribute is important because, when using LUNs, we have little control over the storage configuration: these devices are provided from a remote location and controlled by someone else. With queue_depth, however, we can influence the load on the storage server and increase the performance per LUN.
 
The queue_depth parameter specifies the maximum number of I/O operations that can be in progress simultaneously on the hdisk device. When you increase the queue size, more I/O operations are sent to the disk subsystem and the throughput on that LUN increases. I/O response times might also rise, because the workload the storage server has to manage is higher. With that in mind, let me explain the considerations for this parameter.
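As a rough rule of thumb (an approximation, not a vendor formula), the IOPS a single LUN can sustain is bounded by the queue_depth divided by the average I/O service time. With the default queue_depth of 2 and an average service time of about 0.5 ms, that ceiling is roughly 2 / 0.0005 s = 4,000 IOPS; with a queue_depth of 32 and the 0.7 ms service time measured later in this article, the same arithmetic gives about 45,700 IOPS, which is close to the results shown below.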
 
Considerations for the queue_depth parameter on the hdisk device:
 
First of all, there is normally no need to change the performance parameters, and that includes the queue_depth on the hdisk device. The AIX development team tries to make sure the default settings work properly for the vast majority of AIX installations. However, there are some situations where adjusting the queue_depth parameter might improve I/O performance. In any case, we have to avoid the urge to start tweaking whenever the system is not performing well, because increasing the queue size in particular carries the risk of creating worse problems, such as overloading the storage server, which can lead to a queue full condition on the device, rejected I/O, system crashes, boot problems, or simply no benefit at all.
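Before changing anything, it is also worth confirming what values the disk driver will actually accept. A minimal check, where hdisk2 is only an example device name:

# Show the current queue_depth and the range of values the driver supports.
lsattr -El hdisk2 -a queue_depth
lsattr -Rl hdisk2 -a queue_depth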
 
For the aforementioned reasons, I suggest doing two jobs before making adjustments to the queue_depth. First, the monitoring job, to determine whether the queue is actually filling up on the disk. Second, the research job: check the storage documentation and gather recommendations from the storage vendor. For the research job, it is always worth studying the host attachment guide or the host connectivity guide for the storage server. In these documents you'll find the best practices for running your AIX environment with that storage server, including the queue parameter recommendations.
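For the monitoring job, a minimal sketch of what I mean is shown below; it samples the sqfull counter for one disk every minute. The disk name, interval and awk parsing are only illustrative and assume the iostat -D output layout shown later in this article:

# Report the service-queue-full count (sqfull) for hdisk2 once a minute.
# The awk step reads the line after each "queue:" header and keeps its last
# field, so the value printed comes from the interval report.
while true
do
    sqfull=$(iostat -D hdisk2 60 2 | awk '/queue:/ {getline; last=$NF} END {print last}')
    echo "$(date) hdisk2 sqfull=$sqfull"
done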
 
Let's take a look at the commands needed to gather and change the configuration.
 
To display the current queue_depth value:
VIO server:            lsdev -dev hdiskX -attr queue_depth
AIX client:             lsattr -El hdiskX -a queue_depth
 
Commands to change the value:
VIO server:            chdev -dev hdiskX -attr queue_depth=#
AIX client:             chdev -l hdiskX -a queue_depth=#
 
NOTE: The hdisk must not be in use. Otherwise, use chdev with the -P flag and then reboot for the change to take effect. When using VSCSI with a physical volume as the backing device, the queue_depth should match on the VIOS and on the AIX client.
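For example, a minimal sketch of the deferred change on an AIX client whose disk is still in use (hdisk2 and the value 32 are only illustrative; take the real value from your storage documentation):

# Stage the new value in the ODM only (-P); it takes effect at the next reboot.
chdev -l hdisk2 -a queue_depth=32 -P
# After the reboot, confirm the running value.
lsattr -El hdisk2 -a queue_depth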
 
Performance improvements by tuning queues
In the following example, I want to show the performance impact of tuning the queue_depth on an hdisk. First, I used the ndisk64 program (part of the nstress package) to stress the storage server. I was using a Hitachi VSP G400 storage system with flash disks. This test simulated a 4K block size, random workload, 100 percent reads and 64 processes running:
/admin/root>./ndisk64 -f /dev/rhdisk2 -s 500G -R -r100 -b 4k -M 64 -t 600
 
Next, I used iostat -D to look for queue bottlenecks:
 
/admin/root>iostat -DR hdisk2 1
System configuration: lcpu=56 drives=25 paths=45 vdisks=1
hdisk2          xfer: %tm_act      bps      tps     bread      bwrtn
                        100.0     16.3M   3971.0       16.3M       0.0
                read:      rps avgserv  minserv  maxserv  timeouts      fails
                        3971.0      0.5     0.1     20.8           0          0
               write:      wps avgserv  minserv  maxserv  timeouts      fails
                          0.0      0.0     0.0      0.0           0         0
               queue:  avgtime mintime  maxtime  avgwqsz   avgsqsz     sqfull
                         15.5     0.1     38.3     87.0       2.0      3971.0
------------------------------------------------------------------------------

 
In the output above, I was clearly experiencing queue problems, since the values in the queue row were non-zero. That is an indication that increasing the queue_depth might improve performance, because I was still running with the default queue_depth value:
 
/admin/root>lsattr -El hdisk2 -a queue_depth
queue_depth 2 Queue DEPTH True

 
Up to this point, we have the following facts:
•      The application has to wait 15.5 ms in the queue (avgtime) for each I/O to be serviced.
•      As a result, the hdisk queue is filling up, which is reported in the sqfull column.
•      In total, the application waits about 16 ms per I/O (avgserv + avgtime).
 
Next, I found that the host attachment guide suggests increasing this attribute to 32. Therefore, I changed the queue_depth value with the chdev command and then monitored the performance to see whether I'd get any improvement:
 
/admin/root>varyoffvg testvg
/admin/root>chdev -l hdisk2 -a queue_depth=32
hdisk2 changed
/admin/root>varyonvg testvg
/admin/root>iostat -D hdisk2 1
System configuration: lcpu=56 drives=25 paths=45 vdisks=1
hdisk2          xfer: %tm_act      bps      tps     bread      bwrtn
                         99.0    182.9M   44644.0      182.9M       0.0
                read:      rps avgserv  minserv  maxserv  timeouts      fails
                        44644.0      0.7     0.1     13.8           0        0
               write:      wps avgserv  minserv  maxserv  timeouts      fails
                          0.0      0.0     0.0      0.0           0          0
               queue:  avgtime mintime  maxtime  avgwqsz   avgsqsz     sqfull
                          0.0     0.0      1.3     61.0       72.0    44288.0
 

We have the following improvements:
•      IOPS increased from 3971 to 44644 tps.
•      Throughput increased from 16.3 MBps to 182.9 MBps.
•      That is roughly 11x better performance!
•      The average read service time (avgserv) increased from 0.5 ms to 0.7 ms, because the workload on the storage server increased.
 
The service queue wait time has gone away (avgtime is 0), so I no longer have a queue depth problem. Although I'm still seeing some counts in sqfull, this tells me I'm at the maximum capacity of my storage server using one LUN, because I'm already using the maximum recommended queue_depth value. Additionally, the utilization of one storage controller processor increased to about 70 percent, as you can see in the statistics from the storage server.
 
[Figure: IOPS in the storage server]

[Figure: Processor utilization in the storage server]
 
Getting this information has a couple of advantages. First, I now have a baseline measurement for performance. Second, it helps me, to some degree, to size the storage requirements properly, since I know the limits per LUN, as long as the I/O characteristics match what the application actually uses.
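For example (the target number here is only an illustration), if an application with the same 4K random read profile needed roughly 150,000 IOPS, this baseline of about 44,644 IOPS per LUN would suggest at least four LUNs (150,000 / 44,644 ≈ 3.4, rounded up), before allowing any headroom on the storage server.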
 
Brief note about queue depth on FC adapter:
The queue size on the FC adapter is controlled by the num_cmd_elems attribute on the fcsX device. To determine whether you need to increase num_cmd_elems, use fcstat fcsX; if either of the following fields shows a value greater than zero, consider increasing num_cmd_elems:
 
fcstat fcs0
….skipped lines…
FC SCSI Adapter Driver Information
No Adapter Elements Count: 0
No Command Resource Count: 0
 
You'll find the correct value for num_cmd_elems in the storage documentation. In addition to the considerations already discussed, I always try to avoid attaching different storage servers to the same FC adapter on AIX, because tuning parameters to improve performance for one storage server may have a negative impact on the others. In fact, I try to use only one storage server per LPAR, because each storage server may require its own multipath I/O software, and AIX can get confused when different MPIO packages are mixed, even more so when problems occur. Finally, never use a single Fibre Channel adapter for both disk and tape, because the I/O pattern of a tape drive is very different from that of a disk, and you can run into poor performance.
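If the counters above keep growing and the documentation calls for a larger value, a minimal sketch of the change follows (fcs0 and the value 1024 are only illustrative; take the real number from your vendor's host attachment guide):

# Show the current value and the range the adapter driver accepts.
lsattr -El fcs0 -a num_cmd_elems
lsattr -Rl fcs0 -a num_cmd_elems
# The adapter is normally in use, so stage the change (-P) and reboot.
chdev -l fcs0 -a num_cmd_elems=1024 -P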
 
Conclusion
Normally, there is no need to change these attribute settings for the vast majority of AIX installations. However, a small subset of customers will find that increasing the queue size helps their performance. When increasing this parameter, keep in mind to do the monitoring job first, to determine whether you're experiencing a queue problem, and then check your storage documentation to get the proper value for the queue_depth. Finally, avoid mixing different storage servers on the same FC adapter, since tuning FC adapter performance parameters for one storage box may cause performance problems on the others.
 
 
References:
 
IBM System Storage DS8000 Performance Monitoring and Tuning 
https://www.redbooks.ibm.com/redbooks/pdfs/sg248171.pdf
 
IBM System Storage DS8000 Host Attachment and Interoperability 
http://www.redbooks.ibm.com/redbooks/pdfs/sg248887.pdf
 
IBM XIV Storage System Host Attachment and Interoperability 
http://www.redbooks.ibm.com/redbooks/pdfs/sg247904.pdf
 
IBM FlashSystem A9000, IBM FlashSystem A9000R, and IBM XIV Storage System Host Attachment and Interoperability
http://www.redbooks.ibm.com/redbooks/pdfs/sg248368.pdf
 
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines
http://www.redbooks.ibm.com/redbooks/pdfs/sg247521.pdf
 
EMC Host Connectivity Guide for IBM AIX
https://www.emc.com/collateral/TechnicalDocument/docu5126.pdf
 
Open-Systems Host Attachment Guide
https://knowledge.hitachivantara.com/Documents/Storage/Unified_Storage_VM/Attach_hosts_to_HUS_VM/Open-Systems_Host_Attachment_Guide

