
Evil Twins: 10 Network Tunables Every Admin Must Understand


Setting network tunables incorrectly on AIX systems is a very common problem. In fact, I've seen this at virtually every customer site I've worked at over the years.

These tunables are important because they govern throughput over network interfaces. When properly adjusted, they can increase network performance, specifically TCP/IP throughput, many times over. And if these settings are improperly adjusted? At least in the case of the tunables I'm going to discuss, it probably won't hurt anything. However, it absolutely won't help you, either.

Nearly every tunable, be it CPU, memory, networking or storage, affects other tunables to some degree. Twist one tuning knob, and it’s a good bet your adjustment will impact some other area of the system. That's how performance works with any OS, not just AIX. So you should look at your systems holistically and consider not just how tuning can help fix a specific performance issue, but whether it could create a new problem elsewhere.

In AIX, there are groups of tunables that override other groups of tunables. A good example is the NFS set of tuning dials, better known as the nfso tunables (they can be viewed with the nfso -FL command). Some NFS tunables can be overridden and invalidated by other tunables in the networking options group (set with the no command). You must do your homework so you know when these overrides can occur. As I said, it's a common problem. I myself fell victim to this override "feature" when I started out in performance nearly 20 years ago. We've all been there.

Nowhere in AIX is this override feature more obvious than with the ten network tunables I present in this article. Nearly 150 tunables can be set with the networking options (your no values, as I'll explain). A dozen others can be set on a network interface. The ten I deal with in this article appear in both places, and they cause quite a bit of confusion for many administrators.

Introducing the Evil Twins

Various network constructs in AIX can be tuned. For example, the networking options tune network behavior from within the AIX kernel, but duplicates of several of these tunables, with the same functionality, exist on the network interfaces. I've met scores of puzzled admins who, after carefully tuning their networking subsystem, were baffled when load tests revealed no performance gain.

Those discussions always start with this question: “What’s going on here?” And here's what I always answer: Issue this command, as root...

	no -FL | grep use_isno

You'll get output that looks like this (after the name, the columns show the current, default and boot values, then the minimum, maximum, unit and tunable type; D means the tunable is dynamic):

	use_isno  1  1  1  0  1  boolean  D

A networking option that's activated in AIX by default, use_isno tells your AIX system to “use the interface-specific networking options.” It further instructs the system to ignore certain values you may have set using the no command, and instead use those same values that appear on your network interfaces. If you're wondering why you're not seeing performance gains after setting those values in the kernel, this is why.

So without further ado, here are the five sets of network tunables (ten total) that appear in both your networking options and network interfaces, the ones that cause so much consternation for administrators everywhere. Meet our "Evil Twins":


One more time: Assuming the default value for use_isno, the interface settings always override the networking options settings.
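A quick way to see this override in action is to put the two sets of values side by side. Here's a sketch, assuming en0 is your interface name (substitute your own):

```shell
# Interface-specific (ISNO) values currently set on en0:
lsattr -El en0 -a tcp_sendspace -a tcp_recvspace -a rfc1323

# The same tunables at the kernel level (networking options):
no -o tcp_sendspace -o tcp_recvspace -o rfc1323

# With use_isno at its default of 1, the interface values win.
```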

This isn't the only challenge posed by the Evil Twins. Two of these tunables, nodelay and mssdflt, are invisible on the interface until you adjust them. Until then, they inherit the global settings from your networking options.

Doing a standard ifconfig -a returns this output:

lpar # ifconfig -a
        inet netmask 0xffffff00 broadcast
         tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0

While we see the send and receive values (tcp_sendspace and tcp_recvspace) and rfc1323, nodelay and mssdflt are indeed invisible. This gets back to my point about adopting a holistic approach to performance tuning: always look at the forest rather than the individual trees, and understand that these hidden settings are present.

Now, in this case, an argument can be made to simply tune the kernel settings for nodelay and segment size, but why keep track of one group of settings in two different places? In my experience, most administrators tune the interface settings as a group, though some mix and match. Suppose you decide to turn on nodelay or mssdflt on an interface, then later switch it off? Those interface values will still override your kernel options and… you see where I'm going with this. My advice? Set nodelay and mssdflt on your interfaces. Always.
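For example, here's a sketch of setting the two hidden twins on an interface. The interface name en0 and the mssdflt value of 1448 are illustrative assumptions, not recommendations for your environment:

```shell
# Set the values on the running system (does not survive a reboot):
ifconfig en0 tcp_nodelay 1 tcp_mssdflt 1448

# Make them persistent in the ODM so they survive a reboot:
chdev -l en0 -a tcp_nodelay=1 -a tcp_mssdflt=1448

# Once set, the formerly invisible twins now appear in the output:
ifconfig en0
```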

Breaking Down the Tunables

Now let's learn what our Evil Twins do. Descriptions of each of the five sets of twins follow. I start with the networking option and interface tunable names (listed in that order: no option/interface option). There are also guidelines for tuning. Note that in all but one instance, the names are identical for both the network and interface values.

tcp_sendspace/tcp_sendspace: This value specifies the amount of data the sending application can buffer. When the limit is reached, the application doing the send blocks, and the buffered data is sent over the network. Another way to think of this is to consider how much data can be sent before the system stops and waits for an acknowledgment that the data has been received. The sender's window is known as the congestion window, and there's a complex formula for computing it: it grows from its starting size in multiples of the default segment size until a timeout is reached. Fortunately, there's also an equation to determine your ideal sendspace, and it's far simpler: multiply your media bandwidth in bits per second by the average round-trip time of a packet of data as expressed in seconds. (The units matter because you need the fractional decimal time value.)

This is called the bandwidth delay product (BDP). For a 100 Mb per second link with a 0.0002-second average round-trip time, the equation looks like this:

	100,000,000 bits/sec x 0.0002 sec = 20,000 bits
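As a sanity check, the BDP arithmetic can be sketched in a few lines. The 100 Mb per second bandwidth and 0.0002-second round-trip time are just the illustrative figures from the example above; measure your own round-trip time with a tool like ping:

```python
# Bandwidth delay product (BDP) sketch.
# Illustrative figures: 100 Mb/sec media bandwidth, 0.0002 sec average RTT.
bandwidth_bps = 100_000_000   # media bandwidth in bits per second
rtt_sec = 0.0002              # average round-trip time in seconds

bdp_bits = bandwidth_bps * rtt_sec   # 20,000 bits
bdp_bytes = bdp_bits / 8             # buffer sizes are expressed in bytes

print(f"BDP: {bdp_bits:.0f} bits ({bdp_bytes:.0f} bytes)")
```

A tcp_sendspace smaller than the BDP means the sender stalls waiting for acknowledgments before the pipe is ever full.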

Mark J. Ray has been working with AIX for 23 years, 18 of which have been spent in performance. His mission is to make the diagnosis and remediation of the most difficult and complex performance issues easy to understand and implement. Mark can be reached at
