
Using 10Gbit Ethernet Adapters with PowerVM SEA: VIOS and SEA Considerations


In my last article, I explained the main considerations for virtual Ethernet with 10Gbit Ethernet adapters. In this article, I’ll cover the considerations and recommendations for the SEA with 10Gbit Ethernet adapters, as they’ve become conventional in Power Systems shops. There are different options to virtualize this connection, such as SR-IOV, vNIC and the Shared Ethernet Adapter. However, system administrators usually feel most comfortable using the SEA, since it has come a long way virtualizing Ethernet connections on Power servers. In addition, this device supports by default the most advanced PowerVM functions, such as Live Partition Mobility.

When dealing with the PowerVM SEA and 10Gbit adapters, we should review several considerations before deployment in order to guarantee excellent performance. For this reason, I’ll explain the considerations and recommendations to run a SEA with Link Aggregation that uses 2x 10Gbit adapters, and then discuss the performance figures when running the SEA with Link Aggregation. It’s important to ensure the recommendations already discussed for virtual Ethernet are in place, since they are essential to guarantee optimum performance across the whole PowerVM stack.

For this part of the series, I decided not to discuss tuning the physical adapter parameters because, in my case, they came properly configured by default.

Considerations for Link Aggregation:

Link Aggregation provides several advantages, including increased bandwidth and redundancy. Therefore, it is always a good idea to use it whenever possible. Take note of the following parameters when creating the link aggregation:

$ mkvdev -lnagg ent0,ent1 -attr mode=8023ad hash_mode=src_dst_port

Parameters Explanation:

  • mode=8023ad: The "8023ad" mode enables the Link Aggregation Control Protocol (LACP) to negotiate the adapters in the link aggregation with a LACP-enabled switch.
  • hash_mode=src_dst_port: If the EtherChannel is operating under standard or IEEE 802.3ad mode, the hash_mode attribute determines how the outgoing adapter for each packet is chosen. With "src_dst_port", both the source and destination TCP or UDP ports of the connection are used to determine the outgoing adapter.
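To get an intuition for how src_dst_port spreads traffic across the links, the following sketch combines the two port numbers and reduces them modulo the number of links. This is only an illustration of the idea — the port numbers are made up, and the real AIX hashing algorithm is not necessarily a simple modulo:

```shell
# Illustrative sketch only: NOT the actual AIX hash algorithm.
# Combine source and destination ports, then pick a link by modulo.
NUM_LINKS=2                      # two 10Gbit ports in the aggregation
SRC_PORT=49152                   # hypothetical ephemeral client port
DST_PORT=443                     # hypothetical server port
LINK=$(( (SRC_PORT + DST_PORT) % NUM_LINKS ))
echo "flow ${SRC_PORT}->${DST_PORT} leaves via link ${LINK}"
```

Note that a single flow always hashes to the same link, so one connection can never exceed the bandwidth of one physical port; the aggregation gains show up with many concurrent flows.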

Considerations for Trunk Virtual Adapter:

For the SEA device, I decided to use only one trunk adapter. For this adapter, consider enabling the data cache block flush (dcbflush_local) parameter and increasing the virtual Ethernet buffers.

Run the following command over the trunked adapter:

#chdev -l entX -a dcbflush_local=yes
#chdev -l entX -a max_buf_tiny=4096 -a min_buf_tiny=2048 \
  -a max_buf_small=4096 -a min_buf_small=2048 -a max_buf_medium=512 \
  -a min_buf_medium=256 -a max_buf_large=128 -a min_buf_large=64 \
  -a max_buf_huge=128 -a min_buf_huge=64

Note: Run these commands from the oem_setup_env in the VIOS.
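To verify whether the trunk adapter actually needs these larger buffer limits, you can inspect its statistics before and after tuning. One possible check, assuming entX is the trunk virtual adapter, is to look for non-zero "No Resource Errors" and "Max Allocated" values that reach the configured maximums:

```shell
# Run from oem_setup_env on the VIOS; entX is the trunk virtual adapter.
# Non-zero "No Resource Errors", or "Max Allocated" values sitting at
# the buffer limits, suggest the virtual Ethernet buffers are exhausted.
entstat -d entX | grep -E "No Resource Errors|Max Allocated|Max Buffers"
```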

Considerations for Shared Ethernet Adapter with 10Gbit Adapters
Because of the increased speed of 10Gbit adapters, it is reasonable that they need more processing power. In fact, this is one of the most important aspects to take into account for the SEA with 10Gbit adapters because, as we will see later, there’s a direct relationship between performance and CPU utilization. Moreover, it’s important that the following recommendations regarding attributes for the SEA and virtual Ethernet devices are implemented.

Please keep in mind the following recommendations:
-       Ensure best practices for virtual Ethernet adapters on the clients are in place.
-       Use jumbo frames (MTU 9000 bytes) if possible.
-       Disable SEA threading if the VIOS does not provide VSCSI services.
-       Ensure large send and large receive are enabled.
-       Ensure enough processing resources on the VIOS.

Take note of the following parameters when creating the shared Ethernet adapter:

$ mkvdev -sea entX -vadapter entN -default entN -defaultid Y \
  -attr ha_mode=auto ctl_chan=entK largesend=1 large_receive=yes
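Once the SEA is created, it’s worth confirming that the attributes took effect. A quick check from the padmin shell (entS below is a hypothetical SEA device name — substitute your own) could be:

```shell
# entS is a hypothetical SEA device name; substitute your own.
# Confirm the largesend, large_receive, thread and ha_mode settings.
lsdev -dev entS -attr | grep -E "largesend|large_receive|thread|ha_mode"
```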

Parameters explanation:

-       TCP segmentation offload (largesend): With this parameter, the LPARs can hand over up to 64 KB of data in a single transmit request. Largesend is enabled by default (largesend=1) on recent VIOS levels.
-       TCP large receive offload (large_receive): When it’s set, and if the real adapter supports it, packets received by the real adapter are aggregated before they are passed to the upper layer, resulting in better performance.
Large receive note: This parameter must be enabled only if all the partitions that are connected to the shared Ethernet adapter can handle packets larger than their MTU. Take special care with Linux partitions, which may not handle them.
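To see why largesend matters, compare the work per send: without it, a 64 KB application write must be chopped into MTU-sized TCP segments before it ever reaches the SEA. Rough illustrative arithmetic, assuming a typical ~1460-byte TCP payload at MTU 1500:

```shell
# Rough illustration: segments needed for one 64 KB send at MTU 1500.
# ~1460 bytes of TCP payload per packet (1500 minus IP/TCP headers).
SEND_BYTES=$((64 * 1024))
SEGMENTS=$(( (SEND_BYTES + 1459) / 1460 ))   # ceiling division
echo "without largesend: ${SEGMENTS} segments per 64 KB send"
echo "with largesend: 1 transmit request per 64 KB send"
```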

Optional parameters:

- Disabled threading: Threaded mode helps ensure that virtual SCSI and the Shared Ethernet Adapter can share the processor resource appropriately. Interestingly, the documentation discusses threading only in the context of VSCSI. In other words, if you are only using NPIV, consider disabling this parameter. The performance gain could be between 16-20 percent for MTU 1500 and 31-38 percent for jumbo frames.

To disable threading, run chdev over the SEA with the thread attribute set to 0:

$ chdev -dev sea_device -attr thread=0

- Jumbo frames: Using jumbo frames means using a 9000-byte MTU, which results in fewer packets to process on the SEA and, consequently, less CPU consumption. The MTUs for all devices on the network should match.

Unconfigure SEA: $ rmdev -dev sea_device -ucfg

Configure jumbo frames on physical adapter: $ chdev -dev real_device -attr jumbo_frames=yes

Change SEA to use jumbo frames: $ chdev -dev sea_device -attr jumbo_frames=yes

Configure SEA: $ cfgdev -dev sea_device
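The packet-count savings from jumbo frames are easy to quantify. Rough arithmetic for moving 1 GiB of TCP payload, assuming ~1460 bytes of payload per MTU-1500 packet and ~8960 bytes per MTU-9000 packet:

```shell
# Illustrative packet counts for a 1 GiB transfer (ceiling division).
BYTES=$((1024 * 1024 * 1024))
PKTS_1500=$(( (BYTES + 1459) / 1460 ))   # ~1460 B payload at MTU 1500
PKTS_9000=$(( (BYTES + 8959) / 8960 ))   # ~8960 B payload at MTU 9000
echo "MTU 1500: ${PKTS_1500} packets"
echo "MTU 9000: ${PKTS_9000} packets"
```

That’s roughly six times fewer packets for the SEA to handle, which is where the CPU savings come from.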

Performance statistics:

After discussing the configuration recommendations for SEA, I’ll talk about the performance statistics for SEA with 10GbE adapters.

Shared Ethernet Adapter and tuning attribute impact:

The following graph shows performance statistics when using the SEA with Link Aggregation over 2x 10Gbit ports. The VIO server has two entitled CPUs and three virtual processors in uncapped mode.



This chart shows that the SEA with Link Aggregation over two 10Gbit ports is really expensive in terms of CPU. If possible, use jumbo frames and disable threading; this yields about 50 percent better performance on average, taking into account throughput and CPU utilization. Consider assigning two CPUs for each SEA using L.A 2x10Gbit with jumbo frames and disabled threading.

Dedicated Link Aggregation Versus Shared Ethernet Adapter

The following chart shows the CPU utilization of the SEA versus a dedicated Link Aggregation. The CPU consumption when using the Shared Ethernet Adapter is roughly twice that of the dedicated L.A.



SEA 1 X 10Gbit Versus SEA L.A 2X 10Gbit:

The data in the chart shows that about one CPU was needed for each 10Gbit port running on a properly tuned SEA. There’s a slight increase in CPU utilization per 10Gbit port when using L.A 2x10Gbit; in this case, you’d need two CPUs for a correctly tuned SEA. The VIOS with L.A was using three virtual processors and two entitled CPUs.



Processing resources for Shared Ethernet adapter:

In the following graph, you’ll see something peculiar with respect to the SEA and its CPU utilization. When I increased the CPU resources by adding more virtual processors or entitled capacity, CPU utilization was greater while throughput remained similar, making the SEA less CPU-efficient.



This is because of the increase in context switching between the logical processors and the SEA process, which can be observed with mpstat. Therefore, it’s a good idea to avoid over-allocating virtual processors; otherwise you could be wasting resources. Consider using one CPU per 1x10Gbit port. For a L.A 2x10Gbit, consider allocating two CPUs with three virtual processors.
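If you want to observe this effect yourself, mpstat on AIX reports per-logical-processor context switches in its cs column. A possible sampling command, taking five-second samples:

```shell
# Run from oem_setup_env on the VIOS.
# Sample all logical CPUs every 5 seconds, 3 times; watch the "cs"
# (context switches) column grow as virtual processors are added.
mpstat 5 3
```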

Final Thoughts on Using 10Gbit Ethernet Adapters With the Shared Ethernet Adapter

The Shared Ethernet Adapter is able to drive quite good throughput with 10Gbit adapters as long as there are enough processing resources available. Consider using the SEA for large-scale applications that require modest bandwidth. On the other hand, if you need to virtualize Ethernet connections for highly demanding workloads, consider using SR-IOV, because direct I/O avoids the additional context switching generated by the SEA. We’re now in the middle of the 10Gbit adapter movement, and customers want to exploit the capabilities of these adapters to run more iSCSI, NAS and similar workloads — services that depend ever more on network performance as it keeps improving. In a few years, 40Gbit and 100Gbit adapters will likely be popular in Power Systems shops. Nonetheless, processor utilization remains a concern for high-end Ethernet adapter implementations.




