An important aspect of VIO Server design is high availability and resiliency for your client LPARs. However reliable the equipment is, failures do occur and there is always human error.
Etherchannel, or LACP, allows you to provide enhanced network availability and throughput for your LPARs by bundling a number of VIOS Ethernet NICs together to form a single virtual interface. The Etherchannel adapter in VIOS is represented as an entXX device, just like any other Ethernet adapter.
As an example, I have the following:
- 1 x Power System
- 2 x VIOS LPARs – each VIOS LPAR has 2 x 2-port 1Gb Ethernet adapters
- 2 x Cisco switches
The objective is to create an Etherchannel adapter in each VIOS. Each Etherchannel is made up of the four ports on the physical Ethernet adapters.
A few tips before we start. Check your switches. Some older switches require that the NICs to be grouped together are all on the same physical switch. In this case, I would connect four ports from VIOSA to SwitchA and four ports from VIOSB to SwitchB.
If the switch supports vPC (virtual PortChannel), you should be able to connect the ports to either switch. In this case, I’d connect two ports from VIOSA to SwitchA and the other two ports to SwitchB. Do the same for VIOSB.
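For reference, the switch side of the aggregation might look something like this on a Cisco switch. This is only a hedged sketch: the interface names, port-channel number and trunking setup are assumptions, and your network team will have their own standards.

```
! Hypothetical Cisco IOS configuration for the four VIOSA ports
interface range GigabitEthernet1/0/1 - 4
 channel-group 10 mode active      ! "active" = LACP, matching mode=8023ad on the VIOS
 switchport mode trunk
!
interface Port-channel10
 switchport mode trunk
```

The key part is `mode active`, which tells the switch to negotiate the aggregation via LACP rather than forcing a static channel.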
You do not have to assign an IP address to the Etherchannel adapter, but I usually assign a dummy IP address such as 192.168.199.xxx for testing. With an IP address in place, the switch will detect packets, so you’ll be able to confirm the correct configuration of the adapter in VIOS and of the LACP group in the switch.
Once you have mapped an SEA to the Etherchannel adapter and have LPAR data flowing through, you can remove the dummy IP address from the Etherchannel adapter.
Configuration in a new VIOS LPAR
As always, setting up from a fresh install is easier. My four physical NICs appear as ent0 to ent3.
I also have two virtual NICs in my VIOS. These are the NICs used to communicate with the LPARs. They are configured as ent8 and ent9. These are not relevant to the Etherchannel device we are going to create but will be required when we configure the Shared Ethernet Adapter (SEA) over the Etherchannel adapter.
So, let’s create the Etherchannel adapter:
mkvdev -lnagg ent0,ent1,ent2,ent3 -attr mode=8023ad
-lnagg is followed by the four physical NICs we wish to aggregate.
-attr specifies that we’re creating an 802.3ad aggregated link, i.e. LACP, where the adapter and the switch negotiate the aggregation dynamically. The matching switch ports must be configured for LACP too. This mode has always worked for me when connecting to Cisco switches.
At this point we’ve aggregated our NICs into an Etherchannel device represented, in my case, by ent12.
Whoever configured your switch will be expecting to see some traffic, but they won’t: we haven’t mapped an SEA to the Etherchannel yet, so there’s no bridge between the virtual NICs and the Etherchannel adapter.
In a new build you may not have any LPARs yet, so how can you test? By putting a dummy IP address over the Etherchannel, we can generate some packets for the switch to detect. I used:
alias aix=oem_setup_env (Create an alias for the oem_setup_env command)
aix (go to AIX mode)
ifconfig en12 192.168.199.12 netmask 255.255.255.0
This will add the IP address to the Etherchannel adapter. NOTE: ent12 is the device, en12 is the interface over that device.
If you now ping 192.168.199.1, the person configuring your switch should see packets on the “port group” on the switch.
On the VIOS, go back to padmin mode by typing “exit”. Now run:
entstat -all ent12 | grep -i status
This shows that the aggregated link (Etherchannel) is working and that all four NICs (links) are up. Success!
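If you want more detail than the status lines, the 802.3ad section of the entstat output shows the LACP negotiation per port. On the VIOS levels I’ve used, AIX grep’s -p flag prints the whole matching paragraph, which is handy here; the exact field names may vary by adapter driver, so treat this as a sketch:

```
entstat -all ent12 | grep -ip "IEEE 802.3ad"
```

Look for the Actor (VIOS) and Partner (switch) state details agreeing on each port; that tells you the switch has accepted the aggregation.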
Now we need to bridge the Etherchannel to our virtual NICs so that we can get LPAR traffic in and out of the system. We do this by creating an SEA (see my previous articles regarding SEA). An important note here: if you try to create an SEA over a device that has an IP address assigned, it will fail.
Let’s delete the IP interface we created previously:
ifconfig en12 down
ifconfig en12 detach
The above commands will remove the dummy IP address we added to our Etherchannel device.
Let’s create an SEA:
mkvdev -sea ent12 -vadapter ent8 -default ent8 -defaultid 1 -attr ctl_chan=ent9 ha_mode=auto
So now I have a connection between the virtual NICs and the Etherchannel adapter which, in turn, has four physical NICs.
You can also add a dummy IP address to the SEA to test the configuration, e.g.
aix (go to AIX mode)
ifconfig en10 192.168.199.10 netmask 255.255.255.0
Run the entstat command again but this time on the SEA, NOT on the Etherchannel adapter.
That’s it, the NICs are aggregated and we have an SEA bridging between the virtual network and the physical network.
Configuration in an existing VIOS LPAR
The astute amongst you may have noticed that the Ethernet device numbers I’ve shown are out of sync. You’d expect to have the Etherchannel (ent12) ID lower than the SEA’s (ent10) ID.
I had to create the Etherchannel in my last environment post-install. So how do we do that? Most importantly, make sure you know:
1. The entXX IDs of any NICs which are already being used by SEAs.
2. That you can fail over the VIOS’s network to its redundant partner. This can normally be done using:
chdev -dev ent10 -attr ha_mode=standby
…and to reactivate your SEA
chdev -dev ent10 -attr ha_mode=sharing (or auto)
NB: ent10 is whatever your SEA is.
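To combine those commands into a quick failover test, something like the following works for me. Here ent10 is an assumption for your SEA name, and the check relies on the SEA statistics in entstat reporting its state (PRIMARY or BACKUP):

```
chdev -dev ent10 -attr ha_mode=standby   # force traffic to the partner VIOS
entstat -all ent10 | grep -i state       # the SEA should now report it is backup
# ...carry out your maintenance...
chdev -dev ent10 -attr ha_mode=auto      # fail back
entstat -all ent10 | grep -i state       # the SEA should report primary again
```

Keep a ping running to an LPAR throughout so you can see how many packets, if any, are lost during the failover.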
My approach was as follows:
● Start a ping to a couple of systems from a DOS or PC5250 session
● Create an Etherchannel adapter specifying the NICs currently not in use, e.g.
mkvdev -lnagg ent1,ent2,ent3 -attr mode=8023ad
● Put a dummy IP address on the Etherchannel adapter and test that it aggregates with the switch
● Remove the dummy IP address from the Etherchannel adapter
● Retrieve the attributes of the running SEA, e.g.
lsdev -attr -dev entXX
Make a note of the ctl_chan, ha_mode and virt_adapters values
● Put the SEA into ha_mode=standby as per the chdev command above
● Make sure the pings are still working
● Delete the existing SEA over the single NIC using rmdev -dev entXX
● Create the SEA over the Etherchannel device, e.g.
mkvdev -sea ent12 -vadapter ent8,ent10,ent11 -default ent8 -defaultid 1 -attr ha_mode=standby ctl_chan=ent9 (Your ent values may/will be different)
● If the SEA was created without errors, restart the SEA, e.g.
chdev -dev ent10 -attr ha_mode=sharing (or auto depending on your configuration)
● Check the pings. Make sure the switch engineer is seeing traffic.
● Add the final NIC to the Etherchannel, e.g.
cfglnagg -add -parent ent10 ent12 ent0
ent10 is the SEA (so the parent of the Etherchannel)
ent12 is the Etherchannel adapter to add to
ent0 is the NIC which was used as the real adapter for the SEA
Your Etherchannel should now have four adapters. How do you find out?
entstat -all ent10 | grep "Device Type"
Notice that entstat is run over the SEA, not the Etherchannel.
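Another way to list the ports in the aggregation is to query the Etherchannel device’s attributes directly. This assumes adapter_names is the attribute holding the aggregated ports (it is on the VIOS levels I’ve used), and that ent12 is your Etherchannel:

```
lsdev -dev ent12 -attr adapter_names
```

This should return the comma-separated list of physical NICs, now including the one you added with cfglnagg.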
Monitoring the traffic on the Etherchannel adapter
The entstat command will give you a plethora of information about all the components that make up the SEA, Etherchannel, physical and virtual NICs. I usually take a look using:
entstat -all ent10 | more
…and then scroll through. 99.9% of this means absolutely nothing to me, but I know that if the adapters are up, the links are aggregated, and the send/receive error counts are zero or very low, then it’s looking good.
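When I want to watch the counters move, a crude sampling loop like the one below does the job. As before, ent10 is assumed to be your SEA, and the grep pattern matches the Packets and Errors lines from the entstat output:

```
# Sample the SEA's packet and error counters every 10 seconds
while true
do
  date
  entstat -all ent10 | grep -E "Packets:|Errors:"
  sleep 10
done
```

Rising packet counts with static error counts is what you want to see; interrupt with Ctrl-C when you’ve seen enough.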
The seastats.ksh script I provided in my previous article is also useful to provide some stats.
I also like to use nmon to monitor the traffic on the SEA:
nmon then option O (upper case O)
What I really like is to use topas but here’s what you get by default:
topas then option E
No SEA showing? That’s because it needs an IP address to get the stats. I run the following:
ifconfig en10 192.168.199.10 netmask 255.255.255.0
topas then option E
Wow, that is brilliant. Well, OK, it’s useful. Personally, I don’t like to keep an IP address on the SEA so I detach it when I’ve finished looking at what I need using the ifconfig enXX detach command.
While I have an IP address over the SEA, I can also use a function like tcpdump too, e.g.
tcpdump -i en10 -s 1500 -x -c 1 -vv 'ether[20:2] = 0x2000' | grep -v "0: "
I’ve learned a whole lot about networking over the last five years or so using VIOS. We need IBM to make this a whole lot more user-friendly. While we wait for that to happen, hopefully you’ll find some useful nuggets of information from this article.
Look out for my articles on monitoring and managing vSCSI devices in VIOS, coming soon on PowerWire.
As systems architect for IBM systems and storage at RSI Consulting, based in Northampton, UK, he works predominantly with IBM i clients as well as those using AIX and Linux. In the last five years, he has also had a great deal of experience working with Power customers using SVC and V7000 storage virtualisation.