LACP, SR-IOV, Elastic Ports

This final part of the series covers LACP support, SR-IOV, Elastic Ports, BPDU Filters, and new scalability maximums. All of the technology presented here has been verified and “tinkered with” in the Wahl Network lab on VMware ESXi 5.1.0 build 613838 (beta).

This deep dive series will go into all of the awesome goodies that are baked into the newly released vSphere Distributed Switch (vDS) in version 5.1. I’ve broken the posts up into 4 different parts so that you can sample them at your leisure without having to slog through a 40-mile-long post. Here are the links to the entire series:

New 5.1 Distributed Switch Features Part 1 – Network Health Check
New 5.1 Distributed Switch Features Part 2 – Configuration Backups and Rollbacks
New 5.1 Distributed Switch Features Part 3 – Port Mirror and NetFlow Enhancements
New 5.1 Distributed Switch Features Part 4 – LACP, SR-IOV, Elastic Ports, and More

Without further ado, let’s get started.
LACP

Tired of using static mode EtherChannels for link aggregation? Good, me too. Fortunately, that’s over with now that the new vDS 5.1 supports LACP (mode active) port channels! I’ve written on LACP before, and until now the process was to use a Nexus 1000v if LACP was required, because the vSphere side of the equation simply did not participate in LACP. Although the load balancing piece remains the same, LACP has a few advantages in the way it handles link failures and cabling mistakes. A rough configuration sketch follows below.
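
To give a feel for where the knob lives, here’s a minimal PowerCLI sketch of enabling LACP in active mode on a vDS 5.1 uplink port group via the API’s lacpPolicy setting. The vCenter and port group names are placeholders for your environment, and the supported path is the vSphere Web Client, so treat this as illustration rather than the definitive method.

    # Minimal sketch, assuming vDS 5.1 and an uplink port group named
    # "dvSwitch01-DVUplinks" (both names are placeholders)
    Connect-VIServer -Server 'vcenter.lab.local'

    # Grab the uplink port group as a .NET view object
    $uplinkPg = Get-View -ViewType DistributedVirtualPortgroup |
        Where-Object { $_.Name -eq 'dvSwitch01-DVUplinks' }

    # Build a reconfigure spec that turns on LACP in active mode
    $spec = New-Object VMware.Vim.DVPortgroupConfigSpec
    $spec.ConfigVersion     = $uplinkPg.Config.ConfigVersion
    $spec.DefaultPortConfig = New-Object VMware.Vim.VMwareDVSPortSetting

    $lacp = New-Object VMware.Vim.VMwareUplinkLacpPolicy
    $lacp.Inherited    = $false
    $lacp.Enable       = New-Object VMware.Vim.BoolPolicy
    $lacp.Enable.Value = $true
    $lacp.Mode         = New-Object VMware.Vim.StringPolicy
    $lacp.Mode.Value   = 'active'   # 'passive' is the other option
    $spec.DefaultPortConfig.LacpPolicy = $lacp

    $uplinkPg.ReconfigureDVPortgroup($spec)

    # On the physical switch side, the matching Cisco IOS config is
    # "channel-group <N> mode active" instead of the static "mode on".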
SR-IOV

Single Root IO Virtualization (SR-IOV) has received some attention in the past from big-name bloggers, but is now getting the spotlight it deserves. For those who have worked with CNA cards, HP’s Virtual Connect, or Cisco’s Palo card, this will be old hat. It gives you the ability to carve up a PCI Express card into multiple logical devices that are presented to the VMs. The big winner here is the hypervisor, as passing through a card to multiple VMs can result in lower latency and overhead (CPU) because the card is doing the work. It also means that you can pass through a single card to multiple VMs, rather than today, where the card is locked to a single VM.

There are still many caveats. Per VMware:

vSphere vMotion, vSphere FT, and vSphere HA features are not available to the customers when this [SR-IOV] feature is selected.
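
For the curious, SR-IOV on ESXi 5.1 is switched on at the host level by telling the NIC driver how many virtual functions (VFs) to carve out. A minimal sketch, assuming an Intel 10GbE adapter using the ixgbe driver (the driver name and VF count here are examples, not a recommendation):

    # Create 4 virtual functions on the ixgbe-driven adapter (reboot required)
    esxcli system module parameters set -m ixgbe -p "max_vfs=4"

    # Verify the parameter stuck after the reboot
    esxcli system module parameters list -m ixgbe

After the reboot, the VFs can be handed to VMs as passthrough PCI devices.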

Elastic Ports

Not a new feature in vDS 5.1, but one that has been properly exposed in the GUI. When using static binding, you now have the option to set the port allocation method to “Elastic,” as shown below.

[Image: the “Elastic” port allocation option in the port group settings]

If ports are exhausted on the port group, the vDS will automatically expand the port allocation pool to accommodate the required dvPort. When the ports are no longer needed, the pool is trimmed back down to the size set at the time of creation. From my lab tests, it seems Elastic is the default selected port allocation option. For those who were setting this value using PowerCLI back in the vDS 5.0 days (see the sketch below), all you have to do now is toggle this drop-down box and be done with it. I’d imagine that this further takes the wind out of the sails of ephemeral port binding.
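
For reference, here’s a minimal PowerCLI sketch of that old 5.0-era approach, which flipped the port group’s autoExpand flag through the vSphere API; the port group name is a placeholder.

    # Minimal sketch: enable Elastic behavior on a vDS 5.0 port group
    # by setting autoExpand via the API ("dvPG-Servers" is a placeholder)
    $pg = Get-View -ViewType DistributedVirtualPortgroup |
        Where-Object { $_.Name -eq 'dvPG-Servers' }

    $spec = New-Object VMware.Vim.DVPortgroupConfigSpec
    $spec.ConfigVersion = $pg.Config.ConfigVersion
    $spec.AutoExpand    = $true   # $true = Elastic, $false = Fixed

    $pg.ReconfigureDVPortgroup($spec)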
BPDU Filters

All network engineers worth their salt will enable PortFast and BPDU Guard on a switch port headed to an ESXi host. This is because there’s no way to loop a vSphere switch (they don’t connect to each other), so there’s no need to worry about spanning tree causing a loop. The issue, however, is that this opens the door to a potential denial of service attack: a VM that sends out BPDU packets can errdisable the upstream switch ports. In a vSwitch team, this could cause all of the uplinks to shut down as the host continues to migrate the VMs from one active uplink to another.

Thanks to an enhancement, vDS 5.1 allows you to filter BPDU packets on the vSwitch side of the equation. A quick sketch of the setting is below.
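
The filter is toggled per host through the Net.BlockGuestBPDU advanced setting; here’s a minimal PowerCLI sketch (the host name is a placeholder):

    # Minimal sketch: drop BPDU frames sent by VMs on this host
    # (0 = off, 1 = filter; 'esx01.lab.local' is a placeholder)
    Get-VMHost 'esx01.lab.local' |
        Get-AdvancedSetting -Name 'Net.BlockGuestBPDU' |
        Set-AdvancedSetting -Value 1 -Confirm:$false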
Scalability

And saving the “big new numbers” part for last, here are some of the new scalability maximums released with vDS 5.1:

Number of static dvPortgroups goes up from 5,000 to 10,000
Number of dvPorts goes up from 20,000 to 60,000
Number of hosts per vDS goes up from 350 to 500
Number of vDS supported per vCenter goes up from 32 to 128
