Network Elements

Open vSwitch (OVS)

Open vSwitch (OVS) is an open-source software switch implemented in the Linux kernel and designed to work in a multiserver virtualization environment. By default, OVS behaves like a layer-2 switch that maintains a MAC address table. The hypervisor host and VMs connect to virtual ports on the switch. OVS supports many popular switch features, such as VLAN tagging, load balancing, and Link Aggregation Control Protocol (LACP).

Each AHV server maintains an OVS instance, and all OVS instances combine to form a single logical switch. Constructs called bridges manage the switch instances residing on the AHV hosts. Use the following commands to view and configure OVS bridges, bonds, and VLAN tags:

  • ovs-vsctl (on the AHV hosts)
  • ovs-appctl (on the AHV hosts)
  • manage-ovs (on CVMs)

See the Open vSwitch website for more information.
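
As a quick orientation, the following commands illustrate how these tools are typically used to inspect the current OVS configuration. This is a minimal sketch; available subcommands and output vary by AHV and AOS version.

    # On the AHV host: show the OVS configuration (bridges, ports, bonds)
    ovs-vsctl show

    # On the AHV host: show bond status for all bonds
    ovs-appctl bond/show

    # On the CVM: show the uplink (bond) configuration and physical interfaces
    manage_ovs show_uplinks
    manage_ovs show_interfaces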

Bridges

Bridges act as virtual switches to manage traffic between physical and virtual network interfaces. The default AHV configuration includes an OVS bridge called br0 and a native Linux bridge called virbr0. Bridge names can vary between AHV/AOS versions and depending on the configuration changes made on the nodes, but this training uses br0 and virbr0 by default.

The virbr0 Linux bridge carries management and storage communication between the CVM and AHV host. All other storage, host, and VM network traffic flows through the br0 OVS bridge. The AHV host, VMs, and physical interfaces use "ports" for connectivity to the bridge.
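
As an illustration, assuming the default names, you can list the OVS bridges on an AHV host with ovs-vsctl; virbr0 is a native Linux bridge, so it appears with standard Linux tools instead:

    # On the AHV host: list OVS bridges (br0 by default)
    ovs-vsctl list-br

    # virbr0 is a native Linux bridge, visible with the ip tool
    ip link show virbr0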

Ports

Ports are logical constructs created in a bridge that represent connectivity to the virtual switch. Nutanix uses several port types, including internal, tap, VXLAN, and bond.

  • An internal port with the same name as the default bridge (br0) provides access for the AHV host.
  • Tap ports connect virtual NICs presented to VMs.
  • VXLAN ports are used for the IP address management (IPAM) functionality provided by Acropolis.
  • Bonded ports provide NIC teaming for the physical interfaces of the AHV host.
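
For example, listing the ports attached to br0 on an AHV host shows the bond and the tap/VXLAN ports of running VMs. This is a sketch; note that ovs-vsctl list-ports does not include the bridge's own internal port.

    # On the AHV host: list the ports attached to bridge br0
    ovs-vsctl list-ports br0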

Bonds

Bonded ports aggregate the physical interfaces on the AHV host. By default, the system creates a bond named br0-up in bridge br0 containing all physical interfaces. Changes to the default bond br0-up made with manage_ovs commands can rename it to bond0, so bond names on your system might differ from the diagram below. Nutanix recommends using the name br0-up to quickly identify this interface as the bridge br0 uplink. This naming scheme also makes it easy to distinguish the uplinks of additional bridges from each other.

Use only NICs of the same speed within the same bond.
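
As a sketch, based on the manage_ovs syntax described in the AHV Networking Best Practices Guide (flags and behavior can differ between AOS versions, so verify against your release), you could limit the br0-up bond to only the 10 Gb interfaces:

    # On the CVM: show current bond membership for each bridge
    manage_ovs show_uplinks

    # Rebuild the br0-up bond in bridge br0 with only the 10 Gb interfaces
    manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g update_uplinks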

Bond Modes

There are three load-balancing/failover modes that can be applied to bonds:

Active-Backup (default)

With the active-backup bond mode, one interface in the bond carries traffic and other interfaces in the bond are used only when the active link fails. Active-backup is the simplest bond mode, easily allowing connections to multiple upstream switches without any additional switch configuration. The active-backup bond mode requires no special hardware and you can use different physical switches for redundancy.

The tradeoff is that traffic from all VMs uses only a single active link within the bond at one time. All backup links remain unused until the active link fails. In a system with dual 10 Gb adapters, the maximum throughput of all VMs running on a Nutanix node with this configuration is 10 Gbps, the speed of a single link.

This mode offers only failover capability (no traffic load balancing). If the active link goes down, a backup (passive) link activates to provide continued connectivity. AHV transmits all traffic, including CVM and VM traffic, across the active link, so all traffic shares the 10 Gbps of network bandwidth that a single link provides.
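
A minimal sketch of setting and verifying active-backup directly with the OVS tools on an AHV host (on current AOS versions, prefer the Prism Virtual Switch UI or manage_ovs where supported):

    # Set the bond mode of br0-up to active-backup
    ovs-vsctl set port br0-up bond_mode=active-backup

    # Verify which member interface is currently active
    ovs-appctl bond/show br0-up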

Balance-slb

To take advantage of the bandwidth provided by multiple upstream switch links, you can use the balance-slb bond mode. The balance-slb bond mode in OVS takes advantage of all links in a bond and uses measured traffic load to rebalance VM traffic from highly used to less used interfaces. When the configurable bond-rebalance interval expires, OVS uses the measured load for each interface and the load for each source MAC hash to spread traffic evenly among links in the bond. Traffic from some source MAC hashes may move to a less active link to more evenly balance bond member utilization.

Perfectly even balancing may not always be possible, depending on the number of source MAC hashes and their stream sizes. Each individual VM NIC uses only a single bond member interface at a time, but a hashing algorithm distributes the source MAC addresses of multiple VM NICs across bond member interfaces. As a result, it is possible for a Nutanix AHV node with two 10 Gb interfaces to use up to 20 Gbps of network throughput. Individual VM NICs have a maximum throughput of 10 Gbps, the speed of a single physical interface. A VM with multiple NICs could still have more bandwidth than the speed of a single physical interface, but there is no guarantee that its different NICs will land on different physical interfaces.

The default rebalance interval is 10 seconds, but Nutanix recommends setting this interval to 30 seconds to avoid excessive movement of source MAC address hashes between upstream switches. Nutanix has tested this configuration using two separate upstream switches with AHV. If the upstream switches are interconnected physically or virtually, and both uplinks allow the same VLANs, no additional configuration, such as link aggregation, is necessary.
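
A sketch of applying balance-slb with a 30-second rebalance interval using the OVS tools on an AHV host (the interval is specified in milliseconds; on current AOS versions, prefer the Prism Virtual Switch UI or manage_ovs where supported):

    # Enable balance-slb on the br0-up bond
    ovs-vsctl set port br0-up bond_mode=balance-slb

    # Set the bond rebalance interval to 30 seconds (30000 ms)
    ovs-vsctl set port br0-up other_config:bond-rebalance-interval=30000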

Do not use link aggregation technologies such as LACP with balance-slb. The balance-slb algorithm assumes that upstream switch links are independent L2 interfaces and handles broadcast, unknown unicast, and multicast (BUM) traffic by selectively listening for this traffic on only a single active adapter in the bond.

Do not use IGMP snooping on physical switches connected to Nutanix servers using balance-slb. Balance-slb forwards inbound multicast traffic on only a single active adapter and discards multicast traffic from other adapters. Switches with IGMP snooping may discard traffic to the active adapter and only send it to the backup adapters. This mismatch leads to unpredictable multicast traffic behavior. Disable IGMP snooping or configure static IGMP groups for all switch ports connected to Nutanix servers using balance-slb. IGMP snooping is often enabled by default on physical switches.

Neither active-backup nor balance-slb requires any configuration on the switch side.

LACP with Balance-TCP

To take full advantage of the bandwidth that multiple links to upstream switches provide from a single VM, use dynamically negotiated link aggregation and load balancing with balance-tcp. Nutanix recommends dynamic link aggregation with LACP instead of static link aggregation due to improved failure detection and recovery.

Ensure that you have appropriately configured the upstream switches before enabling LACP. On the switch, link aggregation is commonly referred to as port channel or LAG, depending on the switch vendor. Using multiple upstream switches may require additional configuration such as MLAG or vPC. Configure switches to fall back to active-backup mode in case LACP negotiation fails (sometimes called fallback or no suspend-individual). This setting assists with node imaging and initial configuration where LACP may not yet be available.

Review the best practices documents for the switches used in your environment.

With link aggregation negotiated by LACP, multiple links to separate physical switches appear as a single layer-2 (L2) link. A traffic-hashing algorithm such as balance-tcp can split traffic between multiple links in an active-active fashion. Because the uplinks appear as a single L2 link, the algorithm can balance traffic among bond members without any regard for switch MAC address tables. Nutanix recommends using balance-tcp when using LACP and link aggregation, because each TCP stream from a single VM can potentially use a different uplink in this configuration.

With link aggregation, LACP, and balance-tcp, a single guest VM with multiple TCP streams could use up to 20 Gbps of bandwidth in an AHV node with two 10 Gb adapters.
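
A hedged sketch of the OVS-side settings for LACP with balance-tcp on an AHV host, including fallback to active-backup if negotiation fails (configure the switch-side port channels first; on supported AOS versions, use the Prism Virtual Switch UI or manage_ovs instead of editing OVS directly):

    # Enable active LACP negotiation on the br0-up bond
    ovs-vsctl set port br0-up lacp=active

    # Use balance-tcp hashing across the aggregated links
    ovs-vsctl set port br0-up bond_mode=balance-tcp

    # Fall back to active-backup if LACP negotiation fails
    ovs-vsctl set port br0-up other_config:lacp-fallback-ab=true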

Configuring Link Aggregation

To take advantage of the bandwidth that multiple links provide, you will need to use link aggregation. Different switch vendors use different terms to refer to link aggregation, such as port channel or LAG. When configuring link aggregation, remember that:

  • Nutanix and OVS require dynamic link aggregation with LACP.
  • You should not use static link aggregation, such as EtherChannel, with AHV.
  • You should configure switches to fall back to active-backup mode, in case LACP negotiation fails.
  • For clusters running AOS 5.11 to 5.18 with a single bond and bridge, you can use the Uplink Configuration UI on the Network dashboard of the Prism web console to set NIC teaming policies and NIC configurations.
  • For clusters running AOS 5.19 or later, even with multiple bridges, you can use the Prism Virtual Switch UI to configure balance-tcp with LACP.
  • You only need to use the CLI for configuration if you are using a version of AOS prior to 5.11, or if you are using a version of AOS from 5.11 to 5.18 with multiple bridges and bonds.
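
Whichever method you use, you can verify bond membership and LACP negotiation from the AHV host with the standard OVS tools (a sketch; output formats vary by OVS version):

    # Check bond membership and the active load-balancing mode
    ovs-appctl bond/show br0-up

    # Check LACP negotiation status with the upstream switches
    ovs-appctl lacp/show br0-up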

For more information, see the AHV Networking Best Practices Guide and the Host Network Management section of the AHV Administration Guide on the Nutanix Support Portal.

Virtual Local Area Networks (VLANs)

AHV supports the use of VLANs for the CVM, AHV host, and user VMs. You can create and manage a vNIC’s networks for user VMs using the Prism GUI, the Acropolis CLI (aCLI), or REST without any additional AHV host configuration.

Each virtual network in AHV maps to a single VLAN and bridge. You must create each VLAN and virtual network created in AHV on the physical top-of-rack switches as well, but integration between AHV and the physical switch can automate this provisioning.

By default, all VM vNICs are created in access mode on br0, which permits only one VLAN per virtual network. However, you can choose to configure a vNIC in trunked mode using the aCLI instead, allowing multiple VLANs on a single VM NIC for network-aware user VMs.
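
As an illustrative sketch using the aCLI from a CVM (the VM and network names are placeholders, and parameter names such as vlan_mode and trunked_networks may vary between AOS versions):

    # Create a virtual network mapped to VLAN 10
    acli net.create vlan10 vlan=10

    # Attach a default access-mode vNIC to a VM
    acli vm.nic_create myvm network=vlan10

    # Attach a trunked vNIC that also carries VLANs 20 and 30 to a network-aware VM
    acli vm.nic_create myvm network=vlan10 vlan_mode=kTrunked trunked_networks=20,30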

IP Address Management (IPAM)

AHV supports two different ways to provide VM connectivity: managed and unmanaged networks.

With unmanaged networks, VMs get a direct connection to their VLAN of choice. Each virtual network in AHV maps to a single VLAN and bridge. All VLANs allowed on the physical switch port to the AHV host are available to the CVM and guest VMs.

A managed network is a VLAN plus IP Address Management (IPAM). IPAM is the cluster's ability to function as a DHCP server, assigning IP addresses to VMs that connect to the managed network.

Administrators can configure each virtual network with a specific IP subnet, associated domain settings, and an IP address pool for assignment.

  • The Acropolis Leader acts as an internal DHCP server for all managed networks.
  • The OVS is responsible for encapsulating DHCP requests from the VMs in VXLAN and forwarding them to the Acropolis Leader.
  • VMs receive their IP addresses from the Acropolis Leader’s responses.
  • The IP address assigned to a VM is persistent until you delete the VNIC or destroy the VM.

The Acropolis Leader runs the CVM administrative process that tracks device IP addresses. This process creates associations between an interface's MAC address, its IP address, and the defined pool of IP addresses for the AOS DHCP server.
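
A sketch of creating a managed network with IPAM from a CVM using the aCLI (the subnet, gateway, and pool addresses are placeholders; parameter syntax may differ slightly between AOS versions):

    # Create a managed network on VLAN 100 with a /24 subnet and gateway 10.10.100.1
    acli net.create vlan100-managed vlan=100 ip_config=10.10.100.1/24

    # Add a DHCP pool that the Acropolis Leader assigns VM addresses from
    acli net.add_dhcp_pool vlan100-managed start=10.10.100.50 end=10.10.100.200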
