VMware vDS LBT

This article describes how VMware Load-Based Teaming (LBT) works and how it performs in a virtualized data center. Using the SPECweb2005 benchmark, a little over 13Gbps of network traffic was load balanced across four virtual machines while maintaining 80% CPU utilization and 99.99% quality of service. LBT dynamically adjusts traffic placement based on actual network load, avoiding the problem seen with traditional policies where some uplinks sit idle while others are saturated.

 

http://blogs.vmware.com/performance/2010/12/vmware-load-based-teaming-lbt-performance.html

VMware Load-Based Teaming (LBT) Performance


Virtualized data center environments are often characterized by both the variety and the sheer number of traffic flows, whose network demands often fluctuate widely and unpredictably. Provisioning fixed network capacity for these traffic flows can either result in poor performance (by under-provisioning) or waste valuable capital (by over-provisioning).

NIC teaming in vSphere enables you to distribute (or load balance) the network traffic from different traffic flows among multiple physical NICs by providing a mechanism to logically bind together multiple physical NICs. This results in increased throughput and fault tolerance and alleviates the challenge of network-capacity provisioning to a great extent. Creating a NIC team in vSphere is as simple as adding multiple physical NICs to a vSwitch and choosing a load balancing policy.
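
As an illustration, a NIC team on a standard vSwitch could also be scripted with pyVmomi, the open-source vSphere Python SDK. The sketch below is a minimal, hypothetical example: it assumes "host" is an already retrieved vim.HostSystem, and the vSwitch and vmnic names are placeholders.

```python
# Minimal pyVmomi sketch (assumptions: "host" is a vim.HostSystem obtained
# elsewhere; "vSwitch1", "vmnic2", and "vmnic3" are example names).
from pyVmomi import vim

net_sys = host.configManager.networkSystem

# Fetch the current spec of the vSwitch we want to turn into a NIC team.
vsw = next(s for s in net_sys.networkInfo.vswitch if s.name == "vSwitch1")
spec = vsw.spec  # assumes the existing policy/nicTeaming fields are populated

# Bind two physical NICs to the vSwitch, forming the team.
spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"])

# Choose one of the classic static policies, e.g. route based on the
# originating virtual port ID, and mark both NICs as active.
spec.policy.nicTeaming.policy = "loadbalance_srcid"
spec.policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
    activeNic=["vmnic2", "vmnic3"])

net_sys.UpdateVirtualSwitch(vswitchName="vSwitch1", spec=spec)
```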

vSphere 4 (and prior ESX releases) provides several load balancing choices, which base routing on the originating virtual port ID, an IP hash, or a source MAC hash. While these choices work fine in the majority of virtual environments, they all share a few limitations. All of these policies statically map virtual NICs to physical NICs (based on virtual switch port IDs or MAC addresses) rather than basing load balancing decisions on current network traffic, and therefore may not distribute traffic effectively among the physical uplinks. In addition, none of these policies takes into account disparities in physical NIC capacity (such as a mixture of 1 GbE and 10 GbE physical NICs in a NIC team). The next section describes the teaming policy introduced in vSphere 4.1 that addresses these shortcomings.

Load-Based Teaming (LBT)

vSphere 4.1 introduces a load-based teaming (LBT) policy that is traffic-load-aware and ensures physical NIC capacity in a NIC team is optimized. Note that LBT is supported only with the vNetwork Distributed Switch (vDS). LBT avoids the situation of other teaming policies where some of the distributed virtual uplinks (dvUplinks) in a DV Port Group’s team are idle while others are completely saturated. LBT reshuffles the port binding dynamically, based on load and dvUplink usage, to make efficient use of the available bandwidth.

LBT is not the default teaming policy when you create a DV Port Group, so it is up to you to configure it as the active policy. Because LBT moves flows among uplinks, it may occasionally cause packet reordering at the receiver. LBT moves a flow only when the mean send or receive utilization on an uplink exceeds 75% of capacity over a 30-second period, and it will not move flows more often than once every 30 seconds.
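
In the vSphere API, LBT corresponds to the uplink teaming policy value "loadbalance_loadbased". The hedged pyVmomi sketch below shows one way it could be made the active policy on an existing DV Port Group; "pg" is assumed to be a vim.dvs.DistributedVirtualPortgroup retrieved elsewhere, and task/error handling is omitted.

```python
# Hypothetical pyVmomi sketch: make LBT the active teaming policy on an
# existing DV Port Group ("pg" is assumed to be obtained elsewhere).
from pyVmomi import vim

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.inherited = False
teaming.policy = vim.StringPolicy(inherited=False,
                                  value="loadbalance_loadbased")  # LBT

port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.uplinkTeamingPolicy = teaming

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = pg.config.configVersion  # required for a reconfigure
spec.defaultPortConfig = port_cfg

pg.ReconfigureDVPortgroup_Task(spec)
```

The same policy appears in the vSphere Client as "Route based on physical NIC load" under the DV Port Group's teaming and failover settings.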

Performance

In this section, we describe in detail the test-bed configuration, the workload used to generate the network traffic flows, and the test results.

Test configuration

In our test configuration, we used an HP DL370 G6 server running the GA release of vSphere 4.1, and several client machines that generated SPECweb®2005 traffic. The server was configured with dual-socket, quad-core 3.1GHz Intel Xeon W5580 processors, 96GB of RAM, and two 10 GbE Intel Oplin NICs. The server hosted four virtual machines and SPECweb2005 traffic was evenly distributed among all four VMs.  Each VM was configured with 4 vCPUs, 16GB memory, 2 vmxnet3 vNICs, and SLES 11 as the guest OS.

SPECweb2005 is an industry-standard web server benchmark defined by the Standard Performance Evaluation Corporation (SPEC). The benchmark consists of three workloads: Banking, Ecommerce, and Support, each with different characteristics representing common use cases for web servers. We used the Support workload in our tests, which is the most I/O-intensive of the three.

Baseline performance

In our baseline configuration, we configured a vDS with two dvUplinks and two DV Port Groups. Through the vDS interface, we mapped the vNICs of the first two VMs to the first dvUplink and the vNICs of the other two VMs to the second dvUplink. The SPECweb2005 workload was evenly distributed among all four VMs, so both dvUplinks were equally stressed. In terms of load balancing, this baseline configuration represents the optimal performance point. At a load of 30,000 SPECweb2005 Support users, we observed a little over 13Gbps of traffic, that is, about 6.5Gbps per 10 GbE uplink. The %CPU utilization and the percentage of SPECweb2005 user sessions that met the quality-of-service (QoS) requirements were 80% and 99.99%, respectively. We chose this load point because customers typically do not stress their systems beyond this level.
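
As a quick sanity check on these numbers (all values taken from the text above), a little over 13Gbps split evenly across two 10 GbE uplinks works out to roughly 65% utilization per uplink, which also happens to sit below the 75% saturation threshold that LBT uses in the tests that follow:

```python
# Quick arithmetic check of the baseline numbers quoted above.
total_gbps = 13.0        # aggregate SPECweb2005 traffic at 30,000 users
link_speed_gbps = 10.0   # each dvUplink is a 10 GbE NIC
uplinks = 2

per_uplink = total_gbps / uplinks            # ~6.5 Gbps per uplink
utilization = per_uplink / link_speed_gbps   # ~0.65 -> 65% of link speed
print(f"{per_uplink:.1f} Gbps per uplink ({utilization:.0%} utilization)")
```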

LBT performance

We then reconfigured the vDS with two dvUplinks and a single DV Port Group to which all the vNICs of the VMs were mapped. The DV Port Group was configured with the LBT teaming policy. We used the default settings of LBT, which are primarily the wakeup period (30 seconds) and link saturation threshold (75%). Our goal was to evaluate the efficacy of the LBT policy in terms of load balancing and the added CPU cost, if any, when the same benchmark load of 30,000 SPECweb2005 support sessions was applied.

Before the start of the test, we noted that the traffic from all the VMs propagated through the first dvUplink. Note that the initial affiliation of the vNICs to the dvUplinks is made based on the hash of the virtual switch port IDs. To find the current affiliations of the vNICs to the dvUplinks, run the esxtop command and find the port-to-uplink mappings in the network screen. You can also use the “net-lbt” tool to find affiliations as well as to modify LBT settings.

The figure below shows the network bandwidth usage on both of the dvUplinks during the entire benchmark period.

[Figure: network bandwidth usage on the two dvUplinks over the benchmark run]

A detailed explanation of the bandwidth usage in each phase follows:

Phase 1: Because all the virtual switch port IDs of the four VMs were hashed to the same dvUplink, only one of the dvUplinks was active. During this phase of the benchmark ramp-up, the total network traffic was below 7.5Gbps. Because the usage on the active dvUplink was lower than the saturation threshold, the second dvUplink remained unused.

Phase 2: The benchmark workload continued to ramp up and when the total network traffic exceeded 7.5Gbps (above the saturation threshold of 75% of link speed), LBT kicked in and dynamically remapped the port-to-uplink mapping of one of the vNIC ports from the saturated dvUplink1 to the unused dvUplink2. This resulted in dvUplink2 becoming active.  The usage on both the dvUplinks remained below the saturation threshold.

Phase 3: As the benchmark workload further ramped up and the total network traffic exceeded 10Gbps (7.5Gbps on dvUplink1 and 2.5Gbps on dvUplink2), LBT kicked in yet again, and dynamically changed port-to-uplink mapping of one of the three active vNIC ports currently mapped to the saturated dvUplink.

Phase 4: As the benchmark reached a steady state, with the total network traffic at a little over 13Gbps, both dvUplinks saw the same usage.
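
The behavior across the four phases can be illustrated with a toy Python simulation of the decision rule described earlier (move a flow only when an uplink exceeds 75% of link speed, at most one move per wakeup period). This is a simplified illustration with assumed per-VM traffic values, not VMware's implementation, which also averages send and receive utilization over the 30-second window.

```python
# Toy simulation of the LBT decision rule: a port is remapped only when a
# dvUplink exceeds 75% of link speed, and at most one move happens per wakeup.
# The per-VM traffic values are assumptions chosen to mirror the four phases.

LINK_GBPS = 10.0
THRESHOLD = 0.75 * LINK_GBPS  # 7.5 Gbps on a 10 GbE dvUplink

# Four vNIC ports, all initially hashed to dvUplink1 (index 0).
port_to_uplink = {"vm1": 0, "vm2": 0, "vm3": 0, "vm4": 0}

def uplink_load(per_port_gbps):
    """Aggregate per-port traffic onto the two dvUplinks."""
    load = [0.0, 0.0]
    for port, gbps in per_port_gbps.items():
        load[port_to_uplink[port]] += gbps
    return load

def lbt_wakeup(per_port_gbps):
    """One LBT wakeup: move a single port off a saturated dvUplink, if any."""
    load = uplink_load(per_port_gbps)
    for uplink, gbps in enumerate(load):
        if gbps > THRESHOLD:
            victim = next(p for p, u in port_to_uplink.items() if u == uplink)
            target = min(range(len(load)), key=lambda u: load[u])
            if target != uplink:
                port_to_uplink[victim] = target
            break  # at most one remapping per wakeup period

# Ramp-up: each VM's share grows from ~1.5 Gbps to ~3.3 Gbps (Phases 1-4).
for per_vm in (1.5, 2.0, 2.6, 3.3):
    traffic = {port: per_vm for port in port_to_uplink}
    lbt_wakeup(traffic)
    print(f"{per_vm:.1f} Gbps/VM -> dvUplink loads {uplink_load(traffic)}")
```

With these assumed values, the simulated loads go from all traffic on one uplink to an even split at roughly 13Gbps total, mirroring the four phases above.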

We did not observe any spikes in CPU usage or any dip in SPECweb2005 QoS during any of the four phases. The %CPU utilization and the percentage of SPECweb2005 user sessions that met the QoS requirements were 80% and 99.99%, respectively.

These results show that LBT can serve as a very effective load balancing policy to optimally use all the available dvUplink capacity while matching the performance of a manually load-balanced configuration.

Summary

Load-based teaming (LBT) is a dynamic and traffic-load-aware teaming policy that can ensure physical NIC capacity in a NIC team is optimized.  In combination with VMware Network IO Control (NetIOC), LBT offers a powerful solution that will make your vSphere deployment even more suitable for your I/O-consolidated datacenter.
