A: Yes and no. Both Virtual Machine Device Queues (VMDq) and Single Root I/O Virtualization (SR-IOV) are technologies that improve network performance for virtual machines (VMs) and minimize overhead and CPU bottlenecks in a VM manager such as the Hyper-V Windows Server management partition. However, they do it in different ways.
With VMDq, the VM manager can assign a separate queue in the network adapter to each VM, which removes the overhead of the virtual switch having to sort and route incoming packets to their destinations. However, the VM manager and the virtual switch still have to copy the traffic from the VMDq queue to the VM, which, for Hyper-V, travels over the kernel-mode memory bus.
Additionally, because there are multiple queues, the incoming load can be spread over multiple processor cores, removing a potential processing bottleneck. VMDq reduces the work on the virtual switch and enables better scalability, but the traffic still flows through the virtual switch and over normal data transports (VMBus), as shown in the image below.
[img]http://windowsitpro.com/site-files/windowsitpro.com/files/archive/windowsitpro.com/content/content/142153/networkoptimizationvmdqsriovsml.jpg[/img]
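If you want to see where you stand on a Hyper-V host, the standard NetAdapter and Hyper-V PowerShell cmdlets cover VMDq (which Windows calls VMQ). The sketch below is a minimal example; the adapter name "Ethernet 1" and VM name "TestVM" are placeholders for your own environment.
[code]
# Show VMQ capability and current state for each physical adapter
Get-NetAdapterVmq

# Enable VMQ on a specific adapter ("Ethernet 1" is a placeholder name)
Enable-NetAdapterVmq -Name "Ethernet 1"

# Give a VM's virtual NIC a VMQ weight so the virtual switch can assign
# it a hardware queue (100 is the default; 0 disables VMQ for that vNIC)
Set-VMNetworkAdapter -VMName "TestVM" -VmqWeight 100
[/code]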
SR-IOV works similarly to VMDq, but instead of creating a separate queue for each VM, it actually creates a separate Virtual Function (VF) that acts like a separate network device for each VM. The VM communicates with it directly, completely [b]bypassing[/b] the virtual switch and any data-copying load on the VM manager, since SR-IOV uses Direct Memory Access (DMA) between the VF and the VM.
SR-IOV offers the best network performance but requires support in the hypervisor, motherboard, and network adapter, and it can affect the [b]portability[/b] of VMs between hardware that supports SR-IOV and hardware that doesn't.
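Before planning on SR-IOV, it's worth verifying that the whole chain supports it. The sketch below, again with placeholder names ("Ethernet 1", "SriovSwitch", "TestVM"), checks host and adapter support and then requests a VF for a VM; note that SR-IOV can only be enabled on a virtual switch at the time the switch is created.
[code]
# Does the host platform support SR-IOV, and if not, why not?
(Get-VMHost).IovSupport
(Get-VMHost).IovSupportReasons

# Show SR-IOV capability of the physical adapters
Get-NetAdapterSriov

# SR-IOV must be enabled when the external switch is created;
# it cannot be turned on for an existing switch
New-VMSwitch -Name "SriovSwitch" -NetAdapterName "Ethernet 1" -EnableIov $true

# An IovWeight greater than 0 asks Hyper-V to back the vNIC with a VF
Set-VMNetworkAdapter -VMName "TestVM" -IovWeight 100

# Confirm the vNIC's SR-IOV settings
Get-VMNetworkAdapter -VMName "TestVM" | Select-Object VMName, IovWeight, Status
[/code]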