Vhost vs vDPA
- Both are Virtio back-ends: they follow the virtio protocol, and both support VM live migration
- vhost-user implements the data plane in software on top of DPDK
- vDPA offloads the data plane to SmartNIC hardware
Environment Overview
- Server: Huawei Fusion V5
- 网卡:NVIDIA ConnectX-6 Dx 100GbE
- DPDK Version: 20.11.3
- OFED Version: MLNX_OFED_LINUX-5.4-3.0.3.0
- Qemu Version: 2.11.1
- Guest OS: Alpine-virt-3.19.0-x86_64
QEMU Preparation
# Pull the Alpine-virt ISO (pick the virt flavor: it ships the virtio drivers and is only ~60 MB), then create a virtual disk.
wget https://mirrors.tuna.tsinghua.edu.cn/alpine/latest-stable/releases/x86_64/alpine-virt-3.19.0-x86_64.iso
qemu-img create -f qcow2 alpine.qcow2 10G
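Both dpdk-testpmd and the memory-backend-file object used below expect hugepages to be available. A minimal sketch, assuming 2 MB pages and the usual /dev/hugepages mount point:
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # reserve 1024 x 2 MB pages
mount -t hugetlbfs nodev /dev/hugepages    # usually already mounted by the distro
grep Huge /proc/meminfo                    # HugePages_Free should be non-zero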
A quick try of Vhost-User~
- Start dpdk-testpmd (if OvS-DPDK or VPP is not installed, testpmd works just as well), attaching both the vhost port and the PF:
./dpdk-testpmd -l 0-1 -n 1 --vdev 'eth_vhost0,iface=/tmp/sock0' -a 5e:00.0 -- -i --forward-mode=io
start
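Before starting the guest it is worth confirming that the eth_vhost0 vdev actually created the socket (paths as above; port numbering may differ on other setups):
ls -l /tmp/sock0        # the vhost-user socket created by eth_vhost0 (server mode by default)
show port info all      # inside the testpmd prompt: one port is the PF, the other the vhost port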
- Start QEMU (virtio-net acts as the vhost-user client, so QEMU is started after testpmd, and only a single virtqueue is configured):
qemu-system-x86_64 -enable-kvm -smp 1 -hda ./alpine.qcow2 -cdrom alpine-virt-3.19.0-x86_64.iso -nographic -m 1G \
-object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on -mem-prealloc -numa node,memdev=mem0 \
-chardev socket,id=char1,path=/tmp/sock0 -netdev type=vhost-user,id=hostnet1,chardev=char1 \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:00:00:14
# Inside the guest:
ifconfig eth0 192.168.201.102
ping 192.168.201.103    # check connectivity
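If the ping does not go through, the testpmd counters help tell which side is dropping packets; these are standard testpmd prompt commands:
show port stats all     # RX/TX counters of the PF and the vhost port
clear port stats all    # reset the counters before retrying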
A quick try of vDPA~
- Configure OFED: the vDPA setup is tightly coupled to the NIC driver, so make sure OFED is installed properly.
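For reference, a typical OFED install for DPDK use looks roughly like the following (the exact package name and flags vary between OFED releases and distros, so treat this as a sketch rather than the exact procedure used here):
tar xzf MLNX_OFED_LINUX-5.4-3.0.3.0-<distro>-x86_64.tgz
cd MLNX_OFED_LINUX-5.4-3.0.3.0-<distro>-x86_64
./mlnxofedinstall --upstream-libs --dpdk   # install with the userspace libraries DPDK links against
/etc/init.d/openibd restart                # reload the mlx5 driver stack
ofed_info -s                               # confirm the installed OFED version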
- Build and run the DPDK vdpa example application:
cd dpdk_src/examples/vdpa
make
./build/vdpa -a 5e:00.0,class=vdpa --log-level=pmd,info -- -i   # this command comes from NVIDIA (Mellanox) and differs slightly from the one in the DPDK docs
create /tmp/sock0 5e:00.0
stats 5e:00.0 0x0001    # show the statistics of queue 1
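The sample application's interactive prompt also has a `list` command, which is handy for checking that the mlx5 vdpa driver actually registered the device before issuing `create`:
list                    # should show 5e:00.0 together with its supported queue number and features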
- Run QEMU (with multiple virtqueues configured):
qemu-system-x86_64 -enable-kvm -smp 1 -hda ./alpine.qcow2 -cdrom alpine-virt-3.19.0-x86_64.iso -nographic \
-m 1G -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on -mem-prealloc \
-numa node,memdev=mem0 -chardev socket,id=char1,path=/tmp/sock0 \
-netdev type=vhost-user,queues=16,id=hostnet1,chardev=char1 \
-device virtio-net-pci,mq=on,netdev=hostnet1,id=net1,mac=52:54:00:00:00:14
# Inside the guest:
ifconfig eth0 192.168.201.102
ping 192.168.201.103    # check connectivity; ICMP latency looks a bit high, the configuration probably still needs tuning
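One likely culprit for the latency: with mq=on the guest still uses a single queue pair by default, so the extra virtqueues configured above stay idle until they are enabled explicitly. A sketch of what to try inside the Alpine guest (ethtool is not installed there by default):
apk add ethtool                 # Alpine: pull in ethtool first
ethtool -l eth0                 # show available vs. currently enabled combined channels
ethtool -L eth0 combined 16     # enable all 16 queue pairs negotiated with QEMU
On the QEMU side it may also help to give the device enough MSI-X vectors for the extra queues (commonly 2*queues+2, e.g. vectors=34 on the virtio-net-pci device).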