Setting up a vpp+dpdk environment

1. Installation

1.1 Environment preparation

Download a virtual machine

Download VMware Fusion | VMware

ubuntu

https://cn.ubuntu.com/download/desktop


Install dependency packages

# update the package sources

sudo apt-get update

sudo apt-get upgrade

# install the build-essential package (if not already present)

sudo apt-get install build-essential

sudo apt-get install python-lzma python-sqlitecachec python-urlgrabber

# install yum

sudo apt-get install yum

Install libpcap

sudo apt-get install libpcap-dev

sudo apt-get install numactl

sudo apt install openssh-server

sudo apt install net-tools

sudo apt-get install git

Install mkdep

About mkdep: some Makefiles invoke the mkdep command, which is usually not installed, so make fails with an error. The fix on Ubuntu is to type mkdep in a terminal, follow the prompt to install it online, and then rerun make.

sudo apt install bmake

Install nasm

sudo apt-get install nasm

Install asciidoc

sudo apt-get install asciidoc

Install iperf3 (used later for testing)

sudo apt install iperf3

1.1.1 Install clang 11

bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"

Add the repository entries:

vi /etc/apt/sources.list

deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-11 main

deb-src http://apt.llvm.org/bionic/ llvm-toolchain-bionic-11 main

Update apt:

sudo apt-get update

sudo apt-get upgrade

wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key|sudo apt-key add -

sudo apt-get install clang-11 lldb-11 lld-11

sudo apt-get install libc++-11-dev libc++abi-11-dev

cd /usr/bin

sudo ln -s clang-11 clang

sudo ln -s clang++-11 clang++

sudo ln -s /usr/bin/llvm-ar-11 /usr/bin/llvm-ar

sudo ln -s /usr/bin/llvm-as-11 /usr/bin/llvm-as

sudo ln -s /usr/bin/clangd-11 /usr/bin/clangd

sudo ln -s /usr/bin/clang-tidy-11 /usr/bin/clang-tidy

1.2 Code repositories

Official repository: https://gerrit.fd.io/r/vpp

GitHub mirror: https://github.com/FDio/vpp

Download the code: git clone https://github.com/FDio/vpp

or: git clone https://gerrit.fd.io/r/vpp

1.3 Building the code

Build VPP

  1. In the vpp directory, install the dependencies: make install-dep, make install-ext-deps
  2. Build a debug-enabled version: make build
  3. Build Debian packages: make pkg-deb
  4. Install them: dpkg -i build-root/*.deb (this step configures hugepages automatically; see the sketch after this list)
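Putting the steps together, a minimal build-and-install sequence might look like this (a sketch; it assumes the vpp tree was cloned to ~/code/vpp on a Debian/Ubuntu host):

cd ~/code/vpp

make install-dep          # toolchain and host build dependencies

make install-ext-deps     # external dependencies (dpdk, etc.)

make build                # debug build under build-root/

make pkg-deb              # produce .deb packages

sudo dpkg -i build-root/*.deb   # install; also configures hugepages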

Build dpdk

  1. The latest dpdk releases no longer produce an igb_uio.ko file when built.
  2. Download dpdk 19.08 instead; building that version yields igb_uio.ko.
     - You can change dpdk_version in build/external/packages/dpdk.mk (in the vpp tree) and run make build to download the dpdk source automatically (see the sketch after this list).
     - After make build, the source archive lands in build/external/downloads; unpack the dpdk archive and enter the dpdk directory:

xz -d dpdk-21.08.tar.xz

tar -vxf dpdk-21.08.tar

  3. Build.
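For example, pinning the dpdk version that make build downloads could look like the following sketch; the exact variable spelling in build/external/packages/dpdk.mk varies between vpp releases, so check it first:

grep -n "version" build/external/packages/dpdk.mk   # locate the version variable

sed -i 's/^dpdk_version.*/dpdk_version := 19.08/' build/external/packages/dpdk.mk   # variable name per the note above

make build   # downloads the pinned dpdk into build/external/downloads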

DPDK

Install meson

sudo apt install meson

Install pyelftools

sudo apt install python3-pip

python3 -m pip install --upgrade pip setuptools wheel

sudo apt-get install -y python3-pyelftools python-pyelftools

Since release 20.02, dpdk no longer builds igb_uio.ko by default.

Setting CONFIG_RTE_EAL_IGB_UIO=y in config/common_base re-enables the build.

Newer DPDK releases have removed the driver code entirely; download it from git://dpdk.org/dpdk-kmods
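The transcript below assumes dpdk-kmods has already been cloned; fetching it looks like this:

git clone git://dpdk.org/dpdk-kmods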

Build it directly:

zhuyu@ubuntu:~/code$ cd dpdk-kmods/

zhuyu@ubuntu:~/code/dpdk-kmods$ cd linux/

zhuyu@ubuntu:~/code/dpdk-kmods/linux$ cd igb_uio/

zhuyu@ubuntu:~/code/dpdk-kmods/linux/igb_uio$ make

make -C /lib/modules/`uname -r`/build/ M=/home/zhuyu/code/dpdk-kmods/linux/igb_uio

make[1]: Entering directory '/usr/src/linux-headers-5.11.0-40-generic'

  CC [M]  /home/zhuyu/code/dpdk-kmods/linux/igb_uio/igb_uio.o

  MODPOST /home/zhuyu/code/dpdk-kmods/linux/igb_uio/Module.symvers

  CC [M]  /home/zhuyu/code/dpdk-kmods/linux/igb_uio/igb_uio.mod.o

  LD [M]  /home/zhuyu/code/dpdk-kmods/linux/igb_uio/igb_uio.ko

  BTF [M] /home/zhuyu/code/dpdk-kmods/linux/igb_uio/igb_uio.ko

Skipping BTF generation for /home/zhuyu/code/dpdk-kmods/linux/igb_uio/igb_uio.ko due to unavailability of vmlinux

make[1]: Leaving directory '/usr/src/linux-headers-5.11.0-40-generic'

zhuyu@ubuntu:~/code/dpdk-kmods/linux/igb_uio$ sudo  insmod igb_uio.ko

zhuyu@ubuntu:~/code/dpdk-kmods/linux/igb_uio$ lsmod | grep uio

igb_uio                20480  0

uio_pci_generic        16384  0

uio                    20480  2 igb_uio,uio_pci_generic

Build the libraries, drivers, and test applications

  meson build

  # to include the examples, use:

  # meson -Dexamples=all build

  ninja -C build

Reserve hugepage memory

  mkdir -p /dev/hugepages

  mountpoint -q /dev/hugepages || mount -t hugetlbfs nodev /dev/hugepages

  echo 64 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

  # the last command may fail without root permissions
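The echo above does not survive a reboot. One way to make the reservation persistent is a sysctl drop-in (a sketch; 64 matches the page count used above, and the file name is arbitrary):

echo 'vm.nr_hugepages=64' | sudo tee /etc/sysctl.d/80-hugepages.conf

sudo sysctl -p /etc/sysctl.d/80-hugepages.conf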

2. First use

2.1 Binding interfaces

Work from the dpdk directory:

Check interface status:

./usertools/dpdk-devbind.py -s

zhuyu@ubuntu:~/code/vpp/build/external/downloads/dpdk-21.08$ ./usertools/dpdk-devbind.py -s

Network devices using kernel driver

===================================

0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens33 drv=e1000 unused=vfio-pci,uio_pci_generic *Active*

No 'Baseband' devices detected

==============================

No 'Crypto' devices detected

============================

No 'Eventdev' devices detected

==============================

No 'Mempool' devices detected

=============================

No 'Compress' devices detected

==============================

No 'Misc (rawdev)' devices detected

===================================

No 'Regex' devices detected

===========================

Interfaces are normally claimed by the kernel first.

Bind the NICs:

Note: the NICs chosen for binding must be in a (VM network) mode that can communicate with the PC.

zhuyu@ubuntu:~/code/vpp/build/external/downloads/dpdk-21.08$ sudo ./usertools/dpdk-devbind.py --bind=igb_uio 02:05.0

zhuyu@ubuntu:~/code/vpp/build/external/downloads/dpdk-21.08$ sudo ./usertools/dpdk-devbind.py --bind=igb_uio 02:06.0

Output after a successful bind:

zhuyu@ubuntu:~/code/vpp/build/external/downloads/dpdk-21.08$ ./usertools/dpdk-devbind.py -s

Network devices using DPDK-compatible driver

============================================

0000:02:05.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' drv=igb_uio unused=e1000,vfio-pci,uio_pci_generic

0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' drv=igb_uio unused=e1000,vfio-pci,uio_pci_generic

Network devices using kernel driver

===================================

0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens33 drv=e1000 unused=igb_uio,vfio-pci,uio_pci_generic *Active*

No 'Baseband' devices detected

==============================

No 'Crypto' devices detected

============================

No 'Eventdev' devices detected

==============================

No 'Mempool' devices detected

=============================

No 'Compress' devices detected

==============================

No 'Misc (rawdev)' devices detected

===================================

No 'Regex' devices detected

===========================

zhuyu@ubuntu:~/code/vpp/build/external/downloads/dpdk-21.08$

Unbind a NIC from dpdk: ./usertools/dpdk-devbind.py -u 0000:02:06.0

2.2 Testing dpdk (may not work in every environment)

 sudo ./build/app/dpdk-testpmd -c3 --vdev=net_pcap0,iface=ens37 --vdev=net_pcap1,iface=ens38 -- -i --nb-cores=2 --nb-ports=2 --total-num-mbufs=2048

 sudo ./build/examples/dpdk-l2fwd -- -p 3

2.3 Starting VPP

Edit the vpp startup configuration file.

Uncomment the dpdk section in /etc/vpp/startup.conf and add the bound NICs.

The dpdk section:

 dpdk {

        ## Change default settings for all interfaces

        dev 0000:02:05.0 {

                num-rx-queues 1

        }

        dev 0000:02:06.0 {

                num-rx-queues 1

        }

        # dev default {

                ## Number of receive queues, enables RSS

                ## Default is 1

                # num-rx-queues 3

                ## Number of transmit queues, Default is equal

                ## to number of worker threads or 1 if no workers treads

                # num-tx-queues 3

                ## Number of descriptors in transmit and receive rings

                ## increasing or reducing number can impact performance

                ## Default is 1024 for both rx and tx

                # num-rx-desc 512

                # num-tx-desc 512

                ## VLAN strip offload mode for interface

                ## Default is off

                # vlan-strip-offload on

                ## TCP Segment Offload

                ## Default is off

                ## To enable TSO, 'enable-tcp-udp-checksum' must be set

                # tso on

                ## Devargs

                ## device specific init args

                ## Default is NULL

                # devargs safe-mode-support=1,pipeline-mode-support=1

                ## rss-queues

                ## set valid rss steering queues

                # rss-queues 0,2,5-7

        # }

## Whitelist specific interface by specifying PCI address

        # dev 0000:02:00.0

        ## Blacklist specific device type by specifying PCI vendor:device

        ## Whitelist entries take precedence

        # blacklist 8086:10fb

        ## Set interface name

        # dev 0000:02:00.1 {

        #       name eth0

        # }

        ## Whitelist specific interface by specifying PCI address and in

        ## addition specify custom parameters for this interface

        # dev 0000:02:00.1 {

        #       num-rx-queues 2

        # }

        ## Change UIO driver used by VPP, Options are: igb_uio, vfio-pci,

        ## uio_pci_generic or auto (default)

        ## using igb_uio here since the NICs above were bound to it (vfio-pci also works)
        uio-driver igb_uio

        ## Disable multi-segment buffers, improves performance but

        ## disables Jumbo MTU support

        # no-multi-seg

        ## Change hugepages allocation per-socket, needed only if there is need for

        ## larger number of mbufs. Default is 256M on each detected CPU socket

        # socket-mem 2048,2048

        ## Disables UDP / TCP TX checksum offload. Typically needed for use

        ## faster vector PMDs (together with no-multi-seg)

        # no-tx-checksum-offload

        ## Enable UDP / TCP TX checksum offload

        ## This is the reversed option of 'no-tx-checksum-offload'

        # enable-tcp-udp-checksum

        ## Enable/Disable AVX-512 vPMDs

        # max-simd-bitwidth <256|512>

}

Restart vpp: service vpp restart
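To verify that vpp came back up and claimed the bound NICs, a quick check from the shell (assuming the .deb packages installed vppctl):

sudo service vpp restart

sudo vppctl show interface   # GigabitEthernet2/5/0 and GigabitEthernet2/6/0 should be listed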

2.4 Configuring VPP

Start vppctl and bring the interfaces up:

vpp# set interface state GigabitEthernet2/5/0 up

vpp# set interface state GigabitEthernet2/6/0 up

vpp# show interface                             

              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count     

GigabitEthernet2/5/0              1      up          9000/0/0/0     

GigabitEthernet2/6/0              2      up          9000/0/0/0     

local0                            0     down          0/0/0/0       

vpp#

Assign IPs to the interfaces:

vpp# set interface ip address GigabitEthernet2/5/0 192.168.18.129/24

vpp# set interface ip address GigabitEthernet2/6/0 192.168.18.130/24

The same configuration as a plain command sequence, with logging and rx-mode settings added:

set logging class dpdk level debug

set logging size 10000

set interface ip address GigabitEthernet2/5/0 192.168.18.129/24

set interface ip address GigabitEthernet2/6/0 192.168.19.130/24

set interface state GigabitEthernet2/5/0 up

set interface state GigabitEthernet2/6/0 up

set interface rx-mode GigabitEthernet2/5/0 interrupt

set interface rx-mode GigabitEthernet2/6/0 interrupt

set interface rx-mode GigabitEthernet2/5/0 polling

Now ping this IP from the PC; if the ping succeeds, the environment was set up successfully!

banma-1894@banma-1894deMacBook-Pro Code % ping 192.168.18.129

PING 192.168.18.129 (192.168.18.129): 56 data bytes

64 bytes from 192.168.18.129: icmp_seq=0 ttl=64 time=0.733 ms

64 bytes from 192.168.18.129: icmp_seq=1 ttl=64 time=0.207 ms

64 bytes from 192.168.18.129: icmp_seq=2 ttl=64 time=0.182 ms

vpp# show interface

              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count     

GigabitEthernet2/5/0              1      up          9000/0/0/0     rx packets                    54

                                                                    rx bytes                    4536

                                                                    tx packets                    24

                                                                    tx bytes                    2192

                                                                    drops                          6

                                                                    punt                          26

                                                                    ip4                           46

                                                                    ip6

3. Going further

3.1 Using LD_PRELOAD

VPP/HostStack/LDP/iperf - fd.io

  • What LD_PRELOAD does: when a program starts, the dynamic loader loads the library named in this environment variable first (before libc.so); if that library defines a function the program calls, the call is redirected to the LD_PRELOAD'd implementation.

export LD_PRELOAD=~/code/vpp/build-root/build-vpp-native/vpp/lib/x86_64-linux-gnu/libvcl_ldpreload.so

Or set it inline as part of the command (as in 3.1.4 below):

3.1.1 VPP configuration

The socket interface must be added:

session { evt_qs_memfd_seg  }

socksvr { socket-name /tmp/vpp-api.sock}

3.1.2 Configure VCL

Create the following vcl.conf configuration file:

vcl {

  rx-fifo-size 4000000

  tx-fifo-size 4000000

  app-scope-local

  app-scope-global

  api-socket-name /tmp/vpp-api.sock

}

3.1.3 Set environment variables

# adjust the paths for your system

export VCL_CFG=/path/to/vcl.conf

export LDP_PATH=/path/to/vpp/build-root/build-vpp-native/vpp/lib/libvcl_ldpreload.so

3.1.4 Run on appropriate cores (in polling mode, do not put iperf3 on the same core as vpp):

#To start the server:

sudo taskset --cpu-list <core-list> sh -c "LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG iperf3 -4 -s"

#To start the client:

sudo taskset --cpu-list <core-list> sh -c "LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG iperf3 -c <server-ip>"


3.2 Forwarding test

Using VPP with Iperf3 — The Vector Packet Processor v22.02-0-g7911f29c5 documentation

4. Build parameters

VPP version         : 22.02-rc0~329-g32c7335ea

VPP library version : 22.02

GIT toplevel dir    : /home/zhuyu/code/vpp

Build type          : debug

C compiler          : /usr/lib/ccache/clang-11

C flags             :

Linker flags (apps) :

Linker flags (libs) :

Host processor      : x86_64

Target processor    : x86_64

Prefix path         : /opt/vpp/external/x86_64 /home/zhuyu/code/vpp/build-root/install-vpp_debug-native/external

Install prefix      : /home/zhuyu/code/vpp/build-root/install-vpp_debug-native/vpp

Library dir         : lib/x86_64-linux-gnu

5. Possible problems

5.1 Insufficient memory

5.2 NIC binding failures

Warning: routing table indicates that interface 0000:02:07.0 is active. Not modifying

Solution: bring the interface down with ifconfig first.
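For example, assuming the interface to be bound is ens38:

sudo ifconfig ens38 down

# or, with iproute2: sudo ip link set ens38 down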

zhuyu@ubuntu:~/code/vpp/build/external/downloads/dpdk-21.08$ ./usertools/dpdk-devbind.py --bind=uio_pci_generic 02:05.0

Error: unbind failed for 0000:02:05.0 - Cannot open /sys/bus/pci/drivers/e1000/unbind

Solution: wrong driver type for the bind; use igb_uio instead.

5.3 Test problems

zhuyu@ubuntu:~/code/vpp/build/external/downloads/dpdk-21.08$ build/app/dpdk-testpmd

EAL: Detected 2 lcore(s)

EAL: Detected 1 NUMA nodes

EAL: Detected static linkage of DPDK

EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket

EAL: FATAL: Cannot use IOVA as 'PA' since physical addresses are not available

EAL: Cannot use IOVA as 'PA' since physical addresses are not available

EAL: Error - exiting with code: 1

  Cause: Cannot init EAL: Invalid argument

  

Solution: just run it with sudo.

  • testpmd cannot allocate memory:

EAL: Error - exiting with code: 1

  Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory

Increasing the available memory resolves this.
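One way to give testpmd more memory is to raise the hugepage reservation before running it (a sketch; 512 pages is an assumed value, size it to your setup):

echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages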

  • testpmd runs but continuously prints input/output errors:

EAL: Error reading from file descriptor 23: Input/output error

TELEMETRY: No legacy callbacks, legacy socket not created

testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc

Configuring Port 0 (socket 0)

EAL: Error enabling interrupts for fd 20 (Input/output error)

Cause: the NIC type added in the virtual machine is not supported by dpdk. The workaround is a one-line change that skips the dpdk PCI check, in the file /dpdk-kmods/linux/igb_uio/igb_uio.c

6. Debugging

Logs:

/var/log/vpp/vpp.log (the directory must be created manually)

You can also launch /usr/bin/vpp directly to see the startup log.

coredump

Enable coredumps on Ubuntu:

sudo sysctl -w kernel.core_pattern=/corefiles/core.%p.%e

sudo mkdir /corefiles

sudo chmod -R 777 /corefiles

ulimit -c unlimited

Core dumps will appear in /corefiles.
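To inspect a dump, point gdb at the vpp binary and the core file (the name follows the core_pattern set above; <pid> stands for the crashing process id):

sudo gdb /usr/bin/vpp /corefiles/core.<pid>.vpp

(gdb) bt   # print the backtrace of the crashed thread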

Packet tracing

Nodes that support tracing: search the source for VLIB_NODE_FLAG_TRACE_SUPPORTED.
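A typical trace session from the vpp CLI (50 is just an example packet count):

vpp# trace add dpdk-input 50

vpp# show trace

vpp# clear trace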

Key nodes

node                     purpose                          notes
dpdk-input               dpdk input node
ethernet-input           MAC-layer processing node
ip4-lookup               IP route lookup
ip4-receive              further IP processing
ip4-icmp-input           ICMP processing
ip4-icmp-echo-request    ICMP echo reply handling
ip4-load-balance         IP load-balanced forwarding

Key checks

Checked in fib_path_attached_next_hop_set:

adj_is_up + vnet_sw_interface_is_up

Communicating via the kernel:

Create a tap interface and assign it an IP:

DBGvpp# create tap id 0

DBGvpp# set interface state tap0 up

DBGvpp# set interface ip address tap0 192.168.18.140/24

On the Linux side, put tap0 and a physical NIC into the same bridge:

sudo ip link add br0 type bridge

zhuyu@ubuntu:~/code/vpp$ sudo ip link set br0 up

zhuyu@ubuntu:~/code/vpp$ sudo brctl addif br0 tap0

zhuyu@ubuntu:~/code/vpp$ sudo brctl addif br0 ens192

Alternatively, configure the tap directly as a sub-port of the NIC.
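With the bridge in place, a quick connectivity check against the tap address configured above (assuming a peer on the ens192 network holds an address in 192.168.18.0/24):

ping 192.168.18.140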
