Cisco Linux SRv6 Hands-On: Pitfalls and Notes

Environment:
Runs in virtual machines.
OS version differs: the original article used Ubuntu 16; this run uses 18.
apt source differs: the original used VPP 18.07; this run uses 21.01.

The network topology is changed to host1-vpp1-routeA-vpp2-host2:
two VMs act as vpp1 and vpp2 (VM Settings -> Add -> Network Adapter; give each machine 4 NICs),
and one VM acts as routeA in the middle.

modprobe vrf

sysctl -w net.ipv6.conf.all.seg6_enabled=1
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.all.rp_filter=0
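
These sysctl settings do not survive a reboot; they can be persisted in a sysctl drop-in file (a sketch; the filename is my own choice, not from the original article):

```
# /etc/sysctl.d/90-srv6.conf (hypothetical filename)
net.ipv6.conf.all.seg6_enabled = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
```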

I. Configure VPP

  1. First bring down the NICs that VPP should take over (VPP skips NICs that are still up and does not take them over).
    List the PCI IDs of the machine's network devices; ens33 is kept aside (not given to VPP):
root@virtual-machine:/home# lshw -class network -businfo
Bus info          Device      Class       Description
=====================================================
pci@0000:02:01.0  ens33       network     82545EM Gigabit Ethernet Controller (Copper)
pci@0000:02:02.0  ens34       network     82545EM Gigabit Ethernet Controller (Copper)
pci@0000:02:03.0  ens35       network     82545EM Gigabit Ethernet Controller (Copper)
pci@0000:02:04.0  ens36       network     82545EM Gigabit Ethernet Controller (Copper)
  2. Edit the VPP startup.conf file, adding the following so that VPP takes over the last three NICs:
vi /etc/vpp/startup.conf

dpdk {
  dev 0000:02:02.0
  dev 0000:02:03.0
  dev 0000:02:04.0
}

Save the file and restart VPP; you should then see:

systemctl restart vpp

root@virtual-machine:/etc/vpp# vppctl show pci
Address      Sock VID:PID     Link Speed    Driver          Product Name                    Vital Product Data
0000:02:01.0   0  8086:100f   unknown       e1000                                           
0000:02:02.0   0  8086:100f   unknown       vfio-pci                                        
0000:02:03.0   0  8086:100f   unknown       vfio-pci                                        
0000:02:04.0   0  8086:100f   unknown       vfio-pci                                        
root@virtual-machine:/etc/vpp# vppctl show interface addr
GigabitEthernet2/2/0 (dn):
GigabitEthernet2/3/0 (dn):
GigabitEthernet2/4/0 (dn):
local0 (dn):

If the PCI driver is not shown, run:

modprobe vfio-pci
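
Like the sysctls earlier, modprobe does not persist across reboots; systemd can load the module at boot via a modules-load drop-in (a sketch; the filename is my own choice):

```
# /etc/modules-load.d/vfio-pci.conf (hypothetical filename)
vfio-pci
```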
  3. Persist the configuration via a Day0 config file (so it is not lost on each restart).
    Create /usr/share/vpp/scripts/interface-up.txt with the following content (comments are not allowed in this file):

vpp1:

set interface state GigabitEthernet2/2/0 up
set interface ip address GigabitEthernet2/2/0 2001:1a::1/64

ip route add ::/0 via 2001:1a::2

set interface state GigabitEthernet2/3/0 up
set interface ip address GigabitEthernet2/3/0 10.0.0.1/24

set interface state GigabitEthernet2/4/0 up
set interface ip address GigabitEthernet2/4/0 10.1.0.1/24

loopback create-interface
set interface ip address loop0 fc00:1::1/64
set interface state loop0 up

vpp2:

set interface state GigabitEthernet2/2/0 up
set interface ip address GigabitEthernet2/2/0 2001:2b::1/64

ip route add ::/0 via 2001:2b::2

set interface state GigabitEthernet2/3/0 up
set interface ip address GigabitEthernet2/3/0 10.0.1.1/24

set interface state GigabitEthernet2/4/0 up
set interface ip address GigabitEthernet2/4/0 10.1.1.1/24

loopback create-interface
set interface ip address loop0 fc00:2::1/64
set interface state loop0 up

Then edit /etc/vpp/startup.conf and add one line inside the unix { } section:

startup-config /usr/share/vpp/scripts/interface-up.txt
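
For context, with that line added the unix { } section looks roughly like this (the other lines are the stock defaults shipped in the VPP package; only the last line is new):

```
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
  startup-config /usr/share/vpp/scripts/interface-up.txt
}
```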

Restart and check:

systemctl restart vpp
vppctl show interface addr

II. Configure the Router

The router runs plain Linux; SRv6 support is available out of the box.
Open question: without configuring segments, how should an intermediate router forward a packet whose destination is a VPP SID? Did the intermediate routers in the original article carry default routes?

sysctl -w net.ipv6.conf.all.seg6_enabled=1
sysctl -w net.ipv6.conf.all.forwarding=1
ip -6 a add 2001:1a::2/64 dev ens34

# this end reuses routeB's IP from the original topology
ip -6 a add 2001:2b::2/64 dev ens35

Test connectivity by pinging the router from vpp1:

vpp# ping 2001:1a::2 source GigabitEthernet2/2/0

Configure routes on the router:

#vpp1 --> vpp2
ip -6 route add fc00:a:1:0:1::/128 encap seg6local action End dev ens34
ip -6 route add fc00:2::1a/128 via 2001:2b::1 dev ens35

#vpp2 --> vpp1
# this line can also be omitted, using fc00:a:1:0:1:: in both directions
# ip -6 route add fc00:b:1:0:1::/128 encap seg6local action End dev ens35
ip -6 route add fc00:1::1a/128 via 2001:1a::1 dev ens34
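
The End action configured with seg6local above is the basic SRv6 endpoint behavior: decrement Segments Left and copy the segment it now points to into the IPv6 destination address. A minimal Python sketch of that logic (illustrative only; the dict fields are my own stand-in for a real packet, not kernel code):

```python
def srv6_end(packet):
    # SRv6 "End" behavior (RFC 8986): advance to the next segment.
    # "packet" is a dict standing in for an IPv6 packet with an SRH;
    # the SRH stores the segment list in reverse order, and
    # segments_left indexes the next segment to visit.
    srh = packet["srh"]
    if srh["segments_left"] == 0:
        raise ValueError("Segments Left is 0; End cannot advance")
    srh["segments_left"] -= 1
    packet["dst"] = srh["segments"][srh["segments_left"]]
    return packet

# The packet as it reaches routeA in the vpp1 -> vpp2 direction:
# destination is the router's SID, one segment (vpp2's DX4 SID) remains.
pkt = {
    "dst": "fc00:a:1:0:1::",
    "srh": {"segments": ["fc00:2::1a", "fc00:a:1:0:1::"], "segments_left": 1},
}
srv6_end(pkt)
print(pkt["dst"])  # fc00:2::1a
```

After the action, the destination is vpp2's DX4 SID, so the router forwards the packet onward via the fc00:2::1a/128 route added above.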

III. Controller Configuration

Skipped for now.

IV. Experiments

(1) Performance test with Overlay and Underlay combined

End-to-end forwarding performance of an SRv6 policy with 2 segments.
Topology: host1-vpp1-route-vpp2-host2

  1. Configure vpp2

When using VPP SR, the encap source address must be configured first:

vpp# set sr encaps source addr 2001:2b::1
vpp# show sr encaps source addr 
SR encaps source addr = 2001:2b::1

# Define segment fc00:2::1a as End.DX4: decap and forward the inner IPv4
# packet out GigabitEthernet2/4/0 to 10.1.1.2 (i.e. Tester2)
vpp# sr localsid address fc00:2::1a behavior end.dx4 GigabitEthernet2/4/0 10.1.1.2

# Add a new policy with 2 segments: first to the router, then to VPP1
vpp# sr policy add bsid fc00:2::999:12 next fc00:b:1:0:1:: next fc00:1::1a encap

# Steer packets destined for 10.1.0.0/24 into the newly defined SRv6 policy
vpp# sr steer l3 10.1.0.0/24 via bsid fc00:2::999:12

vpp# show sr policies
SR policies:
[0].-	BSID: fc00:2::999:12
	Behavior: Encapsulation
	Type: Default
	FIB table: 0
	Segment Lists:
  	[0].- < fc00:b:1:0:1::, fc00:1::1a > weight: 1
-----------
vpp#  show sr localsids
SRv6 - My LocalSID Table:
=========================
	Address: 	fc00:2::1a/128
	Behavior: 	DX4 (Endpoint with decapsulation and IPv4 cross-connect)
	Iface:  	GigabitEthernet2/4/0
	Next hop: 	10.1.1.2
	Good traffic: 	[0 packets : 0 bytes]
	Bad traffic:  	[0 packets : 0 bytes]
--------------------
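
The encap policy above wraps matching IPv4 packets in an outer IPv6 header: the outer destination becomes the first segment, and an SRH carries the SID list in reverse order with Segments Left = n-1 (RFC 8754). A small sketch of that encapsulation using vpp2's values (illustrative only, not VPP code):

```python
def srv6_encap(inner_dst, sid_list, encaps_source):
    # Sketch of SRv6 T.Encaps: push outer IPv6 + SRH around the packet.
    # sid_list is in travel order, as in "sr policy add ... next ... next ...".
    n = len(sid_list)
    return {
        "outer_src": encaps_source,   # from "set sr encaps source addr"
        "outer_dst": sid_list[0],     # first segment to visit
        "srh": {
            "segments": list(reversed(sid_list)),  # reversed on the wire
            "segments_left": n - 1,
        },
        "inner_dst": inner_dst,       # original IPv4 packet, unchanged
    }

# vpp2's policy for traffic to 10.1.0.0/24: router SID first, then vpp1's DX4 SID.
pkt = srv6_encap("10.1.0.2", ["fc00:b:1:0:1::", "fc00:1::1a"], "2001:2b::1")
print(pkt["outer_dst"])        # fc00:b:1:0:1::
print(pkt["srh"]["segments"])  # ['fc00:1::1a', 'fc00:b:1:0:1::']
```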
  2. Configure vpp1
vpp# set sr encaps source addr 2001:1a::1
vpp# show sr encaps source addr          
SR encaps source addr = 2001:1a::1

# Define segment fc00:1::1a as End.DX4: decap and forward the inner IPv4
# packet out GigabitEthernet2/4/0 to 10.1.0.2 (i.e. Tester1)
vpp# sr localsid address fc00:1::1a behavior end.dx4 GigabitEthernet2/4/0 10.1.0.2

# Add a new policy for the return path, via the router to VPP2
vpp# sr policy add bsid fc00:1::999:1a next fc00:a:1:0:1:: next fc00:2::1a encap

# Steer packets destined for 10.1.1.0/24 into the newly defined SRv6 policy
vpp# sr steer l3 10.1.1.0/24 via bsid fc00:1::999:1a

vpp# show sr policies
SR policies:
[0].-	BSID: fc00:1::999:1a
	Behavior: Encapsulation
	Type: Default
	FIB table: 0
	Segment Lists:
  	[0].- < fc00:a:1:0:1::, fc00:2::1a > weight: 1
-----------
vpp# show sr localsids
SRv6 - My LocalSID Table:
=========================
	Address: 	fc00:1::1a/128
	Behavior: 	DX4 (Endpoint with decapsulation and IPv4 cross-connect)
	Iface:  	GigabitEthernet2/4/0
	Next hop: 	10.1.0.2
	Good traffic: 	[0 packets : 0 bytes]
	Bad traffic:  	[0 packets : 0 bytes]
--------------------

If it complains that the localsid already exists, delete it first:

sr localsid del address fc00:1::1a

Additionally:

# delete a policy
vppctl sr policy del bsid fe01::1a
# delete a steering rule
vppctl sr steer del l3 10.1.1.0/24 via bsid fe10::1a
  3. Skipping the TRex setup on both ends. Configure each host's IP, gateway, and port, and verify the two hosts can ping each other.

(2) Dynamic path adjustment and policy push via a controller application

Without an NCS 5500 or XTC controller, the path-computation part is skipped.

Network configuration

Connect VPP1's port 3 directly to VPP2, creating a short link. (Capture on the router while pinging to confirm that traffic no longer passes through it.)
vpp1:

vpp# set interface ip address del GigabitEthernet2/3/0 10.0.0.1/24
vpp# set interface ip address GigabitEthernet2/3/0 2001:12::1/64  
vpp# show int addr                                                
GigabitEthernet2/2/0 (up):
  L3 2001:1a::1/64
GigabitEthernet2/3/0 (up):
  L3 2001:12::1/64
GigabitEthernet2/4/0 (up):
  L3 10.1.0.1/24
local0 (dn):
loop0 (up):
  L3 fc00:1::1/64
vpp# ping 2001:12::2 source GigabitEthernet2/3/0

# add a route
vpp# ip route add fc00:2::1a/128 via 2001:12::2

vpp2:

vpp# set interface ip address del GigabitEthernet2/3/0 10.0.1.1/24
vpp# set interface ip address GigabitEthernet2/3/0 2001:12::2/64  
vpp# show int addr                                                
GigabitEthernet2/2/0 (up):
  L3 2001:2b::1/64
GigabitEthernet2/3/0 (up):
  L3 2001:12::2/64
GigabitEthernet2/4/0 (up):
  L3 10.1.1.1/24
local0 (dn):
loop0 (up):
  L3 fc00:2::1/64
vpp# ping 2001:12::1 source GigabitEthernet2/3/0

# add a route
vpp# ip route add fc00:1::1a/128 via 2001:12::1

Code walkthrough:
Implemented in Python (main.py). The main function loops, polling the path-computation result (PathFinder) and pushing new SRv6 policies (VPPController_CLI).

Main program, main.py:

    pf=PathFinder(xtc["ip"],xtc["username"],xtc["password"],config["node_table"])

Configuration file config.json (note: it differs from the configuration in part (1)).

The VPP1/VPP2 SIDs must be configured on the VPP instances via vppctl in advance (the VPP SR encap source address must also be set in advance):

"node_dx4_sid":["fc00:2::a","fc00:3::a"]

Destination prefixes behind VPP2's and VPP1's Intf1:

"node_prefix":["10.0.1.0/24","10.0.2.0/24"],

Controller

Files

config.json

{
    "xtc_node":{
      "ip":"10.xx.xx.xx",
      "username":"cisco",
      "password":"cisco"
    },
    "vpp_node1":{
      "ip":"192.168.159.154",
      "username":"root",
      "password":"123456"
    },
    "vpp_node2":{
      "ip":"192.168.159.154",
      "username":"root",
      "password":"123456"
    },
    "node_list":["node1"],
    "node_table":{
        "node1":"192.168.159.160"
    },
    "node_sid":{
        "node1": "fc00:a:1:0:1::"
    },
    "node_prefix":["10.1.1.0/24","10.1.0.0/24"],
    "node_dx4_sid":["fc00:2::1a","fc00:1::1a"]
}

main.py

import json

import requests
import paramiko
import time
def load_config():
    data={}
    with open('config.json', 'r') as f:
        data = json.loads(f.read())
    return data


class PathFinder(object):
    def __init__(self,ip,username,password,node_table):
        self.ip=ip
        self.username=username
        self.password=password
        self.node_table=node_table
        self.ip_table=dict(zip(self.node_table.values(),self.node_table.keys()))

        pass

    def _build_url(self):
        return "http://{}:8080/lsp/compute/simple".format(self.ip)

    def compute(self,source,dest,method):
        data={}
        with open('route.json', 'r') as f:
            data = json.loads(f.read())
        
        return self._calculate_path(data["{}".format(source)])
    
    def _calculate_path(self,json):
        jumps=[]
        for data in json:
            jumps.append(self.ip_table[data])
        return jumps


class Translator(object):
    def __init__(self,node_sid):
        self.node_sid=node_sid
    def translate(self,node):
        return self.node_sid[node]


class VPPController_CLI(object):
    """
    Multiple ways to configure the VPP controller.
    This is the implementation of the CLI app for VPP.
    """
    def __init__(self,ip,username,password):
        """
        Use SSH to connect to vpp instance and configure
        """
        self.client = paramiko.SSHClient()
        self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.client.connect(hostname=ip, port=22, username=username, password=password)

        pass

    def show_policy(self):
        stdin, stdout, stderr = self.client.exec_command("vppctl show sr policies")

        for line in stdout.readlines():
            print(line)

        # print(stdout.read())

        # output=subprocess.check_output(["vppctl","show","sr","policies"])
        # for line in output.splitlines():
        #     print(line)


    def add_policy(self,bsid,sids):
        assert type(bsid)==str
        assert type(sids)==list and len(sids)!=0
        sid_param=""
        for sid in sids:
            sid_param=sid_param+"next {} ".format(sid)
        cmd="vppctl sr policy add bsid {} {} encap".format(bsid,sid_param)
        print("EXEC: "+cmd)
        stdin, stdout, stderr = self.client.exec_command(cmd)
        for line in stdout.readlines():
            if "already a FIB entry for the BindingSID address" in line:
                print("ERROR : SID Already exist")
                return False
            else:
                print(line)

        pass

    def update_steering(self,ip_prefix,bsid):
        cmd="vppctl sr steer l3 {} via bsid {}".format(ip_prefix,bsid)
        stdin, stdout, stderr = self.client.exec_command(cmd)
        for line in stdout.readlines():
            print(line)


    def del_policy(self,bsid):
        cmd="vppctl sr policy del bsid {}".format(bsid)
        stdin, stdout, stderr =self.client.exec_command(cmd)
        print(stdout.read().decode())



if __name__ == '__main__':
    print("VPP Demo Controller")
    config=load_config()
    xtc=config["xtc_node"]
    pf=PathFinder(xtc["ip"],xtc["username"],xtc["password"],config["node_table"])

    # print(pf.compute("node1","node3","latency"))
    # print(pf.compute("node1","node2","latency"))
    # print(pf.compute("node2","node1","latency"))
    # print(pf.compute("node3","node1","latency"))
    vpp=config["vpp_node1"]
    vpp1=VPPController_CLI(vpp["ip"],vpp["username"],vpp["password"])
    vpp=config["vpp_node2"]
    vpp2=VPPController_CLI(vpp["ip"],vpp["username"],vpp["password"])

    # vpp.show_policy()
    # vpp.add_policy("fc00:1::999:10",["fc00:1::a","fc00:2::a"])
    # # vpp.show_policy()
    # # vpp.add_policy("fc00:1::999:10",["fc00:1::a","fc00:2::a"])
    #
    # vpp.del_policy("fc00:1::999:10")
    # vpp.update_steering("10.0.1.0/24",)
    # vpp.show_policy()
    trans=Translator(config["node_sid"])

    # two routes to keep updated
    # VPPA->VPPB
    route1=[]
    # VPPB->VPPA
    route2=[]

    while True:
        result1=pf.compute("node1", "node2", "latency")
        result2=pf.compute("node2", "node1", "latency")
        changed1=False
        changed2=False
        if result1!=route1:
            print("Node1 to Node2 Route updating : {}".format(result1))
            sid_list=[trans.translate(i) for i in result1]
            sid_list.append(config["node_dx4_sid"][0])
            # Hardcoded: the policy pushed to R1 uses BSID fc00:1::999:12
            vpp1.del_policy("fc00:1::999:12")
            vpp1.add_policy("fc00:1::999:12",sid_list)
            vpp1.update_steering(config["node_prefix"][0],"fc00:1::999:12")
            route1=result1
            changed1=True
        if result2!=route2:
            print("Node2 to Node1 Route updating : {}".format(result2))
            sid_list=[trans.translate(i) for i in result2]
            sid_list.append(config["node_dx4_sid"][1])
            vpp2.del_policy("fc00:2::999:12")
            vpp2.add_policy("fc00:2::999:12",sid_list)
            vpp2.update_steering(config["node_prefix"][1],"fc00:2::999:12")
            route2=result2
            changed2=True
        if changed1:
            print("VPP1 SRv6 Policy Updated:")
            vpp1.show_policy()
        if changed2:
            print("VPP2 SRv6 Policy Updated:")
            vpp2.show_policy()
        time.sleep(5)

route.json
Via the router:

{
    "node1": ["192.168.159.160"],
    "node2": ["192.168.159.160"]
}

Not via the router:

{
    "node1": [],
    "node2": []
}
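
Given these two route.json variants, the control loop in main.py resolves each router IP to a node name, translates names to SIDs, and appends the far end's DX4 SID before pushing the policy. A condensed sketch of that data flow, using the values from config.json above:

```python
node_table = {"node1": "192.168.159.160"}         # node name -> router IP
ip_table = {v: k for k, v in node_table.items()}  # router IP -> node name
node_sid = {"node1": "fc00:a:1:0:1::"}            # node name -> End SID
dx4_sid = "fc00:2::1a"                            # DX4 SID appended for vpp1's policy

def path_to_sids(route):
    # "route" is one list from route.json: router IPs in travel order.
    sids = [node_sid[ip_table[ip]] for ip in route]
    sids.append(dx4_sid)  # final segment: decap and cross-connect
    return sids

print(path_to_sids(["192.168.159.160"]))  # ['fc00:a:1:0:1::', 'fc00:2::1a']
print(path_to_sids([]))                   # ['fc00:2::1a']
```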
Running it

Install python3 and pip3:

apt install python3
apt install python3-pip
pip3 install paramiko

# if pip has problems, run
python3 -m pip install --upgrade pip

Run:

python3 main.py

V. Problems encountered along the way

VPP status shows "not enough DPDK crypto resources". Conclusion: it does not matter for now.

root@virtual-machine:/home# systemctl status vpp
● vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2022-02-15 11:10:42 CST; 11min ago
  Process: 2189 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
  Process: 2190 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited, status=0/SUCCESS)
 Main PID: 2191 (vpp_main)
    Tasks: 2 (limit: 9484)
   CGroup: /system.slice/vpp.service
           └─2191 /usr/bin/vpp -c /etc/vpp/startup.conf

Feb 15 11:10:42 virtual-machine systemd[1]: Starting vector packet processing engine...
Feb 15 11:10:42 virtual-machine systemd[1]: Started vector packet processing engine.
Feb 15 11:10:42 virtual-machine vpp[2191]: unix_config:475: couldn't open log '/var/log/vpp/vpp.log'
Feb 15 11:10:43 virtual-machine vnet[2191]: dpdk: not enough DPDK crypto resources
Feb 15 11:10:43 virtual-machine vnet[2191]: dpdk/cryptodev: dpdk_cryptodev_init: Not enough cryptodevs

Following "VPP IPSec implementation using DPDK Cryptodev API", I added vdev crypto_aesni_mb0,socket_id=1 to startup.conf, and VPP then failed to start.

It may also be that the number of vdevs is wrong; quoting another post:

0: dpdk_ipsec_process:1010: not enough DPDK crypto resources, default to OpenSSL
The parameter in the /etc/vpp/startup.conf file is wrong:
vdev cryptodev_aesni_mb_pmd0,socket_id=0 selects the DPDK crypto library; one instance supports 4 cores, and my environment has 16 cores, so 4 instances have to be configured.
————————————————
Copyright notice: the quote above is from an original CSDN post by "这月色", licensed under CC 4.0 BY-SA; reproduction must include the original source link and this notice.
Original link: https://blog.csdn.net/weixin_42265069/article/details/95475153
