[cloud][OVS][sdn] A First Look at Open vSwitch

 

What is Open vSwitch?

Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (e.g. NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, 802.1ag). In addition, it is designed to support distribution across multiple physical servers, similar to VMware's vNetwork distributed vswitch or Cisco's Nexus 1000V. See the project site for the full feature list.

  

Why Open vSwitch?

https://github.com/openvswitch/ovs/blob/master/Documentation/intro/why-ovs.rst

 

OVN:

http://www.openvswitch.org/support/dist-docs/ovn-architecture.7.html

                                         CMS
                                          |
                                          |
                              +-----------|-----------+
                              |           |           |
                              |     OVN/CMS Plugin    |
                              |           |           |
                              |           |           |
                              |   OVN Northbound DB   |
                              |           |           |
                              |           |           |
                              |       ovn-northd      |
                              |           |           |
                              +-----------|-----------+
                                          |
                                          |
                                +-------------------+
                                | OVN Southbound DB |
                                +-------------------+
                                          |
                                          |
                       +------------------+------------------+
                       |                  |                  |
         HV 1          |                  |    HV n          |
       +---------------|---------------+  .  +---------------|---------------+
       |               |               |  .  |               |               |
       |        ovn-controller         |  .  |        ovn-controller         |
       |         |          |          |  .  |         |          |          |
       |         |          |          |     |         |          |          |
       |  ovs-vswitchd   ovsdb-server  |     |  ovs-vswitchd   ovsdb-server  |
       |                               |     |                               |
       +-------------------------------+     +-------------------------------+

 

 

Before continuing, it's best to first understand network namespaces:

[cloud][sdn] network namespace

 

The two Chinese introductions below cover similar ground; neither is particularly well written.

http://fishcried.com/2016-02-09/openvswitch-ops-guide/ 

https://blog.kghost.info/2014/11/19/openvswitch-internal/

 

This one is more of a hands-on guide, and it is well written:

https://www.ibm.com/developerworks/cn/cloud/library/1401_zhaoyi_openswitch/

 

Building from source:

Documentation: https://docs.openvswitch.org/en/latest/intro/install/general/

Note whether your setup supports building the kernel module; the procedure differs accordingly.

[root@D128 thirdparty]# git clone https://github.com/openvswitch/ovs.git
[root@D128 ovs]# git checkout v2.7.0
[root@D128 ovs]#  yum install autoconf automake libtool
[root@D128 ovs]# ./boot.sh 
[root@D128 ovs]# ./configure --prefix=/root/BUILD_ovs/
[root@D128 ovs]# make
[root@D128 ovs]# make install

The build above includes only the userspace components, without the kernel module.

Now let's build the kernel module too:

[root@D128 ovs]# yum install kernel-devel-$(uname -r)
[root@D128 ovs]# ./configure --prefix=/root/BUILD_ovs/ --with-linux=/lib/modules/$(uname -r)/build
[root@D128 ovs]# uname -a
Linux D128 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

The kernel version is too old and the build fails. Downgrading OVS to v2.6.0:

[root@D128 ovs]# git checkout v2.6.0
Previous HEAD position was c298ef7... Set release date for 2.7.0.
HEAD is now at 7a0f907... Set release date for 2.6.0.
[root@D128 ovs]# git branch
* (detached from v2.6.0)
  master
[root@D128 ovs]# 

Still fails to build. To compile successfully, you have to find an OVS release and a kernel version that are mutually compatible. Forget it; just use the openvswitch.ko that ships with CentOS (this is only the learning stage anyway; hopefully it is compatible enough to run):

[root@D128 ovs]# modprobe openvswitch
[root@D128 ovs]# lsmod|grep openvswitch
openvswitch           106996  0 
nf_nat_ipv6            14131  1 openvswitch
nf_nat_ipv4            14115  1 openvswitch
nf_defrag_ipv6         35104  2 openvswitch,nf_conntrack_ipv6
nf_nat                 26787  3 openvswitch,nf_nat_ipv4,nf_nat_ipv6
nf_conntrack          133387  6 openvswitch,nf_nat,nf_nat_ipv4,nf_nat_ipv6,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,openvswitch,nf_nat,nf_conntrack
[root@D128 ovs]# 

 

Running it:

[root@D128 ovs]# export PATH=$PATH:/root/BUILD_ovs/share/openvswitch/scripts/
[root@D128 ~]# ovs-ctl --system-id=random start
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]
[root@D128 ~]# 

At this point the default database should already have been created and initialized.
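One way to confirm that ovsdb-server is up with its default database is to speak its JSON-RPC management protocol (RFC 7047) directly. A minimal Python sketch; the socket path is an assumption (with the --prefix used above it would live under /root/BUILD_ovs/var/run/openvswitch/ instead):

```python
import json
import socket

def make_request(method, params, rid=0):
    """Serialize an OVSDB JSON-RPC request (RFC 7047)."""
    return json.dumps({"method": method, "params": params, "id": rid})

def list_dbs(sock_path="/var/run/openvswitch/db.sock"):
    """Ask ovsdb-server which databases it serves (normally ['Open_vSwitch']).

    Assumes the reply fits in a single recv(), which holds for this tiny query.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(sock_path)
        s.sendall(make_request("list_dbs", []).encode())
        return json.loads(s.recv(65536).decode())["result"]
    finally:
        s.close()

if __name__ == "__main__":
    # Only print the serialized request here; list_dbs() needs a live ovsdb-server.
    print(make_request("list_dbs", []))
```

ovs-vsctl does essentially this under the hood: it issues OVSDB transact calls against the Open_vSwitch database over the same socket.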

 

Testing:

[root@D128 BUILD_ovs]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:0c:29:2f:cf:32 brd ff:ff:ff:ff:ff:ff
[root@D128 BUILD_ovs]# ./bin/ovs-vsctl add-br br0
[root@D128 BUILD_ovs]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:0c:29:2f:cf:32 brd ff:ff:ff:ff:ff:ff
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 92:e5:c6:d2:ec:a2 brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether be:e8:bd:df:ff:41 brd ff:ff:ff:ff:ff:ff
[root@D128 BUILD_ovs]# ./bin/ovs-vsctl add-port br0 ens33
[root@D128 BUILD_ovs]# brctl show
bridge name    bridge id        STP enabled    interfaces
[root@D128 BUILD_ovs]# 

 

What exactly are the two devices that ovs-vsctl add-br created?

I wrote a script to find out:

[root@D128 ~]# cat ip_link_show_type.sh 
#! /bin/bash
# For each link type that ip(8) knows about, list the devices of that type.

TYPE="vlan veth vcan dummy ifb macvlan macvtap bridge bond ipoib ip6tnl ipip sit vxlan gre gretap ip6gre ip6gretap vti nlmon bond_slave geneve bridge_slave macsec"

for T in $TYPE
do
    echo $T
    ip link show type $T
done

Surprisingly, they don't belong to any of these types...

 

The FAQ entry below doesn't answer the question head-on, but reading its answer makes things much clearer.

It also explains how KVM tap devices work together with OVS, and what to watch out for.

https://github.com/openvswitch/ovs/blob/master/Documentation/faq/issues.rst

Q: I created a tap device tap0, configured an IP address on it, and added it to a bridge, like this:

 

Roughly speaking: the OVS bridge and the OVS internal port are two special device types that OVS implements on its own.

 

 

An error:

[root@D128 BUILD_ovs]# ./bin/ovs-vsctl add-port br0 p0
ovs-vsctl: Error detected while setting up 'p0': could not open network device p0 (No such device).  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/root/BUILD_ovs/var/log/openvswitch".
[root@D128 BUILD_ovs]# 

It needs to be done like this:

https://github.com/openvswitch/ovs-issues/issues/110

The port's name should be an existing interface (check with ifconfig), such as eth0. If you just want to use a virtual port name for a test, specify the port's type, e.g. "ovs-vsctl add-port br0 port0 -- set Interface port0 type=internal" or "ovs-vsctl set Interface port0 type=internal".

 

[root@D128 BUILD_ovs]# ./bin/ovs-vsctl add-port br0 port0 -- set Interface port0 type=internal
[root@D128 BUILD_ovs]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:0c:29:2f:cf:32 brd ff:ff:ff:ff:ff:ff
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 92:e5:c6:d2:ec:a2 brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether be:e8:bd:df:ff:41 brd ff:ff:ff:ff:ff:ff
5: port0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether f6:cb:c2:69:fc:e0 brd ff:ff:ff:ff:ff:ff
[root@D128 BUILD_ovs]# ./bin/ovs-vsctl show
528b5679-22e8-484b-947b-4499959dc341
    Bridge "br0"
        Port "port0"
            Interface "port0"
                type: internal
        Port "br0"
            Interface "br0"
                type: internal
    ovs_version: "2.7.0"
[root@D128 BUILD_ovs]# 

 

Inspect the two devices br0 and port0:

[root@D128 BUILD_ovs]# ethtool -i br0
driver: openvswitch
version: 
firmware-version: 
expansion-rom-version: 
bus-info: 
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
[root@D128 BUILD_ovs]# ethtool -i port0
driver: openvswitch
version: 
firmware-version: 
expansion-rom-version: 
bus-info: 
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
[root@D128 BUILD_ovs]# 

 

Add a namespace:

[root@D128 BUILD_ovs]# ip netns add ns0
[root@D128 BUILD_ovs]# ip link set port0 netns ns0
[root@D128 BUILD_ovs]# ip netns exec ns0 ip addr add 192.168.1.100/24 dev port0
[root@D128 BUILD_ovs]# ip netns exec ns0 ifconfig port0 promisc up

Check:

[root@D128 BUILD_ovs]# ./bin/ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000bee8bddfff41
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 2(port0): addr:00:00:00:00:0c:e1
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br0): addr:be:e8:bd:df:ff:41
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root@D128 BUILD_ovs]# ./bin/ovs-dpctl show
system@ovs-system:
    lookups: hit:174 missed:44 lost:0
    flows: 0
    masks: hit:263 total:0 hit/pkt:1.21
    port 0: ovs-system (internal)
    port 1: br0 (internal)
    port 2: port0 (internal)
[root@D128 BUILD_ovs]# 
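The lookups/masks counters in the ovs-dpctl output are kernel flow-cache statistics: every packet either hits a cached megaflow or misses and is upcalled to userspace, and "masks hit/pkt" is the average number of mask tables probed per packet. A small sketch reproducing the 1.21 figure from the numbers above:

```python
def dp_stats(hit, missed, lost, mask_hits):
    """Derive datapath cache ratios from `ovs-dpctl show` counters."""
    packets = hit + missed                    # every packet is either a hit or a miss
    return {
        "packets": packets,
        "hit_ratio": hit / packets,           # fraction handled by the kernel cache
        "masks_per_pkt": mask_hits / packets, # avg megaflow masks probed per packet
    }

stats = dp_stats(hit=174, missed=44, lost=0, mask_hits=263)
print(round(stats["masks_per_pkt"], 2))  # 1.21, matching the ovs-dpctl output
```

A lower masks_per_pkt means fewer hash-table probes per packet, so it is a rough indicator of classifier efficiency.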

 

[root@D128 BUILD_ovs]# ip addr add 192.168.1.101/24 dev br0
[root@D128 BUILD_ovs]# ip link set br0 up

 

Now the two namespaces (the root namespace via br0, and ns0 via port0) can reach each other through br0; for example, "ip netns exec ns0 ping 192.168.1.101" should work.

Add an OpenFlow rule:

[root@D128 BUILD_ovs]# ./bin/ovs-ofctl add-flow br0 "priority=1 idle_timeout=0, in_port=2,actions=mod_nw_src:9.181.137.1,normal"
[root@D128 BUILD_ovs]# ./bin/ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000bee8bddfff41
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 2(port0): addr:00:00:00:00:0c:e1
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br0): addr:be:e8:bd:df:ff:41
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root@D128 BUILD_ovs]# 
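The flow string passed to add-flow above is essentially a set of key=value match fields plus a trailing actions= list. A small illustrative parser (a sketch, not OVS's real grammar, which is far richer) to pull apart the rule just added:

```python
def parse_flow(flow):
    """Split an ovs-ofctl style flow string into match fields and actions.

    Simplified sketch: handles comma/whitespace separated key=value pairs
    and a trailing actions= list, nothing more.
    """
    # ovs-ofctl accepts both commas and whitespace as separators
    text = flow.replace(" ", ",")
    match, actions = {}, []
    for tok in filter(None, text.split(",")):
        if tok.startswith("actions="):
            # everything after "actions=" is the comma-separated action list
            rest = flow[flow.index("actions=") + len("actions="):]
            actions = [a.strip() for a in rest.split(",")]
            break
        key, _, val = tok.partition("=")
        match[key] = val
    return match, actions

m, a = parse_flow("priority=1 idle_timeout=0, in_port=2,actions=mod_nw_src:9.181.137.1,normal")
print(m)  # {'priority': '1', 'idle_timeout': '0', 'in_port': '2'}
print(a)  # ['mod_nw_src:9.181.137.1', 'normal']
```

Read this way, the rule says: for packets arriving on OpenFlow port 2 (port0), rewrite the IP source address to 9.181.137.1, then continue with normal L2 forwarding.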

 

A packet capture shows that the source address has indeed been rewritten to 9.181.137.1:

[root@D128 BUILD_ovs]# tcpdump -i br0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:39:00.988146 IP 9.181.137.1 > localhost: ICMP echo request, id 3101, seq 271, length 64
11:39:01.988227 IP 9.181.137.1 > localhost: ICMP echo request, id 3101, seq 272, length 64
11:39:02.988113 IP 9.181.137.1 > localhost: ICMP echo request, id 3101, seq 273, length 64
11:39:03.988133 IP 9.181.137.1 > localhost: ICMP echo request, id 3101, seq 274, length 64
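A detail hidden in this rewrite: when mod_nw_src changes the source address, the IPv4 header checksum has to be fixed up too, which OVS does incrementally. A sketch of that arithmetic (RFC 1624), using a synthetic echo-request header with the addresses from this capture (the IP id and other fields are made-up values), cross-checked against a full recompute:

```python
import struct

def csum16(data):
    """Standard Internet ones'-complement checksum over 16-bit words."""
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def onec_add(a, b):
    """Ones'-complement 16-bit addition with end-around carry."""
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def incremental_update(old_csum, old_words, new_words):
    """RFC 1624: HC' = ~(~HC + ~m + m'), applied per changed 16-bit word."""
    x = ~old_csum & 0xFFFF
    for mo, mn in zip(old_words, new_words):
        x = onec_add(x, ~mo & 0xFFFF)
        x = onec_add(x, mn)
    return ~x & 0xFFFF

def ip_words(addr):
    """Split a dotted-quad address into its two 16-bit words."""
    a, b, c, d = (int(x) for x in addr.split("."))
    return [(a << 8) | b, (c << 8) | d]

def header(src):
    """Synthetic 20-byte IPv4 echo-request header, checksum field zeroed."""
    return struct.pack("!BBHHHBBH4s4s", 0x45, 0, 84, 0x0C1D, 0, 64, 1, 0,
                       bytes(int(x) for x in src.split(".")),
                       bytes([192, 168, 1, 101]))

old = csum16(header("192.168.1.100"))
new_full = csum16(header("9.181.137.1"))
new_incr = incremental_update(old, ip_words("192.168.1.100"),
                              ip_words("9.181.137.1"))
print(new_full == new_incr)  # True
```

The incremental form only touches the words that changed, which is why the datapath can rewrite addresses cheaply without re-summing the whole header.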

 

That's it for now, as a first look.

 

--------------------------------------------------   update @ 2018-03-30 20:11 -----------------------------------

Capturing packets on an OVS port.

Mirror the port's traffic out to a dummy device:

[root@vrouter-ovs ~]# ip link add dev mirror type dummy
[root@vrouter-ovs ~]# ip link set mirror up

[root@dr-lb ~]# ovs-vsctl add-port ovs-br0 mirror
[root@vrouter-ovs ~]# ovs-vsctl -- set Bridge br-tun mirrors=@mi \
      -- --id=@pmirror get Port mirror \
      -- --id=@patch get Port tun-to-int \
      -- --id=@mi create Mirror name=mymi select-dst-port=@patch select-src-port=@patch output-port=@pmirror
[root@vrouter-ovs ~]# tcpdump -i mirror -nn

 

Viewing and deleting mirrors:

# ovs-vsctl list Mirror 
# ovs-vsctl clear bridge ovsbr0 mirrors 

 

ovs-vsctl -- set Bridge ovs-br0 mirrors=@mi \
    -- --id=@pmirror get Port mirror-br0 \
    -- --id=@patch get Port vxlanclient0 \
    -- --id=@mi create Mirror name=mymi select-dst-port=@patch select-src-port=@patch output-port=@pmirror

 

Reposted from: https://www.cnblogs.com/hugetong/p/8666024.html
