1. Planning and Design
As shown in the diagram below, the node running OpenDaylight acts as the controller and programs the virtual switches on the other two nodes by pushing flow entries. For installing and configuring OpenDaylight, see "OpenDaylight Nitrogen Installation and Configuration" (《氮版本OpenDaylight安装配置》).
Open vSwitch is installed on each of the two nodes, and each node starts two virtual machines attached to the OVS bridge ovsbr1.
On both nodes, the eth1 NIC connects to the management network and needs an IP address (node1 is 192.168.128.61, node2 is 192.168.128.62); the eth2 NIC carries the traffic between the two OVS instances and does not need an IP address.
Flow entries pushed from the controller steer traffic through the bridge ports, and thus control communication between the KVM virtual machines.
2. Installing and Configuring OVS
Install OVS on CentOS 7; for details, see the "Install Open vSwitch" section of "Practicing Open vSwitch + VXLAN on CentOS 7" (《CentOS7上实践Open vSwitch+VXLAN》).
After installation, run the ovs-vsctl show command; there are no bridges yet.
# ovs-vsctl show
c1659532-102a-4aa2-92ac-ba0433d6da91
ovs_version: "2.5.1"
Create a bridge and attach it to the eth2 NIC from the planning diagram (only one node's commands are shown here; adjust the NIC name to your environment).
Create the bridge ovsbr1:
# ovs-vsctl add-br ovsbr1
Set the OpenFlow protocol version to OpenFlow13:
# ovs-vsctl set bridge ovsbr1 protocols=OpenFlow13
Add a port that attaches the physical NIC (named eno33554984 on my machine):
# ovs-vsctl add-port ovsbr1 eno33554984
3. Connecting to the Controller
Set the controller:
# ovs-vsctl set-controller ovsbr1 tcp:192.168.128.19:6633
Check the bridge information on the OVS node; the controller has been set successfully:
# ovs-vsctl show
c1659532-102a-4aa2-92ac-ba0433d6da91
Bridge "ovsbr1"
Controller "tcp:192.168.128.19:6633"
is_connected: true
Port "eno33554984"
Interface "eno33554984"
Port "ovsbr1"
Interface "ovsbr1"
type: internal
ovs_version: "2.5.1"
Log in to the OpenDaylight web UI and check the topology of the connected OVS instances; it matches the plan.
Check the initial flow table:
# ovs-ofctl dump-flows ovsbr1 -O openflow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x2b00000000000003, duration=2.795s, table=0, n_packets=1, n_bytes=105, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
cookie=0x2b00000000000003, duration=2.795s, table=0, n_packets=0, n_bytes=0, priority=0 actions=drop
Check the node1 node details; at this point only the bridge ovsbr1 and the NIC eno33554984 are connected.
Start the kvm01 virtual machine on node1 and attach it to bridge ovsbr1, with IP 192.168.57.11.
After kvm01 starts, inspect bridge ovsbr1 again; a new port vnet0 has appeared:
# ovs-vsctl show
c1659532-102a-4aa2-92ac-ba0433d6da91
Bridge "ovsbr1"
Controller "tcp:192.168.128.19:6633"
is_connected: true
Port "eno33554984"
Interface "eno33554984"
Port "ovsbr1"
Interface "ovsbr1"
type: internal
Port "vnet0"
Interface "vnet0"
ovs_version: "2.5.1"
Similarly, start a kvm01 virtual machine on node2 and attach it to bridge ovsbr1, with IP 192.168.57.101.
Now test: kvm01 on node1 cannot ping kvm01 on node2.
Looking at the flow tables on both nodes, all traffic entering on port 1 is dropped:
# ovs-ofctl dump-flows ovsbr1 -O openflow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x2b00000000000003, duration=828.802s, table=0, n_packets=167, n_bytes=17490, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
cookie=0x2b00000000000007, duration=823.160s, table=0, n_packets=18, n_bytes=1777, priority=2,in_port=1 actions=drop
cookie=0x2b00000000000003, duration=828.802s, table=0, n_packets=78, n_bytes=3492, priority=0 actions=drop
4. Controlling Connectivity with OpenFlow
Next we make kvm01 on node1 able to ping kvm01 on node2.
We push flows through the REST API; I use the Chrome extension Advanced REST Client.
Below is the URL for adding flow 1 to table 0 on node openflow:52243514621:
http://192.168.128.19:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:52243514621/table/0/flow/1
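The numeric part of the node id is simply the bridge's datapath ID written in decimal. OVS reports the DPID as a 16-digit hex string (via `ovs-ofctl show ovsbr1` or `ovs-vsctl get bridge ovsbr1 datapath-id`), so a one-line conversion yields the node id to use in the URL. The hex value below is derived back from the decimal node id in this article:

```python
# OpenDaylight names an OpenFlow node "openflow:<datapath id in decimal>".
# OVS reports the datapath id as a 16-digit hex string, so converting the
# bridge's DPID gives the node id used in the RESTCONF URL.

def dpid_to_node_id(dpid_hex: str) -> str:
    """Convert an OVS datapath-id hex string to an ODL node id."""
    return "openflow:%d" % int(dpid_hex, 16)

print(dpid_to_node_id("0000000c29f4c4fd"))  # -> openflow:52243514621
```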
Choose JSON as the format for both the request and the response.
The request body is as follows:
{
    "flow": [
        {
            "id": "1",
            "match": {
                "in-port": "1"
            },
            "instructions": {
                "instruction": [
                    {
                        "apply-actions": {
                            "action": [
                                {
                                    "output-action": {
                                        "output-node-connector": "2"
                                    },
                                    "order": "0"
                                }
                            ]
                        },
                        "order": "0"
                    }
                ]
            },
            "buffer_id": "65535",
            "installHw": "true",
            "barrier": "true",
            "strict": "true",
            "priority": "2",
            "idle-timeout": "0",
            "hard-timeout": "0",
            "table_id": "0"
        }
    ]
}
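The same request can be scripted instead of clicked through a REST client. Below is a sketch using only the Python standard library; the controller address, node id, and admin credentials are the ones used in this article, and the flow is written to the RESTCONF config datastore with an HTTP PUT:

```python
import base64
import json
import urllib.request

# Sketch: push a flow body to OpenDaylight via RESTCONF using only the
# standard library. Address, node id, and credentials are from this
# article; adjust them for your environment.

ODL = "http://192.168.128.19:8181"
NODE = "openflow:52243514621"

def flow_url(node, table, flow_id):
    """RESTCONF config-datastore URL for one flow."""
    return (f"{ODL}/restconf/config/opendaylight-inventory:nodes"
            f"/node/{node}/table/{table}/flow/{flow_id}")

def put_flow(node, table, flow_id, body, user="admin", password="admin"):
    """PUT a flow body and return the HTTP status code."""
    req = urllib.request.Request(flow_url(node, table, flow_id),
                                 data=json.dumps(body).encode(),
                                 method="PUT")
    req.add_header("Content-Type", "application/json")
    # Basic auth header built by hand to keep the sketch dependency-free.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return resp.status

print(flow_url(NODE, 0, 1))
```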
Fill everything in and click "Send". The first time, you will be prompted for a username and password (admin/admin).
A successful request returns 201 Created (a new flow entry was created) or 200 OK (an existing flow entry was updated).
From the node details above, we know each node needs two flow entries:
in_port=1 actions=output:2
in_port=2 actions=output:1,CONTROLLER:65535
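Rather than hand-writing both JSON bodies, they can be generated. The sketch below builds them with plain Python; note the assumption that the CONTROLLER logical port is expressed in the ODL model as an output-node-connector named "CONTROLLER" — verify the spelling against your ODL release:

```python
# Build the two ODL flow bodies each node needs:
#   flow 1: in_port=1 -> output:2
#   flow 2: in_port=2 -> output:1,CONTROLLER
# Assumption: "CONTROLLER" as an output-node-connector value is how the
# ODL model expresses punting to the controller; check your release.

def make_flow(flow_id, in_port, out_ports, priority=2):
    """Return an ODL flow body forwarding in_port to out_ports."""
    actions = [{"output-action": {"output-node-connector": str(p),
                                  "max-length": "65535"},
                "order": str(i)}
               for i, p in enumerate(out_ports)]
    return {"flow": [{
        "id": str(flow_id),
        "table_id": "0",
        "priority": str(priority),
        "match": {"in-port": str(in_port)},
        "instructions": {"instruction": [{
            "order": "0",
            "apply-actions": {"action": actions}}]},
    }]}

flow1 = make_flow(1, 1, [2])
flow2 = make_flow(2, 2, [1, "CONTROLLER"])
print(flow1["flow"][0]["match"]["in-port"])  # -> 1
```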
After adding these flows on both nodes through the REST API as shown above, dump the flow table again:
# ovs-ofctl dump-flows ovsbr1 -O openflow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x2b00000000000005, duration=2.438s, table=0, n_packets=0, n_bytes=0, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
cookie=0x0, duration=2.464s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=2 actions=output:1,CONTROLLER:65535
cookie=0x0, duration=2.464s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=1 actions=output:2
cookie=0x2b00000000000005, duration=2.438s, table=0, n_packets=0, n_bytes=0, priority=0 actions=drop
Ping again; it now succeeds.
Check the topology diagram again.
Now start kvm02 on both node1 and node2 and look at the node details: kvm01 is attached to vnet0 and kvm02 to vnet1.
Check the bridge and its ports again:
# ovs-vsctl show
c1659532-102a-4aa2-92ac-ba0433d6da91
Bridge "ovsbr1"
Controller "tcp:192.168.128.19:6633"
is_connected: true
Port "eno33554984"
Interface "eno33554984"
Port "ovsbr1"
Interface "ovsbr1"
type: internal
Port "vnet1"
Interface "vnet1"
Port "vnet0"
Interface "vnet0"
ovs_version: "2.5.1"
Next, we push flows so that kvm01 and kvm02 on node1 can ping each other.
This requires modifying flow 1 and flow 2 on node1 and adding flow 3, flow 4, and flow 5 (details omitted). The resulting flow table:
# ovs-ofctl dump-flows ovsbr1 -O openflow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x2b00000000000007, duration=10829.790s, table=0, n_packets=2178, n_bytes=228150, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
cookie=0x0, duration=8613.319s, table=0, n_packets=0, n_bytes=0, priority=10,dl_src=52:54:00:b5:99:36,dl_dst=52:54:00:8f:83:bc actions=output:3
cookie=0x0, duration=8575.440s, table=0, n_packets=0, n_bytes=0, priority=10,dl_src=52:54:00:8f:83:bc,dl_dst=52:54:00:b5:99:36 actions=output:2
cookie=0x0, duration=94.031s, table=0, n_packets=1578, n_bytes=627856, priority=2,in_port=1 actions=output:2,output:3
cookie=0x0, duration=48.327s, table=0, n_packets=23, n_bytes=1134, priority=2,in_port=2 actions=output:1,output:3,CONTROLLER:65535
cookie=0x0, duration=5.002s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=3 actions=output:1,output:2,CONTROLLER:65535
cookie=0x2b00000000000007, duration=10829.789s, table=0, n_packets=84, n_bytes=3744, priority=0 actions=drop
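The two priority-10 entries match on source and destination MAC rather than on the input port. A sketch of the corresponding ODL flow body follows, using the kvm MAC addresses from the dump above; the "ethernet-match" field names follow the ODL OpenFlow model, so verify them against your release:

```python
# Sketch of an ODL flow body matching source/destination MAC, like the
# priority-10 entries above. Assumption: the match is expressed as
# "ethernet-match" with "ethernet-source"/"ethernet-destination" in the
# ODL model; check your release.

def mac_flow(flow_id, src_mac, dst_mac, out_port, priority=10):
    """Return an ODL flow body forwarding src->dst MAC to out_port."""
    return {"flow": [{
        "id": str(flow_id),
        "table_id": "0",
        "priority": str(priority),
        "match": {"ethernet-match": {
            "ethernet-source": {"address": src_mac},
            "ethernet-destination": {"address": dst_mac}}},
        "instructions": {"instruction": [{
            "order": "0",
            "apply-actions": {"action": [{
                "order": "0",
                "output-action": {
                    "output-node-connector": str(out_port)}}]}}]},
    }]}

body = mac_flow(3, "52:54:00:b5:99:36", "52:54:00:8f:83:bc", 3)
print(body["flow"][0]["priority"])  # -> 10
```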
Check the topology view again.
kvm01 and kvm02 on node1 can now ping each other, and both can also ping kvm01 on node2, but they cannot ping kvm02 on node2. The reason is that the flow table on node2 has not been updated the way node1's was:
# ovs-ofctl dump-flows ovsbr1 -O openflow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x2b00000000000006, duration=12853.096s, table=0, n_packets=2586, n_bytes=270900, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
cookie=0x2b0000000000001a, duration=11056.868s, table=0, n_packets=1684, n_bytes=636264, priority=2,in_port=1 actions=output:2
cookie=0x2b0000000000001b, duration=11056.868s, table=0, n_packets=19, n_bytes=1414, priority=2,in_port=2 actions=output:1,CONTROLLER:65535
cookie=0x2b00000000000006, duration=12853.096s, table=0, n_packets=75, n_bytes=3366, priority=0 actions=drop
So here is an exercise for the reader: what flows must be pushed on node2 so that kvm01 and kvm02 on both node1 and node2 (that is, all four KVM virtual machines) can ping each other?
References
https://www.jianshu.com/p/b2471807c7a3/
https://wiki.opendaylight.org/view/OpenDaylight_OpenFlow_Plugin:End_to_End_Flows
https://www.sdnlab.com/community/article/odl/972