Ryu Experiment Demo

Reference:
https://osrg.github.io/ryu-book/zh_tw/html/switching_hub.html

First, create the network from the command line with mn:

➜  ~ sudo mn --topo single,3 --mac --switch ovsk --controller remote -x
*** Creating network
*** Adding controller
Unable to contact the remote controller at 127.0.0.1:6653
Unable to contact the remote controller at 127.0.0.1:6633
Setting remote controller to 127.0.0.1:6653
*** Adding hosts:
h1 h2 h3 
*** Adding switches:
s1 
*** Adding links:
(h1, s1) (h2, s1) (h3, s1) 
*** Configuring hosts
h1 h2 h3 
*** Running terms on :0
*** Starting controller
c0 
*** Starting 1 switches
s1 ...
*** Starting CLI:
mininet>

Five xterm windows open: c0, s1, h1, h2, h3. (The "Unable to contact the remote controller" warnings above are expected, since the Ryu controller has not been started yet.)
Check the initial state of Open vSwitch:

root@ubuntu:~# ovs-vsctl show
af5d51dc-3216-4bea-89eb-2ed33b51a9bd
    Bridge "s1"
        Controller "tcp:127.0.0.1:6653"
        Controller "ptcp:6654"
        fail_mode: secure
        Port "s1-eth3"
            Interface "s1-eth3"
        Port "s1-eth1"
            Interface "s1-eth1"
        Port "s1"
            Interface "s1"
                type: internal
        Port "s1-eth2"
            Interface "s1-eth2"
    ovs_version: "2.5.2"

Next, set the OpenFlow version on switch s1 (the default is 1.0):

root@ubuntu:~# ovs-vsctl set Bridge s1 protocols=OpenFlow13

Then check switch s1's flow table (empty at this point):

root@ubuntu:~# ovs-ofctl -O OpenFlow13 dump-flows s1
OFPST_FLOW reply (OF1.3) (xid=0x2):

That completes the setup. Now start the Ryu controller:

root@ubuntu:~# ryu-manager --verbose ryu.app.simple_switch_13
loading app ryu.app.simple_switch_13
loading app ryu.controller.ofp_handler
instantiating app ryu.app.simple_switch_13 of SimpleSwitch13
instantiating app ryu.controller.ofp_handler of OFPHandler
BRICK SimpleSwitch13
  CONSUMES EventOFPSwitchFeatures
  CONSUMES EventOFPPacketIn
BRICK ofp_event
  PROVIDES EventOFPSwitchFeatures TO {'SimpleSwitch13': set(['config'])}
  PROVIDES EventOFPPacketIn TO {'SimpleSwitch13': set(['main'])}
  CONSUMES EventOFPPortDescStatsReply
  CONSUMES EventOFPHello
  CONSUMES EventOFPEchoRequest
  CONSUMES EventOFPEchoReply
  CONSUMES EventOFPPortStatus
  CONSUMES EventOFPSwitchFeatures
  CONSUMES EventOFPErrorMsg
connected socket:<eventlet.greenio.base.GreenSocket object at 0x7f135e5bdc50> address:('127.0.0.1', 34620)
hello ev <ryu.controller.ofp_event.EventOFPHello object at 0x7f135b062510>
move onto config mode
EVENT ofp_event->SimpleSwitch13 EventOFPSwitchFeatures
switch features ev version=0x4,msg_type=0x6,msg_len=0x20,xid=0xc03d35d4,OFPSwitchFeatures(auxiliary_id=0,capabilities=79,datapath_id=1,n_buffers=256,n_tables=254)
move onto main mode
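The BRICK/PROVIDES/CONSUMES lines describe Ryu's event routing: each handler is registered, via the set_ev_cls decorator, for an event class in a particular dispatcher state, and ofp_event delivers events only to handlers whose state matches. A minimal ryu-free sketch of that registration pattern (all names and behaviors here are illustrative stand-ins, not Ryu's actual internals):

```python
# Toy model of Ryu-style set_ev_cls routing: a decorator records which
# event class each handler consumes and in which dispatcher state, and a
# dispatcher delivers events only to matching handlers.

CONFIG_DISPATCHER = "config"   # handshake phase (SwitchFeatures handled here)
MAIN_DISPATCHER = "main"       # normal operation (PacketIn handled here)

_handlers = []  # (event_name, dispatcher_state, handler_function)

def set_ev_cls(event_name, state):
    def decorator(func):
        _handlers.append((event_name, state, func))
        return func
    return decorator

class ToySwitchApp:
    @set_ev_cls("EventOFPSwitchFeatures", CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        return "install table-miss"

    @set_ev_cls("EventOFPPacketIn", MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        return "learn MAC and forward"

def dispatch(app, event_name, state):
    """Deliver an event to every handler registered for (event, state)."""
    return [func(app, None) for name, st, func in _handlers
            if name == event_name and st == state]

app = ToySwitchApp()
print(dispatch(app, "EventOFPSwitchFeatures", CONFIG_DISPATCHER))
print(dispatch(app, "EventOFPPacketIn", MAIN_DISPATCHER))
```

This mirrors the "move onto config mode" / "move onto main mode" transitions in the log: the same switch connection is handed a different dispatcher state, so different handlers fire.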

In this output, the lines

EVENT ofp_event->SimpleSwitch13 EventOFPSwitchFeatures
switch features ev version=0x4,msg_type=0x6,msg_len=0x20,xid=0xc03d35d4,OFPSwitchFeatures(auxiliary_id=0,capabilities=79,datapath_id=1,n_buffers=256,n_tables=254)

show the Table-miss flow entry being installed; the controller is now waiting for Packet-In messages.
The Table-miss entry is installed in the switch_features_handler() method of ryu/ryu/app/simple_switch_13.py:

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # install table-miss flow entry
        #
        # We specify NO BUFFER to max_len of the output action due to
        # OVS bug. At this moment, if we specify a lesser number, e.g.,
        # 128, OVS will send Packet-In with invalid buffer_id and
        # truncated packet data. In that case, we cannot output packets
        # correctly.  The bug has been fixed in OVS v2.1.0.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        self.add_flow(datapath, 0, match, actions)
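The Packet-In side of simple_switch_13 (not shown above) performs MAC learning: record the source MAC against the ingress port, then look up the destination MAC to pick an output port, flooding when it is unknown. A ryu-free sketch of just that decision logic (the function name and types here are illustrative):

```python
# Minimal MAC-learning decision logic, modeled after what
# simple_switch_13's Packet-In handler does, using plain Python types
# instead of Ryu message objects.

FLOOD = "FLOOD"  # stand-in for ofproto.OFPP_FLOOD

mac_to_port = {}  # dpid -> {mac address: port number}

def learn_and_choose_port(dpid, src, dst, in_port):
    """Learn src MAC on in_port; return the output port for dst (or FLOOD)."""
    table = mac_to_port.setdefault(dpid, {})
    table[src] = in_port
    return table.get(dst, FLOOD)

# h1 broadcasts an ARP request: dst unknown, so the switch floods.
p1 = learn_and_choose_port(1, "00:00:00:00:00:01", "ff:ff:ff:ff:ff:ff", 1)
# h2 replies: h1's MAC was learned on port 1, so output goes to port 1.
p2 = learn_and_choose_port(1, "00:00:00:00:00:02", "00:00:00:00:00:01", 2)
print(p1, p2)  # FLOOD 1
```

In the real handler, a concrete output port (anything other than FLOOD) additionally triggers a Flow-Mod so that subsequent packets bypass the controller.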

Confirm on switch s1 that the Table-miss flow entry has been added:

root@ubuntu:~# ovs-ofctl -O OpenFlow13 dump-flows s1
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x0, duration=263.549s, table=0, n_packets=0, n_bytes=0, priority=0 actions=CONTROLLER:65535

The priority is 0, no packets have matched yet (n_packets=0), the action is CONTROLLER, and the maximum amount of packet data sent to the controller is 65535 bytes (0xffff = ofproto.OFPCML_NO_BUFFER).
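Each dump-flows line is a list of key=value fields followed by an actions part. A small helper to split such a line into a dict (this parser is illustrative, not part of the OVS tooling, and handles only the simple lines shown here):

```python
def parse_flow(line):
    """Parse an `ovs-ofctl dump-flows` output line into a dict (simplified)."""
    fields, _, actions = line.strip().partition(" actions=")
    entry = {"actions": actions}
    # Stats fields are separated by ", "; match fields inside the last
    # field (e.g. "priority=1,in_port=2,...") are separated by "," alone.
    for field in fields.split(", "):
        for part in field.split(","):
            key, _, value = part.partition("=")
            entry[key] = value
    return entry

line = (" cookie=0x0, duration=263.549s, table=0, n_packets=0, "
        "n_bytes=0, priority=0 actions=CONTROLLER:65535")
flow = parse_flow(line)
print(flow["priority"], flow["actions"])  # 0 CONTROLLER:65535
```

Note that the 65535 in CONTROLLER:65535 is exactly 0xffff, i.e. OFPCML_NO_BUFFER: the full packet is sent to the controller rather than a truncated copy.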
Next, check the network interfaces.
c0 and s1 show the same interfaces:

mininet> c0 ifconfig
ens33     Link encap:Ethernet  HWaddr 00:0c:29:2e:bb:c6  
          inet addr:192.168.170.193  Bcast:192.168.170.255  Mask:255.255.255.0
          inet6 addr: fe80::cd6d:663c:4123:3d87/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:193620 errors:0 dropped:0 overruns:0 frame:0
          TX packets:82893 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:239805455 (239.8 MB)  TX bytes:7894837 (7.8 MB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:62479 errors:0 dropped:0 overruns:0 frame:0
          TX packets:62479 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:3610021 (3.6 MB)  TX bytes:3610021 (3.6 MB)

s1-eth1   Link encap:Ethernet  HWaddr 1a:6a:81:05:37:ac  
          inet6 addr: fe80::186a:81ff:fe05:37ac/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:920 (920.0 B)  TX bytes:3640 (3.6 KB)

s1-eth2   Link encap:Ethernet  HWaddr 56:23:c2:cf:cb:73  
          inet6 addr: fe80::5423:c2ff:fecf:cb73/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1010 (1.0 KB)  TX bytes:3640 (3.6 KB)

s1-eth3   Link encap:Ethernet  HWaddr 22:64:90:f1:e9:9b  
          inet6 addr: fe80::2064:90ff:fef1:e99b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9 errors:0 dropped:0 overruns:0 frame:0
          TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:738 (738.0 B)  TX bytes:3680 (3.6 KB)

Hosts h1, h2, and h3 each have their own interface. For example, h1:

mininet> h1 ifconfig
h1-eth0   Link encap:Ethernet  HWaddr 00:00:00:00:00:01  
          inet addr:10.0.0.1  Bcast:10.255.255.255  Mask:255.0.0.0
          inet6 addr: fe80::200:ff:fe00:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:32 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3640 (3.6 KB)  TX bytes:920 (920.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Now, from the Mininet CLI, ping h2 from h1 (ping -c1 sends a single ICMP echo request). The output shows a successful reply from h2:

mininet> h1 ping -c1 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=36.1 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 36.159/36.159/36.159/0.000 ms

Next, run tcpdump on each host's interface to monitor the traffic.
h1 and h2 both see the same packets (h1 shown here):

root@ubuntu:~# tcpdump -en -i h1-eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on h1-eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:13:56.845367 00:00:00:00:00:01 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 10.0.0.2 tell 10.0.0.1, length 28
23:13:56.865322 00:00:00:00:00:02 > 00:00:00:00:00:01, ethertype ARP (0x0806), length 42: Reply 10.0.0.2 is-at 00:00:00:00:00:02, length 28
23:13:56.865340 00:00:00:00:00:01 > 00:00:00:00:00:02, ethertype IPv4 (0x0800), length 98: 10.0.0.1 > 10.0.0.2: ICMP echo request, id 8564, seq 1, length 64
23:13:56.881496 00:00:00:00:00:02 > 00:00:00:00:00:01, ethertype IPv4 (0x0800), length 98: 10.0.0.2 > 10.0.0.1: ICMP echo reply, id 8564, seq 1, length 64
23:14:01.896890 00:00:00:00:00:02 > 00:00:00:00:00:01, ethertype ARP (0x0806), length 42: Request who-has 10.0.0.1 tell 10.0.0.2, length 28
23:14:01.896902 00:00:00:00:00:01 > 00:00:00:00:00:02, ethertype ARP (0x0806), length 42: Reply 10.0.0.1 is-at 00:00:00:00:00:01, length 28

h3 only sees the broadcast ARP request:

root@ubuntu:~# tcpdump -en -i h3-eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on h3-eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:13:56.857422 00:00:00:00:00:01 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 10.0.0.2 tell 10.0.0.1, length 28

Now check the flow table entries on switch s1:

root@ubuntu:~# ovs-ofctl -O OpenFlow13 dump-flows s1
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x0, duration=287.994s, table=0, n_packets=3, n_bytes=182, priority=1,in_port=2,dl_src=00:00:00:00:00:02,dl_dst=00:00:00:00:00:01 actions=output:1
 cookie=0x0, duration=287.978s, table=0, n_packets=2, n_bytes=140, priority=1,in_port=1,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 actions=output:2
 cookie=0x0, duration=1399.543s, table=0, n_packets=3, n_bytes=182, priority=0 actions=CONTROLLER:65535

Besides the Table-miss entry, two new flow entries with priority 1 have been added:

(1) Ingress port (in_port) 2, destination MAC (dl_dst) host 1 → action: forward to host 1
(2) Ingress port (in_port) 1, destination MAC (dl_dst) host 2 → action: forward to host 2

Entry (1) carries traffic from host 2 to host 1: the ARP reply, the ICMP echo reply, and the later unicast ARP request from h2 all pass through it, giving n_packets=3. Entry (2) carries traffic from host 1 to host 2: the ICMP echo request and h1's ARP reply account for its n_packets=2. (Each entry also counts the packet that triggered its installation, because the Flow-Mod carries that packet's buffer_id and OVS forwards the buffered packet through the new entry.) The initial ARP request from h1 is a broadcast, so it never matches either entry and is handled via the Table-miss entry instead.

The exchange consists of these steps:
1. h1 -> ff:ff:ff:ff:ff:ff (broadcast): ARP request asking for h2's MAC address;
2. h2 -> h1: ARP reply with h2's MAC address;
3. h1 -> h2: ICMP echo request;
4. h2 -> h1: ICMP echo reply.

The Ryu controller's log output:

EVENT ofp_event->SimpleSwitch13 EventOFPPacketIn
packet in 1 00:00:00:00:00:01 ff:ff:ff:ff:ff:ff 1
EVENT ofp_event->SimpleSwitch13 EventOFPPacketIn
packet in 1 00:00:00:00:00:02 00:00:00:00:00:01 2
EVENT ofp_event->SimpleSwitch13 EventOFPPacketIn
packet in 1 00:00:00:00:00:01 00:00:00:00:00:02 1

The first Packet-In is the ARP request broadcast by host 1; no flow entry matches it, so the controller floods it with a Packet-Out.
The second is the ARP reply from host 2; its destination MAC is host 1, so flow entry (1) above is added.
The third is the ICMP echo request from host 1 to host 2, which adds flow entry (2).
The ICMP echo reply from host 2 to host 1 matches flow entry (1), so it is forwarded straight to host 1 without generating another Packet-In.
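The n_packets counts in the flow dump and the three Packet-Ins in the controller log can be reproduced by replaying the six captured frames through a small model of the learning switch. This is a ryu-free sketch under two simplifying assumptions: the flow key is reduced to (in_port, src, dst), and a newly installed entry also forwards (and counts) its triggering packet, mirroring how OVS processes the buffered packet when the Flow-Mod carries its buffer_id:

```python
# Replay the captured frames through a model of the learning switch,
# counting flow-entry matches and Packet-Ins.

BROADCAST = "ff:ff:ff:ff:ff:ff"
H1, H2 = "00:00:00:00:00:01", "00:00:00:00:00:02"

mac_to_port = {}   # mac -> port, learned from Packet-Ins
flows = {}         # (in_port, src, dst) -> match count
packet_ins = []    # frames that reached the controller

def switch_rx(in_port, src, dst):
    key = (in_port, src, dst)
    if key in flows:                        # existing entry: no Packet-In
        flows[key] += 1
        return "matched"
    packet_ins.append(key)                  # table-miss -> controller
    mac_to_port[src] = in_port              # controller learns src MAC
    if dst in mac_to_port:                  # destination known: install flow
        flows[key] = 1                      # buffered packet counts as a match
        return "flow installed"
    return "flooded"                        # destination unknown: flood

trace = [
    (1, H1, BROADCAST),  # ARP request (broadcast)
    (2, H2, H1),         # ARP reply
    (1, H1, H2),         # ICMP echo request
    (2, H2, H1),         # ICMP echo reply
    (2, H2, H1),         # later unicast ARP request from h2
    (1, H1, H2),         # ARP reply from h1
]
for frame in trace:
    switch_rx(*frame)

print(len(packet_ins))      # 3 Packet-Ins, as in the controller log
print(flows[(2, H2, H1)])   # 3, the n_packets of entry (1)
print(flows[(1, H1, H2)])   # 2, the n_packets of entry (2)
```

The model also shows why h3 only captured the broadcast ARP request: every later frame either matched an installed entry or was sent out a single learned port.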
