Intra-VPC Traffic Model and Flow Table Tracing

This document records the intra-VPC traffic model and the flow table hops. Traffic is traced separately for VMs on the same compute node and for VMs on different compute nodes, with traffic-model diagrams and a record of the flow tables traversed. It also records the flow tables for one special case: drop when no security group is bound.

1. Launching the VMs

VM topology diagram

Create the network and subnet

tianchi net-create ipv4_yanxia_network1_1 178.1.0.0/16
            fa177aca-f909-4a77-9d7b-10551d105cb3
            
tianchi router-create router_yanxia_01 fa177aca-f909-4a77-9d7b-10551d105cb3
          d35d9cd1-3d71-4ccb-a6c3-3ea22a7e111a
          
curl -g -i -X POST http://127.0.0.1:9797/v2.0/networks -H "Content-Type: application/json" -H "User-Agent: openstacksdk/0.17.0 keystoneauth1/4.3.1 python-requests/2.18.4 CPython/3.6.8" -H "X-Auth-Token: `openstack token issue | grep " id" | awk '{print $4}'`" -d '{"network": {"name": "ipv4_yanxia_network2_2", "router_id":"d35d9cd1-3d71-4ccb-a6c3-3ea22a7e111a"}}'
   cf4b66f8-00b2-43db-ac97-2b3df786c9df
   
tianchi subnet-create cf4b66f8-00b2-43db-ac97-2b3df786c9df 178.1.3.0/24 4 N001-JS-CK-TCCS01
	56a84738-7ef6-49d5-a68b-bb745d482d50

The launched VMs are summarized below:

[root@P6F2-edge-smart-18 ~]# nova list
| ID                                   | Name              | Status | Task State | Power State | Networks |
| 6c8d619f-7131-460f-8756-1d20fd7b746f | ipv4_yanxia_test1 | ACTIVE | -          | Running     | ipv4_yanxia_network2_2=178.1.3.2, 2.1.0.197                  |
| 5ab7f4b4-48aa-4b4f-9c72-3e123a917e8a | ipv4_yanxia_test2 | ACTIVE | -          | Running     | ipv4_yanxia_network2_2=178.1.3.3, 2.1.0.205                  |
| f4b4655d-415d-44cb-9d80-16c0398cc6fd | ipv4_yanxia_test5 | ACTIVE | -          | Running     | ipv4_yanxia_network2_2=178.1.3.6, 2.1.0.229 |

1.1 Launch VM1 and VM2 on node1

Launch vm1 and vm2 on P6F2-edge-smart-16

Launch vm1

tianchi port-create --subnet_id=56a84738-7ef6-49d5-a68b-bb745d482d50 cf4b66f8-00b2-43db-ac97-2b3df786c9df
	458ec446-9e78-40ab-a386-f2c29ee0a53b  port MAC: fa:16:3e:98:90:e7    IP: 178.1.3.2

nova boot --flavor 1005 --image e41aa12c-f80f-40d1-9a3e-c5ca65edd9b8 --nic port-id=458ec446-9e78-40ab-a386-f2c29ee0a53b --availability-zone ecloud_compute_c3_zone:P6F2-edge-smart-16 ipv4_yanxia_test1
	6c8d619f-7131-460f-8756-1d20fd7b746f  

[root@P6F2-edge-smart-18 ~]# nova get-vnc-console 6c8d619f-7131-460f-8756-1d20fd7b746f novnc

Launch vm2

tianchi port-create --subnet_id=56a84738-7ef6-49d5-a68b-bb745d482d50 cf4b66f8-00b2-43db-ac97-2b3df786c9df
	ff4028a6-4f26-471b-9d6c-346e23e7515e  port MAC: fa:16:3e:68:a5:98    IP: 178.1.3.3

nova boot --flavor 1005 --image e41aa12c-f80f-40d1-9a3e-c5ca65edd9b8 --nic port-id=ff4028a6-4f26-471b-9d6c-346e23e7515e --availability-zone ecloud_compute_c3_zone:P6F2-edge-smart-16 ipv4_yanxia_test2
	5ab7f4b4-48aa-4b4f-9c72-3e123a917e8a 

[root@P6F2-edge-smart-18 ~]# nova get-vnc-console 5ab7f4b4-48aa-4b4f-9c72-3e123a917e8a novnc

1.2 Launch VM3 on node2

Launch vm3 on P6F2-edge-smart-15

tianchi port-create --subnet_id=56a84738-7ef6-49d5-a68b-bb745d482d50 cf4b66f8-00b2-43db-ac97-2b3df786c9df
	6c0a14df-895b-4e38-9a0a-be99c923505e 

nova boot --flavor 1005 --image e41aa12c-f80f-40d1-9a3e-c5ca65edd9b8 --nic port-id=6c0a14df-895b-4e38-9a0a-be99c923505e --availability-zone ecloud_compute_c3_zone:P6F2-edge-smart-15 ipv4_yanxia_test5
	f4b4655d-415d-44cb-9d80-16c0398cc6fd

[root@P6F2-edge-smart-18 ~]# nova get-vnc-console f4b4655d-415d-44cb-9d80-16c0398cc6fd novnc

1.3 Reference information

Layer-1 network_uuid: fa177aca-f909-4a77-9d7b-10551d105cb3
router uuid: d35d9cd1-3d71-4ccb-a6c3-3ea22a7e111a
Layer-2 network_uuid: cf4b66f8-00b2-43db-ac97-2b3df786c9df
subnet_uuid: 56a84738-7ef6-49d5-a68b-bb745d482d50
tunid: 0x170b

vm1 info:
port_uuid: 458ec446-9e78-40ab-a386-f2c29ee0a53b
port_vid: 0x27
port_mac: fa:16:3e:98:90:e7
port_ip: 178.1.3.2
vm1_uuid(device_id): 6c8d619f-7131-460f-8756-1d20fd7b746f
floating IP on the port (floating_ip_address): 2.1.0.197

vm2 info:
port_id: ff4028a6-4f26-471b-9d6c-346e23e7515e
port_vid: 0x2d
port_mac: fa:16:3e:68:a5:98
port_ip: 178.1.3.3
vm2_uuid(device_id): 5ab7f4b4-48aa-4b4f-9c72-3e123a917e8a
floating IP on the port (floating_ip_address): 2.1.0.205

vm3 info:
port_id: 6c0a14df-895b-4e38-9a0a-be99c923505e
port_vid: 0x26
port_mac: fa:16:3e:73:5d:00
port_ip: 178.1.3.6
vm3_uuid(device_id): f4b4655d-415d-44cb-9d80-16c0398cc6fd
floating IP on the port (floating_ip_address): 2.1.0.229

zk hash:
http://100.71.31.45:9089/home?zkPath=/cns/topologies/networks/34/fa177aca-f909-4a77-9d7b-10551d105cb3/ports_v2
vm2_port hash -> 8C     vm1_port hash -> 36

Register assignments

XXREG0(REG0-3) ← network uuid    0xfa177acaf9094a779d7b10551d105cb3
XXREG1(REG4-7) ← subnet uuid     0x56a847387ef649d5a68bbb745d482d50
XXREG2(REG8-11) ← port uuid   0x458ec4469e7840aba386f2c29ee0a53b
REG12 ← input port vid   
REG13 ← output port vid  
REG14 ← VNI  (vxlan tunid)    0x170b
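As a sanity check, the register values above are simply the UUIDs with dashes stripped. The following Python sketch (illustrative only, not part of the platform) packs a UUID into its 128-bit xxreg value and splits it into the four 32-bit sub-registers that appear in later flow matches:

```python
def uuid_to_xxreg(u: str) -> int:
    """Pack a UUID string into the 128-bit value loaded into an xxreg."""
    return int(u.replace("-", ""), 16)

def xxreg_to_regs(xxreg: int) -> list:
    """Split a 128-bit xxreg into its four big-endian 32-bit sub-registers."""
    return [(xxreg >> shift) & 0xFFFFFFFF for shift in (96, 64, 32, 0)]

network_uuid = "fa177aca-f909-4a77-9d7b-10551d105cb3"
xxreg0 = uuid_to_xxreg(network_uuid)
print(hex(xxreg0))                           # 0xfa177acaf9094a779d7b10551d105cb3
print([hex(r) for r in xxreg_to_regs(xxreg0)])
# ['0xfa177aca', '0xf9094a77', '0x9d7b1055', '0x1d105cb3']
```

The four slices match the reg0..reg3 values matched by the VPC_FORWARD flows later in this document.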

2. Create the default security group and bind it

tianchi security-group-create default
	f16f84a8-4250-4913-9d7d-08a584b630aa

Security group rule 1: ingress IPv4

curl -g -i -X POST http://100.71.31.45:9797/v2.0/security-group-rules -H "Content-Type: application/json" -H "User-Agent: openstacksdk/0.17.0 keystoneauth1/4.3.1 python-requests/2.18.4 CPython/3.6.8" -H "X-Auth-Token: `openstack token issue | grep " id" | awk '{print $4}'`" -d '{"security_group_rule": {"security_group_id": "f16f84a8-4250-4913-9d7d-08a584b630aa","description": "ingress","direction": "ingress","remote_ip_prefix": "0.0.0.0/0", "ethertype": "IPv4"}}'
     825d1956-775a-4b01-a732-05d7a29f9708

Security group rule 2: egress IPv4

curl -g -i -X POST http://100.71.31.45:9797/v2.0/security-group-rules -H "Content-Type: application/json" -H "User-Agent: openstacksdk/0.17.0 keystoneauth1/4.3.1 python-requests/2.18.4 CPython/3.6.8" -H "X-Auth-Token: `openstack token issue | grep " id" | awk '{print $4}'`" -d '{"security_group_rule": {"security_group_id": "f16f84a8-4250-4913-9d7d-08a584b630aa","description": "egress","direction": "egress","remote_ip_prefix": "0.0.0.0/0", "ethertype": "IPv4"}}'
	6a9b2fdd-ee48-475f-90ba-6870cb03b048

Security group rule 3: egress IPv6

curl -g -i -X POST http://100.71.31.45:9797/v2.0/security-group-rules -H "Content-Type: application/json" -H "User-Agent: openstacksdk/0.17.0 keystoneauth1/4.3.1 python-requests/2.18.4 CPython/3.6.8" -H "X-Auth-Token: `openstack token issue | grep " id" | awk '{print $4}'`" -d '{"security_group_rule": {"security_group_id": "f16f84a8-4250-4913-9d7d-08a584b630aa","direction": "egress","remote_ip_prefix": "::/0", "ethertype": "IPv6"}}'
	82bbe131-82d5-4e7d-b916-7af25ce214f2

Security group rule 4: ingress IPv6

curl -g -i -X POST http://100.71.31.45:9797/v2.0/security-group-rules -H "Content-Type: application/json" -H "User-Agent: openstacksdk/0.17.0 keystoneauth1/4.3.1 python-requests/2.18.4 CPython/3.6.8" -H "X-Auth-Token: `openstack token issue | grep " id" | awk '{print $4}'`" -d '{"security_group_rule": {"security_group_id": "f16f84a8-4250-4913-9d7d-08a584b630aa","direction": "ingress","remote_ip_prefix": "::/0", "ethertype": "IPv6"}}'
	e65e7619-023a-437b-a11c-14e18808090c

Bind the default security group to the ports

tianchi port-update 458ec446-9e78-40ab-a386-f2c29ee0a53b --security-group f16f84a8-4250-4913-9d7d-08a584b630aa
tianchi port-update ff4028a6-4f26-471b-9d6c-346e23e7515e  --security-group f16f84a8-4250-4913-9d7d-08a584b630aa
tianchi port-update 6c0a14df-895b-4e38-9a0a-be99c923505e  --security-group f16f84a8-4250-4913-9d7d-08a584b630aa

vm1 ping vm2: verified, the ping succeeds.

The security rule info can be queried at the corresponding node in the zk topology.

3. L2/L3 forwarding between VMs on the same node

vm1 and vm2 are on the same compute node; ping vm2 from vm1.

3.1 Traffic model

Intra-VPC, same-compute-node traffic model

(1) The packet leaves vm1's eth0 and travels along the data channel to the host-side port vhu458ec446-9e (vm1's own vhost-user port).

(2) The packet enters the br-int flow tables and is processed as shown in 3.2.

(3) The flow tables set the src MAC to the fixed fe:ff:ff:ff:ff:ff and the dst MAC to vm2's port MAC, then send the packet out of port vhuff4028a6-4f.

(4) Port vhuff4028a6-4f delivers the packet across the data channel to vm2's eth0; vm2 receives it.
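The rewrite in steps (3)-(4) can be sketched as follows (illustrative Python only; the packet is modeled as a plain dict, and the MAC and port values come from section 1.3):

```python
# Minimal sketch of the table-180 action in step (3): OVS rewrites both
# MAC fields before handing the packet to the destination vhost-user port.
FIXED_SRC_MAC = "fe:ff:ff:ff:ff:ff"   # constant source MAC used by the flow table

def vpc_output(pkt: dict, dst_port_mac: str, out_port: str) -> dict:
    """Apply the VPC_OUTPUT (table=180) actions to a packet (as a dict)."""
    pkt = dict(pkt)                   # do not mutate the caller's packet
    pkt["eth_src"] = FIXED_SRC_MAC
    pkt["eth_dst"] = dst_port_mac
    pkt["out_port"] = out_port
    return pkt

pkt = {"eth_src": "fa:16:3e:98:90:e7", "eth_dst": "00:00:00:00:00:00"}
out = vpc_output(pkt, "fa:16:3e:68:a5:98", "vhuff4028a6-4f")
print(out["eth_src"], out["eth_dst"], out["out_port"])
# fe:ff:ff:ff:ff:ff fa:16:3e:68:a5:98 vhuff4028a6-4f
```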

3.2 Complete flow table trace for vm1 -> vm2

Flow table trace for a new connection

INPUT(table=0)

#ovs-ofctl dump-flows br-int -O openflow13 table=0,in_port=xxx

cookie=0x63bfd88400160c27, duration=79316.927s, table=0, n_packets=35, n_bytes=0, priority=30,ip,in_port=136,vlan_tci=0x0000/0x1fff,dl_src=fa:16:3e:98:90:e7,nw_src=178.1.3.2 actions=set_field:0xfa177acaf9094a779d7b10551d105cb3->xxreg0,set_field:0x56a847387ef649d5a68bbb745d482d50->xxreg1,set_field:0x458ec4469e7840aba386f2c29ee0a53b->xxreg2,load:0x27->NXM_NX_REG12[0..15],set_field:0x170b->reg14,write_metadata:0x9000000000000000/0x9000000000000000,goto_table:60

Register assignments:
set XXREG0(REG0-3) ← network uuid    0xfa177acaf9094a779d7b10551d105cb3
set XXREG1(REG4-7) ← subnet uuid     0x56a847387ef649d5a68bbb745d482d50
set XXREG2(REG8-11) ← port uuid   0x458ec4469e7840aba386f2c29ee0a53b
set REG12 ← input port vid   vid of vm1's port = 0x27
set metadata

QoS and classification tables; not critical, listed briefly:

EGRESS_QOS_BAND(table=60)   goto_table:65  //# ovs-ofctl dump-flows br-int -O openflow13 table=60,reg12=xxx
EGRESS_QOS_PPS(table=65)   goto_table:66   //# ovs-ofctl dump-flows br-int -O openflow13 table=65,reg12=xxx
EGRESS_QOS_SYN(table=66)   goto_table:67  //# ovs-ofctl dump-flows br-int -O openflow13 table=66,reg12=xxx

EGRESS_ACL_CLASSIFY(table=67)  goto_table:68  //# ovs-ofctl dump-flows br-int -O openflow13 table=67
EGRESS_SEC_CLASSIFY(table=68)  goto_table:70  //ovs-ofctl dump-flows br-int -O openflow13 table=68

EGRESS_DISPATCH(table=70)

# ovs-ofctl dump-flows br-int -O openflow13 table=70

cookie=0x63bfd88400f00000, duration=409805.874s, table=70, n_packets=232060328, n_bytes=22751253330, priority=50,icmp actions=ct(table=71,zone=NXM_NX_REG12[0..15],nat)

conntrack records the connection here.

Security group processing begins: (1) if the traffic cannot be conntracked, it enters the security group tables directly; (2) if it can, +new connections go through the security group checks.

BASE_EGRESS(table=71)

# ovs-ofctl dump-flows br-int -O openflow13 table=71

cookie=0x63bfd88400f00000, duration=91903.503s, table=71, n_packets=3886095, n_bytes=351104836, priority=50,ct_state=+new,ip,metadata=0x1000000000000000/0x1000000000000000 actions=goto_table:72

Matches only on +new, i.e. the first packet of a connection.

RULES_EGRESS(table=72): the core security group table; when a port has no security group bound, packets are dropped here.

cookie=0x63bfd88400f60c27, duration=17157.881s, table=72, n_packets=3, n_bytes=294, priority=2010,ip,reg12=0x27 actions=goto_table:74

Matched by the input port_vid.
Matches only on +new, the first packet of a connection.

EGRESS_PORT_SEC(table=74)

cookie=0x63bfd88400f00000, duration=338298.341s, table=74, n_packets=2055065, n_bytes=172577954, priority=0,ip actions=goto_table:80

Matches only on +new, the first packet of a connection.

VIP_EGRESS(table=80)

cookie=0x63bfd88400b00000, duration=338351.646s, table=80, n_packets=329, n_bytes=31850, priority=20,icmp actions=ct(commit,zone=NXM_NX_REG12[0..15]),goto_table:104

VPC_GROUPENI_REV(table=104)

# ovs-ofctl dump-flows br-int -O openflow13 table=104

cookie=0x63bfd88400000000, duration=338398.141s, table=104, n_packets=474817907, n_bytes=502227622008, priority=0 actions=goto_table:105

VPC_POLICY_ROUTE(table=105)

# ovs-ofctl dump-flows br-int -O openflow13 table=105,reg7=xxx

cookie=0x63bfd88400c60c00, duration=337416.857s, table=105, n_packets=782, n_bytes=76636, priority=5,ip,reg4=0x56a84738,reg5=0x7ef649d5,reg6=0xa68bbb74,reg7=0x5d482d50,nw_src=178.1.3.0/24,nw_dst=178.1.3.0/24 actions=goto_table:110

VPC_ROUTE(table=110): the core routing table

# ovs-ofctl dump-flows br-int -O openflow13 table=110,reg7=xxx

cookie=0x63bfd88400c60c00, duration=337599.432s, table=110, n_packets=788, n_bytes=77224, priority=7201,ip,reg4=0x56a84738,reg5=0x7ef649d5,reg6=0xa68bbb74,reg7=0x5d482d50,nw_dst=178.1.3.0/24 actions=goto_table:115

For the local route type, the flow is matched by nw_dst.

VPC_TRANSLATE(table=115)

# ovs-ofctl dump-flows br-int -O openflow13 table=115

cookie=0x63bfd88400000000, duration=339186.593s, table=115, n_packets=525510203, n_bytes=549612811995, priority=0 actions=goto_table:120

VPC_FORWARD(table=120)

# ovs-ofctl dump-flows br-int -O openflow13 table=120,reg0=xxx

cookie=0x63bfd88400160c2d, duration=263102.317s, table=120, n_packets=401, n_bytes=39298, priority=30,ip,reg0=0xfa177aca,reg1=0xf9094a77,reg2=0x9d7b1055,reg3=0x1d105cb3,nw_dst=178.1.3.3 actions=set_field:0x56a847387ef649d5a68bbb745d482d50->xxreg1,group:638320642

group:638320642

# ovs-ofctl dump-groups br-int |grep group_id=638320642

group_id=638320642,type=all,bucket=bucket_id:1,actions=load:0x2d->NXM_NX_REG13[],resubmit(0,133)

Assigns the output port vid to reg13.

Processing now switches to the ingress direction.

FORWARD_FILTER(table=133)

# ovs-ofctl dump-flows br-int -O openflow13 table=133

cookie=0x63bfd88400000000, duration=339760.401s, table=133, n_packets=93115648, n_bytes=73907902203, priority=0 actions=load:0->NXM_OF_IN_PORT[],goto_table:135

This table also contains flows matching reg12=0x2d,reg13=0x2d — presumably to prevent a port from sending traffic to itself.

INGRESS_ACL_CLASSIFY(table=135)

# ovs-ofctl dump-flows br-int -O openflow13 table=135

cookie=0x63bfd88400000000, duration=339816.748s, table=135, n_packets=471420219, n_bytes=634722114415, priority=0 actions=goto_table:140

INGRESS_ROUTER_POLICY(table=140)

# ovs-ofctl dump-flows br-int -O openflow13 table=140,reg7=xxx

cookie=0x63bfd88400c60c00, duration=341222.720s, table=140, n_packets=854, n_bytes=83692, priority=50,ip,reg4=0x56a84738,reg5=0x7ef649d5,reg6=0xa68bbb74,reg7=0x5d482d50,nw_src=178.1.3.0/24,nw_dst=178.1.3.0/24 actions=goto_table:148

INGRESS_SEC_CLASSIFY(table=148)

# ovs-ofctl dump-flows br-int -O openflow13 table=148

cookie=0x63bfd88400f00000, duration=341783.093s, table=148, n_packets=93083106, n_bytes=73904881500, priority=0,ip actions=goto_table:150

INGRESS_DISPATCH(table=150)

# ovs-ofctl dump-flows br-int -O openflow13 table=150

cookie=0x63bfd88400f00000, duration=341857.420s, table=150, n_packets=34180712, n_bytes=3370861072, priority=10,icmp actions=ct(table=151,zone=NXM_NX_REG13[0..15],nat)

Ingress security group processing begins

BASE_INGRESS(table=151): again matches only ct_state=+new

 # ovs-ofctl dump-flows br-int -O openflow13 table=151
 
 cookie=0x63bfd88400f00000, duration=342579.059s, table=151, n_packets=376625, n_bytes=28578223, priority=50,ct_state=+new,ip actions=goto_table:152

RULES_INGRESS(table=152)

cookie=0x63bfd88400f60c2d, duration=21510.589s, table=152, n_packets=28, n_bytes=2744, priority=2010,ip,reg13=0x2d actions=goto_table:154

Matches only on +new, the first packet of a connection.

INGRESS_PORT_SEC(table=154)

cookie=0x63bfd88400f00000, duration=342700.127s, table=154, n_packets=375550, n_bytes=28472873, priority=0,ip actions=goto_table:155

Matches only on +new, the first packet of a connection.

INGRESS_COMMIT(table=155)

cookie=0x63bfd88400000000, duration=342820.195s, table=155, n_packets=507, n_bytes=49854, priority=5,icmp actions=ct(commit,zone=NXM_NX_REG13[0..15]),goto_table:159

conntrack commits the connection here.
Matches only on +new, the first packet of a connection.

INGRESS_CT_GROUP(table=159)

# ovs-ofctl dump-flows br-int -O openflow13 table=159

cookie=0x63bfd88400000000, duration=342081.811s, table=159, n_packets=73922832, n_bytes=53466739904, priority=0,ip actions=goto_table:160   

QOS

# ovs-ofctl dump-flows br-int -O openflow13 table=160,reg13=xxx

INGRESS_QOS_BAND(table=160)
cookie=0x63bfd88400160c2d, duration=266085.414s, table=160, n_packets=417, n_bytes=40866, priority=20,reg13=0x2d actions=goto_table:165

# ovs-ofctl dump-flows br-int -O openflow13 table=165,reg13=xxx
INGRESS_QOS_PPS(table=165)
 cookie=0x63bfd88400160c2d, duration=266187.269s, table=165, n_packets=428, n_bytes=41944, priority=20,reg13=0x2d actions=goto_table:180

VPC_OUTPUT(table=180)

# ovs-ofctl dump-flows br-int -O openflow13 table=180,reg13=xxx

cookie=0x63bfd88400160c2d, duration=268266.323s, table=180, n_packets=449, n_bytes=44002, priority=20,reg13=0x2d actions=set_field:fe:ff:ff:ff:ff:ff->eth_src,set_field:fa:16:3e:68:a5:98->eth_dst,output:"vhuff4028a6-4f"

The src MAC is set to the fixed fe:ff:ff:ff:ff:ff and the dst MAC to vm2's port MAC; the packet is output to vm2's port.

4. L2/L3 forwarding between VMs on different nodes

4.1 Traffic model

Intra-VPC, cross-compute-node traffic model

(1) The packet leaves vm1's eth0 and travels along the data channel to the host-side port vhu458ec446-9e (vm1's own vhost-user port).

(2) The packet enters the br-int flow tables and is processed as shown in 4.2.

(3) The flow tables set dst_mac=02:00:00:00:00:00 and the tunnel dst IP to 10.163.133.42, apply VXLAN encapsulation, and send the packet to vf-10.163.133.43.

(4) The kernel routing table is consulted, and per the route the packet is handed to the br-phy device.

[root@P6F2-edge-smart-16 ~]# ip route
...
10.163.133.0/24 dev br-phy proto kernel scope link src 10.163.133.43
...

(5) Guided by the br-phy flow tables, a select group load-balances the traffic: VLAN 101 is pushed and the packet leaves via dpdk_phy0 or dpdk_phy1 (the two NICs are bonded and share IP 10.163.133.43).

[root@P6F2-edge-smart-16 ~]# ovs-ofctl dump-flows br-phy -O openflow13
 cookie=0x63bfd88400000000, duration=425256.550s, table=0, n_packets=468766364, n_bytes=678091820226, priority=20,in_port=LOCAL actions=group:0
[root@P6F2-edge-smart-16 ~]# ovs-ofctl dump-groups br-phy 0 -O openflow13
NXST_GROUP_DESC reply (xid=0x2):
group_id=0,type=select,selection_method=dp_hash,bucket=bucket_id:1,watch_port:"dpdk_phy0",actions=mod_vlan_vid:101,output:"dpdk_phy0",bucket=bucket_id:2,watch_port:"dpdk_phy1",actions=mod_vlan_vid:101,output:"dpdk_phy1"

(6) The packet crosses the tunnel to the peer's br-phy NIC, where the br-phy flow tables strip the VLAN and deliver it to LOCAL.

 cookie=0x63bfd88400000000, duration=425805.354s, table=0, n_packets=169068723, n_bytes=248536257378, priority=20,in_port="dpdk_phy0",dl_vlan=101 actions=strip_vlan,LOCAL
 cookie=0x63bfd88400000000, duration=425805.354s, table=0, n_packets=248797940, n_bytes=364449720280, priority=20,in_port="dpdk_phy1",dl_vlan=101 actions=strip_vlan,LOCAL

(7) The tunnel VTEP is located from the outer VXLAN dst_ip.

(8) The vf port on br-int is configured with the VTEP IP; vf-10.163.133.42 on the peer's br-int receives the traffic and decapsulates the VXLAN.

(9) After flow table forwarding, as shown in 4.3, output:vhu6c0a14df-89.

(10) Port vhu6c0a14df-89 delivers the packet across the data channel to vm3's eth0; vm3 receives it.
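The type=select group in step (5) spreads flows across the bond via dp_hash. A minimal illustration of the idea (Python's built-in hash stands in for the datapath 5-tuple hash; this is a sketch, not the actual OVS algorithm):

```python
# Sketch (assumption): a dp_hash select group picks one bucket from a
# hash of the flow, so packets of one flow always use the same NIC
# while different flows spread across the bond.
BUCKETS = ["dpdk_phy0", "dpdk_phy1"]   # the two bonded NICs

def select_bucket(flow: tuple, buckets=BUCKETS) -> str:
    """Pick an output NIC the way a dp_hash select group would."""
    return buckets[hash(flow) % len(buckets)]

flow = ("178.1.3.2", "178.1.3.6", 1)   # src IP, dst IP, proto (ICMP)
nic = select_bucket(flow)
assert nic in BUCKETS
# All packets of the same flow stick to the same NIC:
assert all(select_bucket(flow) == nic for _ in range(100))
```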

4.2 VM1 egress flow table trace

Ping vm3 from vm1.

INPUT(table=0)

# ovs-ofctl dump-flows br-int -O openflow13 table=0,in_port=xxx

cookie=0x63bfd88400160c27, duration=408283.385s, table=0, n_packets=518, n_bytes=50764, priority=30,ip,in_port=136,vlan_tci=0x0000/0x1fff,dl_src=fa:16:3e:98:90:e7,nw_src=178.1.3.2 actions=set_field:0xfa177acaf9094a779d7b10551d105cb3->xxreg0,set_field:0x56a847387ef649d5a68bbb745d482d50->xxreg1,set_field:0x458ec4469e7840aba386f2c29ee0a53b->xxreg2,load:0x27->NXM_NX_REG12[0..15],set_field:0x170b->reg14,write_metadata:0x9000000000000000/0x9000000000000000,goto_table:60

QoS and classification tables; not critical, listed briefly:

EGRESS_QOS_BAND(table=60)   goto_table:65  //# ovs-ofctl dump-flows br-int -O openflow13 table=60,reg12=xx
EGRESS_QOS_PPS(table=65)   goto_table:66  //# ovs-ofctl dump-flows br-int -O openflow13 table=65,reg12=xx
EGRESS_QOS_SYN(table=66)   goto_table:67  // # ovs-ofctl dump-flows br-int -O openflow13 table=66,reg12=xx

EGRESS_ACL_CLASSIFY(table=67)  goto_table:68  //# ovs-ofctl dump-flows br-int -O openflow13 table=67
EGRESS_SEC_CLASSIFY(table=68)  goto_table:70  // # ovs-ofctl dump-flows br-int -O openflow13 table=68

EGRESS_DISPATCH(table=70)

# ovs-ofctl dump-flows br-int -O openflow13 table=70,icmp

cookie=0x63bfd88400f00000, duration=409805.874s, table=70, n_packets=232060328, n_bytes=22751253330, priority=50,icmp actions=ct(table=71,zone=NXM_NX_REG12[0..15],nat)

conntrack records the connection here.

BASE_EGRESS(table=71)

 # ovs-ofctl dump-flows br-int -O openflow13 table=71

cookie=0x63bfd88400f00000, duration=416412.585s, table=71, n_packets=4314997, n_bytes=393103808, priority=50,ct_state=+new,ip,metadata=0x1000000000000000/0x1000000000000000 actions=goto_table:72

ICMP can be conntracked; each ping is a new connection and matches ct_state=+new.

RULES_EGRESS(table=72): the core security group table; when a port has no security group bound, packets are dropped here.

# ovs-ofctl dump-flows br-int -O openflow13 table=72,reg12=0x27

cookie=0x63bfd88400f60c27, duration=95401.397s, table=72, n_packets=34, n_bytes=3332, priority=2010,ip,reg12=0x27 actions=goto_table:74

Matched by the input port_vid.
Matches only on +new, the first packet of a connection.

EGRESS_PORT_SEC(table=74)

# ovs-ofctl dump-flows br-int -O openflow13 table=74

cookie=0x63bfd88400f00000, duration=416597.265s, table=74, n_packets=2055382, n_bytes=172610054, priority=0,ip actions=goto_table:80

Matches only on +new, the first packet of a connection.

VIP_EGRESS(table=80)

# ovs-ofctl dump-flows br-int -O openflow13 table=80

cookie=0x63bfd88400b00000, duration=417018.301s, table=80, n_packets=360, n_bytes=34888, priority=20,icmp actions=ct(commit,zone=NXM_NX_REG12[0..15]),goto_table:104

conntrack commits the connection here.

Matches only on +new, the first packet of a connection.

VPC_GROUPENI_REV(table=104)

#ovs-ofctl dump-flows br-int -O openflow13 table=104

cookie=0x63bfd88400000000, duration=417330.292s, table=104, n_packets=483068116, n_bytes=503033531230, priority=0 actions=goto_table:105

VPC_POLICY_ROUTE(table=105): distinguishes whether src and dst are in the same subnet. TODO

# ovs-ofctl dump-flows br-int -O openflow13 table=105,reg7=xxx

cookie=0x63bfd88400c60c00, duration=416344.395s, table=105, n_packets=7665, n_bytes=751170, priority=5,ip,reg4=0x56a84738,reg5=0x7ef649d5,reg6=0xa68bbb74,reg7=0x5d482d50,nw_src=178.1.3.0/24,nw_dst=178.1.3.0/24 actions=goto_table:110

Matches on the IP-layer src and dst to determine whether this is intra-subnet forwarding.
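The same-subnet test that table 105 performs with the nw_src/nw_dst masks can be reproduced with the standard ipaddress module (the CIDR is the subnet from section 1):

```python
import ipaddress

# Reproduce the table=105 match: intra-subnet if both src and dst
# fall inside the subnet's CIDR (178.1.3.0/24).
SUBNET = ipaddress.ip_network("178.1.3.0/24")

def same_subnet(src: str, dst: str) -> bool:
    return (ipaddress.ip_address(src) in SUBNET
            and ipaddress.ip_address(dst) in SUBNET)

print(same_subnet("178.1.3.2", "178.1.3.6"))   # vm1 -> vm3: True
print(same_subnet("178.1.3.2", "2.1.0.229"))   # to a floating IP: False
```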

VPC_ROUTE(table=110)

# ovs-ofctl dump-flows br-int -O openflow13 table=110,reg7=xxx

cookie=0x63bfd88400c60c00, duration=416546.444s, table=110, n_packets=7671, n_bytes=751758, priority=7201,ip,reg4=0x56a84738,reg5=0x7ef649d5,reg6=0xa68bbb74,reg7=0x5d482d50,nw_dst=178.1.3.0/24 actions=goto_table:115

VPC_TRANSLATE(table=115)

# ovs-ofctl dump-flows br-int -O openflow13 table=115

cookie=0x63bfd88400000000, duration=418724.327s, table=115, n_packets=533790749, n_bytes=550420419139, priority=0 actions=goto_table:120

VPC_FORWARD(table=120)

# ovs-ofctl dump-flows br-int -O openflow13 table=120,reg3=xxx |grep nw_dst=xxx

cookie=0x63bfd88400260c00, duration=12879.441s, table=120, n_packets=6791, n_bytes=665518, priority=20,ip,reg0=0xfa177aca,reg1=0xf9094a77,reg2=0x9d7b1055,reg3=0x1d105cb3,metadata=0/0x4000000000000000,nw_dst=178.1.3.6 actions=set_field:10.163.133.42->tun_dst,set_field:0x170b->tun_id,set_field:02:00:00:00:00:00->eth_dst,output:"vf-10.163.133.4"

VXLAN encapsulation; the packet leaves the compute node:
dst MAC set to the fixed 02:00:00:00:00:00
tunnel dst IP set to the peer compute node's IP 10.163.133.42
sent out through the vf port
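For reference, the VXLAN header that carries tun_id 0x170b can be built per RFC 7348 (8 bytes: a flags byte with the I bit set, 24 reserved bits, a 24-bit VNI, and a final reserved byte). A short sketch:

```python
import struct

# RFC 7348 VXLAN header layout, 8 bytes total:
#   byte 0: flags (0x08 = VNI-valid "I" bit), bytes 1-3: reserved,
#   bytes 4-6: 24-bit VNI, byte 7: reserved.
def vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < 1 << 24, "VNI is a 24-bit field"
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(0x170b)
print(hdr.hex())   # 0800000000170b00
```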

4.3 VM3 ingress flow table trace

The packet leaving compute node 16 carries: 10.163.133.42->tun_dst, 0x170b->tun_id, 02:00:00:00:00:00->eth_dst

The vf port decapsulates the VXLAN.

INPUT(table=0)

cookie=0x63bce2d10028a000, duration=14786.589s, table=0, n_packets=6791, n_bytes=665518, priority=20,tun_id=0x170b,in_port="vf-10.163.133.4" actions=set_field:0xfa177acaf9094a779d7b10551d105cb3->xxreg0,set_field:0x170b->reg14,write_metadata:0x4000000000000000/0x4000000000000000,goto_table:100

The ingress flow is matched by tun_id and in_port.

Register assignments:
set XXREG0(REG0-3) ← network uuid    0xfa177acaf9094a779d7b10551d105cb3
set reg14 ← tun_id   0x170b

VPC_FROM_REMOTE(table=100)

# ovs-ofctl dump-flows br-int -O openflow13 table=100

cookie=0x63bce2d100000000, duration=615599.353s, table=100, n_packets=6385844274, n_bytes=9642815366929, priority=0 actions=goto_table:101

dl_src=02:00:00:00:00:01 actions=goto_table:130  VPC_DIRECT  custom routes
dl_src=05:00:00:00:00:00/ff:00:ff:ff:ff:ff actions=goto_table:101   VPC_REV_FIP_DNAT
dl_src=04:00:00:00:00:00/ff:00:ff:ff:ff:00 actions=goto_table:120   nlb/dpvs
anything else    actions=goto_table:101  VPC_REV_FIP_DNAT  ordinary L3 packets
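The masked dl_src dispatch above works like (mac & mask) == (value & mask); a small sketch:

```python
# Sketch of the masked dl_src dispatch in table 100: a MAC matches a
# value/mask pair if the masked bits agree.
def mac_to_int(mac: str) -> int:
    return int(mac.replace(":", ""), 16)

def masked_match(mac: str, value: str, mask: str = "ff:ff:ff:ff:ff:ff") -> bool:
    m = mac_to_int(mask)
    return mac_to_int(mac) & m == mac_to_int(value) & m

# dl_src=05:00:00:00:00:00/ff:00:ff:ff:ff:ff -> VPC_REV_FIP_DNAT;
# the second byte is wildcarded, so 05:12:... still matches.
print(masked_match("05:12:00:00:00:00", "05:00:00:00:00:00",
                   "ff:00:ff:ff:ff:ff"))   # True
```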

VPC_REV_FIP_DNAT(table=101)

# ovs-ofctl dump-flows br-int -O openflow13 table=101

cookie=0x63bce2d100000000, duration=615866.902s, table=101, n_packets=6494273896, n_bytes=9623043016855, priority=0 actions=goto_table:120

TODO: when is the flow below hit?
cookie=0x63bce2d100c8a026, duration=15574.426s, table=101, n_packets=0, n_bytes=0, priority=300,ip,nw_dst=2.1.0.229 actions=set_field:178.1.3.6->ip_dst,goto_table:120

VPC_FORWARD(table=120): writes the subnet value into the registers

# ovs-ofctl dump-flows br-int -O openflow13 table=120,reg3=xxx

cookie=0x63bce2d10018a026, duration=15726.043s, table=120, n_packets=6824, n_bytes=668752, priority=30,ip,reg0=0xfa177aca,reg1=0xf9094a77,reg2=0x9d7b1055,reg3=0x1d105cb3,nw_dst=178.1.3.6 actions=set_field:0x56a847387ef649d5a68bbb745d482d50->xxreg1,group:681574401

set XXREG1(REG4-7) ← subnet uuid     0x56a847387ef649d5a68bbb745d482d50

group:681574401

#ovs-ofctl dump-groups br-int -O openflow13 |grep group_id=681574401

group_id=681574401,type=all,bucket=bucket_id:1,actions=load:0x26->NXM_NX_REG13[],resubmit(0,133)

FORWARD_FILTER(table=133)

#  ovs-ofctl dump-flows br-int -O openflow13 table=133

cookie=0x63bce2d100000000, duration=616197.914s, table=133, n_packets=15390227538, n_bytes=19013349772254, priority=0 actions=load:0->NXM_OF_IN_PORT[],goto_table:135

Why is 0 written into NXM_OF_IN_PORT? OpenFlow never sends a packet out of the port it arrived on (that requires the special IN_PORT output), so clearing in_port ensures the final output action still works when the destination port happens to equal the ingress port.

INGRESS_ACL_CLASSIFY(table=135)

#  ovs-ofctl dump-flows br-int -O openflow13 table=135

cookie=0x63bce2d100000000, duration=616330.433s, table=135, n_packets=122279555589, n_bytes=180718350125776, priority=0 actions=goto_table:140

INGRESS_ROUTER_POLICY(table=140)

#  ovs-ofctl dump-flows br-int -O openflow13 table=140,reg7=0x111409b1

cookie=0x63bce2d100c8a000, duration=16107.239s, table=140, n_packets=6836, n_bytes=669928, priority=50,ip,reg4=0x56a84738,reg5=0x7ef649d5,reg6=0xa68bbb74,reg7=0x5d482d50,nw_src=178.1.3.0/24,nw_dst=178.1.3.0/24 actions=goto_table:148

INGRESS_SEC_CLASSIFY(table=148)

#  ovs-ofctl dump-flows br-int -O openflow13 table=148

cookie=0x63bce2d100f00000, duration=616445.598s, table=148, n_packets=118451311361, n_bytes=177025464904953, priority=0,ip actions=goto_table:150

INGRESS_DISPATCH(table=150)

#  ovs-ofctl dump-flows br-int -O openflow13 table=150

cookie=0x63bce2d100f00000, duration=616507.593s, table=150, n_packets=102147771, n_bytes=10042548502, priority=10,icmp actions=ct(table=151,zone=NXM_NX_REG13[0..15],nat)

BASE_INGRESS(table=151)

#  ovs-ofctl dump-flows br-int -O openflow13 table=151

cookie=0x63bce2d100f00000, duration=616563.151s, table=151, n_packets=317140055, n_bytes=21890176449, priority=50,ct_state=+new,ip actions=goto_table:152

Matches only on +new, the first packet of a connection.

RULES_INGRESS(table=152)

 #  ovs-ofctl dump-flows br-int -O openflow13 table=152,reg13=xxx
 
cookie=0x63bce2d100f8a026, duration=13575.692s, table=152, n_packets=24, n_bytes=2352, priority=2010,ip,reg13=0x26 actions=goto_table:154
 
Matches only on +new, the first packet of a connection.
Security group rules are configured, so the packet is allowed.
If no security group were bound, the packet would be dropped here.

INGRESS_PORT_SEC(table=154)

#  ovs-ofctl dump-flows br-int -O openflow13 table=154

cookie=0x63bce2d100f00000, duration=616936.366s, table=154, n_packets=317138294, n_bytes=21890004087, priority=0,ip actions=goto_table:155

Matches only on +new, the first packet of a connection.

INGRESS_COMMIT(table=155)

# ovs-ofctl dump-flows br-int -O openflow13 table=155

cookie=0x63bce2d100000000, duration=617080.431s, table=155, n_packets=2187, n_bytes=213790, priority=5,icmp actions=ct(commit,zone=NXM_NX_REG13[0..15]),goto_table:159

conntrack commits the connection here.
Matches only on +new, the first packet of a connection.

INGRESS_CT_GROUP(table=159)

# ovs-ofctl dump-flows br-int -O openflow13 table=159

cookie=0x63bce2d100000000, duration=617163.053s, table=159, n_packets=6427966615, n_bytes=9646938246635, priority=0,ip actions=goto_table:160

QOS

INGRESS_QOS_BAND(table=160) goto_table:165  // # ovs-ofctl dump-flows br-int -O openflow13 table=160,reg13=xx
INGRESS_QOS_PPS(table=165) goto_table:180  //# ovs-ofctl dump-flows br-int -O openflow13 table=165,reg13=xx

VPC_OUTPUT(table=180)

# ovs-ofctl dump-flows br-int -O openflow13 table=180,reg13=xxx

cookie=0x63bce2d10018a026, duration=17035.210s, table=180, n_packets=7183, n_bytes=703934, priority=20,reg13=0x26 actions=set_field:fe:ff:ff:ff:ff:ff->eth_src,set_field:fa:16:3e:73:5d:00->eth_dst,output:"vhu6c0a14df-89"

The src MAC is set to the fixed fe:ff:ff:ff:ff:ff and the dst MAC to vm3's port MAC; the packet is output to vm3's port.

5. Flow trace: no security group bound

Without performing step 2 (the security group setup), no traffic passes by default except ARP and DHCP. The flow tables without a security group are shown below.

5.1 Egress flow table

RULES_EGRESS(table=72)

cookie=0x63bfd88400000000, duration=92029.927s, table=72, n_packets=1833011, n_bytes=178713622, priority=0 actions=drop

No flow matches reg12=0x27, so the packet hits the priority-0 rule and is dropped.

5.2 Ingress flow table

RULES_INGRESS(table=152)

 cookie=0x63bce2d100000000, duration=616863.670s, table=152, n_packets=1833, n_bytes=178982, priority=0 actions=drop
 
No match found; the packet is dropped.
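The drop behaviour of both tables is just priority-ordered matching with a priority-0 catch-all; a sketch with hypothetical helper names:

```python
# Minimal sketch of how RULES_EGRESS/RULES_INGRESS behave: flows are
# tried in descending priority order, and the priority-0 catch-all
# drops anything no security group rule matched.
def lookup(flows, pkt):
    """flows: list of (priority, match_fn, action); returns the action."""
    for _prio, match, action in sorted(flows, key=lambda f: -f[0]):
        if match(pkt):
            return action
    return "drop"

flows_with_sg = [
    (2010, lambda p: p.get("reg12") == 0x27, "goto_table:74"),  # SG bound
    (0, lambda p: True, "drop"),                                # catch-all
]
print(lookup(flows_with_sg, {"reg12": 0x27}))          # goto_table:74
print(lookup([(0, lambda p: True, "drop")], {"reg12": 0x27}))  # drop
```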