under_the_hood_openvswitch

Reposted from http://docs.openstack.org/trunk/openstack-network/admin/content/under_the_hood_openvswitch.html

Open vSwitch

This section describes how the Open vSwitch plugin implements the OpenStack Networking abstractions.

 Configuration

This example uses VLAN isolation on the switches to isolate tenant networks. This configuration labels the physical network associated with the public network as physnet1, and the physical network associated with the data network as physnet2, which leads to the following configuration options in ovs_quantum_plugin.ini:

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:101:110
integration_bridge = br-int
bridge_mappings = physnet2:br-eth1
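
The integration bridge and the bridge named in bridge_mappings are not created by the configuration file itself; they are typically set up once on each host that runs the Open vSwitch agent. A minimal sketch, using the names from this configuration:

$ sudo ovs-vsctl add-br br-int          # integration bridge referenced by integration_bridge
$ sudo ovs-vsctl add-br br-eth1         # bridge mapped to the physical network physnet2
$ sudo ovs-vsctl add-port br-eth1 eth1  # attach the physical data-network NIC to that bridge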

 Scenario 1: one tenant, two networks, one router

The first scenario has two private networks (net01 and net02), each with one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.102.0/24). Both private networks are attached to a router that connects them to the public network (10.64.201.0/24).

Under the service tenant, create the shared router, define the public network, and set it as the default gateway of the router:

$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ quantum router-create router01
$ quantum net-create --tenant-id $tenant public01 \
          --provider:network_type flat \
          --provider:physical_network physnet1 \
          --router:external=True
$ quantum subnet-create --tenant-id $tenant --name public01_subnet01 \
          --gateway 10.64.201.254 public01 10.64.201.0/24 --enable_dhcp False
$ quantum router-gateway-set router01 public01

Under the demo user tenant, create the private network net01 and corresponding subnet, and connect it to the router01 router. Configure it to use VLAN ID 101 on the physical switch.

$ tenant=$(keystone tenant-list|awk '/demo/ {print $2}')
$ quantum net-create --tenant-id $tenant net01 \
          --provider:network_type vlan \
          --provider:physical_network physnet2 \
          --provider:segmentation_id 101
$ quantum subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ quantum router-interface-add router01 net01_subnet01

Similarly, for net02, using VLAN ID 102 on the physical switch:

$ quantum net-create --tenant-id $tenant net02 \
          --provider:network_type vlan \
          --provider:physical_network physnet2 \
          --provider:segmentation_id 102
$ quantum subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ quantum router-interface-add router01 net02_subnet01
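
To check the result, the standard list commands show what was just created; for example:

$ quantum net-list                    # public01, net01, and net02
$ quantum subnet-list                 # the public subnet plus net01_subnet01 and net02_subnet01
$ quantum router-port-list router01   # the gateway port and the two router interfaces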

 Scenario 1: Compute host config

The following figure shows how the various Linux networking devices are configured on the compute host:

 Types of network devices

Note

There are four distinct types of virtual networking devices: TAP devices, veth pairs, Linux bridges, and Open vSwitch bridges. For an ethernet frame to travel from eth0 of virtual machine vm01 to the physical network, it must pass through nine devices inside the host: TAP vnet0, Linux bridge qbrXXX, veth pair (qvbXXX, qvoXXX), Open vSwitch bridge br-int, veth pair (int-br-eth1, phy-br-eth1), Open vSwitch bridge br-eth1, and, finally, the physical network interface card eth1.

A TAP device, such as vnet0, is how hypervisors such as KVM and Xen implement a virtual network interface card (typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received by the guest operating system.

A veth pair is a pair of virtual network interfaces connected directly together. An ethernet frame sent to one end of a veth pair is received by the other end. OpenStack Networking uses veth pairs as virtual patch cables to make connections between virtual bridges.

A Linux bridge behaves like a hub: you can connect multiple (physical or virtual) network interface devices to a Linux bridge. Any ethernet frame that comes in on one interface attached to the bridge is transmitted to all of the other devices.

An Open vSwitch bridge behaves like a virtual switch: network interface devices connect to an Open vSwitch bridge's ports, and the ports can be configured much like a physical switch's ports, including VLAN configurations.
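
Each of these device types can be inspected on the host with standard tools; for example (the qbr/qvb/qvo names are per-port and will differ on a real system):

$ ip link show vnet0     # the TAP device backing the guest's vNIC
$ sudo brctl show        # Linux bridges (qbrXXX) and the interfaces attached to them
$ ip link                # veth endpoints (qvbXXX, qvoXXX, int-br-eth1, ...) appear as ordinary interfaces
$ sudo ovs-vsctl show    # Open vSwitch bridges, their ports, and any VLAN tags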

 Integration bridge

The br-int Open vSwitch bridge is the integration bridge: all of the guests running on the compute host connect to this bridge. OpenStack Networking implements isolation across these guests by configuring the br-int ports.

 Physical connectivity bridge

The br-eth1 bridge provides connectivity to the physical network interface card, eth1. It connects to the integration bridge by a veth pair: (int-br-eth1, phy-br-eth1).
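
You can confirm this wiring by listing the ports on each bridge; the two ends of the veth pair should appear on their respective bridges:

$ sudo ovs-vsctl list-ports br-int    # should include int-br-eth1 and the qvoXXX ports
$ sudo ovs-vsctl list-ports br-eth1   # should include phy-br-eth1 and eth1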

 VLAN translation

In this example, net01 and net02 have VLAN IDs of 1 and 2, respectively. However, the physical network in our example only supports VLAN IDs in the range 101 through 110. The Open vSwitch agent is responsible for configuring flow rules on br-int and br-eth1 to do VLAN translation. When br-eth1 receives a frame marked with VLAN ID 1 on the port associated with phy-br-eth1, it modifies the VLAN ID in the frame to 101. Similarly, when br-int receives a frame marked with VLAN ID 101 on the port associated with int-br-eth1, it modifies the VLAN ID in the frame to 1.
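
The translation rules can be seen in the flow tables that the agent installs. A simplified sketch of what they look like (port numbers, priorities, and the extra bookkeeping fields in the real output will vary):

$ sudo ovs-ofctl dump-flows br-eth1
  # outbound: rewrite local VLAN 1 to provider VLAN 101
  priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:101,NORMAL

$ sudo ovs-ofctl dump-flows br-int
  # inbound: rewrite provider VLAN 101 back to local VLAN 1
  priority=3,in_port=1,dl_vlan=101 actions=mod_vlan_vid:1,NORMAL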

 Security groups: iptables and Linux bridges

Ideally, the TAP device vnet0 would be connected directly to the integration bridge, br-int. Unfortunately, this isn't possible because of how OpenStack security groups are currently implemented. OpenStack uses iptables rules on the TAP devices such as vnet0 to implement security groups, and Open vSwitch is not compatible with iptables rules that are applied directly on TAP devices that are connected to an Open vSwitch port.

OpenStack Networking uses an extra Linux bridge and a veth pair as a workaround for this issue. Instead of connecting vnet0 to an Open vSwitch bridge, it is connected to a Linux bridge, qbrXXX. This bridge is connected to the integration bridge, br-int, via the (qvbXXX, qvoXXX) veth pair.
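
A rough sketch of how to trace this chain on a compute host (the qbrXXX/qvoXXX names are placeholders for the per-port device names):

$ sudo brctl show qbrXXX            # should list vnet0 and qvbXXX as members of the Linux bridge
$ sudo ovs-vsctl port-to-br qvoXXX  # the other end of the veth pair is a port on br-int
$ sudo iptables -S | grep physdev   # the security group rules match on these per-port devices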

 Scenario 1: Network host config

Recall that the network host runs the quantum-openvswitch-plugin-agent, the quantum-dhcp-agent, quantum-l3-agent, and quantum-metadata-agent services.

On the network host, assume that eth0 is connected to the external network, and eth1 is connected to the data network, which leads to the following configuration options in ovs_quantum_plugin.ini:

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:101:110
integration_bridge = br-int
bridge_mappings = physnet1:br-ex,physnet2:br-eth1
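
As on the compute host, these bridges are created ahead of time; a minimal sketch using the interface names above:

$ sudo ovs-vsctl add-br br-int          # integration bridge
$ sudo ovs-vsctl add-br br-eth1         # data network bridge (physnet2)
$ sudo ovs-vsctl add-port br-eth1 eth1
$ sudo ovs-vsctl add-br br-ex           # external network bridge (physnet1)
$ sudo ovs-vsctl add-port br-ex eth0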

The following figure shows the network devices on the network host:

As on the compute host, there is an Open vSwitch integration bridge (br-int) and an Open vSwitch bridge connected to the data network (br-eth1), connected to each other by a veth pair; the quantum-openvswitch-plugin-agent configures the ports on both switches to do VLAN translation.

There is also an additional Open vSwitch bridge, br-ex, which connects to the physical interface that is connected to the external network. In this example, that physical interface is eth0.

Note

While the integration bridge and the external bridge are connected by a veth pair (int-br-ex, phy-br-ex), this example uses layer 3 connectivity to route packets from the internal networks to the public network: no packets traverse that veth pair in this example.

 Open vSwitch internal ports

The network host uses Open vSwitch internal ports. Internal ports are a mechanism that allows you to assign one or more IP addresses to an Open vSwitch bridge. In the previous example, the br-int bridge has four internal ports: tapXXX, qr-YYY, qr-ZZZ, and tapWWW. Each internal port has a separate IP address associated with it. There is also an internal port, qg-VVV, on the br-ex bridge.
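
The agents create these internal ports themselves; the underlying mechanism is simply an Open vSwitch port of type internal, which then behaves like any other host interface. For illustration only, using the placeholder names from the figure:

$ sudo ovs-vsctl add-port br-int tapXXX -- set Interface tapXXX type=internal
$ sudo ip addr add 192.168.101.2/24 dev tapXXX   # an internal port can carry an IP address; this address is only an example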

 DHCP agent

By default, the OpenStack Networking DHCP agent uses a program called dnsmasq to provide DHCP services to guests. OpenStack Networking must create an internal port for each network that requires DHCP services and attach a dnsmasq process to that port. In the previous example, the interface tapXXX is on subnet net01_subnet01, and the interface tapWWW is on net02_subnet01.
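
On a running network host you can see this arrangement with, for example:

$ ps aux | grep dnsmasq    # one dnsmasq instance per DHCP-enabled network
$ ip addr show tapXXX      # the internal port dnsmasq listens on for net01_subnet01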

 L3 agent (routing)

The OpenStack Networking L3 agent implements routing through the use of Open vSwitch internal ports and relies on the network host to route the packets across the interfaces. In this example, interface qr-YYY, which is on subnet net01_subnet01, has an IP address of 192.168.101.1/24; interface qr-ZZZ, which is on subnet net02_subnet01, has an IP address of 192.168.102.1/24; and interface qg-VVV has an IP address of 10.64.201.254/24. Because each of these interfaces is visible to the network host operating system, the host routes packets appropriately across them, as long as an administrator has enabled IP forwarding.
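
IP forwarding can be checked and enabled with sysctl; for example:

$ sysctl net.ipv4.ip_forward             # 0 means the host will not route between the qr- and qg- interfaces
$ sudo sysctl -w net.ipv4.ip_forward=1   # enable routing (add to /etc/sysctl.conf to make it persistent)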

The L3 agent uses iptables to implement floating IPs by performing network address translation (NAT).
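
For a floating IP, the agent installs a DNAT rule for inbound traffic and an SNAT rule for outbound traffic. A rough sketch of what those rules look like for a hypothetical floating IP 10.64.201.11 mapped to fixed IP 192.168.101.3 (addresses and exact chain names here are illustrative):

$ sudo iptables -t nat -S | grep 10.64.201.11
-A quantum-l3-agent-PREROUTING -d 10.64.201.11/32 -j DNAT --to-destination 192.168.101.3
-A quantum-l3-agent-float-snat -s 192.168.101.3/32 -j SNAT --to-source 10.64.201.11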

 Overlapping subnets and network namespaces

One problem with using the host to implement routing is that one of the OpenStack Networking subnets might overlap with one of the physical networks that the host uses. For example, if the management network, implemented on eth2 (not shown in the previous example), happens by coincidence to also be on the 192.168.101.0/24 subnet, routing problems will occur because it is impossible to determine whether a packet on this subnet should be sent to qr-YYY or eth2. In general, if end users are permitted to create their own logical networks and subnets, then the system must be designed to avoid the possibility of such collisions.

OpenStack Networking uses Linux network namespaces to prevent collisions between the physical networks on the network host, and the logical networks used by the virtual machines. It also prevents collisions across different logical networks that are not routed to each other, as you will see in the next scenario.

A network namespace can be thought of as an isolated environment with its own networking stack. A network namespace has its own network interfaces, routes, and iptables rules. You can think of it like a chroot jail, except for networking instead of a file system. As an aside, LXC (Linux Containers) uses network namespaces to implement networking virtualization.

OpenStack Networking creates network namespaces on the network host in order to avoid subnet collisions.

In this example, there are three network namespaces, as depicted in the following figure.

  • qdhcp-aaa: contains the tapXXX interface and the dnsmasq process that listens on that interface, to provide DHCP services for net01_subnet01. This allows overlapping IPs between net01_subnet01 and any other subnets on the network host.

  • qrouter-bbbb: contains the qr-YYY, qr-ZZZ, and qg-VVV interfaces, and the corresponding routes. This namespace implements router01 in our example.

  • qdhcp-ccc: contains the tapWWW interface and the dnsmasq process that listens on that interface, to provide DHCP services for net02_subnet01. This allows overlapping IPs between net02_subnet01 and any other subnets on the network host.
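
These namespaces can be listed and entered with the ip utility; for example (the real namespace names embed the network and router UUIDs):

$ ip netns list                              # qdhcp-aaa, qrouter-bbbb, qdhcp-ccc
$ sudo ip netns exec qrouter-bbbb ip addr    # shows qr-YYY, qr-ZZZ, and qg-VVV
$ sudo ip netns exec qrouter-bbbb ip route   # the router's own routing table
$ sudo ip netns exec qdhcp-aaa ip addr       # shows tapXXX with its address on net01_subnet01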

 Scenario 2: two tenants, two networks, two routers

The second scenario has two tenants (A, B). Each tenant has a network with one subnet, and each has a router that connects it to the public Internet.

Under the service tenant, define the public network:

$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ quantum net-create --tenant-id $tenant public01 \
	--provider:network_type flat \
	--provider:physical_network physnet1 \
	--router:external=True
$ quantum subnet-create --tenant-id $tenant --name public01_subnet01 \
	--gateway 10.64.201.254 public01 10.64.201.0/24 --enable_dhcp False

Under the tenantA user tenant, create the tenant router and set its gateway to the public network.

$ tenant=$(keystone tenant-list|awk '/tenantA/ {print $2}')
$ quantum router-create --tenant-id $tenant router01
$ quantum router-gateway-set router01 public01

Then, define private network net01 using VLAN ID 101 on the physical switch, along with its subnet, and connect it to the router.

$ quantum net-create --tenant-id $tenant net01 \
	--provider:network_type vlan \
	--provider:physical_network physnet2 \
	--provider:segmentation_id 101
$ quantum subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ quantum router-interface-add router01 net01_subnet01

Similarly, for tenantB, create a router and another network, using VLAN ID 102 on the physical switch:

$ tenant=$(keystone tenant-list|awk '/tenantB/ {print $2}')
$ quantum router-create --tenant-id $tenant router02
$ quantum router-gateway-set router02 public01
$ quantum net-create --tenant-id $tenant net02 \
	--provider:network_type vlan \
	--provider:physical_network physnet2 \
	--provider:segmentation_id 102
$ quantum subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.101.0/24
$ quantum router-interface-add router02 net02_subnet01

 Scenario 2: Compute host config

The following figure shows how the various Linux networking devices would be configured on the compute host under this scenario.

Note

The configuration on the compute host is very similar to the configuration in scenario 1. The only real difference is that scenario 1 had a guest that was connected to two subnets, and in this scenario the subnets belong to different tenants.

 Scenario 2: Network host config

The following figure shows the network devices on the network host for the second scenario.

The main difference between the configuration in this scenario and the previous one is the organization of the network namespaces, in order to provide isolation across the two subnets, as shown in the following figure.

In this scenario, there are four network namespaces (qdhcp-aaa, qrouter-bbbb, qrouter-cccc, and qdhcp-dddd), instead of three. Since there is no connectivity between the two networks, each router is implemented by a separate namespace.

