Examining Open vSwitch Traffic Patterns

Great article: http://blog.scottlowe.org/2013/05/15/examining-open-vswitch-traffic-patterns/

In this post, I want to provide some additional insight on how the use of Open vSwitch (OVS) affects—or doesn’t affect, in some cases—how a Linux host directs traffic through physical interfaces, OVS internal interfaces, and OVS bridges. This is something that I had a hard time understanding as I started exploring more advanced OVS configurations, and hopefully the information I share here will be helpful to others.

To help structure this discussion, I’m going to walk through a few different OVS configurations and scenarios. In these scenarios, I’ll use the following assumptions:

  • The physical host has four interfaces (eth0, eth1, eth2, and eth3)
  • The host is running Linux with KVM, libvirt, and OVS installed

Scenario 1: Simple OVS Configuration

In this first scenario, let’s look at a relatively simple OVS configuration and examine how Linux host and guest domain traffic moves into or out of the network.

Let’s assume that our OVS configuration looks something like this (this is the output from ovs-vsctl show):

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "eth0"
            Interface "eth0"
    Bridge "br1"
        Port "br1"
            Interface "br1"
                type: internal
        Port "eth1"
            Interface "eth1"
    ovs_version: "1.7.1"

This is a pretty simple configuration; there are two bridges, each with a single physical interface. Let’s further assume, for the purposes of this scenario, that eth2 has an IP address and is communicating properly with other hosts on the network. The eth3 interface is shut down.
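For reference, a configuration like this could be built with ovs-vsctl commands along the following lines. This is just a sketch using the bridge and interface names from the output above:

    # Create the two bridges
    ovs-vsctl add-br br0
    ovs-vsctl add-br br1

    # Attach one physical interface to each bridge
    ovs-vsctl add-port br0 eth0
    ovs-vsctl add-port br1 eth1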

So, in this scenario, how does traffic move into or out of the host?

  1. Traffic from a guest domain: Traffic from a guest domain will travel through the OVS bridge to which it is attached (you’d see an additional “vnet0” port and interface appear on that bridge when you start the guest domain). So, a guest domain attached to br0 would communicate via eth0, and a guest domain attached to br1 would communicate via eth1. No real surprises here.

  2. Traffic from the Linux host: Traffic from the Linux host itself will not communicate over any of the configured OVS bridges, but will instead use its native TCP/IP stack and any configured interfaces. Thus, since eth2 is configured and operational, all traffic to/from the Linux host itself will travel through eth2.

The interesting point (to me, at least) about #2 above is that this includes traffic from the OVS process itself. In other words, if the OVS process(es) need to communicate across the network, they won’t use the bridges—they’ll use whatever interfaces the Linux host uses to communicate. This is one thing that threw me off: because OVS is itself a Linux process, when OVS needs to communicate across the network it will use the Linux network stack to do so. In this scenario, then, OVS would not communicate over any configured bridge, but would instead use eth2. (This makes perfect sense now, but I recall that it didn’t earlier. Maybe it’s just me.)
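One quick way to see this behavior for yourself is to ask the host’s routing table which interface it would use for a given destination. A minimal sketch, where the destination address 198.51.100.10 is just a placeholder for some remote host (the exact output varies by distribution):

    # Ask the kernel which interface it would use to reach a remote host
    ip route get 198.51.100.10
    # In this scenario, the output would name eth2 as the outgoing device,
    # something like: 198.51.100.10 via <gateway> dev eth2 src <eth2 address>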

Scenario 2: Adding Bonding

In this second scenario, our OVS configuration changes only slightly:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    ovs_version: "1.7.1"

In this case, we’re now leveraging a bond that contains two physical interfaces (eth0 and eth1). (By the way, I have a write-up on configuring OVS and bonds, if you need/want more information.) The eth2 interface still has an IP address assigned and is up and communicating properly. The physical eth3 interface is shut down.
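If you want to reproduce this configuration, the bond could be created with something like the following (a sketch; ovs-vsctl add-bond creates the bond port and adds both physical interfaces in one step):

    # Create the bridge and add a bond containing eth0 and eth1
    ovs-vsctl add-br br0
    ovs-vsctl add-bond br0 bond0 eth0 eth1

    # Optionally enable LACP on the bond (assumes the upstream switch is configured to match)
    ovs-vsctl set port bond0 lacp=active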

How does this affect the way in which traffic is handled? It doesn’t, really. Traffic from guest domains will still travel across br0 (since this is the only configured OVS bridge), and traffic from the Linux host—including traffic from OVS itself—will still use whatever interfaces are determined by the host’s TCP/IP stack. In this case, that would be eth2.

Scenario 3: The Isolated Bridge

Let’s look at another OVS configuration, the so-called “isolated bridge”. This is a configuration that is commonly found in implementations using NVP, OpenStack, and others, and it’s a configuration that I recently discussed in my post on GRE tunnels and OVS.

Here’s the configuration:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    Bridge "br-int"
        Port "br-int"
            Interface "br-int"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.1.100"}
    ovs_version: "1.7.1"
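For reference, the isolated bridge and its tunnel port could be created with commands along these lines (a sketch; the remote_ip value matches the output above):

    # Create the isolated bridge (note: no physical interfaces are attached)
    ovs-vsctl add-br br-int

    # Add a GRE tunnel port pointing at the remote tunnel endpoint
    ovs-vsctl add-port br-int gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.100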

As with previous configurations, we’ll assume that eth2 is up and operational, and that eth3 is shut down. So how does traffic get directed in this configuration?

  1. Traffic from guest domains attached to br0: This is as before—traffic will go out one of the physical interfaces in the bond, according to the bonding configuration (active-standby, LACP, etc.). Nothing unusual here.

  2. Traffic from the Linux host: As before, traffic from processes on the Linux host will travel out according to the host’s TCP/IP stack. There are no changes from previous configurations.

  3. Traffic from guest domains attached to br-int: Now, this is where it gets interesting. Guest domains attached to br-int (named “br-int” because in this configuration the isolated bridge is often called the “integration bridge”) don’t have any physical interfaces they can use; they can only use the GRE tunnel. Here’s the “gotcha”, so to speak: the GRE tunnel is created and maintained by the OVS process, and therefore it uses the host’s TCP/IP stack to communicate across the network. Thus, traffic from guest domains attached to br-int would hit the GRE tunnel, which would travel through eth2.

I’ll give you a second to let that sink in.

Ready now? Good! The key to understanding #3 is, in my opinion, understanding that the tunnel (a GRE tunnel in this case, but the same would apply to a VXLAN or STT tunnel) is created and maintained by the OVS process. Thus, because it is created and maintained by a process on the Linux host (OVS itself), the traffic for the tunnel is directed according to the host’s TCP/IP stack and IP routing table(s). In this configuration, the tunnels don’t travel through any of the configured OVS bridges.
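If you’d like to verify this for yourself, you could confirm which interface the host would use to reach the tunnel endpoint, and then watch for GRE-encapsulated packets leaving that interface. A sketch (IP protocol number 47 is GRE):

    # Confirm which interface the host would use to reach the tunnel endpoint
    ip route get 192.168.1.100

    # Watch for GRE-encapsulated traffic leaving that interface
    tcpdump -i eth2 -n ip proto 47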

Scenario 4: Leveraging an OVS Internal Interface

Let’s keep ramping up the complexity. For this scenario, we’ll use an OVS configuration that is the same as in the previous scenario:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    Bridge "br-int"
        Port "br-int"
            Interface "br-int"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.1.100"}
    ovs_version: "1.7.1"

The difference, this time, is that we’ll assume that eth2 and eth3 are both shut down. Instead, we’ve assigned an IP address to the br0 internal interface on bridge br0. OVS internal interfaces, like br0, can appear as “physical” interfaces to the Linux host, and therefore can be assigned IP addresses and used for communication. This is the approach I used in describing how to run host management across OVS.
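Assigning an address to an OVS internal interface works just like addressing any other Linux interface. A minimal sketch (the address shown is hypothetical, and on most distributions you’d persist this in the normal network configuration files rather than running it by hand):

    # Bring up the br0 internal interface and give it an address (hypothetical address)
    ip addr add 192.168.1.50/24 dev br0
    ip link set br0 up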

Here’s how this configuration affects traffic flow:

  1. Traffic from guest domains attached to br0: No change here. Traffic from guest domains attached to br0 will continue to travel across the physical interfaces in the bond (eth0 and eth1, in this case).

  2. Traffic from the Linux host: This time, the only interface that the Linux host has is the br0 internal interface. The br0 internal interface is attached to br0, so all traffic from the Linux host will travel across the physical interfaces attached to the bond (again, eth0 and eth1).

  3. Traffic from guest domains attached to br-int: Because Linux host traffic is directed through br0 by virtue of using the br0 internal interface, this means that tunnel traffic is also directed through br0, as dictated by the Linux host’s TCP/IP stack and IP routing table(s).

As you can see, assigning an IP address to an OVS internal interface has a real impact on the way in which the Linux host directs traffic through OVS. This has both positive and negative impacts:

  • One positive impact is that it allows for Linux host traffic (such as management or tunnel traffic) to take advantage of OVS bonds, thus gaining some level of NIC redundancy.
  • A negative impact is that OVS is now “in band,” so upgrades to OVS will be disruptive to all traffic moving through OVS—which could potentially include host management traffic.

Let’s take a look at one final scenario.

Scenario 5: Using Multiple Bridges and Internal Interfaces

In this configuration, we’ll use an OVS configuration that is very similar to the configuration I showed in my post on GRE tunnels with OVS:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "mgmt0"
            Interface "mgmt0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    Bridge "br1"
        Port "br1"
            Interface "br1"
                type: internal
        Port "tep0"
            Interface "tep0"
                type: internal
        Port "bond1"
            Interface "eth2"
            Interface "eth3"
    Bridge "br-int"
        Port "br-int"
            Interface "br-int"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.1.100"}
    ovs_version: "1.7.1"

In this configuration, we have three bridges. br0 uses a bond that contains eth0 and eth1; br1 uses a bond that contains eth2 and eth3; and br-int is an isolated bridge with no physical interfaces. We have two “custom” internal interfaces, mgmt0 (on br0) and tep0 (on br1), to which IP addresses have been assigned and which are successfully communicating across the network. We’ll assume that mgmt0 and tep0 are on different subnets, and that tep0 is assigned to the 192.168.1.0/24 subnet.
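For reference, the two custom internal interfaces could be created and addressed like this (a sketch; the mgmt0 subnet and both addresses are hypothetical, while tep0 sits on the 192.168.1.0/24 subnet mentioned above):

    # Add the internal interfaces to their respective bridges
    ovs-vsctl add-port br0 mgmt0 -- set interface mgmt0 type=internal
    ovs-vsctl add-port br1 tep0 -- set interface tep0 type=internal

    # Address them on separate subnets (the mgmt0 subnet is hypothetical)
    ip addr add 10.10.10.50/24 dev mgmt0
    ip link set mgmt0 up
    ip addr add 192.168.1.50/24 dev tep0
    ip link set tep0 up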

How does traffic flow in this scenario?

  1. Traffic from guest domains attached to br0: The behavior here is as it has been in previous configurations—guest domains attached to br0 will communicate across the physical interfaces in the bond.

  2. Traffic from the Linux host: As it has been in previous scenarios, traffic from the Linux host is driven by the host’s TCP/IP stack and IP routing table(s). Because mgmt0 and tep0 are on different subnets, traffic from the Linux host will go out either br0 (for traffic moving through mgmt0) or br1 (for traffic moving through tep0), and thus will utilize the corresponding physical interfaces in the bonds on those bridges.

  3. Traffic from guest domains attached to br-int: Because the GRE tunnel is on the 192.168.1.0/24 subnet, traffic for the GRE tunnel—which is created and maintained by the OVS process on the Linux host itself—will travel through tep0, which is attached to br1. Thus, the physical interfaces eth2 and eth3 would be leveraged for the GRE tunnel traffic (the routing check after this list shows why).
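To see why the tunnel lands on tep0, you can ask the routing table directly. A sketch (the tep0 address shown is hypothetical, and the exact output varies):

    # The tunnel endpoint is on the 192.168.1.0/24 subnet, so the kernel picks tep0
    ip route get 192.168.1.100
    # Expected output would be something like:
    # 192.168.1.100 dev tep0 src 192.168.1.50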

Summary

The key takeaway from this post, in my mind, is understanding where traffic originates, and separating the two roles OVS plays: it is a switching mechanism (handling guest domain traffic) and, at the same time, a Linux host process itself (creating and maintaining tunnels between hosts).

Hopefully this information is helpful. I am, of course, completely open to your comments, questions, and corrections, so feel free to speak up in the comments below. Courteous comments are always welcome!


Comments:

1.

Sascha, I’m the one that called it an isolated bridge, and that’s because it has no physical interfaces associated with it. I apologize if my wording threw you off. The reason the traffic hits the host’s IP stack is because of the tunnel interface. Without the tunnel interface, the bridge would truly be isolated—not able to communicate outside the host at all. The purpose of the tep0 interface is simply to control which NICs the tunnel endpoint uses. Because tep0 is utilized by the host’s IP stack, and because the tunnel interface connects the bridge to the host’s IP stack, that’s what allows the traffic to flow from the isolated bridge through tep0. You could just as easily have used a physical interface for the tunnel endpoint instead of an OVS internal interface.

Lennie, more “complicated” configurations are on the way. First, though, I need to establish the correct base understanding upon which I can build more in-depth configurations that leverage things like VLANs, network namespaces, source routing, and similar. Patience, my friend…patience. :-)

2.

Let’s say the guest has a port called vnet0 which is connected to an OVS bridge br-int. And a GRE tunnel called gre0 is created, which is also connected to br-int.

As you know, a switch and a bridge are pretty much the same thing.

The OVS bridge uses MAC learning like a normal switch.

When traffic comes onto the bridge from the guest through vnet0, the bridge will look at the forwarding table and might decide that the MAC address of the destination is on gre0. In that case, the traffic is forwarded to gre0.

At gre0 it gets encapsulated with a GRE header. The GRE tunnel is handled by the host.

The host just routes the GRE tunnel packets to the remote_ip, where they get unpacked and delivered on another bridge, which will hopefully know what to do with them and deliver them to the right port, which is probably connected to a VM.
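To see the MAC learning table the commenter describes, OVS provides a command for dumping a bridge’s forwarding database. A sketch:

    # Dump the MAC learning (forwarding) table for br-int
    ovs-appctl fdb/show br-int
    # The output lists port, VLAN, MAC, and age; an entry whose port number
    # corresponds to gre0 means traffic for that MAC goes into the tunnel

An entry pointing at gre0 is exactly the forwarding decision described above: the bridge sends the frame to the tunnel port, where it gets GRE-encapsulated and handed to the host’s IP stack.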
