Diving into OpenStack Network Architecture - Part 4 - Connecting to Public Network

https://blogs.oracle.com/ronen/entry/diving_into_openstack_network_architecture3

By Ronen Kofman on Jul 13, 2014

In the previous post we discussed routing in OpenStack; we saw how routing is done between two networks inside an OpenStack deployment using a router implemented inside a network namespace. In this post we will extend the routing capabilities and show how we can route not only between two internal networks but also to a public network. We will also see how Neutron can assign a floating IP to allow a VM to receive a public IP and become accessible from the public network.

Use case #5: Connecting VMs to the public network

A “public network”, for the purpose of this discussion, is any network which is external to the OpenStack deployment. This could be another network inside the data center, the internet, or just another private network which is not controlled by OpenStack.

To connect the deployment to a public network we first have to create a network in OpenStack and designate it as public. This network will be the target for all outgoing traffic from VMs inside the OpenStack deployment. At this time VMs cannot be directly connected to a network designated as public; the traffic can only be routed from a private network to a public network using an OpenStack-created router. To create a public network in OpenStack we simply use the net-create command from Neutron and set the router:external option to True. In our example we will create a public network in OpenStack called “my-public”:

# neutron net-create my-public --router:external=True

Created a new network:

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 5eb99ac3-905b-4f0e-9c0f-708ce1fd2303 |
| name                      | my-public                            |
| provider:network_type     | vlan                                 |
| provider:physical_network | default                              |
| provider:segmentation_id  | 1002                                 |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 9796e5145ee546508939cd49ad59d51f     |
+---------------------------+--------------------------------------+

In our deployment, eth3 on the control node is a non-IP’ed interface and we will use it as the connection point to the external public network. To do that we simply add eth3 to a bridge on OVS called “br-ex”. This is the bridge Neutron will route the traffic to when a VM connects to the public network:

# ovs-vsctl add-port br-ex eth3
# ovs-vsctl show
8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1
.
.
.
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth3"
            Interface "eth3"
.
.
.
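
The bridge name “br-ex” is not arbitrary: in a default Open vSwitch based setup the L3 agent is typically configured to use br-ex as its external bridge, which is why plugging eth3 into it connects the router gateway to the outside world. A minimal sketch of the relevant setting, assuming the default configuration file location, would look like this:

# /etc/neutron/l3_agent.ini (assumed default path)
external_network_bridge = br-ex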

 

For this exercise we have created a public network with the IP range 180.180.180.0/24, accessible from eth3. This public network is provided from the datacenter side and has a gateway at 180.180.180.1 which connects it to the datacenter network. To connect this network to our OpenStack deployment we will create a subnet on our “my-public” network with the same IP range and tell Neutron what its gateway is:

# neutron subnet-create my-public 180.180.180.0/24 --name public_subnet --enable_dhcp=False --allocation-pool start=180.180.180.2,end=180.180.180.100 --gateway=180.180.180.1

Created a new subnet:

+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| allocation_pools | {"start": "180.180.180.2", "end": "180.180.180.100"} |
| cidr             | 180.180.180.0/24                                     |
| dns_nameservers  |                                                      |
| enable_dhcp      | False                                                |
| gateway_ip       | 180.180.180.1                                        |
| host_routes      |                                                      |
| id               | ecadf103-0b3b-46e8-8492-4c5f4b3ea4cd                 |
| ip_version       | 4                                                    |
| name             | public_subnet                                        |
| network_id       | 5eb99ac3-905b-4f0e-9c0f-708ce1fd2303                 |
| tenant_id        | 9796e5145ee546508939cd49ad59d51f                     |
+------------------+------------------------------------------------------+

Next we need to connect the router to our newly created public network. We do this using the following command:

# neutron router-gateway-set my-router my-public

Set gateway for router my-router

Note: We use the term “public network” for two things. One is the actual public network available from the datacenter (180.180.180.0/24); for clarity we’ll call this network the “external public network”. The second place we use the term “public network” is within OpenStack, for the network we call “my-public”, which is the interface network inside the OpenStack deployment. We also refer to two “gateways”: one is the gateway used by the external public network (180.180.180.1) and the other is the gateway interface on the router (180.180.180.2).

After performing the operation above, the router, which had two interfaces, is also connected to a third interface called the gateway (this is the router gateway). A router can have multiple interfaces, to connect to regular internal subnets, and one gateway to connect to the “my-public” network. A common mistake would be to try to connect the public network as a regular interface; the operation can succeed but no connection will be made to the external world. After we have created a public network and a subnet and connected them to the router, the network topology view shows the router attached to the two internal networks and to “my-public”.
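
As a quick sanity check from the command line (a hedged example; my-router is the router created in the previous post), router-show should now report an external gateway entry pointing at the “my-public” network id rather than listing it among the regular interfaces:

# neutron router-show my-router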

Looking into the router’s namespace we see that another interface was added with an IP on the 180.180.180.0/24 network; this IP is 180.180.180.2 which is the router gateway interface:

# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 ip addr
.
.
22: qg-c08b8179-3b: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:a4:58:40 brd ff:ff:ff:ff:ff:ff
    inet 180.180.180.2/24 brd 180.180.180.255 scope global qg-c08b8179-3b
    inet6 2606:b400:400:3441:f816:3eff:fea4:5840/64 scope global dynamic
       valid_lft 2591998sec preferred_lft 604798sec
    inet6 fe80::f816:3eff:fea4:5840/64 scope link
       valid_lft forever preferred_lft forever
.
.

At this point the router’s gateway address (180.180.180.2) is reachable from the VMs and the VMs can ping it. We can also ping the external gateway (180.180.180.1) from the VMs, as well as reach the network this gateway is connected to.
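
For example, from inside one of the VMs (a hedged illustration, assuming console or SSH access to the guest) connectivity to both the router gateway and the external gateway can be checked with a simple ping:

$ ping -c 3 180.180.180.2
$ ping -c 3 180.180.180.1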

If we look into the router namespace we see that two lines are added to the NAT table in iptables:

# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 iptables-save
.
.
-A neutron-l3-agent-snat -s 20.20.20.0/24 -j SNAT --to-source 180.180.180.2
-A neutron-l3-agent-snat -s 10.10.10.0/24 -j SNAT --to-source 180.180.180.2
.
.

This will change the source IP of outgoing packets from the networks net1 and net2 to 180.180.180.2. When we ping an address on the public network from within the VMs, the request will appear to come from this IP address.
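
One way to observe this (a hedged illustration, assuming tcpdump is installed on the control node) is to sniff the external interface while pinging from one of the VMs; the outgoing ICMP echo requests should show 180.180.180.2 as their source address:

# tcpdump -n -i eth3 icmp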

The routing table inside the namespace will route any outgoing traffic to the gateway of the public network as we defined it when we created the subnet, in this case 180.180.180.1:

# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         180.180.180.1   0.0.0.0         UG    0      0        0 qg-c08b8179-3b
10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 qr-15ea2dd1-65
20.20.20.0      0.0.0.0         255.255.255.0   U     0      0        0 qr-dc290da0-0a
180.180.180.0   0.0.0.0         255.255.255.0   U     0      0        0 qg-c08b8179-3b

 

Those two pieces ensure that a request from a VM trying to reach the public network will be NAT’ed to 180.180.180.2 as a source and routed to the public network’s gateway. We can also see that IP forwarding is enabled inside the namespace to allow routing:

# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

 

Use case #6: Attaching a floating IP to a VM

Now that the VMs can access the public network we would like to take the next step and allow an external client to access the VMs inside the OpenStack deployment. We will do that using a floating IP. A floating IP is an IP provided by the public network which the user can assign to a particular VM, making it accessible to an external client.

To create a floating IP, the first step is to connect the VM to a public network as we have shown in the previous use case. The second step will be to generate a floating IP from the command line:

# neutron floatingip-create my-public

Created a new floatingip:

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 180.180.180.3                        |
| floating_network_id | 5eb99ac3-905b-4f0e-9c0f-708ce1fd2303 |
| id                  | 25facce9-c840-4607-83f5-d477eaceba61 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 9796e5145ee546508939cd49ad59d51f     |
+---------------------+--------------------------------------+

The user can generate as many IPs as are available on the “my-public” network. Assigning the floating IP can be done either from the GUI or from the command line; in this example we use the GUI to associate the floating IP with the VM.
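
For reference, the same association can also be done from the command line. A minimal sketch, assuming we first look up the Neutron port that carries the VM’s fixed IP (20.20.20.2) and then pass its id together with the floating IP id from the output above:

# neutron port-list | grep 20.20.20.2
# neutron floatingip-associate 25facce9-c840-4607-83f5-d477eaceba61 <port-id>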

Under the hood we can look at the router namespace and see the following additional lines in its iptables:

-A neutron-l3-agent-OUTPUT -d 180.180.180.3/32 -j DNAT --to-destination 20.20.20.2
-A neutron-l3-agent-PREROUTING -d 180.180.180.3/32 -j DNAT --to-destination 20.20.20.2
-A neutron-l3-agent-float-snat -s 20.20.20.2/32 -j SNAT --to-source 180.180.180.3

These lines are performing the NAT operation for the floating IP. In this case, if an incoming request arrives and its destination is 180.180.180.3 it will be translated to 20.20.20.2, and vice versa.

Once a floating IP is associated we can connect to the VM. It is important to make sure there are security group rules which allow this, for example:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

 

Those will allow ping and ssh.
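
For example, from an external client on the 180.180.180.0/24 network (a hedged illustration; the guest username depends on the image used and is shown here as a placeholder) the VM is now reachable through its floating IP:

ping 180.180.180.3
ssh <guest-user>@180.180.180.3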

Iptables is a sophisticated and powerful tool; to better understand all the bits and pieces of how the chains are structured in the different tables, we can look at one of the many iptables tutorials available online and read up on any specific details.

Summary

This post was about connecting VMs in the OpenStack deployment to a public network. It shows how, using namespaces and routing tables, we can route not only inside the OpenStack environment but also to the outside world.

This will also be the last post in the series for now. Networking is one of the most complicated areas in OpenStack and gaining a good understanding of it is key. If you read all four posts you should have a good starting point to analyze and understand different network topologies in OpenStack. We can apply the same principles shown here to understand more network concepts such as Firewall as a Service, Load Balancer as a Service, the metadata service, etc. The general method will be to look into a namespace and figure out how certain functionality is implemented using the regular Linux networking features, in the same way we did throughout this series.

As we said in the beginning, the use cases shown here are just examples of one method to configure networking in OpenStack and there are many others. All the examples here use the Open vSwitch plugin and can be used right out of the box. When analyzing another plugin or a specific feature’s operation it will be useful to compare the features here to their equivalent method with the plugin you choose to use. In many cases vendor plugins will use Open vSwitch, bridges or namespaces and some of the same principles and methods shown here.

The goal of this series is to make OpenStack networking accessible to the average user. This series takes a bottom-up approach and, using simple use cases, tries to build a complete picture of how the network architecture works. Unlike some other resources we did not start out by explaining the different agents and their functionality, but instead tried to explain what they do and what the end result looks like. A good next step would be to go to one of those resources and try to see how the different agents implement the functionality explained here.

That’s it for now

@RonenKofman


