The Linux ip netns command

https://blog.kghost.info/2013/03/01/linux-network-emulator/

 

Creating a virtual network environment

Using the command

$ ip netns add net0

we can create a completely isolated new network environment. It has its own set of network interfaces, its own routing table, ARP table, IP address table, iptables, ebtables, and so on. In short, every network-related component is independent.

The ip command normally requires root privileges, but since this article uses ip heavily, the author added a capability to the ip binary so that an ordinary user can run it as well.
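How that capability was added is not shown in the original post; a minimal sketch of one way to do it (the binary path is resolved with which and is an assumption, and cap_sys_admin is very powerful, so this is only appropriate on a lab machine):

$ sudo setcap cap_net_admin,cap_sys_admin+ep $(which ip)
$ getcap $(which ip)    # verify that the file capabilities were attached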

Using the command

$ ip netns list
net0

we can see the network environment we just created.

Entering a virtual network environment

Using the command

$ ip netns exec net0 `command`

we can run any command inside the net0 virtual environment:

$ ip netns exec net0 bash
$ ip ad
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

This opens a shell in the new network environment. As you can see, the new environment contains only a lo device, and this lo is distinct from the lo outside; the two cannot communicate with each other.

Connecting two network environments

The new network environment has no network devices at all and cannot talk to the outside world; it is an island. The method described below connects two network environments together; put simply, it runs a network cable between them.

$ ip netns add net1

First create another network environment, net1. Our goal is to connect net0 and net1.

$ ip link add type veth
$ ip ad
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
81: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 12:39:09:81:3a:dd brd ff:ff:ff:ff:ff:ff
82: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 32:4f:fd:cc:79:1b brd ff:ff:ff:ff:ff:ff

This creates a pair of veth virtual interfaces. They behave like a pipe: packets sent to veth0 come out on veth1, and packets sent to veth1 come out on veth0. It is as if two NICs had been installed in the machine and connected to each other with a cable.

$ ip link set veth0 netns net0
$ ip link set veth1 netns net1

These two commands move veth0 into the net0 environment and veth1 into the net1 environment. Let's look at the result.

$ ip ad
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
$ ip netns exec net0 ip ad
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
81: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 12:39:09:81:3a:dd brd ff:ff:ff:ff:ff:ff
$ ip netns exec net1 ip ad
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
82: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 32:4f:fd:cc:79:1b brd ff:ff:ff:ff:ff:ff

veth0 and veth1 have disappeared from our own environment and now show up in net0 and net1 respectively. Next, a quick connectivity test between net0 and net1.

$ ip netns exec net0 ip link set veth0 up
$ ip netns exec net0 ip address add 10.0.1.1/24 dev veth0
$ ip netns exec net1 ip link set veth1 up
$ ip netns exec net1 ip address add 10.0.1.2/24 dev veth1

Configure the two devices, then use ping to test connectivity:

$ ip netns exec net0 ping -c 3 10.0.1.2
PING 10.0.1.2 (10.0.1.2) 56(84) bytes of data.
64 bytes from 10.0.1.2: icmp_req=1 ttl=64 time=0.101 ms
64 bytes from 10.0.1.2: icmp_req=2 ttl=64 time=0.057 ms
64 bytes from 10.0.1.2: icmp_req=3 ttl=64 time=0.048 ms

--- 10.0.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.048/0.068/0.101/0.025 ms

 

Creating virtual network environments and wiring them together

ip netns add net0
ip netns add net1
ip netns add bridge
ip link add type veth
ip link set dev veth0 name net0-bridge netns net0
ip link set dev veth1 name bridge-net0 netns bridge
ip link add type veth
ip link set dev veth0 name net1-bridge netns net1
ip link set dev veth1 name bridge-net1 netns bridge

Create and configure the br device in the bridge namespace

ip netns exec bridge brctl addbr br
ip netns exec bridge ip link set dev br up
ip netns exec bridge ip link set dev bridge-net0 up
ip netns exec bridge ip link set dev bridge-net1 up
ip netns exec bridge brctl addif br bridge-net0
ip netns exec bridge brctl addif br bridge-net1

Then configure the NICs of the two virtual environments

ip netns exec net0 ip link set dev net0-bridge up
ip netns exec net0 ip address add 10.0.1.1/24 dev net0-bridge
ip netns exec net1 ip link set dev net1-bridge up
ip netns exec net1 ip address add 10.0.1.2/24 dev net1-bridge

Test

$ ip netns exec net0 ping -c 3 10.0.1.2
PING 10.0.1.2 (10.0.1.2) 56(84) bytes of data.
64 bytes from 10.0.1.2: icmp_req=1 ttl=64 time=0.121 ms
64 bytes from 10.0.1.2: icmp_req=2 ttl=64 time=0.072 ms
64 bytes from 10.0.1.2: icmp_req=3 ttl=64 time=0.069 ms

--- 10.0.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.069/0.087/0.121/0.025 ms

Configuring lldpd to check link connectivity

As the number of virtual network environments grows, so does the number of interfaces, and it is easy to forget which interface in which environment connects to what. With the lldp protocol [2] we can clearly see which interface in which environment each NIC is connected to.

There is an open-source Linux implementation of lldp on github [3]; by running an lldp daemon in every environment, we can watch each interface's connections in real time.
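The original post does not show how the daemons were started. A minimal sketch, assuming lldpd is installed and that lldpd/lldpcli accept -u to select the control socket (the socket paths are mine, chosen so the per-namespace instances do not collide on the default one):

ip netns exec net0   lldpd -u /var/run/lldpd-net0.socket
ip netns exec net1   lldpd -u /var/run/lldpd-net1.socket
ip netns exec bridge lldpd -u /var/run/lldpd-bridge.socket

ip netns exec bridge lldpcli -u /var/run/lldpd-bridge.socket show neighbors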

LLDP data seen in the bridge environment

$ lldpcli show neighbors

LLDP neighbors:

Interface:    bridge-net0, via: LLDP, RID: 2, Time: 0 day, 00:06:53
  Chassis:
    ChassisID:    mac 82:be:2a:ec:70:69
    SysName:      localhost
    SysDescr:     net0
    Capability:   Bridge, off
    Capability:   Router, off
    Capability:   Wlan, off
  Port:
    PortID:       mac 82:be:2a:ec:70:69
    PortDescr:    net0-bridge

Interface:    bridge-net1, via: LLDP, RID: 1, Time: 0 day, 00:06:53
  Chassis:
    ChassisID:    mac b2:34:28:b1:be:49
    SysName:      localhost
    SysDescr:     net1
    Capability:   Bridge, off
    Capability:   Router, off
    Capability:   Wlan, off
  Port:
    PortID:       mac b2:34:28:b1:be:49
    PortDescr:    net1-bridge

 

============================================

 

 

 

The ip netns command manages network namespaces. It can create named network namespaces, which can then be referred to by name, making it very convenient to use.

The format of the ip netns command is:
ip [ OPTIONS ] netns  { COMMAND | help }

Help for all ip netns operations can be viewed with ip netns help.


network namespace

A network namespace is logically a copy of the network stack, with its own routes, firewall rules, and network devices.
By default a child process inherits its parent's network namespace. In other words, if no new network namespace is created explicitly, all processes inherit the same default network namespace from the init process.
By convention, a named network namespace is an object under /var/run/netns/ that can be opened. For example, for a network namespace named net1, the file descriptor obtained by opening /var/run/netns/net1 refers to network namespace net1, and by referencing that file descriptor a process's network namespace can be changed.

Listing all named network namespaces

The ip netns list command displays all named network namespaces; in effect it simply lists the network namespace objects under /var/run/netns.


Creating a named network namespace

The ip netns add NAME command creates a named network namespace.
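A hedged illustration that ties the command to the /var/run/netns objects described above (the name net2 is arbitrary):

$ sudo ip netns add net2
$ ip netns list
net2
$ ls /var/run/netns
net2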


Deleting a named network namespace

The ip [-all] netns del [ NAME ] command deletes the network namespace with the given name. If the -all option is given, it tries to delete all network namespaces.

Note that if we have moved a NIC into some network namespace and started a process in that namespace:

$ sudo ip netns add net0
$ sudo ip link set dev eth0 netns net0
$ sudo ip netns exec net0 bash

and then delete network namespace net0 from another bash process:

$ sudo ip netns del net0

the network namespace can indeed be deleted, but until the process exits, the NIC stays inside the namespace you have already deleted.

Finding a process's network namespace

The ip netns identify [PID] command reports a process's network namespace. If no PID is given, it reports the network namespace of the current process.


The command can also be given an explicit PID.
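A hedged illustration of both forms (the namespace name comes from the examples above; the output is what I would expect, not captured from a real session):

$ sudo ip netns exec net0 bash     # open a shell inside net0
# ip netns identify                # no PID: report this shell's namespace
net0
# ip netns identify $$             # explicit PID, same result here
net0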


Finding the PIDs of processes in a network namespace

The ip netns pids NAME command lists the PIDs of the processes in the given network namespace. Under the hood it simply walks /proc and checks whether each process's network namespace is the specified one.
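A hedged illustration (namespace name as above; the command prints the PID of the sleep started below):

$ sudo ip netns exec net0 sleep 1000 &
$ sudo ip netns pids net0
# equivalent in spirit to scanning every /proc/<pid>/ns/net link and keeping
# the PIDs whose link matches the namespace behind /var/run/netns/net0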


Running a command in a given network namespace

The ip [-all] netns exec [ NAME ] cmd command runs a command inside the given network namespace, for example to see which interfaces a particular network namespace has.


ip netns exec is followed by the namespace name, for example neta, and then the command to run. Any valid shell command works, such as ip addr or bash.
Even better, the command does not have to be network-related (although for non-network commands the result is the same as running them outside). For example, after running bash, every subsequent command runs inside that network namespace. The advantage is that you no longer have to prefix every command with ip netns exec NAME; the drawback is that it is easy to lose track of which shell you are in and get confused.
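A hedged illustration of the bash case (the namespace name neta and the truncated output are illustrative):

$ sudo ip netns exec neta bash    # from here on, every command runs inside neta
# ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
# exit                            # leave the namespace shell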


With the -all parameter we can run a command in all network namespaces at once.


In the output, the netns: lines indicate which network namespace each block of results came from.
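A hedged sketch of what that looks like (namespace names and the truncated output are illustrative):

$ sudo ip -all netns exec ip link show
netns: net0
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN ...
netns: net1
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN ...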

Monitoring operations on network namespaces

The ip netns monitor command watches operations on network namespaces; for example, when a network namespace is deleted we receive a notification.
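A hedged illustration, with the monitor running in one terminal while a namespace is deleted in another (the exact wording of the event line is from memory, so treat it as approximate):

# terminal 1
$ ip netns monitor
delete net0

# terminal 2
$ sudo ip netns del net0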


Understanding the ip netns add command

The following walkthrough shows what ip netns add really does.
Look at the ID of the default network namespace:

$ readlink /proc/$$/ns/net

Under /var/run/netns, create a file named mynet that will be used to bind a network namespace:

$ sudo mkdir -p /var/run/netns
$ sudo touch /var/run/netns/mynet


Use the unshare command to create a new network namespace and start a new bash in it:

$ sudo unshare --net bash

Check the ID of the new network namespace:

# readlink /proc/$$/ns/net
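The screenshot with the output is missing; illustratively it looks like the following (the inode number is the one referenced further below, and every system will print its own value):

net:[4026532616]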


Use a bind mount to mount the current bash process's network namespace file onto the mynet file created earlier:

# mount --bind /proc/$$/ns/net /var/run/netns/mynet
# ls -i /var/run/netns/mynet


You can see that the inode of the mynet file has not changed. This shows that thanks to the bind mount, even though there are no longer any processes in the new network namespace, the namespace itself continues to exist.

The series of operations above is equivalent to running: sudo ip netns add mynet
The nsenter command below is equivalent to running: sudo ip netns exec mynet bash

$ sudo nsenter --net=/var/run/netns/mynet bash
# readlink /proc/$$/ns/net


The nsenter command starts a new bash process and joins it to the network namespace associated with mynet (net:[4026532616]).

As the example shows, creating a named network namespace really just means creating a file and then bind-mounting the newly created network namespace file (/proc/$$/ns/net) onto it. Even after every process in that network namespace has exited, the kernel keeps the namespace around, and we can later join it again through the bound file.

References:
ip netns man page
Linux Namespace系列(06):network namespace

 

Original article: https://www.cnblogs.com/sparkdev/p/9253409.html

https://www.cnblogs.com/-xuan/p/10838052.html

The ip command
       Linux's powerful network configuration command, ip.

[root@localhost ~]# ip
Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
       ip [ -force ] -batch filename
where  OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |
                   tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
                   netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila |
                   vrf }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
                    -h[uman-readable] | -iec |
                    -f[amily] { inet | inet6 | ipx | dnet | mpls | bridge | link } |
                    -4 | -6 | -I | -D | -B | -0 |
                    -l[oops] { maximum-addr-flush-attempts } | -br[ief] |
                    -o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
                    -rc[vbuf] [size] | -n[etns] name | -a[ll] | -c[olor]}
[root@localhost ~]#

netns lets a single machine emulate multiple network environments; it is an important building block of network virtualization and isolates different kinds of network applications from each other.

A net namespace has its own independent routing table, iptables policies, and device management. In short, it is all about isolation: if eth0 is moved into netns 1, applications in netns 2 can no longer find eth0, and the iptables policies in netns 1 do not affect those in netns 2.

How to use netns

[root@localhost ~]# ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id

First enable the kernel's IP forwarding.

[root@localhost ~]# vim /etc/sysctl.conf 
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
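The edit made in sysctl.conf is not shown above; the line that sysctl -p is reporting is simply:

net.ipv4.ip_forward = 1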

Add two namespaces

[root@monitor ~]# ip netns add r1
[root@monitor ~]# ip netns add r2
[root@monitor ~]# ip netns list
r2
r1

Look at r1's network.

[root@monitor ~]# ip netns exec r1 ifconfig -a
lo: flags=8<LOOPBACK> mtu 65536
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Add an IP address to r1's loopback interface.

[root@monitor ~]# ip netns exec r1 ifconfig lo 127.0.0.1 up
[root@monitor ~]# ip netns exec r1 ifconfig -a
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

At this point r2 has no such address, because the namespaces are isolated from each other.


Add a pair of interfaces for the network namespaces: one end will go into r1 and the other into r2.

[root@localhost ~]# ip link add veth1.1 type veth peer name veth1.2
[root@localhost ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br-ex state UP mode DEFAULT qlen 1000
link/ether 00:0c:29:4b:bb:d0 brd ff:ff:ff:ff:ff:ff
3: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
link/ether 00:0c:29:4b:bb:d0 brd ff:ff:ff:ff:ff:ff
4: br-in: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT 
link/ether 56:8d:9f:d2:96:21 brd ff:ff:ff:ff:ff:ff
5: veth1.2@veth1.1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
link/ether 7e:ea:fe:98:30:cd brd ff:ff:ff:ff:ff:ff
6: veth1.1@veth1.2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
link/ether a2:48:54:92:c2:ed brd ff:ff:ff:ff:ff:ff

Move the two ends of the pair into the two namespaces.

[root@localhost ~]# ip link set veth1.1 netns r1
[root@localhost ~]# ip link set veth1.2 netns r2

Check r1's network information

[root@localhost ~]# ip netns exec r1 ifconfig -a
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth1.1: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether a2:48:54:92:c2:ed txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Rename r1's veth1.1 to eth0 (and likewise r2's veth1.2, so that the following commands that refer to eth0 in r2 work).

[root@localhost ~]# ip netns exec r1 ip link set veth1.1 name eth0
[root@localhost ~]# ip netns exec r2 ip link set veth1.2 name eth0

Add IP addresses to the two interfaces.

[root@localhost ~]# ip netns exec r1 ifconfig eth0 10.10.1.20/24 up
[root@localhost ~]# ip netns exec r2 ifconfig eth0 10.10.1.21/24 up

Test with ping

[root@localhost ~]# ip netns exec r1 ping 10.10.1.21
PING 10.10.1.21 (10.10.1.21) 56(84) bytes of data.
64 bytes from 10.10.1.21: icmp_seq=1 ttl=64 time=0.042 ms
64 bytes from 10.10.1.21: icmp_seq=2 ttl=64 time=0.036 ms
64 bytes from 10.10.1.21: icmp_seq=3 ttl=64 time=0.043 ms
^C
--- 10.10.1.21 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.036/0.040/0.043/0.006 ms

So far, it is just as if we had created two virtual machines: the two networks are independent of each other, and yet, being on the same subnet, they can reach each other.

 

Now let's use netns to build a complete virtual network. The rough layout is sketched below.

Delete the ip netns namespaces and the veth devices created above, then rebuild the setup along the lines of the diagram, with a new IP plan. The VM network consists of vm1 (192.168.1.2) and vm2 (192.168.1.3); their gateway 192.168.1.1 lives in the router namespace r1, which also has a second virtual NIC with IP 10.10.1.20. The so-called external IP, i.e. the address of the physical NIC (enp49s0f1), or rather of the bridge (br-ex), is 10.10.1.3, and 10.10.1.3 can reach the outside network.
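The original figure is not included here; the sketch below is reconstructed from the commands that follow (all names and addresses are taken from those commands):

    vm1 = netns r2 (eth0 192.168.1.2) ---- veth2.2 ----+
                                                       |
    vm2 = netns r3 (eth0 192.168.1.3) ---- veth3.2 ----+---- br-in
                                                       |
    router = netns r1, eth0 192.168.1.1 -- veth1.2 ----+
                       eth1 10.10.1.20  -- veth4.2 ----+---- br-ex ---- enp49s0f1
                                                             (br-ex holds 10.10.1.3, the "external" 10.10.1.0/24 side)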

ip netns delete r1
ip netns delete r2
ip link del dev veth1.1
ip link del dev veth1.2

 

My current network setup has two NICs. I will experiment with enp49s0f1, which leaves enp49s0f0 untouched.

[root@localhost ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp49s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether d4:5d:64:07:a8:ea brd ff:ff:ff:ff:ff:ff
3: enp49s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether d4:5d:64:07:a8:eb brd ff:ff:ff:ff:ff:ff
[root@localhost ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp49s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d4:5d:64:07:a8:ea brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.2/24 brd 10.10.1.255 scope global enp49s0f0
       valid_lft forever preferred_lft forever
    inet6 fe80::d65d:64ff:fe07:a8ea/64 scope link 
       valid_lft forever preferred_lft forever
3: enp49s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d4:5d:64:07:a8:eb brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.3/24 brd 10.10.1.255 scope global enp49s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::d65d:64ff:fe07:a8eb/64 scope link 
       valid_lft forever preferred_lft forever

The two VMs, vm1 and vm2, are stood in for by network namespaces created with ip netns add (r2 and r3 in the configuration below; a third namespace, r1, is created at the same time because it will be needed later as the router). Each namespace plays the role of one machine. Then veth pairs are created, one per namespace: one end of each pair goes into its namespace, the other end into the bridge (the switch).

r1 will act as the router R1: a veth pair will be added to it and brought up, and it will serve as the gateway 192.168.1.1. In the end r1 has two NICs.

[root@localhost ~]# ip netns add r1
[root@localhost ~]# ip netns add r2
[root@localhost ~]# ip netns add r3
[root@localhost ~]# ip netns list
r3
r2
r1

Then create the veth pairs (three pairs are created here, since the third one will be needed later):

[root@localhost ~]# ip link add veth1.1 type veth peer name veth1.2
[root@localhost ~]# ip link add veth2.1 type veth peer name veth2.2
[root@localhost ~]# ip link add veth3.1 type veth peer name veth3.2
[root@localhost ~]# ip link set dev veth1.2 up
[root@localhost ~]# ip link set dev veth2.2 up
[root@localhost ~]# ip link set dev veth3.2 up
[root@localhost ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp49s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether d4:5d:64:07:a8:ea brd ff:ff:ff:ff:ff:ff
3: enp49s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether d4:5d:64:07:a8:eb brd ff:ff:ff:ff:ff:ff
33: veth1.2@veth1.1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 0a:85:cc:42:57:4f brd ff:ff:ff:ff:ff:ff
34: veth1.1@veth1.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 0a:54:87:03:1c:8f brd ff:ff:ff:ff:ff:ff
35: veth2.2@veth2.1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 0a:e4:37:41:e3:c6 brd ff:ff:ff:ff:ff:ff
36: veth2.1@veth2.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether fe:74:70:bc:fe:cd brd ff:ff:ff:ff:ff:ff
37: veth3.2@veth3.1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether ba:5a:ea:11:1e:e2 brd ff:ff:ff:ff:ff:ff
38: veth3.1@veth3.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 22:47:55:1e:37:49 brd ff:ff:ff:ff:ff:ff
[root@localhost ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp49s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d4:5d:64:07:a8:ea brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.2/24 brd 10.10.1.255 scope global enp49s0f0
       valid_lft forever preferred_lft forever
    inet6 fe80::d65d:64ff:fe07:a8ea/64 scope link 
       valid_lft forever preferred_lft forever
3: enp49s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d4:5d:64:07:a8:eb brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.3/24 brd 10.10.1.255 scope global enp49s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::d65d:64ff:fe07:a8eb/64 scope link 
       valid_lft forever preferred_lft forever
33: veth1.2@veth1.1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 0a:85:cc:42:57:4f brd ff:ff:ff:ff:ff:ff
34: veth1.1@veth1.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:54:87:03:1c:8f brd ff:ff:ff:ff:ff:ff
35: veth2.2@veth2.1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 0a:e4:37:41:e3:c6 brd ff:ff:ff:ff:ff:ff
36: veth2.1@veth2.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether fe:74:70:bc:fe:cd brd ff:ff:ff:ff:ff:ff
37: veth3.2@veth3.1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether ba:5a:ea:11:1e:e2 brd ff:ff:ff:ff:ff:ff
38: veth3.1@veth3.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 22:47:55:1e:37:49 brd ff:ff:ff:ff:ff:ff

Add an IP address to the loopback interface of r1, r2, and r3.

[root@localhost ~]# ip netns exec r1 ifconfig lo 127.0.0.1 up
[root@localhost ~]# ip netns exec r2 ifconfig lo 127.0.0.1 up
[root@localhost ~]# ip netns exec r3 ifconfig lo 127.0.0.1 up
[root@localhost ~]# ip netns exec r1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec r2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec r3 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

Create the bridge

[root@localhost ~]# brctl addbr br-ex
[root@localhost ~]# ip link set br-ex up
[root@localhost ~]# ifconfig 
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 1e:d6:fd:9b:2a:fc txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Move the IP address from enp49s0f1 to the bridge device, and attach the physical NIC to the bridge.

# ip addr del 10.10.1.3/24 dev enp49s0f1
# ip addr add 10.10.1.3/24 dev br-ex
# brctl addif br-ex enp49s0f1

[root@localhost ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp49s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d4:5d:64:07:a8:ea brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.2/24 brd 10.10.1.255 scope global enp49s0f0
       valid_lft forever preferred_lft forever
    inet6 fe80::d65d:64ff:fe07:a8ea/64 scope link 
       valid_lft forever preferred_lft forever
3: enp49s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br-ex state UP group default qlen 1000
    link/ether d4:5d:64:07:a8:eb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d65d:64ff:fe07:a8eb/64 scope link 
       valid_lft forever preferred_lft forever
33: veth1.2@veth1.1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 0a:85:cc:42:57:4f brd ff:ff:ff:ff:ff:ff
34: veth1.1@veth1.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:54:87:03:1c:8f brd ff:ff:ff:ff:ff:ff
35: veth2.2@veth2.1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 0a:e4:37:41:e3:c6 brd ff:ff:ff:ff:ff:ff
36: veth2.1@veth2.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether fe:74:70:bc:fe:cd brd ff:ff:ff:ff:ff:ff
37: veth3.2@veth3.1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether ba:5a:ea:11:1e:e2 brd ff:ff:ff:ff:ff:ff
38: veth3.1@veth3.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 22:47:55:1e:37:49 brd ff:ff:ff:ff:ff:ff
39: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d4:5d:64:07:a8:eb brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.3/24 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::b82c:97ff:fe7e:8562/64 scope link 
       valid_lft forever preferred_lft forever
[root@localhost ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br-ex		8000.d45d6407a8eb	no		enp49s0f1

Add another bridge

[root@localhost ~]# brctl addbr br-in
[root@localhost ~]# ip link set br-in up

Attach the three veth pairs: one end of each goes into its namespace, the other end into the br-in bridge.

#r1 <- veth1.1 ;veth1.2>br-in
#ip link set veth1.1 netns r1
#ip link set veth1.2 up
#brctl addif br-in veth1.2

#r2 <- veth2.1 ;veth2.2>br-in
#ip link set veth2.1 netns r2
#ip link set veth2.2 up
#brctl addif br-in veth2.2


#r3 <- veth3.1 ;veth3.2>br-in
#ip link set veth3.1 netns r3
#ip link set veth3.2 up
#brctl addif br-in veth3.2



[root@localhost ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp49s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether d4:5d:64:07:a8:ea brd ff:ff:ff:ff:ff:ff
3: enp49s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br-ex state UP mode DEFAULT group default qlen 1000
    link/ether d4:5d:64:07:a8:eb brd ff:ff:ff:ff:ff:ff
33: veth1.2@if34: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master br-in state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 0a:85:cc:42:57:4f brd ff:ff:ff:ff:ff:ff link-netnsid 1
35: veth2.2@if36: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master br-in state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 0a:e4:37:41:e3:c6 brd ff:ff:ff:ff:ff:ff link-netnsid 2
37: veth3.2@veth3.1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether ba:5a:ea:11:1e:e2 brd ff:ff:ff:ff:ff:ff
39: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d4:5d:64:07:a8:eb brd ff:ff:ff:ff:ff:ff
40: br-in: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 0a:85:cc:42:57:4f brd ff:ff:ff:ff:ff:ff
[root@localhost ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br-ex		8000.d45d6407a8eb	no		enp49s0f1
br-in		8000.0a85cc42574f	no		veth1.2
							            veth2.2
                                        veth3.2

Give the namespaces (r1, r2, r3) IP addresses

Rename the interfaces first

[root@localhost ~]# ip netns exec r2 ip link set dev veth2.1 name eth0
[root@localhost ~]# ip netns exec r3 ip link set dev veth3.1 name eth0
[root@localhost ~]# ip netns exec r1 ip link set dev veth1.1 name eth0
[root@localhost ~]# ip netns exec r2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
36: eth0@if35: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether fe:74:70:bc:fe:cd brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]#

Assign IPs

[root@localhost ~]# ip netns exec r1 ifconfig eth0 192.168.1.1/24 up
[root@localhost ~]# ip netns exec r2 ifconfig eth0 192.168.1.2/24 up
[root@localhost ~]# ip netns exec r3 ifconfig eth0 192.168.1.3/24 up
[root@localhost ~]# ip netns exec r2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
36: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:74:70:bc:fe:cd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::fc74:70ff:febc:fecd/64 scope link 
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec r3 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
38: eth0@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 22:47:55:1e:37:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.3/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2047:55ff:fe1e:3749/64 scope link 
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec r1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
34: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:54:87:03:1c:8f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.1/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::854:87ff:fe03:1c8f/64 scope link 
       valid_lft forever preferred_lft forever

 

[root@localhost ~]# ip netns exec r2 ip link set dev eth0 up
[root@localhost ~]# ip netns exec r1 ip link set dev eth0 up
[root@localhost ~]# ip netns exec r3 ip link set dev eth0 up
[root@localhost ~]# ip netns exec r2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
36: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:74:70:bc:fe:cd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc74:70ff:febc:fecd/64 scope link 
       valid_lft forever preferred_lft forever
[root@localhost ~]#

veth2.2 and veth3.2 are both attached to the br-in bridge now.

Good: the situation now is as if vm1 and vm2 were plugged into the same switch, namely br-in. For these two VMs to communicate with the outside world, we still have to create a virtual router.

The following steps were already performed above and do not need to be repeated. (start)

[root@localhost ~]# ip netns add r1

Add a veth pair for router R1 and bring it up.

[root@localhost ~]# ip link add veth1.1 type veth peer name veth1.2
[root@localhost ~]# ip link set veth1.1 up
[root@localhost ~]# ip link set veth1.2 up

Attach one end of the pair to the bridge.

[root@localhost ~]# brctl addif br-in veth1.2
[root@localhost ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br-ex		8000.d45d6407a8eb	no		enp49s0f1
br-in		8000.0a85cc42574f	no		veth1.2
							            veth2.2
							            veth3.2

Rename veth1.1 and bring it up

[root@localhost ~]# ip link set veth1.1 netns r1 # move veth1.1 into namespace r1
[root@localhost ~]# ip netns exec r1 ip link set veth1.1 name eth0
[root@localhost ~]# ip netns exec r1 ip link set eth0 up

Add an IP address that will serve as the gateway.

ip netns exec r1 ifconfig eth0 192.168.1.1/24 up
[root@localhost ~]# ip netns exec r1 ifconfig -a 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.1  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::854:87ff:fe03:1c8f  prefixlen 64  scopeid 0x20<link>
        ether 0a:54:87:03:1c:8f  txqueuelen 1000  (Ethernet)
        RX packets 30  bytes 2292 (2.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 936 (936.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

(End of the steps that were already performed above.)

The following steps do need to be performed now.

Point both VMs' default gateway at 192.168.1.1. Here the VMs are represented by the r2 and r3 namespaces.

[root@localhost ~]# ip netns exec r2 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
[root@localhost ~]# ip netns exec r2 route add default gw 192.168.1.1
[root@localhost ~]# ip netns exec r2 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0   UG    0      0              0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
[root@localhost ~]# ip netns exec r3 route add default gw 192.168.1.1
[root@localhost ~]# ip netns exec r3 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0   UG    0      0              0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

At this point the whole left half of the diagram is done.
Now for the right half.

Add another veth pair, then attach one end of it to the bridge

[root@localhost ~]# ip link add veth4.1 type veth peer name veth4.2
[root@localhost ~]# ip link set veth4.1 up
[root@localhost ~]# ip link set veth4.2 up
[root@localhost ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br-ex		8000.d45d6407a8eb	no		enp49s0f1
br-in		8000.0a85cc42574f	no		veth1.2
							            veth2.2
							            veth3.2
[root@localhost ~]#  brctl addif br-ex veth4.2
[root@localhost ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br-ex		8000.8aa7e4e58374	no		enp49s0f1
							            veth4.2
br-in		8000.0a85cc42574f	no		veth1.2
							            veth2.2
							            veth3.2

Move the other end into the router as its second interface, and give it an address on the other network

[root@localhost ~]# ip link set veth4.1 netns r1
[root@localhost ~]# ip netns exec r1 ip link set veth4.1 name eth1
[root@localhost ~]# ip netns exec r1 ifconfig eth1 10.10.1.20/24 up

[root@localhost ~]# ip netns exec r1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
34: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:54:87:03:1c:8f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.1/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::854:87ff:fe03:1c8f/64 scope link 
       valid_lft forever preferred_lft forever
46: eth1@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 36:69:14:6b:a8:9c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.1.20/24 brd 10.10.1.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::3469:14ff:fe6b:a89c/64 scope link 
       valid_lft forever preferred_lft forever

Use the firewall's source NAT to translate the internal addresses, i.e. NAT between r1's two interfaces (eth0 and eth1).

[root@localhost ~]# ip netns exec r1 iptables -t nat -A POSTROUTING -s 192.168.1.0/24 ! -d 192.168.1.0/24 -j SNAT --to-source 10.10.1.20

Test: vm1 can ping vm2, and vm1 can reach hosts on the LAN that the physical machine is on.

[root@localhost ~]# ip netns exec  r3 ping 10.10.1.20
PING 10.10.1.20 (10.10.1.20) 56(84) bytes of data.
64 bytes from 10.10.1.20: icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from 10.10.1.20: icmp_seq=2 ttl=64 time=0.030 ms
^C
--- 10.10.1.20 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1038ms
rtt min/avg/max/mdev = 0.030/0.042/0.055/0.014 ms
[root@localhost ~]# 
[root@localhost ~]# ip netns exec  r3 ping 10.10.1.20
PING 10.10.1.20 (10.10.1.20) 56(84) bytes of data.
64 bytes from 10.10.1.20: icmp_seq=1 ttl=64 time=0.072 ms
^C
--- 10.10.1.20 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 10.10.1.2
PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
64 bytes from 10.10.1.2: icmp_seq=1 ttl=63 time=0.248 ms
^C
--- 10.10.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 10.10.1.3
PING 10.10.1.3 (10.10.1.3) 56(84) bytes of data.
64 bytes from 10.10.1.3: icmp_seq=1 ttl=63 time=0.194 ms
^C
--- 10.10.1.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 10.10.3.160
PING 10.10.3.160 (10.10.3.160) 56(84) bytes of data.
From 192.168.1.1 icmp_seq=1 Destination Net Unreachable
^C
--- 10.10.3.160 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[root@localhost ~]# ip netns exec  r3 ping 10.10.1.5
PING 10.10.1.5 (10.10.1.5) 56(84) bytes of data.
64 bytes from 10.10.1.5: icmp_seq=1 ttl=63 time=0.433 ms
^C
--- 10.10.1.5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 10.10.1.6
PING 10.10.1.6 (10.10.1.6) 56(84) bytes of data.
64 bytes from 10.10.1.6: icmp_seq=1 ttl=63 time=0.426 ms
^C
--- 10.10.1.6 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.054 ms
^C
--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.031 ms
^C
--- 192.168.1.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.054 ms
^C
--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
[root@localhost ~]# ^C
[root@localhost ~]# 
[root@localhost ~]# ip netns exec  r3 ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.070 ms
^C
--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.045 ms
^C
--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.027 ms
^C
--- 192.168.1.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 192.168.1.4
PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
^C
--- 192.168.1.4 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

[root@localhost ~]# ip netns exec  r3 ping 10.10.1.1
PING 10.10.1.1 (10.10.1.1) 56(84) bytes of data.
64 bytes from 10.10.1.1: icmp_seq=1 ttl=254 time=4.49 ms
^C
--- 10.10.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 4.494/4.494/4.494/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 10.10.1.2
PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
64 bytes from 10.10.1.2: icmp_seq=1 ttl=63 time=0.169 ms
^C
--- 10.10.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 10.10.1.3
PING 10.10.1.3 (10.10.1.3) 56(84) bytes of data.
64 bytes from 10.10.1.3: icmp_seq=1 ttl=63 time=0.133 ms
^C
--- 10.10.1.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 10.10.1.4
PING 10.10.1.4 (10.10.1.4) 56(84) bytes of data.
64 bytes from 10.10.1.4: icmp_seq=1 ttl=63 time=0.260 ms
^C
--- 10.10.1.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 10.10.1.5
PING 10.10.1.5 (10.10.1.5) 56(84) bytes of data.
64 bytes from 10.10.1.5: icmp_seq=1 ttl=63 time=0.272 ms
^C
--- 10.10.1.5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 10.10.1.6
PING 10.10.1.6 (10.10.1.6) 56(84) bytes of data.
64 bytes from 10.10.1.6: icmp_seq=1 ttl=63 time=0.257 ms
^C
--- 10.10.1.6 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms
[root@localhost ~]# ip netns exec  r3 ping 10.10.1.20
PING 10.10.1.20 (10.10.1.20) 56(84) bytes of data.
64 bytes from 10.10.1.20: icmp_seq=1 ttl=64 time=0.054 ms
^C
--- 10.10.1.20 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms

At this point, pinging r1's eth1 address 10.10.1.20 from the br-ex switch (which carries a hidden virtual interface of its own) should also work, but it does not until a few kernel parameters are changed:

[root@localhost ~]# ping -I br-ex 10.10.1.20
PING 10.10.1.20 (10.10.1.20) from 10.10.1.3 br-ex: 56(84) bytes of data.
[root@localhost ~]#  echo 1 > /proc/sys/net/ipv4/conf/br-ex/accept_local
[root@localhost ~]#  echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
[root@localhost ~]#  echo 0 > /proc/sys/net/ipv4/conf/br-ex/rp_filter

Now the ping works:

[root@localhost ~]# ping -I br-ex 10.10.1.20
PING 10.10.1.20 (10.10.1.20) from 10.10.1.3 br-ex: 56(84) bytes of data.
64 bytes from 10.10.1.20: icmp_seq=1 ttl=64 time=0.076 ms
64 bytes from 10.10.1.20: icmp_seq=2 ttl=64 time=0.027 ms
64 bytes from 10.10.1.20: icmp_seq=3 ttl=64 time=0.023 ms
64 bytes from 10.10.1.20: icmp_seq=4 ttl=64 time=0.021 ms

At this point, hosts on the external 10.10.1.x network still cannot ping the internal 192.168.1.x network directly. My guess was that this would need an iptables NAT rule on the br-ex side, something like iptables -t nat -A POSTROUTING -s 10.10.1.0/24 ! -d 10.10.1.0/24 -o br-ex -j SNAT --to-source 192.168.1.1, but in my tests that did not work.
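The original leaves this unsolved. Two common approaches that would normally be tried here (my suggestions, not from the article; the port and target address in the second one are only examples):

# option 1: on the external 10.10.1.0/24 host, add a return route via r1's eth1
ip route add 192.168.1.0/24 via 10.10.1.20

# option 2: on r1, publish a specific internal service with DNAT
ip netns exec r1 iptables -t nat -A PREROUTING -d 10.10.1.20 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.2:80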

Of course, you could also run a DHCP server in the left-hand network and have it hand out 192.168.1.1 as the gateway automatically.

[root@localhost ~]# yum -y install dnsmasq
[root@localhost ~]# ip netns exec r1 dnsmasq -F 192.168.1.3,192.168.1.200 --dhcp-option=option:router,192.168.1.1
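To check that DHCP actually works, one could drop vm2's static address and request a lease instead (a hedged sketch; it assumes dhclient is installed and reuses the namespace and interface names from the setup above):

ip netns exec r2 ip addr flush dev eth0
ip netns exec r2 dhclient -v eth0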

Reference: https://www.cnblogs.com/-xuan/p/10838052.html

 
