Setting up Swarm with Docker 1.11 (Consul discovery) on CentOS 7

1. Install Docker Machine

Docker Machine is well supported on all major Linux distributions. First, download the latest release of Docker Machine from GitHub; here we use curl to fetch version 0.2.0.
64-bit OS:

 curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine

32-bit OS:

curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine

After downloading it, make the docker-machine file in /usr/local/bin/ executable:

#  chmod +x /usr/local/bin/docker-machine

Once that is done, verify the installation by running docker-machine -v, which prints the version installed on the system.

# docker-machine -v

Docker Machine does not need to be installed on CentOS 7 for the setup below.

Building Swarm with Docker (Docker 1.10.x + Swarm 1.1.2 + Consul) on CentOS 7

Highlights: containers can communicate across hosts and across networks, and containers can be given fixed IPs.
Environment:

Role                       Address            OS / Docker version
swarm manager00 / consul   192.168.12.190     CentOS 7, Docker 1.10.2
swarm manager01            192.168.12.191     CentOS 7, Docker 1.10.2
swarm node01               192.168.12.192     CentOS 7, Docker 1.10.2
swarm node02               192.168.12.197     CentOS 7, Docker 1.10.2

1. Install Docker 1.10.2 on all four hosts

	12.190:
vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://192.168.12.190:8500 --cluster-advertise=192.168.12.190:2375
 
	12.191:
vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://192.168.12.190:8500 --cluster-advertise=192.168.12.191:2375
 
 
	12.192:
vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://192.168.12.190:8500 --cluster-advertise=192.168.12.192:2375
 
	12.197:
vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://192.168.12.190:8500 --cluster-advertise=192.168.12.197:2375
 
Restart the Docker service:
systemctl daemon-reload
systemctl restart docker
systemctl status docker
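
Once the daemon is back up, it is worth confirming it picked up the cluster-store settings. A minimal check (not part of the original notes, assuming the daemon exposes TCP 2375 as configured above):

# Docker 1.10 prints the configured cluster store/advertise values in its info output
docker info 2>/dev/null | grep -i cluster
# and the daemon should be listening on the TCP port we enabled
ss -lnt | grep 2375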

Deployment:
1. Deploy Consul (on 12.190):

[root@swarm_consul ~]# docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
 
[root@swarm_consul ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
2b1d7075409c        progrium/consul     "/bin/start -server -"   52 minutes ago      Up 52 minutes       53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp   consul
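
Before pointing Swarm at it, Consul itself can be sanity-checked through its HTTP API (an optional step, not in the original walkthrough):

# Should return the address of the Consul leader, e.g. "192.168.12.190:8300"
curl -s http://192.168.12.190:8500/v1/status/leader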

2. Deploy the Swarm managers (12.190 / 12.191)

[root@swarm_consul ~]# docker run -d -p 4000:4000 --name swarm_manager00 swarm manage -H :4000 --replication --advertise 192.168.12.190:4000  consul://192.168.12.190:8500
 
[root@swarm_manager01 ~]# docker run -d -p 4000:4000 --name swarm_manager01 swarm manage -H :4000 --replication --advertise 192.168.12.191:4000  consul://192.168.12.190:8500
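
With --replication enabled, only one of the two managers acts as primary at any time; the other runs as a replica. A quick, optional way to see which role each instance took is to look at its container logs (exact wording varies by swarm version):

docker logs swarm_manager00 2>&1 | tail -n 20
docker logs swarm_manager01 2>&1 | tail -n 20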

3. Deploy the Swarm nodes (12.192 / 12.197)

[root@swarm_node01 ~]# docker run -d --name swarm_node01 swarm join --advertise=192.168.12.192:2375 consul://192.168.12.190:8500

[root@swarm_node02 ~]# docker run -d --name swarm_node02 swarm join --advertise=192.168.12.197:2375 consul://192.168.12.190:8500
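
To confirm both nodes registered themselves in Consul (optional), the swarm image's list subcommand can read the discovery backend directly:

# Should print 192.168.12.192:2375 and 192.168.12.197:2375 once registration completes
docker run --rm swarm list consul://192.168.12.190:8500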

4. Check cluster status

[root@swarm_consul ~]# docker -H :4000 info
Containers: 2
 Running: 2
 Paused: 0
 Stopped: 0
Images: 7
Server Version: swarm/1.1.2
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
 swarm_node01: 192.168.12.192:2375
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 3.885 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-229.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ Error: (none)
  └ UpdatedAt: 2016-02-23T08:06:29Z
 swarm_node02: 192.168.12.197:2375
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.887 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-229.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ Error: (none)
  └ UpdatedAt: 2016-02-23T08:05:49Z
Plugins: 
 Volume: 
 Network: 
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 5.772 GiB
Name: 80fed3a89a0b
[root@swarm_consul ~]#

As shown above, 12.190 is currently the primary manager.
5. Create a container

[root@swarm_consul ~]# docker -H :4000 run -d --name nginx01 nginx
d3351e5dd561da312e92aa039dde4457e7e45f8f0e5ef339bef5f26bfadfd70b
[root@swarm_consul ~]# docker -H :4000 ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
d3351e5dd561        nginx               "nginx -g 'daemon off"   9 seconds ago       Up 8 seconds        80/tcp, 443/tcp     swarm_node02/nginx01
[root@swarm_consul ~]#
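
The spread strategy happened to place nginx01 on swarm_node02. If a container must land on a specific node, standalone Swarm also honors constraint filters; a hypothetical example (nginx02 is not part of the original run):

# Pin a container to swarm_node01 by node name
docker -H :4000 run -d --name nginx02 -e constraint:node==swarm_node01 nginx
docker -H :4000 ps --filter name=nginx02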

6. Verify manager high availability
12.190 is currently the primary. Stop its manager container and check whether the primary role fails over to 12.191.

Stop the manager container on 12.190:

[root@swarm_consul ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
80fed3a89a0b        swarm               "/swarm manage -H :40"   49 minutes ago      Up 49 minutes       2375/tcp, 0.0.0.0:4000->4000/tcp                                                 swarm_manager00
2b1d7075409c        progrium/consul     "/bin/start -server -"   4 hours ago         Up 2 hours          53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp   consul
[root@swarm_consul ~]# docker stop swarm_manager00
[root@swarm_consul ~]# docker -H :4000 info
Cannot connect to the Docker daemon. Is the docker daemon running on this host?

Check the status on 12.191; it has taken over as the primary:

[root@swarm_manager01 ~]# docker -H :4000 info
Containers: 3
 Running: 3
 Paused: 0
 Stopped: 0
Images: 8
Server Version: swarm/1.1.2
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
 swarm_node01: 192.168.12.192:2375
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 3.885 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-229.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ Error: (none)
  └ UpdatedAt: 2016-02-23T08:14:09Z
 swarm_node02: 192.168.12.197:2375
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.887 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-229.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ Error: (none)
  └ UpdatedAt: 2016-02-23T08:14:00Z
Plugins: 
 Volume: 
 Network: 
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 5.772 GiB
Name: 19a1634047d1
 

The container created earlier is still running:

[root@swarm_manager01 ~]# docker -H :4000 ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
d3351e5dd561        nginx               "nginx -g 'daemon off"   4 minutes ago       Up 4 minutes        80/tcp, 443/tcp     swarm_node02/nginx01
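
To finish the failover test, the stopped manager can be brought back; it is expected to rejoin as a replica while 12.191 stays primary (an optional extra step, not in the original notes):

docker start swarm_manager00
# On 12.190 the local manager should now report itself as a replica
docker -H 192.168.12.190:4000 info | grep -i -A 1 role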

For more details, see Pangxie's blog post: http://www.pangxie.space/docker/480

7. Cross-host container communication (on the same network), implemented with VXLAN

Install Open vSwitch 2.3.0 LTS

Installation steps:
1. Install dependencies: yum -y install openssl-devel wget kernel-devel
2. Install development tools: yum groupinstall "Development Tools"
3. Add a build user: adduser ovswitch, then switch to it and go to its home directory: su - ovswitch
4. Download the source: wget http://openvswitch.org/releases/openvswitch-2.3.0.tar.gz
5. Extract it: tar xfz openvswitch-2.3.0.tar.gz
6. Create the build directory and move the tarball into it: mkdir -p ~/rpmbuild/SOURCES && cd ~/rpmbuild/SOURCES && mv ~/openvswitch-2.3.0.tar.gz .
7. Remove the openvswitch-kmod dependency from the spec file and create a new spec file:

sed 's/openvswitch-kmod, //g' openvswitch-2.3.0/rhel/openvswitch.spec > openvswitch-2.3.0/rhel/openvswitch_no_kmod.spec

8. Build the RPM: rpmbuild -bb --without check ~/openvswitch-2.3.0/rhel/openvswitch_no_kmod.spec
9. Install the generated RPM:

yum localinstall /home/ovswitch/rpmbuild/RPMS/x86_64/openvswitch-2.3.0-1.x86_64.rpm

10. Start the service: systemctl start openvswitch.service
11. Check the service status: systemctl -l status openvswitch.service
The following error may appear (I did not run into it myself):

openvswitch.service - LSB: Open vSwitch switch
  Loaded: loaded (/etc/rc.d/init.d/openvswitch)
  Active: activating (start) since Thu 2014-12-04 18:35:32 CST; 1min 30s ago
  Control: 13694 (openvswitch)
  CGroup: /system.slice/openvswitch.service
          ├─13694 /bin/sh /etc/rc.d/init.d/openvswitch start
          ├─13696 /bin/sh /usr/share/openvswitch/scripts/ovs-ctl start --system-id=random
          ├─13697 tee -a /var/log/openvswitch/ovs-ctl.log
          ├─13723 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
          ├─13724 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
          └─13725 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
Dec 04 18:35:33 localhost.localdomain openvswitch[13694]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 04 18:35:33 localhost.localdomain openvswitch[13694]: Creating empty database /etc/openvswitch/conf.db ovsdb-tool: I/O error: /etc/openvswitch/conf.db: failed to lock lockfile (Resource temporarily unavailable)
Dec 04 18:35:33 localhost.localdomain openvswitch[13694]: [FAILED]
Dec 04 18:35:33 localhost.localdomain openvswitch[13694]: Inserting openvswitch module [  OK  ]

Fix:

yum install policycoreutils-python
mkdir /etc/openvswitch
semanage fcontext -a -t openvswitch_rw_t "/etc/openvswitch(/.*)?"
restorecon -Rv /etc/openvswitch

Then restart the service:

systemctl stop openvswitch.service
systemctl start openvswitch.service
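
If the service now starts cleanly, ovs-vsctl should be able to talk to the database (a quick sanity check):

ovs-vsctl --version
ovs-vsctl show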

About Open vSwitch

Open vSwitch (OVS) is a virtual switch, originally led by Nicira Networks, that runs on virtualization platforms such as KVM and Xen. On these platforms OVS provides layer-2 switching for dynamically changing endpoints and gives fine-grained control over access policies, network isolation, traffic monitoring, and so on within the virtual network.
The core is implemented in portable C. Reference: http://orangebrain.blog.51cto.com/11178429/1741242

Environment: 192.168.1.100 and 192.168.1.101
Docker is installed on both hosts as described earlier and is not repeated here.
Configure OVS
(1) The Open vSwitch side of host10 and host11 is configured with the following two scripts:

	#host10
[root@host10 ~]# cat vsctl-add.sh
#!/bin/bash
ovs-vsctl add-br br0  # create two virtual switches
ovs-vsctl add-br br1
 
ifconfig eno16777736 0 up    # move the host's IP onto br1
ifconfig br1 192.168.1.100/24 up
route add default gw 192.168.1.1
 
ovs-vsctl add-port br1 eno16777736  # add the physical NIC to br1
ovs-vsctl add-port br0 docker0  # add docker0 to br0
 
ifconfig br0 172.17.0.2/16 up  # assign IPs to br0 and docker0
ifconfig docker0 172.17.0.1/16 up
 
#host11
[root@host11 ~]# cat vsctl-add.sh
#!/bin/bash
ovs-vsctl add-br br0  
ovs-vsctl add-br br1
 
ifconfig eno16777736 0 up
ifconfig br1 192.168.1.101/24 up
route add default gw 192.168.1.1
ovs-vsctl add-port br1 eno16777736
ovs-vsctl add-port br0 docker0
 
ifconfig br0 172.17.0.4/16 up
ifconfig docker0 172.17.0.3/16 up

Note:
When running these two scripts on the physical hosts over SSH, launch them with nohup ./vsctl-add.sh & ;
otherwise the connection drops mid-way and the script does not finish.
If the servers are in a remote data center, double-check every step before running the scripts.
Ideally each server has two NICs, one public and one internal, so that losing one link does not lock you out.

If something goes wrong, simply delete the bridges and start over:
ovs-vsctl del-br br0
ovs-vsctl del-br br1
(2) Configure VXLAN for cross-host connectivity

	#host10
ovs-vsctl add-port br0 vx1 -- set interface vx1 type=vxlan options:remote_ip=192.168.1.101
 
#host11
ovs-vsctl add-port br0 vx1 -- set interface vx1 type=vxlan options:remote_ip=192.168.1.100
 
#check the result after running the commands
[root@host10 ~]# ovs-vsctl show
a8251e22-bb31-4ee6-8321-49fbd0f1b735
    Bridge "br0"
        Port "vx1"
            Interface "vx1"
                type: vxlan
                options: {remote_ip="192.168.1.101"}
        Port "veth1pl5407"
            Interface "veth1pl5407"
        Port "br0"
            Interface "br0"
                type: internal
        Port "docker0"
            Interface "docker0"
        Port "veth1pl4977"
            Interface "veth1pl4977"
    Bridge "br1"
        Port "eth0"
            Interface "eth0"
        Port "br1"
            Interface "br1"
                type: internal
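
Before involving any containers, the tunnel itself can be checked from the hosts: br0 on host10 (172.17.0.2) and br0 on host11 (172.17.0.4) now sit on the same layer-2 segment, so they should reach each other through the VXLAN port (an optional check, not in the original notes):

# on host10
ping -c 3 172.17.0.4
# on host11
ping -c 3 172.17.0.2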

Create four containers
pipework is used to give each container a fixed IP address; these commands can later be added to /etc/rc.local so they are applied automatically at boot.

	#host10
docker run -itd --net=none --name test1 centos:6 /bin/bash
docker run -itd --net=none --name test2 centos:6 /bin/bash
pipework br0 test1 172.17.0.101/16@172.17.0.1
pipework br0 test2 172.17.0.102/16@172.17.0.1
 
#host11
docker run -itd --net=none --name test3 centos:6 /bin/bash
docker run -itd --net=none --name test4 centos:6 /bin/bash
pipework br0 test3 172.17.0.103/16@172.17.0.3
pipework br0 test4 172.17.0.104/16@172.17.0.3
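
With the pipework addresses assigned, cross-host reachability can be verified directly from inside a container (an optional check; if it fails, see the troubleshooting note below):

# on host10: ping test3 (172.17.0.103, running on host11) from test1
docker exec test1 ping -c 3 172.17.0.103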


Problem encountered: the containers could not reach each other at first; on the first host a container could not even reach its own bridge. Restart the container (docker stop test1; docker start test1)
and reassign the bridge IPs:

ifconfig br0 172.17.0.2/16 up   
ifconfig docker0 172.17.0.1/16 up

After bouncing the interfaces (down/up), rerun pipework, e.g.
pipework br0 test1 172.17.0.100/16@172.17.0.1, and everything should then be reachable.

Ubuntu deployment

Recommended reference: http://www.cnblogs.com/yuuyuu/p/5180827.html

Analysis of Docker container networking: http://ju.outofmemory.cn/entry/255894
This article is very detailed and covers all three network modes.
