【Consul】Consul Environment Deployment

Consul is a distributed, highly available tool for service discovery and configuration sharing that supports multiple datacenters. It was developed by HashiCorp in Go and is open source under the Mozilla Public License 2.0. Consul supports health checks and can be queried over both HTTP and DNS; its HTTP API also provides a key-value store.
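As a quick illustration of the HTTP and DNS interfaces mentioned above (these commands assume an already running agent on the local machine; the key name foo/bar is just a placeholder, not part of the original text):

# Store and read back a key-value pair through the HTTP API (default port 8500)
curl -X PUT -d 'some-value' http://127.0.0.1:8500/v1/kv/foo/bar
curl http://127.0.0.1:8500/v1/kv/foo/bar?raw

# Resolve the built-in "consul" service through the DNS interface (default port 8600)
dig @127.0.0.1 -p 8600 consul.service.consul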

1   How to Get Consul

At the time of writing, the latest version is v0.6.4.

Source code: https://github.com/hashicorp/consul

Binary downloads: https://www.consul.io/downloads.html

Mailing list: https://groups.google.com/group/consul-tool/

Official site: https://www.consul.io/

Official demo: http://demo.consul.io/ui/

2   Setting Up the Consul Environment

2.1 Network Plan

Hostname  IP               Role    Datacenter
node0     192.168.192.120  Server  DataCenter1
node1     192.168.192.121  Server  DataCenter1
node2     192.168.192.122  Server  DataCenter1
node3     192.168.192.123  Client  DataCenter1

2.2 Software Environment

CentOS: 7.2.1511

Consul: v0.6.4

2.3 Deployment

1. Install the consul binary

   Download the Linux x64 package from the downloads address listed above, then install the binary as shown below (a sketch of the download step comes first, followed by the install commands).
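The exact release URL in this sketch follows HashiCorp's standard release layout for version 0.6.4 and is an assumption rather than something taken from the original text:

[ceph@node0 consul]$ wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip
[ceph@node0 consul]$ unzip consul_0.6.4_linux_amd64.zip   # the archive contains a single 'consul' binary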

[ceph@node0 consul]$ sudo cp consul /usr/bin/
[ceph@node0 consul]$ sudo chmod 755 /usr/bin/consul

Check whether the installation succeeded:

[ceph@node0 consul]$ consul version
Consul v0.6.4
Consul Protocol: 3 (Understands back to: 1)
[ceph@node0 consul]$

 

View the help:

[ceph@node0 consul]$ consul --help
usage: consul [--version] [--help] <command> [<args>]

Available commands are:
    agent          Runs a Consul agent
    configtest     Validate config file
    event          Fire a new event
    exec           Executes a command on Consul nodes
    force-leave    Forces a member of the cluster to enter the "left" state
    info           Provides debugging information for operators
    join           Tell Consul agent to join cluster
    keygen         Generates a new encryption key
    keyring        Manages gossip layer encryption keys
    leave          Gracefully leaves the Consul cluster and shuts down
    lock           Execute a command holding a lock
    maint          Controls node or service maintenance mode
    members        Lists the members of a Consul cluster
    monitor        Stream logs from a Consul agent
    reload         Triggers the agent to reload configuration files
    rtt            Estimates network round trip time between nodes
    version        Prints the Consul version
    watch          Watch for changes in Consul

A Consul cluster can be set up in one of two ways: bootstrap mode or non-bootstrap mode; the flags involved are contrasted in the sketch below, and the rest of this section walks through bootstrap mode.
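The walkthrough in 2.3.1 uses the -bootstrap-expect flag, where the servers wait until the expected number have joined and then elect a leader automatically. The other common pattern marks exactly one server with -bootstrap so that it becomes leader immediately. A minimal sketch of that pattern, reusing the addresses from the network plan (how this sketch maps onto the author's two mode names is an assumption), looks like this:

# Exactly one server may carry -bootstrap; it elects itself leader at startup
consul agent -server -bootstrap -data-dir=/tmp/consul -node=node0 -bind=192.168.192.120 -dc=dc1

# The remaining servers start without it and join the bootstrapped node
consul agent -server -data-dir=/tmp/consul -node=node1 -bind=192.168.192.121 -dc=dc1 -join=192.168.192.120
consul agent -server -data-dir=/tmp/consul -node=node2 -bind=192.168.192.122 -dc=dc1 -join=192.168.192.120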

2.3.1  Bootstrap Mode

1. Start the agent

On the first node, start the agent in server mode, specifying the expected number of server nodes, the node name (which must be unique within the datacenter), and the address to bind to.

The command is as follows:

[ceph@node0 consul]$ consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=node0 -bind=192.168.192.120 -dc=dc1
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'node0'
        Datacenter: 'dc1'
            Server: true (bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 192.168.192.120 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>

==> Log data will now stream in as it occurs:

    2016/07/05 20:52:06 [INFO] serf: EventMemberJoin: node0 192.168.192.120
    2016/07/05 20:52:06 [INFO] serf: EventMemberJoin: node0.dc1 192.168.192.120
    2016/07/05 20:52:06 [INFO] raft: Node at 192.168.192.120:8300 [Follower] entering Follower state
    2016/07/05 20:52:06 [INFO] consul: adding LAN server node0 (Addr: 192.168.192.120:8300) (DC: dc1)
    2016/07/05 20:52:06 [INFO] consul: adding WAN server node0.dc1 (Addr: 192.168.192.120:8300) (DC: dc1)
    2016/07/05 20:52:06 [ERR] agent: failed to sync remote state: No cluster leader
    2016/07/05 20:52:08 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.

The sync fails because the current datacenter does not yet have a leader server.

Next, deploy the agent as a server on the other two machines in turn.

Node node1:

[ceph@node1 consul]$ consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=node1 -bind=192.168.192.121 -dc=dc1

Node node2:

[ceph@node2 consul]$ consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=node2 -bind=192.168.192.122 -dc=dc1

At this point, none of the three nodes knows about the other servers. Taking node0 as an example:

[ceph@node0 consul]$ consul members
Node   Address               Status  Type   Build  Protocol  DC
node0  192.168.192.120:8301  alive  server  0.6.4  2        dc1
[ceph@node0 consul]$

View the Consul cluster information:

[ceph@node0 consul]$ consul info
agent:
    check_monitors = 0
    check_ttls = 0
    checks = 0
    services = 1
build:
    prerelease =
    revision = 26a0ef8c
    version = 0.6.4
consul:
    bootstrap = false
    known_datacenters = 1
    leader = false
    server = true
……

The current node is a follower.

2. Trigger leader election

   Since a Consul cluster generally needs 3 to 5 servers, join node1 and node2 from node0.

[ceph@node0 consul]$ consul join 192.168.192.121
Successfully joined cluster by contacting 1 nodes.
[ceph@node0 consul]$ consul join 192.168.192.122
Successfully joined cluster by contacting 1 nodes.
[ceph@node0 consul]$

Observe the Consul logs on the three nodes:

Node0:
   2016/07/05 21:10:55 [INFO] agent: (LAN) joining: [192.168.192.122]
   2016/07/05 21:10:55 [INFO] serf: EventMemberJoin: node2 192.168.192.122
   2016/07/05 21:10:55 [INFO] agent: (LAN) joined: 1 Err: <nil>
   2016/07/05 21:10:55 [INFO] consul: adding LAN server node2 (Addr: 192.168.192.122:8300) (DC: dc1)
   2016/07/05 21:10:55 [INFO] consul: Attempting bootstrap with nodes: [192.168.192.120:8300 192.168.192.121:8300 192.168.192.122:8300]
   2016/07/05 21:10:55 [INFO] consul: New leader elected: node2
   2016/07/05 21:10:56 [INFO] agent: Synced service 'consul'


Node1:
   2016/07/05 21:10:55 [INFO] serf: EventMemberJoin: node2 192.168.192.122
   2016/07/05 21:10:55 [INFO] consul: adding LAN server node2 (Addr: 192.168.192.122:8300) (DC: dc1)
   2016/07/05 21:10:55 [INFO] consul: Attempting bootstrap with nodes: [192.168.192.121:8300 192.168.192.120:8300 192.168.192.122:8300]
   2016/07/05 21:10:56 [INFO] consul: New leader elected: node2
   2016/07/05 21:10:57 [INFO] agent: Synced service 'consul'


Node2:
   2016/07/05 21:10:55 [INFO] serf: EventMemberJoin: node0 192.168.192.120
   2016/07/05 21:10:55 [INFO] serf: EventMemberJoin: node1 192.168.192.121
   2016/07/05 21:10:55 [INFO] consul: adding LAN server node0 (Addr: 192.168.192.120:8300) (DC: dc1)
   2016/07/05 21:10:55 [INFO] consul: Attempting bootstrap with nodes: [192.168.192.122:8300 192.168.192.120:8300 192.168.192.121:8300]
   2016/07/05 21:10:55 [INFO] consul: adding LAN server node1 (Addr: 192.168.192.121:8300) (DC: dc1)
   2016/07/05 21:10:55 [WARN] raft: Heartbeat timeout reached, starting election
   2016/07/05 21:10:55 [INFO] raft: Node at 192.168.192.122:8300 [Candidate] entering Candidate state
   2016/07/05 21:10:55 [INFO] raft: Election won. Tally: 2
   2016/07/05 21:10:55 [INFO] raft: Node at 192.168.192.122:8300 [Leader] entering Leader state
   2016/07/05 21:10:55 [INFO] consul: cluster leadership acquired
   2016/07/05 21:10:55 [INFO] consul: New leader elected: node2
   2016/07/05 21:10:55 [INFO] raft: pipelining replication to peer 192.168.192.121:8300
   2016/07/05 21:10:55 [INFO] raft: pipelining replication to peer 192.168.192.120:8300
   2016/07/05 21:10:55 [INFO] consul: member 'node2' joined, marking health alive
   2016/07/05 21:10:55 [INFO] consul: member 'node0' joined, marking health alive
   2016/07/05 21:10:55 [INFO] consul: member 'node1' joined, marking health alive
   2016/07/05 21:10:58 [INFO] agent: Synced service 'consul'

The logs show that node2 has been elected leader.

View the members on node0:

[ceph@node0 consul]$ consul members
Node   Address               Status  Type   Build  Protocol  DC
node0  192.168.192.120:8301  alive  server  0.6.4  2        dc1
node1  192.168.192.121:8301  alive  server  0.6.4  2        dc1
node2  192.168.192.122:8301  alive  server  0.6.4  2         dc1
[ceph@node0 consul]$

View the info output:

[ceph@node0 consul]$ consul info
agent:
    check_monitors = 0
    check_ttls = 0
    checks = 0
    services = 1
build:
    prerelease =
    revision = 26a0ef8c
    version = 0.6.4
consul:
    bootstrap = false
    known_datacenters = 1
    leader = false
    server = true
……

View the Consul info on node2:

[ceph@node2 consul]$ consul info
agent:
    check_monitors = 0
    check_ttls = 0
    checks = 0
    services = 1
build:
    prerelease =
    revision = 26a0ef8c
    version = 0.6.4
consul:
    bootstrap = false
    known_datacenters = 1
    leader = true
    server = true

3. Start the agent as a client on node3

[ceph@node3 consul]$ consul agent -data-dir=/tmp/consul -node=node3 -bind=192.168.192.123 -dc=dc1
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'node3'
        Datacenter: 'dc1'
            Server: false (bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 192.168.192.123 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>

==> Log data will now stream in as it occurs:

   2016/07/05 21:21:02 [INFO] serf: EventMemberJoin: node3 192.168.192.123
   2016/07/05 21:21:02 [ERR] agent: failed to sync remote state: No known Consul servers

Add node3 from node0:

[ceph@node0 consul]$ consul join 192.168.192.123
Successfully joined cluster by contacting 1 nodes.
[ceph@node0 consul]$ consul members
Node   Address               Status  Type    Build  Protocol  DC
node0  192.168.192.120:8301  alive   server  0.6.4  2         dc1
node1  192.168.192.121:8301  alive   server  0.6.4  2         dc1
node2  192.168.192.122:8301  alive   server  0.6.4  2         dc1
node3  192.168.192.123:8301  alive   client  0.6.4  2         dc1
[ceph@node0 consul]$

The logs on node3 are as follows:

   2016/07/05 21:21:57 [INFO] serf: EventMemberJoin: node0 192.168.192.120
   2016/07/05 21:21:57 [INFO] serf: EventMemberJoin: node2 192.168.192.122
   2016/07/05 21:21:57 [INFO] serf: EventMemberJoin: node1 192.168.192.121
   2016/07/05 21:21:57 [INFO] consul: adding server node0 (Addr: 192.168.192.120:8300) (DC: dc1)
   2016/07/05 21:21:57 [INFO] consul: adding server node2 (Addr: 192.168.192.122:8300) (DC: dc1)
   2016/07/05 21:21:57 [INFO] consul: adding server node1 (Addr: 192.168.192.121:8300) (DC: dc1)
   2016/07/05 21:21:57 [INFO] consul: New leader elected: node2
   2016/07/05 21:21:57 [INFO] agent: Synced node info
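
With the client agent joined, the service-discovery path described in the introduction can be exercised end to end from node3. The service name web and port 8080 below are hypothetical examples, not part of the original walkthrough:

# Register a hypothetical service "web" with the local client agent on node3
curl -X PUT -d '{"Name": "web", "Port": 8080}' http://127.0.0.1:8500/v1/agent/service/register

# Look it up again from any node, via the catalog HTTP API or the DNS interface
curl http://127.0.0.1:8500/v1/catalog/service/web
dig @127.0.0.1 -p 8600 web.service.consul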

4. Shut down node3 and then node2:
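
The original text does not show the command used to stop the agents; a graceful shutdown can be triggered with Ctrl+C in the agent's terminal or, as in this sketch, with the leave command against the local agent:

[ceph@node2 consul]$ consul leave   # gracefully leave the cluster and shut the agent down
Graceful leave complete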

The logs on node0 and node1 are as follows:

Node0:
   2016/07/05 21:24:00 [INFO] serf: EventMemberLeave: node2 192.168.192.122
   2016/07/05 21:24:00 [INFO] consul: removing LAN server node2 (Addr: 192.168.192.122:8300) (DC: dc1)
   2016/07/05 21:24:00 [WARN] raft: Heartbeat timeout reached, starting election
   2016/07/05 21:24:00 [INFO] raft: Node at 192.168.192.120:8300 [Candidate] entering Candidate state
   2016/07/05 21:24:01 [INFO] raft: Duplicate RequestVote for same term: 2
   2016/07/05 21:24:02 [WARN] raft: Election timeout reached, restarting election
   2016/07/05 21:24:02 [INFO] raft: Node at 192.168.192.120:8300 [Candidate] entering Candidate state
   2016/07/05 21:24:02 [INFO] raft: Election won. Tally: 2
   2016/07/05 21:24:02 [INFO] raft: Node at 192.168.192.120:8300 [Leader] entering Leader state
   2016/07/05 21:24:02 [INFO] consul: cluster leadership acquired
   2016/07/05 21:24:02 [INFO] consul: New leader elected: node0
   2016/07/05 21:24:02 [INFO] raft: pipelining replication to peer 192.168.192.121:8300
   2016/07/05 21:24:02 [INFO] consul: member 'node2' left, deregistering
   2016/07/05 21:24:03 [INFO] agent.rpc: Accepted client: 127.0.0.1:35701

Node1:
   2016/07/05 21:24:00 [INFO] consul: removing LAN server node2 (Addr: 192.168.192.122:8300) (DC: dc1)
   2016/07/05 21:24:00 [WARN] raft: Rejecting vote request from 192.168.192.120:8300 since we have a leader: 192.168.192.122:8300
   2016/07/05 21:24:01 [WARN] raft: Heartbeat timeout reached, starting election
   2016/07/05 21:24:01 [INFO] raft: Node at 192.168.192.121:8300 [Candidate] entering Candidate state
   2016/07/05 21:24:02 [INFO] raft: Node at 192.168.192.121:8300 [Follower] entering Follower state
   2016/07/05 21:24:02 [INFO] consul: New leader elected: node0
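
The logs show that after node2 (the previous leader) left, node0 won the new election, so the two remaining servers keep the datacenter available. A simple way to confirm the current leader from any surviving node is the status endpoint of the HTTP API; this check is an addition to the original walkthrough:

# Ask the local agent which server currently holds Raft leadership;
# after the election above this should return "192.168.192.120:8300"
curl http://127.0.0.1:8500/v1/status/leader

# List the Raft peer set as seen by the servers
curl http://127.0.0.1:8500/v1/status/peers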

