Notes on Using Consul
1. Overview
Our project originally used Eureka as its service registry. Since Eureka 2.0 will no longer be developed as open source, we took the opportunity to evaluate other components, so we are prepared for whatever comes.
Consul getting started: https://www.consul.io/intro/getting-started/install.html
2. Consul vs. Eureka
Reference: https://www.consul.io/intro/vs/eureka.html
The official comparison claims Consul has features other registry/discovery products lack, but we have not used most of them yet; they are left for later study.
The comparison below focuses on the parts we use or may use. Both support health checks, and both are supported by Spring Cloud. On the CAP axis, Consul emphasizes consistency while Eureka emphasizes availability and partition tolerance:
- Service registration in Consul is somewhat slower than in Eureka, because Consul's Raft protocol requires a majority of nodes to acknowledge a write before the registration is considered successful. When the leader fails, the whole cluster is unavailable during re-election. Consul thus guarantees strong consistency at the cost of availability.
- Eureka registers faster because it does not wait for the registration to replicate to other nodes, and does not even guarantee that replication succeeds. When data diverges, each Eureka node can still serve requests, so a lookup that fails on node A may succeed on node B. Eureka thus guarantees availability at the cost of consistency.
For service registration and discovery, sacrificing consistency is usually acceptable, and in that scenario Eureka performs better.
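The registration-speed tradeoff above comes down to write acknowledgement. A toy sketch (plain Python, not actual Consul or Eureka code) contrasting a Raft-style quorum write with Eureka-style best-effort replication:

```python
# Toy illustration (not real Consul/Eureka code): a Raft-style write commits
# only when a majority of nodes acknowledge it; a Eureka-style write is
# accepted by the local node immediately and replicated best-effort.

def quorum_write(acks: list[bool]) -> bool:
    """Consul-style: committed only if a majority of nodes acked."""
    return sum(acks) > len(acks) // 2

def best_effort_write(acks: list[bool]) -> bool:
    """Eureka-style: the local node accepts the write unconditionally;
    replication to peers may or may not succeed later."""
    return True

# 3-node cluster, one node down:
print(quorum_write([True, True, False]))         # True  (2/3 acked)
print(quorum_write([True, False, False]))        # False (no majority)
print(best_effort_write([False, False, False]))  # True, but reads may diverge
```

This is why a Consul registration blocks on the slower path (and fails entirely during leader election), while a Eureka registration returns immediately at the risk of stale reads on other nodes.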
Comparison of mainstream registry/discovery components (collected from the web):
Feature | Consul | ZooKeeper | etcd | Eureka |
---|---|---|---|---|
Health checking | service status, memory, disk, etc. | (weak) long connection, keepalive | connection heartbeat | configurable |
Multi-datacenter | supported | — | — | — |
KV store | supported | supported | supported | — |
Consensus protocol | raft | zab (paxos-like) | raft | — |
CAP | cp | cp | cp | ap |
Access interface (multi-language) | http and dns | client library | http/grpc | http (sidecar) |
Watch support | full sync / long polling | supported | long polling | long polling / mostly incremental |
Self-monitoring | metrics | — | metrics | metrics |
Security | acl / https | acl | https (weak) | — |
Spring Cloud integration | supported | supported | supported | supported |
Implementation language | go | java | go | java |
3. Installation and Usage
A Consul agent runs in one of two modes: server and client. The server/client distinction exists only at the Consul cluster level and has nothing to do with the application services built on top of the cluster. Agents running in server mode maintain the state of the Consul cluster; the official recommendation is at least 3 server-mode agents per cluster, while the number of client-mode agents is unlimited.
Consul operates through agents, similar to the Logstash agent in an ELK stack or the Zabbix monitoring agent: on each host whose services need to be discovered, a Consul client agent collects the local service information and reports it to the Consul servers, which can themselves be deployed as a cluster.
Installation on macOS: brew install consul
Reference: https://www.consul.io/intro/getting-started/install.html
Start an agent with the web UI in dev mode (single node, local testing only): consul agent -ui -dev -advertise 127.0.0.1
Default UI page: http://localhost:8500/ui
Single-machine test code: https://github.com/yugj/spring-clound-test.git
Note: consul-client-1 calls consul-rest-server and consul-rest-server2, demonstrating service discovery with load balancing.
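The load balancing the demo performs can be sketched as simple round-robin over the discovered instances. The instance list below is hypothetical, standing in for what service discovery would return; this is not the demo's actual Spring Cloud code:

```python
from itertools import cycle

# Hypothetical instance list, as service discovery might return it.
instances = cycle(["consul-rest-server:8080", "consul-rest-server2:8080"])

def next_instance() -> str:
    """Round-robin selection across discovered instances."""
    return next(instances)

picks = [next_instance() for _ in range(4)]
print(picks)
# ['consul-rest-server:8080', 'consul-rest-server2:8080',
#  'consul-rest-server:8080', 'consul-rest-server2:8080']
```

In the real demo, Spring Cloud handles this selection; the point is only that each client request may land on a different instance of the same logical service.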
Cluster configuration:
Server Configuration
The configuration files are stored as JSON. Consul has built-in encryption support for the gossip protocol using a shared secret.
In the terminal, we can use the consul keygen command to generate a key of the necessary length and encoding:
```
root@server1:~# consul keygen
EXz7LFN8hpQ4id8EDYiFoQ==
```
The key must be the same for all servers and clients in a datacenter; members with a different key will refuse to join.
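For intuition: consul keygen in this era (Consul 0.6.x) simply base64-encodes 16 random bytes, which is why the key above is a 24-character base64 string. A rough Python equivalent, as a sketch under that assumption:

```python
import base64
import os

def keygen() -> str:
    """Mimic `consul keygen` (0.6.x era): base64-encode 16 random bytes."""
    return base64.b64encode(os.urandom(16)).decode("ascii")

key = keygen()
print(key)       # a random 24-character base64 string
print(len(key))  # 24
```

(Later Consul versions moved to 32-byte keys; the principle is the same.)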
Create the first file in the /etc/consul.d/server directory in server1.example.com (IP Address: 192.168.1.11). Use the encryption key generated above as the value for the "encrypt" key in the JSON.
server1.example.com
root@server1:~# vim /etc/consul.d/server/config.json
```json
{
  "bind_addr": "192.168.1.11",
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "EXz7LFN8hpQ4id8EDYiFoQ==",
  "log_level": "INFO",
  "enable_syslog": true,
  "enable_debug": true,
  "node_name": "ConsulServer1",
  "server": true,
  "bootstrap_expect": 3,
  "leave_on_terminate": false,
  "skip_leave_on_interrupt": true,
  "rejoin_after_leave": true,
  "retry_join": ["192.168.1.11:8301", "192.168.1.12:8301", "192.168.1.13:8301"]
}
```
More information on what each key means can be found in the documentation.
Copy the contents of this configuration file to the other machines that will act as Consul servers, placing it in the same location as on the first server.
The only values you need to modify on the other servers are the bind address (the bind_addr key) and the node name (node_name).
server2.example.com
root@server2:~# vim /etc/consul.d/server/config.json
```json
{
  "bind_addr": "192.168.1.12",
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "EXz7LFN8hpQ4id8EDYiFoQ==",
  "log_level": "INFO",
  "enable_syslog": true,
  "enable_debug": true,
  "node_name": "ConsulServer2",
  "server": true,
  "bootstrap_expect": 3,
  "leave_on_terminate": false,
  "skip_leave_on_interrupt": true,
  "rejoin_after_leave": true,
  "retry_interval": "30s",
  "retry_join": ["192.168.1.11:8301", "192.168.1.12:8301", "192.168.1.13:8301"]
}
```
server3.example.com
root@server3:~# vim /etc/consul.d/server/config.json
```json
{
  "bind_addr": "192.168.1.13",
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "EXz7LFN8hpQ4id8EDYiFoQ==",
  "log_level": "INFO",
  "enable_syslog": true,
  "enable_debug": true,
  "node_name": "ConsulServer3",
  "server": true,
  "bootstrap_expect": 3,
  "leave_on_terminate": false,
  "skip_leave_on_interrupt": true,
  "rejoin_after_leave": true,
  "retry_interval": "30s",
  "retry_join": ["192.168.1.11:8301", "192.168.1.12:8301", "192.168.1.13:8301"]
}
```
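Since the three server configs differ only in bind_addr and node_name (plus the optional retry_interval), they could be generated from one template. A hypothetical helper, not part of any Consul tooling:

```python
import json

# The three server addresses from the configs above.
SERVERS = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]

def server_config(index: int, encrypt_key: str) -> dict:
    """Render the per-server config.json, varying only the host-specific
    fields. retry_interval is omitted here but could be added the same way."""
    return {
        "bind_addr": SERVERS[index],
        "datacenter": "dc1",
        "data_dir": "/var/consul",
        "encrypt": encrypt_key,
        "log_level": "INFO",
        "enable_syslog": True,
        "enable_debug": True,
        "node_name": f"ConsulServer{index + 1}",
        "server": True,
        "bootstrap_expect": 3,
        "leave_on_terminate": False,
        "skip_leave_on_interrupt": True,
        "rejoin_after_leave": True,
        "retry_join": [ip + ":8301" for ip in SERVERS],
    }

cfg = server_config(1, "EXz7LFN8hpQ4id8EDYiFoQ==")
print(json.dumps(cfg, indent=2))  # config for server2 (192.168.1.12)
```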
Starting the cluster
Now we have everything in place to get a consul cluster up and running.
Start server1.example.com first.
```
root@server1:~# su consul
consul@server1:~$ consul agent -config-dir /etc/consul.d/server/
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Joining cluster...
    Join completed. Synced with 1 initial agents
==> Consul agent running!
         Node name: 'ConsulServer1'
        Datacenter: 'dc1'
            Server: true (bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 192.168.1.11 (LAN: 8301, WAN: 8302)
    Gossip encrypt: true, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>
```
The service should start up and occupy the terminal window. It will attempt to connect to the two other servers and keep retrying until they are up.
Now start Consul in the other two servers (server2.example.com, server3.example.com) in the same manner.
server2.example.com
```
root@server2:~# su consul
consul@server2:~$ consul agent -config-dir /etc/consul.d/server/
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Joining cluster...
    Join completed. Synced with 1 initial agents
==> Consul agent running!
         Node name: 'ConsulServer2'
        Datacenter: 'dc1'
            Server: true (bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 192.168.1.12 (LAN: 8301, WAN: 8302)
    Gossip encrypt: true, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>
```
server3.example.com
```
root@server3:~# su consul
consul@server3:~$ consul agent -config-dir /etc/consul.d/server/
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Joining cluster...
    Join completed. Synced with 1 initial agents
==> Consul agent running!
         Node name: 'ConsulServer3'
        Datacenter: 'dc1'
            Server: true (bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 192.168.1.13 (LAN: 8301, WAN: 8302)
    Gossip encrypt: true, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>
```
These servers (server2.example.com, server3.example.com) will connect to server1.example.com, completing the cluster with one server as the leader and the other two as followers.
You can see the members of the cluster (servers and clients) by asking consul for its members on any of the machines:
root@server3:~#consul members
Node | Address | Status | Type | Build | Protocol | DC |
---|---|---|---|---|---|---|
ConsulServer1 | 192.168.1.11:8301 | alive | server | 0.6.4 | 2 | dc1 |
ConsulServer2 | 192.168.1.12:8301 | alive | server | 0.6.4 | 2 | dc1 |
ConsulServer3 | 192.168.1.13:8301 | alive | server | 0.6.4 | 2 | dc1 |
The cluster is fully operational now and applications can connect. We'll configure and start the client next.
Client Configuration
The client is also a member of the system, and can connect to servers for information about the infrastructure.
Consul clients are very light-weight and simply forward requests to the servers. They provide a method of insulating your servers and offload the responsibility of knowing the servers' addresses from the applications that use Consul.
Create the configuration file under the /etc/consul.d/client directory.
root@client:~# vim /etc/consul.d/client/config.json
```json
{
  "bind_addr": "192.168.1.21",
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "EXz7LFN8hpQ4id8EDYiFoQ==",
  "log_level": "INFO",
  "enable_syslog": true,
  "enable_debug": true,
  "node_name": "ConsulClient",
  "server": false,
  "service": {
    "name": "Apache",
    "tags": ["HTTP"],
    "port": 80,
    "check": {"script": "curl localhost >/dev/null 2>&1", "interval": "10s"}
  },
  "rejoin_after_leave": true,
  "retry_join": ["192.168.1.11", "192.168.1.12", "192.168.1.13"]
}
```
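The check block above is a script check: Consul runs the command every interval and maps its exit code to a health status (0 is passing, 1 is warning, anything else is critical). A minimal Python re-implementation of that mapping, for illustration only:

```python
import subprocess

def run_script_check(command: str) -> str:
    """Run a health-check script and map its exit code to a Consul status,
    following Consul's script-check convention:
    exit 0 -> passing, exit 1 -> warning, anything else -> critical."""
    code = subprocess.run(
        command,
        shell=True,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ).returncode
    return {0: "passing", 1: "warning"}.get(code, "critical")

print(run_script_check("true"))    # passing
print(run_script_check("exit 2"))  # critical
```

So the `curl localhost` check above marks the Apache service passing whenever curl exits 0, i.e. whenever the local web server answers.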
Start Consul
```
root@client:~# su consul
consul@client:~$ consul agent -config-dir /etc/consul.d/client
consul@client:~$ consul members
```
Node | Address | Status | Type | Build | Protocol | DC |
---|---|---|---|---|---|---|
ConsulClient | 192.168.1.21:8301 | alive | client | 0.6.4 | 2 | dc1 |
ConsulServer1 | 192.168.1.11:8301 | alive | server | 0.6.4 | 2 | dc1 |
ConsulServer2 | 192.168.1.12:8301 | alive | server | 0.6.4 | 2 | dc1 |
ConsulServer3 | 192.168.1.13:8301 | alive | server | 0.6.4 | 2 | dc1 |
You can see that the client has joined the cluster. Let us try to list the available services from this client using the REST API.
```
consul@client:~$ curl -s http://client.example.com:8500/v1/catalog/services
{"ConsulClient": {
    "ID": "ConsulClient",
    "Service": "Apache",
    "Tags": ["HTTP"],
    "Address": "",
    "Port": 80,
    "EnableTagOverride": false,
    "CreateIndex": 0,
    "ModifyIndex": 0}
}
```
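Consumers typically parse this JSON to find service names and ports. A small sketch using the sample payload from the response above:

```python
import json

# Sample payload copied from the response shown above.
payload = """
{"ConsulClient": {"ID": "ConsulClient", "Service": "Apache",
                  "Tags": ["HTTP"], "Address": "", "Port": 80,
                  "EnableTagOverride": false,
                  "CreateIndex": 0, "ModifyIndex": 0}}
"""

# Extract a service-name -> port mapping from the entries.
services = {entry["Service"]: entry["Port"]
            for entry in json.loads(payload).values()}
print(services)  # {'Apache': 80}
```

In a live setup the payload would come from an HTTP GET against the agent's port 8500 instead of a string literal.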
We can also list the members via the same REST endpoint.
```
consul@client:~$ curl -s http://client.example.com:8500/v1/agent/members
[
 {"Name":"ConsulClient","Addr":"192.168.1.21","Port":8301,"Tags":{"build":"0.6.4:26a0ef8c","dc":"dc1","role":"node","vsn":"2","vsn_max":"3","vsn_min":"1"},"Status":1,"ProtocolMin":1,"ProtocolMax":3,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":4,"DelegateCur":4},
 {"Name":"ConsulServer1","Addr":"192.168.1.11","Port":8301,"Tags":{"build":"0.6.4:26a0ef8c","dc":"dc1","expect":"3","port":"8300","role":"consul","vsn":"2","vsn_max":"3","vsn_min":"1"},"Status":1,"ProtocolMin":1,"ProtocolMax":3,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":4,"DelegateCur":4},
 {"Name":"ConsulServer2","Addr":"192.168.1.12","Port":8301,"Tags":{"build":"0.6.4:26a0ef8c","dc":"dc1","expect":"3","port":"8300","role":"consul","vsn":"2","vsn_max":"3","vsn_min":"1"},"Status":1,"ProtocolMin":1,"ProtocolMax":3,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":4,"DelegateCur":4},
 {"Name":"ConsulServer3","Addr":"192.168.1.13","Port":8301,"Tags":{"build":"0.6.4:26a0ef8c","dc":"dc1","expect":"3","port":"8300","role":"consul","vsn":"2","vsn_max":"3","vsn_min":"1"},"Status":1,"ProtocolMin":1,"ProtocolMax":3,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":4,"DelegateCur":4}
]
```
If the client fails or is shut down, it will rejoin the cluster automatically after a restart, since rejoin_after_leave and retry_join are set in its JSON configuration.
Reference:
https://imaginea.gitbooks.io/consul-devops-handbook/content/server_configuration.html