Etcd Tutorial - Chapter 2: Etcd Cluster Static Discovery

1 Ways to Bootstrap an Etcd Cluster

In production, etcd is normally deployed as a cluster for high availability, to avoid a single point of failure. This section covers how to deploy such a cluster. There are three mechanisms for bootstrapping an etcd cluster:

Static discovery: all cluster members must be known in advance, and each node's address is passed directly at startup via the --initial-cluster flag.

Etcd discovery: static discovery assumes the member information is known before the cluster is built, but in practice the node IPs may not be known ahead of time. In that case an existing etcd cluster can help bootstrap the new one.

The existing cluster acts as a rendezvous point: while the new cluster is being brought up, its members discover each other through the existing one. The official public service discovery.etcd.io works this way.

DNS discovery: node addresses are obtained through DNS queries (SRV records).
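For reference, DNS discovery replaces the static member list with an SRV lookup: etcd is started with --discovery-srv and resolves _etcd-server._tcp (or _etcd-server-ssl._tcp for TLS) records under the given domain. A sketch of the invocation, with example.com and infra0 as placeholder names:

```
# Illustrative fragment only: assumes _etcd-server._tcp.example.com SRV
# records exist and point at every member's peer address.
etcd --name infra0 \
  --discovery-srv example.com \
  --initial-advertise-peer-urls http://infra0.example.com:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster-state new
```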

2 Etcd Cluster Static Discovery

Clusters are usually deployed with 3, 5, 7, or 9 nodes. Why not an even number?

  1. An even-sized cluster runs a higher availability risk: during leader election there is a greater chance of a tied vote, which triggers another election round.
  2. An even-sized cluster cannot operate under certain network partitions. If a partition splits the nodes exactly in half,
    neither side can gather a majority for a write, so by the Raft protocol writes fail and the cluster stops working.
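The majority rule above can be made concrete: an n-node cluster needs floor(n/2)+1 votes to commit a write, so it tolerates n minus that many failures. A quick sketch (quorum and tolerated are helper names for illustration, not etcd tooling):

```shell
#!/bin/sh
# Quorum and fault tolerance for an n-node Raft cluster.
quorum()    { echo $(( $1 / 2 + 1 )); }          # votes needed to commit
tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }   # failures survivable

for n in 3 4 5 6; do
  echo "nodes=$n quorum=$(quorum $n) tolerated=$(tolerated $n)"
done
```

Note that 4 nodes tolerate no more failures than 3, and 6 no more than 5, which is why odd cluster sizes are preferred.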

2.1 Static Startup Options

  1. Etcd cluster on a single machine
  2. Etcd cluster across multiple machines
  3. Etcd cluster with Docker

2.2 Opening Ports

Before building the cluster, open the relevant ports on every machine; otherwise inter-node communication breaks and the cluster fails to form.

Open ports 2379 and 2380 on each machine:

firewall-cmd --zone=public --add-port=2379/tcp --permanent
firewall-cmd --zone=public --add-port=2380/tcp --permanent

Reload the firewall:

firewall-cmd --reload

List the opened ports:

firewall-cmd --list-port

Alternatively, stop firewalld entirely:

systemctl stop firewalld.service    # stop firewalld

systemctl disable firewalld.service # keep firewalld from starting at boot

2.3 Single-Machine Etcd Cluster

On a single machine, the goreman tool can be used to run an etcd cluster.

Install etcd on the machine first, then use goreman to bring the cluster up; otherwise goreman reports that the etcd command was not found.

Steps:

  1. Install etcd.
  2. Install and configure goreman.
  3. Use goreman to start the single-machine cluster.

2.3.1 Install Etcd

Install etcd (see: Etcd Tutorial, Chapter 1, section 2.2 "Installing Etcd on Linux").

2.3.2 Install goreman

Install goreman with `go get github.com/mattn/goreman` (on Go 1.17 and later, `go install github.com/mattn/goreman@latest`).

2.3.3 Write the Procfile

Here a gore directory is created under /root/, and a Procfile.learner file is written inside it:

# Run with goreman (install: `go get github.com/mattn/goreman`)
# Change the path of etcd if the binary is located elsewhere

etcd1: etcd --name infra1 --listen-client-urls http://127.0.0.1:12379 --advertise-client-urls http://127.0.0.1:12379 --listen-peer-urls http://127.0.0.1:12380 --initial-advertise-peer-urls http://127.0.0.1:12380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --enable-pprof

etcd2: etcd --name infra2 --listen-client-urls http://127.0.0.1:22379 --advertise-client-urls http://127.0.0.1:22379 --listen-peer-urls http://127.0.0.1:22380 --initial-advertise-peer-urls http://127.0.0.1:22380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --enable-pprof

etcd3: etcd --name infra3 --listen-client-urls http://127.0.0.1:32379 --advertise-client-urls http://127.0.0.1:32379 --listen-peer-urls http://127.0.0.1:32380 --initial-advertise-peer-urls http://127.0.0.1:32380 --initial-cluster-token etcd-cluster-1 --initial-cluster 'infra1=http://127.0.0.1:12380,infra2=http://127.0.0.1:22380,infra3=http://127.0.0.1:32380' --initial-cluster-state new --enable-pprof
#proxy: etcd grpc-proxy start --endpoints=127.0.0.1:12379,127.0.0.1:22379,127.0.0.1:32379 --listen-addr=127.0.0.1:23790 --advertise-client-url=127.0.0.1:23790 --enable-pprof

2.3.4 Cluster Configuration Flags

--name
The node's name within the cluster; any value works as long as it is unique.

--listen-peer-urls
URLs this node listens on for peer traffic; several may be given. Cluster-internal traffic (leader election, data replication, and so on) flows over these URLs.

--initial-advertise-peer-urls
Peer URLs advertised to the rest of the cluster; other members use them to reach this node.

--listen-client-urls
URLs this node listens on for client traffic; again, several may be given.

--advertise-client-urls
Client URLs advertised to the outside; etcd proxies and clients use them to reach this member.

--initial-cluster-token etcd-cluster-1
Bootstrap token. From it the cluster derives a unique cluster ID and unique member IDs, so another cluster started from the same configuration but a different token will not interfere with this one.

--initial-cluster
The union of all members' initial-advertise-peer-urls.

--initial-cluster-state new
Marks a newly created cluster; use new when bootstrapping, and change it to existing afterwards.
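Tying the flags together: --initial-cluster is simply the comma-separated list of name=peer-url pairs. A small sketch (build_initial_cluster is an illustrative helper, not etcd tooling) that assembles the value used in the Procfile above:

```shell
#!/bin/sh
# Build the --initial-cluster value from alternating name/peer-URL arguments.
build_initial_cluster() {
  out=""
  while [ "$#" -ge 2 ]; do
    out="${out:+$out,}$1=$2"   # append "name=url", comma-separated
    shift 2
  done
  echo "$out"
}

CLUSTER=$(build_initial_cluster \
  infra1 http://127.0.0.1:12380 \
  infra2 http://127.0.0.1:22380 \
  infra3 http://127.0.0.1:32380)
echo "$CLUSTER"
```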

2.3.5 Start the Cluster with goreman

Run in /root/gore:

goreman -f ./Procfile.learner start

Note: the following error means etcd must be installed before goreman can bring up the cluster:

07:29:58 etcd1 | Starting etcd1 on port 5000
07:29:58 etcd2 | Starting etcd2 on port 5100
07:29:58 etcd3 | Starting etcd3 on port 5200
07:29:58 etcd1 | /bin/sh: etcd: command not found
07:29:58 etcd2 | /bin/sh: etcd: command not found
07:29:58 etcd3 | /bin/sh: etcd: command not found
07:29:58 etcd1 | Terminating etcd1
07:29:58 etcd2 | Terminating etcd2
07:29:58 etcd3 | Terminating etcd3

2.3.6 Verify the Cluster

Command:

etcdctl --endpoints=http://localhost:22379 member list

Cluster members:

8211f1d0f64f3269, started, infra1, http://127.0.0.1:12380, http://127.0.0.1:12379, false
91bc3c398fb3c146, started, infra2, http://127.0.0.1:22380, http://127.0.0.1:22379, false
fd422379fda50e48, started, infra3, http://127.0.0.1:32380, http://127.0.0.1:32379, false
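etcdctl member list prints comma-separated fields: ID, status, name, peer URLs, client URLs, is-learner. The sample output above can be post-processed with awk, for instance to pull out each member's name and client URL (a sketch using the sample lines verbatim):

```shell
#!/bin/sh
# Parse `etcdctl member list` output.
# Fields: ID, status, name, peer URLs, client URLs, is-learner.
members='8211f1d0f64f3269, started, infra1, http://127.0.0.1:12380, http://127.0.0.1:12379, false
91bc3c398fb3c146, started, infra2, http://127.0.0.1:22380, http://127.0.0.1:22379, false
fd422379fda50e48, started, infra3, http://127.0.0.1:32380, http://127.0.0.1:32379, false'

# Print name and client URL of every member.
echo "$members" | awk -F', ' '{print $3, $5}'
```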

That completes the single-machine etcd cluster.

2.4 Multi-Machine Etcd Cluster

When building a cluster across machines, ports 2379 and 2380 must stay open on every machine, or the cluster will fail to form.

2.4.1 Host Node Information

The examples below use three nodes: 192.168.13.21 (etcd01), 192.168.13.22 (etcd02), and 192.168.13.23 (etcd03).

2.4.2 Download and Install Etcd

This setup builds on the single-machine installation from Chapter 1, section 2.2 "Installing Etcd on Linux", so downloading and installing etcd on each machine is not repeated here.

2.4.3 Adjust the Configuration on Each Node

1. On each machine, change to the /opt/soft/etcd/etcd-download-test/ directory and copy the etcd and etcdctl binaries to /usr/local/bin so both programs can be invoked directly:

cp etcd etcdctl /usr/local/bin

2. Running etcd now starts a single-node etcd service; Ctrl+C stops it. The fields etcd prints to the console at startup are explained below:

{"level":"info","ts":"2022-06-05T22:46:31.751+0800","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd"]}
{"level":"warn","ts":"2022-06-05T22:46:31.751+0800","caller":"etcdmain/etcd.go:105","msg":"'data-dir' was empty; using default","data-dir":"default.etcd"}
{"level":"info","ts":"2022-06-05T22:46:31.751+0800","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"default.etcd","dir-type":"member"}
{"level":"info","ts":"2022-06-05T22:46:31.751+0800","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2022-06-05T22:46:31.752+0800","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["http://localhost:2379"]}
{"level":"info","ts":"2022-06-05T22:46:31.752+0800","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.4","git-sha":"08407ff76","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":1,"max-cpu-available":1,"member-initialized":true,"name":"default","data-dir":"default.etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"default.etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"],"listen-client-urls":["http://localhost:2379"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2022-06-05T22:46:31.752+0800","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"default.etcd/member/snap/db","took":"75.889µs"}
{"level":"info","ts":"2022-06-05T22:46:31.752+0800","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
{"level":"info","ts":"2022-06-05T22:46:31.753+0800","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","commit-index":6}
{"level":"info","ts":"2022-06-05T22:46:31.753+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=()"}
{"level":"info","ts":"2022-06-05T22:46:31.753+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 3"}
{"level":"info","ts":"2022-06-05T22:46:31.753+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8e9e05c52164694d [peers: [], term: 3, commit: 6, applied: 0, lastindex: 6, lastterm: 3]"}
{"level":"warn","ts":"2022-06-05T22:46:31.754+0800","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2022-06-05T22:46:31.754+0800","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2022-06-05T22:46:31.755+0800","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2022-06-05T22:46:31.755+0800","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8e9e05c52164694d","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-06-05T22:46:31.757+0800","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8e9e05c52164694d","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"],"listen-client-urls":["http://localhost:2379"],"listen-metrics-urls":[]}
{"level":"info","ts":"2022-06-05T22:46:31.757+0800","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"127.0.0.1:2380"}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-05T22:46:33.453+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d is starting a new election at term 3"}
{"level":"info","ts":"2022-06-05T22:46:33.453+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became pre-candidate at term 3"}
{"level":"info","ts":"2022-06-05T22:46:33.454+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgPreVoteResp from 8e9e05c52164694d at term 3"}
{"level":"info","ts":"2022-06-05T22:46:33.454+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became candidate at term 4"}
{"level":"info","ts":"2022-06-05T22:46:33.454+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 4"}
{"level":"info","ts":"2022-06-05T22:46:33.454+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became leader at term 4"}
{"level":"info","ts":"2022-06-05T22:46:33.454+0800","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 4"}
{"level":"info","ts":"2022-06-05T22:46:33.474+0800","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8e9e05c52164694d","local-member-attributes":"{Name:default ClientURLs:[http://localhost:2379]}","request-path":"/0/members/8e9e05c52164694d/attributes","cluster-id":"cdf818194e3a8c32","publish-timeout":"7s"}
{"level":"info","ts":"2022-06-05T22:46:33.475+0800","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-05T22:46:33.475+0800","caller":"embed/serve.go:140","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-05T22:46:33.475+0800","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-06-05T22:46:33.475+0800","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}


Fields in the startup banner:

  1. etcd-version: the etcd version.
  2. git-sha: the git commit the binary was built from.
  3. go-version: the Go version it was built with.
  4. go-os: the operating system it runs on.
  5. go-arch: the CPU architecture it runs on.
  6. max-cpu-set: the number of CPUs etcd is configured to use.
  7. max-cpu-available: the maximum number of CPUs available.
  8. member-initialized: whether this cluster member has already been initialized; false on first start.

Other fields worth noting:

  1. name: the node name, default default.
  2. data-dir: the directory for logs and snapshots, default default.etcd/ under the current working directory.
  3. Peer traffic with other cluster nodes is served on http://localhost:2380.
  4. Client traffic is served on http://localhost:2379.
  5. heartbeat-interval: how often the leader sends heartbeats to followers; 100ms here (the default).
  6. election-timeout: if a follower receives no heartbeat within this interval, it triggers a new election; 1000ms here (the default).
  7. snapshot-count: how many committed transactions trigger a snapshot to disk (100000 in the log above).
  8. The cluster and every node each get a fixed UUID: cluster-id is the cluster's, local-member-id is this machine's, e.g.:

{"level":"info","ts":"2022-06-05T22:46:31.758+0800","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}

  9. At startup the Raft module runs and elects a leader.

2.4.4 Create the Etcd Directories on Each Node (for data and configuration files)

mkdir -p /opt/soft/etcd/etcd_data/ && mkdir -p /opt/soft/etcd/etcd_conf/

2.4.5 Create the Etcd Configuration File on Each Node

Create it under /opt/soft/etcd/etcd_conf/:

vim etcd.conf

Contents:

# Node name; must be unique per machine
ETCD_NAME="etcd01"
# Data directory
ETCD_DATA_DIR="/opt/soft/etcd/etcd_data/"

2.4.6 Create the systemd Unit on Each Node

vim /etc/systemd/system/etcd.service

Contents:

[Unit]
Description=Etcd Server
After=network.target
Wants=network-online.target
 
[Service]
User=root
Type=notify
## Adjust WorkingDirectory, EnvironmentFile and ExecStart to your setup
## 1. WorkingDirectory: where etcd stores its data
WorkingDirectory=/opt/soft/etcd/etcd_data/
## 2. EnvironmentFile: the configuration file; keep the leading "-"
EnvironmentFile=-/opt/soft/etcd/etcd_conf/etcd.conf
## 3. ExecStart: path to the etcd binary
ExecStart=/usr/local/bin/etcd
Restart=always
RestartSec=10s
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

2.4.7 Start and Check the Etcd Service on Each Node

systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd

For reference:

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
systemctl stop etcd
systemctl restart etcd

2.4.8 Add Cluster Information on Each Node

Edit etcd.conf on each machine and add the cluster settings, adjusting the IPs accordingly.

vim /opt/soft/etcd/etcd_conf/etcd.conf

etcd.conf for node 1:

#########################################################
###### Adjust to match each node's actual environment ###
#########################################################
#[Member]
#1. Node name; must be unique
ETCD_NAME="etcd01"
 
#2. Data directory
ETCD_DATA_DIR="/opt/soft/etcd/etcd_data"
 
#3. URL this node listens on for peer traffic (port 2380 on this machine)
ETCD_LISTEN_PEER_URLS="http://192.168.13.21:2380"
 
#4. Peer URL advertised to the other cluster members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.13.21:2380"

#5. URLs this node listens on for client traffic (port 2379 on this machine)
ETCD_LISTEN_CLIENT_URLS="http://192.168.13.21:2379,http://127.0.0.1:2379"
 
#[Clustering]
#6. Client URL advertised for this node
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.13.21:2379"
 
#7. All members of the cluster
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.13.21:2380,etcd02=http://192.168.13.22:2380,etcd03=http://192.168.13.23:2380"
 
#8. Cluster bootstrap token; identical on every node of the same cluster
ETCD_INITIAL_CLUSTER_TOKEN="148e7b13-51fc-4cc8-965b-a7c9d58c18f5"
 
#9. Initial cluster state: "new" when bootstrapping; change to "existing" for later restarts
ETCD_INITIAL_CLUSTER_STATE="new"
 
#10. flannel uses etcd's v2 API while kubernetes uses the v3 API;
#    enable v2 here for flannel compatibility
ETCD_ENABLE_V2="true"

etcd.conf for node 2:

#########################################################
###### Adjust to match each node's actual environment ###
#########################################################
#[Member]
#1. Node name; must be unique
ETCD_NAME="etcd02"
 
#2. Data directory
ETCD_DATA_DIR="/opt/soft/etcd/etcd_data"
 
#3. URL this node listens on for peer traffic (port 2380 on this machine)
ETCD_LISTEN_PEER_URLS="http://192.168.13.22:2380"
 
#4. Peer URL advertised to the other cluster members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.13.22:2380"

#5. URLs this node listens on for client traffic (port 2379 on this machine)
ETCD_LISTEN_CLIENT_URLS="http://192.168.13.22:2379,http://127.0.0.1:2379"
 
#[Clustering]
#6. Client URL advertised for this node
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.13.22:2379"
 
#7. All members of the cluster
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.13.21:2380,etcd02=http://192.168.13.22:2380,etcd03=http://192.168.13.23:2380"
 
#8. Cluster bootstrap token; identical on every node of the same cluster
ETCD_INITIAL_CLUSTER_TOKEN="148e7b13-51fc-4cc8-965b-a7c9d58c18f5"
 
#9. Initial cluster state: "new" when bootstrapping; change to "existing" for later restarts
ETCD_INITIAL_CLUSTER_STATE="new"
 
#10. flannel uses etcd's v2 API while kubernetes uses the v3 API;
#    enable v2 here for flannel compatibility
ETCD_ENABLE_V2="true"

etcd.conf for node 3:

#########################################################
###### Adjust to match each node's actual environment ###
#########################################################
#[Member]
#1. Node name; must be unique
ETCD_NAME="etcd03"
 
#2. Data directory
ETCD_DATA_DIR="/opt/soft/etcd/etcd_data"
 
#3. URL this node listens on for peer traffic (port 2380 on this machine)
ETCD_LISTEN_PEER_URLS="http://192.168.13.23:2380"
 
#4. Peer URL advertised to the other cluster members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.13.23:2380"

#5. URLs this node listens on for client traffic (port 2379 on this machine)
ETCD_LISTEN_CLIENT_URLS="http://192.168.13.23:2379,http://127.0.0.1:2379"
 
#[Clustering]
#6. Client URL advertised for this node
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.13.23:2379"
 
#7. All members of the cluster
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.13.21:2380,etcd02=http://192.168.13.22:2380,etcd03=http://192.168.13.23:2380"
 
#8. Cluster bootstrap token; identical on every node of the same cluster
ETCD_INITIAL_CLUSTER_TOKEN="148e7b13-51fc-4cc8-965b-a7c9d58c18f5"
 
#9. Initial cluster state: "new" when bootstrapping; change to "existing" for later restarts
ETCD_INITIAL_CLUSTER_STATE="new"
 
#10. flannel uses etcd's v2 API while kubernetes uses the v3 API;
#    enable v2 here for flannel compatibility
ETCD_ENABLE_V2="true"

2.4.9 Clear Old Data on Each Node

After modifying /opt/soft/etcd/etcd_conf/etcd.conf, first delete the data saved under /opt/soft/etcd/etcd_data, or restarting the service will fail.

cd /opt/soft/etcd/etcd_data && rm -rf *

2.4.10 Create the Etcd Service on Each Node: etcd.service

This was already done in 2.4.6 "Create the systemd Unit on Each Node", so it is not repeated here.
Reload systemd:

systemctl daemon-reload

2.4.11 Test the Cluster

Once every node is configured, start the service with systemctl start etcd, then run etcdctl member list on any node to list all cluster members.

If startup fails with an "already exists" error, go back to 2.4.8 and change ETCD_INITIAL_CLUSTER_STATE="new" to "existing".
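The new-to-existing change can also be scripted with sed. A minimal sketch, demonstrated on a temporary copy (point CONF at /opt/soft/etcd/etcd_conf/etcd.conf to apply it for real):

```shell
#!/bin/sh
# Flip ETCD_INITIAL_CLUSTER_STATE from "new" to "existing" in etcd.conf.
# Uses a temp file for the demo; set CONF to the real path in production.
CONF=$(mktemp)
echo 'ETCD_INITIAL_CLUSTER_STATE="new"' > "$CONF"

sed -i 's/^ETCD_INITIAL_CLUSTER_STATE="new"$/ETCD_INITIAL_CLUSTER_STATE="existing"/' "$CONF"
cat "$CONF"
```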

Useful commands:

etcdctl member list
etcdctl member list -w table
etcdctl endpoint health
etcdctl endpoint status
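By default etcdctl talks only to http://127.0.0.1:2379; to check the whole cluster, pass every client URL via --endpoints. A sketch that assembles the flag value from the node IPs in 2.4.1:

```shell
#!/bin/sh
# Assemble the --endpoints value from the three node IPs.
ENDPOINTS=""
for host in 192.168.13.21 192.168.13.22 192.168.13.23; do
  ENDPOINTS="${ENDPOINTS:+$ENDPOINTS,}http://$host:2379"
done
echo "$ENDPOINTS"

# Then, for example:
#   etcdctl --endpoints="$ENDPOINTS" endpoint health
#   etcdctl --endpoints="$ENDPOINTS" endpoint status -w table
```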

Cluster members:

328665ad6ab85e87, started, etcd03, http://192.168.13.23:2380, http://192.168.13.23:2379, false
3e3ae68e74e2bac0, started, etcd01, http://192.168.13.21:2380, http://192.168.13.21:2379, false
50ba1b948dea2fe9, started, etcd02, http://192.168.13.22:2380, http://192.168.13.22:2379, false

Service management commands:

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
systemctl stop etcd
systemctl restart etcd

That completes the multi-machine etcd cluster.

2.5 Docker-Based Etcd Cluster

Building an etcd cluster with Docker requires configuring etcd on several machines that already have Docker installed. For Docker installation, see: installing Docker on CentOS 7.

The server nodes used here are 192.168.1.221, 192.168.1.222, and 192.168.1.223.
The process is similar to section 2.5 "Installing Etcd with Docker" in Etcd Tutorial, Chapter 1.

2.5.1 Create the Etcd Data Directory

mkdir -p /opt/soft/etcd/data

2.5.2 Pull the Etcd Image

# pull the etcd image
docker pull quay.io/coreos/etcd:v3.5.13

2.5.3 Configure the Dockerized Etcd on Each Node

With the image pulled, a three-node etcd cluster can be configured. On each server node, create a script file named etcd under /opt/soft/etcd:

cd /opt/soft/etcd/ && vim etcd

The etcd scripts for the three nodes are as follows.
Node 1:

REGISTRY=quay.io/coreos/etcd

# settings shared by all cluster nodes
ETCD_VERSION=v3.5.13
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node-1
NAME_2=etcd-node-2
NAME_3=etcd-node-3
HOST_1=192.168.1.221
HOST_2=192.168.1.222
HOST_3=192.168.1.223
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
DATA_DIR=/opt/soft/etcd/data

## node 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
docker run -d \
  -p 2379:2379 \
  -p 2380:2380 \
  --restart=always \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd ${REGISTRY}:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

Node 2:

REGISTRY=quay.io/coreos/etcd

# settings shared by all cluster nodes
ETCD_VERSION=v3.5.13
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node-1
NAME_2=etcd-node-2
NAME_3=etcd-node-3
HOST_1=192.168.1.221
HOST_2=192.168.1.222
HOST_3=192.168.1.223
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
DATA_DIR=/opt/soft/etcd/data

## node 2
THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
docker run -d \
  -p 2379:2379 \
  -p 2380:2380 \
  --restart=always \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd ${REGISTRY}:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

Node 3:

REGISTRY=quay.io/coreos/etcd

# settings shared by all cluster nodes
ETCD_VERSION=v3.5.13
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node-1
NAME_2=etcd-node-2
NAME_3=etcd-node-3
HOST_1=192.168.1.221
HOST_2=192.168.1.222
HOST_3=192.168.1.223
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
DATA_DIR=/opt/soft/etcd/data

# node 3
THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
docker run -d \
  -p 2379:2379 \
  -p 2380:2380 \
  --restart=always \
  --volume=${DATA_DIR}:/etcd-data \
  --name etcd ${REGISTRY}:${ETCD_VERSION} \
  /usr/local/bin/etcd \
  --data-dir=/etcd-data --name ${THIS_NAME} \
  --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://0.0.0.0:2380 \
  --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://0.0.0.0:2379 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

With the scripts in place on all three machines, run ./etcd in each node's /opt/soft/etcd directory. If the shell reports insufficient permissions, run chmod +x etcd first, then run ./etcd again.
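The three scripts differ only in THIS_NAME and THIS_IP; a single script could derive both from the machine's own IP instead. A sketch, assuming the IPs above (node_identity is an illustrative helper):

```shell
#!/bin/sh
# Pick this node's etcd identity from its primary IP address.
node_identity() {
  case "$1" in
    192.168.1.221) echo "etcd-node-1" ;;
    192.168.1.222) echo "etcd-node-2" ;;
    192.168.1.223) echo "etcd-node-3" ;;
    *)             echo "unknown" ;;
  esac
}

THIS_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
THIS_NAME=$(node_identity "$THIS_IP")
echo "starting $THIS_NAME at $THIS_IP"
# ...followed by the same docker run command as above, using $THIS_NAME/$THIS_IP
```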

2.5.4 Inspect the Dockerized Etcd Cluster

# etcd server version
docker exec etcd /usr/local/bin/etcd --version

# etcdctl and etcdutl client versions
docker exec etcd /usr/local/bin/etcdctl version
docker exec etcd /usr/local/bin/etcdutl version

# list cluster members
docker exec etcd /usr/local/bin/etcdctl member list

# list cluster members (table form)
docker exec etcd /usr/local/bin/etcdctl member list -w table

# cluster endpoint status
docker exec etcd /usr/local/bin/etcdctl endpoint status

# endpoint health
docker exec etcd /usr/local/bin/etcdctl endpoint health

# put a key
docker exec etcd /usr/local/bin/etcdctl put foo bar

# get a key
docker exec etcd /usr/local/bin/etcdctl get foo

# list all keys
docker exec etcd /usr/local/bin/etcdctl get --prefix ""

# delete a key
docker exec etcd /usr/local/bin/etcdctl del foo

That completes the Docker-based etcd cluster.
