(一) ES Cluster Setup
1 Install elasticsearch-8.13.3
-
Download the packages
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.13.3-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.13.3-linux-x86_64.tar.gz.sha512
sha512sum elasticsearch-8.13.3-linux-x86_64.tar.gz # compute the checksum
cat elasticsearch-8.13.3-linux-x86_64.tar.gz.sha512 # the contents should match, confirming integrity
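sha512sum can also do the comparison for you; a minimal alternative, assuming the .sha512 file keeps its standard "<hash>  <filename>" format:
sha512sum -c elasticsearch-8.13.3-linux-x86_64.tar.gz.sha512 # prints "... OK" on success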
-
Install the plugins (same for master and data)
Download the plugins
mkdir /elastic && cd /elastic
wget https://halo.corp.kuaishou.com/api/cloud-storage/v1/public-objects/docs_heap/elasticsearch-analysis-hanlp-8.13.3.1.zip
wget https://halo.corp.kuaishou.com/api/cloud-storage/v1/public-objects/docs_heap/elasticsearch-analysis-pinyin-8.13.3.1.zip
Extract the tarball
tar -xzf elasticsearch-8.13.3-linux-x86_64.tar.gz # master and data share one tarball; extract it twice and rename the copies to tell them apart
mv elasticsearch-8.13.3 elasticsearch-8.13.3-master # likewise elasticsearch-8.13.3-data for the data node
cd /elastic/elasticsearch-8.13.3-master # or elasticsearch-8.13.3-data
List installed plugins
./bin/elasticsearch-plugin list
Install
./bin/elasticsearch-plugin install -b file:/elastic/elasticsearch-analysis-pinyin-8.13.3.1.zip
./bin/elasticsearch-plugin install -b file:/elastic/elasticsearch-analysis-hanlp-8.13.3.1.zip
If old plugins are present, remove them first
./bin/elasticsearch-plugin remove analysis-pinyin
./bin/elasticsearch-plugin remove elasticsearch-analysis-ansj
-
Configure the master node
cd /elastic/elasticsearch-8.13.3-master/
- Create the data and log directories
mkdir data logs
cd config
mkdir certs
- Generate the PKCS12 files that hold the certificate and private key and put them under config/certs; they are the credentials nodes use to recognize each other within a cluster
# elastic-certificates.p12 and elastic-stack-ca.p12
./bin/elasticsearch-certutil ca --out elastic-stack-ca.p12 # run from the ES home directory
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --out elastic-certificates.p12
mv elastic-*.p12 config/certs/
- Verify the certs are identical across the cluster
md5sum elastic-stack-ca.p12
md5sum elastic-certificates.p12
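The same two .p12 files must be shared by every node; generate them once and copy them to the other hosts. A sketch, assuming the host IPs and /elastic paths used in this guide:
scp config/certs/elastic-*.p12 172.28.212.86:/elastic/elasticsearch-8.13.3-master/config/certs/ # repeat for each remaining host
md5sum config/certs/elastic-*.p12 # re-run on every node; the hashes must match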
Edit the configuration
vim elasticsearch.yml
cluster.name: cluster-01 # cluster name
node.name: xxx-master1 # node name
path.data: /elastic/elasticsearch-8.13.3-master/data
path.logs: /elastic/elasticsearch-8.13.3-master/logs
network.host: ${HOSTNAME}
http.port: 9200 # client-facing HTTP port
network.publish_host: ${HOSTNAME}
transport.profiles.default.port: 9300 # inter-node transport port
# Seed list for cluster discovery. The three masters and three data nodes share three hosts, so the
# ports must not collide: masters use transport 9300 / HTTP 9200, data nodes use transport 9301 / HTTP 9201.
discovery.seed_hosts: ["172.28.212.92:9300","172.28.212.86:9300","172.28.212.75:9300","172.28.212.92:9301","172.28.212.86:9301","172.28.212.75:9301"]
cluster.initial_master_nodes: ["xxx-master1","xxx-master2","xxx-master3"] # master-eligible node names for the first cluster bootstrap; prevents split-brain
node.roles: [master] # role
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.monitoring.collection.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /elastic/elasticsearch-8.13.3-master/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /elastic/elasticsearch-8.13.3-master/config/certs/elastic-certificates.p12
-
Configure the data node
cd /elastic/elasticsearch-8.13.3-data/
- Create the data and log directories
mkdir data logs
cd config
mkdir certs
- Put the same PKCS12 certificate files under config/certs; every node in the cluster must share identical files, so copy them from the master instead of generating new ones
# elastic-certificates.p12 and elastic-stack-ca.p12
cp /elastic/elasticsearch-8.13.3-master/config/certs/elastic-*.p12 certs/
- Verify the certs are identical across the cluster
md5sum elastic-stack-ca.p12
md5sum elastic-certificates.p12
Edit the configuration
vim elasticsearch.yml
cluster.name: cluster-01 # cluster name
node.name: xxx-data1
path.data: /elastic/elasticsearch-8.13.3-data/data
path.logs: /elastic/elasticsearch-8.13.3-data/logs
network.host: ${HOSTNAME}
http.port: 9201 # client-facing HTTP port
network.publish_host: ${HOSTNAME}
transport.profiles.default.port: 9301 # inter-node transport port
discovery.seed_hosts: ["172.28.212.92:9300","172.28.212.86:9300","172.28.212.75:9300","172.28.212.92:9301","172.28.212.86:9301","172.28.212.75:9301"] # seed list for cluster discovery
cluster.initial_master_nodes: ["xxx-master1","xxx-master2","xxx-master3"] # master-eligible node names for the first bootstrap; prevents split-brain
node.roles: [data] # role
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.monitoring.collection.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /elastic/elasticsearch-8.13.3-data/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /elastic/elasticsearch-8.13.3-data/config/certs/elastic-certificates.p12
-
Configure the master service
vim /usr/lib/systemd/system/elasticsearch-master-8.13.3.service
[Unit]
Description=elasticsearch
After=network.target
[Service]
LimitNOFILE=65535
Type=simple
User=web_server
Group=web_server
Restart=on-failure
RestartSec=10s
ExecStart=/elastic/elasticsearch-8.13.3-master/bin/elasticsearch
PrivateTmp=true
[Install]
WantedBy=multi-user.target
-
Configure the data service
vim /usr/lib/systemd/system/elasticsearch-data-8.13.3.service
[Unit]
Description=elasticsearch
After=network.target
[Service]
LimitNOFILE=65535
Type=simple
User=web_server
Group=web_server
Restart=on-failure
RestartSec=10s
ExecStart=/elastic/elasticsearch-8.13.3-data/bin/elasticsearch
PrivateTmp=true
[Install]
WantedBy=multi-user.target
-
Enable start on boot
systemctl daemon-reload
chown -R web_server:web_server <ES directory> # ES refuses to start as root, so the unit files run as web_server
systemctl enable elasticsearch-data-8.13.3.service
systemctl enable elasticsearch-master-8.13.3.service
-
Adjust kernel parameters and start
- Adjust the kernel parameters, otherwise ES will not start
ES uses memory-mapped files (mmap) to store and access index data, which improves read/write performance. To manage index data effectively, Elasticsearch creates a large number of memory-mapped areas, and the default Linux vm.max_map_count value may be too low for it.
sysctl -w vm.max_map_count=262144 # temporary
vim /etc/sysctl.conf # permanent
Add vm.max_map_count=262144
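To apply the sysctl.conf change without a reboot and confirm it took effect:
sysctl -p # reload /etc/sysctl.conf
sysctl vm.max_map_count # should print vm.max_map_count = 262144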
Start
systemctl start elasticsearch-master-8.13.3.service
systemctl start elasticsearch-data-8.13.3.service
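Once both services are up, one way to smoke-test the cluster; a sketch, assuming HTTP stays plain (xpack.security.http.ssl is not configured above) and <password> is whatever you set for the built-in elastic user:
./bin/elasticsearch-reset-password -u elastic # interactively set/reset the elastic superuser password
curl -u elastic:<password> "http://${HOSTNAME}:9200/_cluster/health?pretty" # expect "status":"green" and "number_of_nodes":6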
2 Install kibana-8.13.3
-
Download the package and edit the configuration
curl -O https://artifacts.elastic.co/downloads/kibana/kibana-8.13.3-linux-x86_64.tar.gz
Extract, then edit the configuration
vim /elastic/kibana-8.13.3/config/kibana.yml
Set server.host: <local IP>
Set the log path
Add
elasticsearch.hosts: ['http://x.x.x.x:9200']
elasticsearch.username: <username> # typically kibana_system; Kibana 8 rejects the elastic superuser here
elasticsearch.password: <password>
monitoring.ui.ccs.enabled: false
-
Configure the kibana service
vim /usr/lib/systemd/system/kibana-8.13.3.service
[Unit]
Description=kibana
After=network.target
[Service]
Type=simple
User=web_server
Group=web_server
Restart=on-failure
RestartSec=10s
ExecStart=/elastic/kibana-8.13.3/bin/kibana
PrivateTmp=true
[Install]
WantedBy=multi-user.target
-
Enable on boot and start
systemctl daemon-reload
systemctl enable kibana-8.13.3.service
chown -R web_server:web_server <Kibana directory>
systemctl start kibana-8.13.3.service
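Kibana takes a moment to come up; its status endpoint is a quick readiness check (host and port as configured above):
curl "http://<server.host>:5601/api/status" # the JSON should report an "available" overall level once Kibana is ready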
And that's the 8.13.3 ES cluster done. Next up: the upgrade! 8.13.3 -----> 8.17.3
(二) Rolling Upgrade (8.13 ==> 8.17)
Rolling upgrades are only supported between minor versions; for a major-version upgrade, first upgrade to the latest minor of the current major, then move to the new major.
Pre-upgrade preparation
-
Review the changes between the current and target versions, and analyze deprecated or removed features so the upgrade does not break existing workloads.
-
Upgrade the nodes in order of their roles:
-
Upgrade the data nodes tier by tier, **starting with the frozen tier, then cold, then warm, then hot, and finally any data nodes not assigned to a tier. GET /_nodes/data_frozen:true/_none**
-
Upgrade all remaining nodes that are neither master-eligible nor data nodes, including dedicated ML nodes, dedicated ingest nodes, and dedicated coordinating nodes.
-
Finally, upgrade the master-eligible nodes. **GET /_nodes/master:true/_none**
-
This order ensures every node can stay in the cluster throughout the upgrade. Upgraded nodes can join a cluster with an older master, but older nodes cannot always join a cluster whose master has already been upgraded.
Official docs: Upgrade your deployment or cluster | Elastic Docs
1 Disable shard allocation or drain the node (pick one)
(1) Disable shard allocation
When a data node is shut down, the allocation process waits (one minute by default) before it starts copying that node's shards to other nodes in the cluster, which can involve heavy I/O.
PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.enable": "primaries"
}
}
(2) Drain the data off the node
PUT /_cluster/settings
{
"transient": {
"cluster.routing.allocation.exclude._name": "{node.name}"
}
}
# Check progress. Once all shards have migrated, the following command shows whether the node still holds data; all zeros means the data has been moved off, e.g.
GET /_nodes/{node.name}/stats/indices?pretty
...
"indices" : {
"docs" : {
"count" : 0,
"deleted" : 0
},
"store" : {
"size_in_bytes" : 0
},
...
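Whichever option you choose, the official upgrade guide also recommends stopping non-essential indexing and flushing before shutting the node down. A minimal curl sketch (elastic user and password are placeholders):
curl -X POST -u elastic:<password> "http://${HOSTNAME}:9200/_flush?pretty" # persist in-memory segments so recovery after the restart is faster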
2 Set up the higher-version ES
-
Copy the old version's config and modify it
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.17.3-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.17.3-linux-x86_64.tar.gz.sha512
sha512sum elasticsearch-8.17.3-linux-x86_64.tar.gz
tar -xzf elasticsearch-8.17.3-linux-x86_64.tar.gz
cd elasticsearch-8.17.3/
Copy the config folder from the old directory into this one
Edit elasticsearch.yml; the main change is the directory paths
For a rolling upgrade, leave cluster.initial_master_nodes unset # each upgraded node joins the existing cluster, so no cluster bootstrapping is needed
-
Install the plugins
wget https://halo.corp.kuaishou.com/api/cloud-storage/v1/public-objects/docs_heap/elasticsearch-analysis-hanlp-8.17.3.1.zip
wget https://halo.corp.kuaishou.com/api/cloud-storage/v1/public-objects/docs_heap/elasticsearch-analysis-pinyin-8.17.3.1.zip
cd /elastic/elasticsearch-8.17.3-data
./bin/elasticsearch-plugin list # list installed plugins
./bin/elasticsearch-plugin install -b file:/elastic/elasticsearch-analysis-pinyin-8.17.3.1.zip
./bin/elasticsearch-plugin install -b file:/elastic/elasticsearch-analysis-hanlp-8.17.3.1.zip # install
-
Configure the master/data-8.17.3 services and enable them on boot
Same as for 8.13.3; see the sketch below
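A quick way is to copy each 8.13.3 unit and rewrite the version string; a sketch, assuming the unit names used earlier:
cp /usr/lib/systemd/system/elasticsearch-data-8.13.3.service /usr/lib/systemd/system/elasticsearch-data-8.17.3.service
sed -i 's/8\.13\.3/8.17.3/g' /usr/lib/systemd/system/elasticsearch-data-8.17.3.service
systemctl daemon-reload && systemctl enable elasticsearch-data-8.17.3.service # repeat for the master unit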
3 Stop the old-version node
systemctl stop elasticsearch-data-8.13.3.service # or elasticsearch-master-8.13.3.service, depending on the node
4 Copy the old node's data into the new node's directory
- Small data volume: plain cp is fine
If you drained the node with option (2), cp is generally enough, because the data has already moved to other nodes
- Large data volume: use a sync instead
rsync -avP --delete /elastic/elasticsearch-8.13.3-data/data/ /elastic/elasticsearch-8.17.3-data/data/
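After the copy, fix ownership so the new node can start as web_server (paths as used in this guide):
chown -R web_server:web_server /elastic/elasticsearch-8.17.3-data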
5 Start the new node
systemctl start elasticsearch-data-8.17.3.service # or elasticsearch-master-8.17.3.service
6 Re-enable shard allocation or stop the drain (matching your earlier choice)
(1) Re-enable shard allocation
PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.enable": null
}
}
(2) Stop the drain
PUT /_cluster/settings
{
"transient": {
"cluster.routing.allocation.exclude._name": null
}
}
7 Watch the cluster state
GET _cat/nodes
GET _cat/health
During a rolling upgrade, primary shards assigned to a node running the new version cannot have their replicas allocated to nodes running the old version, because the new version may use a data format the old version cannot read. If a replica shard cannot be allocated to another node (e.g. only one node in the cluster has been upgraded so far), it stays unassigned and the cluster status remains yellow.
In that case you can proceed as long as no shards are initializing or relocating (check the init and relo columns). Once another node is upgraded, the replicas can be allocated and the status returns to green (any non-red status is fine for moving on to the next node).
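Rather than polling _cat/health by hand, the cluster health API can wait server-side; a sketch, credentials as before:
curl -u elastic:<password> "http://${HOSTNAME}:9200/_cluster/health?wait_for_status=yellow&timeout=60s&pretty" # returns once the status is at least yellow, or after the timeout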
GET /_nodes/master:true
GET /_nodes/data_frozen:true # these two can be used to check whether the upgrade succeeded
GET _cluster/settings
GET /_nodes/x.x.x.x-9201/stats/indices?pretty # check drain progress
8 Install the higher-version kibana
curl -O https://artifacts.elastic.co/downloads/kibana/kibana-8.17.3-linux-x86_64.tar.gz
curl https://artifacts.elastic.co/downloads/kibana/kibana-8.17.3-linux-x86_64.tar.gz.sha512 | shasum -a 512 -c -
tar -xzf kibana-8.17.3-linux-x86_64.tar.gz
For the rest, follow the ES Cluster Setup section above; remember to copy over the old version's data
9 QA & dev verify functionality; SRE verifies cluster state
-
kibana & cerebro (to watch shard state in real time)
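A quick way to confirm every node is actually running 8.17.3 (credentials and host as before):
curl -u elastic:<password> "http://${HOSTNAME}:9200/_cat/nodes?v&h=name,version,node.role" # every row should show version 8.17.3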
PS: copying the old config between machines requires passwordless SSH for scp
ssh-keygen # generate a key pair
cat ~/.ssh/id_rsa.pub # print the public key and copy it
vim ~/.ssh/authorized_keys # paste it on the machine to be logged into without a password (sshd must permit root login)
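ssh-copy-id automates those three steps; a one-liner, with the remote user and host as placeholders:
ssh-copy-id root@<remote-host> # appends your public key to the remote ~/.ssh/authorized_keys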