ELASTICSEARCH 8X
- Balance the Elasticsearch JVM heap size
- Before 7.9 a node's behavior was set via node types; in 8.x you configure node roles instead
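The heap-sizing note above follows the common guidance: give the JVM roughly half the machine's RAM and stay under the ~32 GB compressed-oops threshold. A minimal sketch (the 31g cap and 50% split are the usual heuristics, not values from this document):

```shell
# Heuristic heap sizing: ~50% of RAM, capped at 31g to keep compressed oops
total_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
half_gb=$(( total_kb / 1024 / 1024 / 2 ))
heap_gb=$(( half_gb > 31 ? 31 : half_gb ))
[ "$heap_gb" -ge 1 ] || heap_gb=1   # floor at 1g for small machines
echo "ES_JAVA_OPTS=\"-Xms${heap_gb}g -Xmx${heap_gb}g\""
```

The printed value can be passed with `-e` when running the container, as in the `docker run` commands later in these notes.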
8.x new concepts
1. Node roles
Overview
In 8.x, node types were replaced by node roles. If you don't set roles explicitly, a node defaults to all of them: cdfhilmrstw
Node role abbreviations
Abbreviation | Role | Description |
---|---|---|
c | cold node | holds cold-tier (rarely queried) data |
d | data node | generic data node |
f | frozen node | holds frozen-tier data |
h | hot node | holds hot-tier (actively written) data |
i | ingest node | runs ingest (preprocessing) pipelines |
l | machine learning node | runs machine learning jobs |
m | master-eligible node | can be elected master |
r | remote cluster client node | connects to remote clusters |
s | content node | holds content (non-time-series) data |
t | transform node | runs transforms |
v | voting-only node | votes in master elections but is never elected master |
w | warm node | holds warm-tier data |
/ | coordinating node only | routes requests only (empty roles list) |
Once the cluster grows (for example, beyond 6 nodes), you should assign and configure node roles explicitly:
Master node
Data node
Content data node
Hot data node
Warm data node
Cold data node
Frozen data node
Note: the hot/warm/cold/frozen roles must be paired with the content data role, otherwise writes have nowhere to land; the tier roles only mark which tier a node serves, while content indices are still stored on content data nodes.
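As an illustration of pairing a tier role with the content role, a hot-tier node's elasticsearch.yml might declare (the exact role mix is an assumption for the example):

```yaml
# Illustrative elasticsearch.yml fragment: a hot-tier node that can also
# host content indices, so writes to content indices can land on it
node.roles: [ master, data_hot, data_content, ingest ]
```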
Ingest node
- Runs preprocessing tasks made up of ingest pipelines //todo
node.roles: [ ingest ]
Coordinating-only node
- Acts like a smart load balancer: routes and distributes requests, and merges search and aggregation results
node.roles: [ ]
Remote cluster client node
- For cross-cluster search or cross-cluster replication
node.roles: [ remote_cluster_client ]
Machine learning node
- Runs machine learning jobs (a paid feature)
node.roles: [ ml, remote_cluster_client ]
Transform node
- Converts data formats and types
node.roles: [ transform, remote_cluster_client ]
Single-node deployment with Docker
- Create a dedicated network
docker network create elastic
- Raise the max number of open files
vim /etc/profile
# set the open-file limit
ulimit -n 65535
# reload
source /etc/profile
- Raise the max number of memory map areas
sysctl -w vm.max_map_count=262144
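The sysctl change above does not survive a reboot; the live value can be checked via /proc before or after applying it. A small sketch:

```shell
# Compare the live vm.max_map_count against what Elasticsearch needs
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -lt "$required" ]; then
  echo "vm.max_map_count=$current is too low, need $required"
else
  echo "vm.max_map_count=$current is sufficient"
fi
```

To persist the setting across reboots, add `vm.max_map_count=262144` to /etc/sysctl.conf.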
- Install Elasticsearch with Docker
# Run once first to copy the default config to the host under /mydata/elasticsearch/ (meaning config; mounting an empty config dir directly fails on startup)
docker run --name es --net elastic -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data -p 9200:9200 -di -e ES_JAVA_OPTS="-Xms1g -Xmx1g" elasticsearch:8.10.4
# Copy all the default config to the host
docker cp es:/usr/share/elasticsearch/config /mydata/elasticsearch/config
# Remove the temporary container
docker stop es
docker rm es
# Install es for real
docker run --name es --net elastic -v /mydata/elasticsearch/config:/usr/share/elasticsearch/config -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data -p 9200:9200 -di -e ES_JAVA_OPTS="-Xms1g -Xmx1g" elasticsearch:8.10.4
- Set the username and password
# Reset the password for the elastic user
docker exec -it es /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
# The official docs suggest keeping the password in an env var: export ELASTIC_PASSWORD="your_password"
# Generate a kibana enrollment token
docker exec -it es /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
Certificate setup (inside the container)
# Create a certificate authority
./bin/elasticsearch-certutil ca
# Create node certificates signed by the CA
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
- Copy the two .p12 files into config
- Reference the certificates in elasticsearch.yml
# Enable encryption and mutual authentication between cluster
xpack.security.transport.ssl:
enabled: true
verification_mode: certificate
client_authentication: required
keystore.path: elastic-certificates.p12
truststore.path: elastic-certificates.p12
- If you set a password when creating the node certificates, store it in the Elasticsearch keystore; the HTTPS TLS settings below are also required
# Add the keystore password to the Elasticsearch keystore
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
# Add the truststore password to the Elasticsearch keystore
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
- A restart still fails at this point: HTTP TLS must be configured too, and the CA generated above can be reused
# Use the CA (elastic-stack-ca.p12) to set up HTTPS certificates
./bin/elasticsearch-certutil http
# When prompted, enter the hostnames/IPs carefully so that the matching nodes can connect
- This produces a zip file; unzip it and you get two folders, one for elasticsearch and one for kibana
- When installing Kibana, copy elasticsearch-ca.pem from the kibana folder into Kibana's config on the host, and edit kibana.yml (see sample-kibana.yml in the same folder)
- Move http.p12 into config/certs (overwriting), or change the certs/http.p12 path in elasticsearch.yml
# Corresponding configuration
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
enabled: true
keystore.path: certs/http.p12
truststore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
enabled: true
verification_mode: certificate
keystore.path: elastic-certificates.p12
truststore.path: elastic-certificates.p12
## Elasticsearch refuses these two plain-text password settings; use the keystore instead (same for the transport SSL above)
#keystore.password: "123456"
#truststore.password: "123456"
### If the HTTP SSL stores are password-protected, store those passwords in the keystore too
# Add the HTTP SSL keystore password
./bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
# Add the HTTP SSL truststore password
./bin/elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password
- Final configuration
[root@localhost config]# cat elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
node.roles: [ master,data ]
node.name: "testEs"
#discovery.type: single-node
cluster.initial_master_nodes: [ "testEs" ]
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 02-06-2024 08:28:47
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
enabled: true
keystore.path: certs/http.p12
truststore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
enabled: true
verification_mode: certificate
keystore.path: elastic-certificates.p12
truststore.path: elastic-certificates.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
#cluster.initial_master_nodes: ["c6c86714b1bb"]
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
- Then restart the container
# Reset the password for the elastic user
docker exec -it es /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
# The official docs suggest keeping the password in an env var: export ELASTIC_PASSWORD="your_password"
# Generate a kibana enrollment token
docker exec -it es /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
Adding nodes
# Generate an enrollment token from an existing node (valid for 30 minutes by default)
docker exec -it es /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
# Start the new node with the token above stored in an env var (steps omitted); replace <token> with the env var's value
docker run -e ENROLLMENT_TOKEN="<token>" --name es02 --net elastic -it -m 1GB docker.elastic.co/elasticsearch/elasticsearch:8.10.4
# Verify
curl --cacert http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200/_cat/nodes
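The node.role column in the _cat/nodes response is the same abbreviation string described in the table at the top. A sketch of pulling it out of one (made-up) response line:

```shell
# Hypothetical _cat/nodes line: ip heap ram cpu load_1m load_5m load_15m node.role master name
sample="172.18.0.2 42 92 3 0.10 0.20 0.30 cdfhilmrstw * testEs"
roles=$(echo "$sample" | awk '{print $(NF-2)}')   # third field from the end
echo "node.role column: $roles"
```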
Installing Kibana (8.10.4)
docker run -di --name=kibana -p 5601:5601 -v /mydata/kibana/config:/usr/share/kibana/config --net elastic kibana:8.10.4
Copy elasticsearch-ca.pem from the kibana folder inside the zip produced by certutil http into Kibana's config
Edit the configuration
=======================
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
# Elasticsearch address inside the container network
elasticsearch.hosts: [ "https://xxxx:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.ssl.certificateAuthorities: [ "config/elasticsearch-ca.pem" ]
elasticsearch.username: "kibana_master"
elasticsearch.password: "123456"
===============
The user configured above needs the kibana_system and kibana_admin roles
Installing Logstash
- Mind the username/password and elasticsearch-ca.pem when configuring
logstash.yml
#http.host: "0.0.0.0"
#xpack.monitoring.elasticsearch.hosts: [ "https://172.18.0.2:9200" ]
http.host: "0.0.0.0"
http.port: 9600-9700
# Monitoring output to Elasticsearch
xpack.monitoring.enabled: true
# Elasticsearch address inside the container network
xpack.monitoring.elasticsearch.hosts: ["https://xxxxx:9200"]
xpack.monitoring.elasticsearch.username: "log_user"
xpack.monitoring.elasticsearch.password: "123456"
# SSL/TLS settings
xpack.monitoring.elasticsearch.ssl.certificate_authority: "config/elasticsearch-ca.pem"
Pipeline example
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
file {
path => "/usr/share/logstash/config/tst.txt" # use an absolute path
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["https://172.18.0.2:9200"]
index => "ttt-%{[@metadata][version]}-%{+YYYY.MM.dd}"
user => "log_user"
password => "123456"
ssl => true
ssl_certificate_authorities => "/usr/share/logstash/config/elasticsearch-ca.pem"
}
}
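The `index` option above interpolates the event's @metadata version and timestamp, producing one index per day. A sketch of the resulting name (the 8.10.4 version value is an assumption for illustration; Logstash takes the date from the event, not the wall clock):

```shell
# Date-suffixed index name in the same shape as ttt-%{[@metadata][version]}-%{+YYYY.MM.dd}
version="8.10.4"
suffix=$(date +%Y.%m.%d)
echo "ttt-$version-$suffix"
```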
Remember to configure and mount the pipeline directory
docker run --network elastic --name logstash -di -v F:/dataStore/docker/mydata/logstash/conf:/usr/share/logstash/config -v F:/dataStore/docker/mydata/logstash/pipeline:/usr/share/logstash/pipeline -p 5400:5400 -p 9600:9600 logstash:8.10.4