Building the cluster with docker-compose
docker-compose.yml
version: '2.2'
services:
  prod01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    container_name: prod01
    hostname: prod01
    restart: always
    environment:
      - node.name=prod01
      - network.host=0.0.0.0
      - network.publish_host=ip1
      - cluster.name=prod-es-cluster
      - discovery.seed_hosts=ip1,ip2,ip3
      - cluster.initial_master_nodes=prod01,prod-gateway,prod02
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin="*"
      - action.destructive_requires_name=true
      # - path.logs=/opt/dockers/elasticsearch/elastic-es-7.6/logs
      # - xpack.security.enabled=true
      # - xpack.security.transport.ssl.enabled=true
      # - xpack.security.transport.ssl.verification_mode=certificate
      # - xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
      # - xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    extra_hosts:
      - "prod-gateway:ip2"
      - "prod01:ip1"
      - "prod02:ip3"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    # keep commented out for now; uncomment later
    # volumes:
    #   - /opt/dockers/elasticsearch/elastic-es/data:/usr/share/elasticsearch/data
    #   - /opt/dockers/elasticsearch/elastic-es/logs:/usr/share/elasticsearch/logs
    #   - /opt/dockers/elasticsearch/elastic-es/config:/usr/share/elasticsearch/config
    ports:
      - 9200:9200
      - 9300:9300
    # logging:
    #   driver: fluentd
    #   options:
    #     fluentd-address: "localhost:24224"
    #     fluentd-retry-wait: '1s'
    #     fluentd-max-retries: '60'
    #     fluentd-async-connect: 'true'
    #     tag: "{{.DaemonName}}.{{.Name}}"
    networks:
      - elastic-prod
networks:
  elastic-prod:
    driver: bridge
volumes:
  data:
    driver: local
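The ip1/ip2/ip3 values above are placeholders for the three hosts' real addresses. One way to fill them in per host is a sed pass over the file; the 10.0.0.x addresses below are hypothetical, and the snippet only transforms a single sample line:

```shell
# Hypothetical per-host addresses; substitute them into the compose file.
IP1=10.0.0.1; IP2=10.0.0.2; IP3=10.0.0.3
line='- discovery.seed_hosts=ip1,ip2,ip3'
# Against the real file you would run: sed -i -e "s/ip1/$IP1/g" ... docker-compose.yml
echo "$line" | sed -e "s/ip1/$IP1/g" -e "s/ip2/$IP2/g" -e "s/ip3/$IP3/g"
# → - discovery.seed_hosts=10.0.0.1,10.0.0.2,10.0.0.3
```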
Leave the volumes section and its contents commented out for now, then start the container:
docker-compose up -d
Check the logs:
docker logs -f prod01
The errors at this point just mean the nodes have not discovered each other yet and no master has been elected; once all three nodes are configured and running, they go away.
docker cp prod01:/usr/share/elasticsearch/config .
docker cp prod01:/usr/share/elasticsearch/data .
docker cp prod01:/usr/share/elasticsearch/logs .
Copy these directories out of the container to the host; if the host paths do not exist when the bind mounts are enabled, the container will fail to start. Mapping them to the host makes the configuration easy to edit, keeps the data across container restarts to some degree, and simplifies the log setup later.
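Before enabling the bind mounts, it is worth confirming the host directory layout exists and is writable by the container user (the official image runs Elasticsearch as uid 1000). A sketch, using a temporary directory in place of /opt/dockers/elasticsearch/elastic-es:

```shell
# Stand-in for /opt/dockers/elasticsearch/elastic-es (hypothetical path)
base=$(mktemp -d)
mkdir -p "$base/data" "$base/logs" "$base/config"
# In a real setup you would also run: sudo chown -R 1000:1000 "$base"
ls "$base"
# → config  data  logs
```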
Stop the stack:
docker-compose down
Uncomment the volumes section in docker-compose.yml:
volumes:
  - /opt/dockers/elasticsearch/elastic-es/data:/usr/share/elasticsearch/data
  - /opt/dockers/elasticsearch/elastic-es/logs:/usr/share/elasticsearch/logs
  - /opt/dockers/elasticsearch/elastic-es/config:/usr/share/elasticsearch/config
Then start it again:
docker-compose up -d
Repeat the same steps on the other nodes in turn and the cluster is up. Check the cluster state with:
curl -L http://localhost:9200/_cat/nodes
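In the _cat/nodes output, the column containing a * marks the elected master. A small awk sketch over sample output (the node stats below are made up) pulls out the master's name:

```shell
# Sample _cat/nodes output (hypothetical values); column 9 is "master"
nodes='ip1 12 97 2 0.10 0.20 0.30 dilm * prod01
ip2 10 95 1 0.10 0.10 0.20 dilm - prod-gateway
ip3 11 96 1 0.20 0.20 0.20 dilm - prod02'
printf '%s\n' "$nodes" | awk '$9 == "*" { print $10 }'
# → prod01
```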
Configuring X-Pack for Elasticsearch
Enter the container:
sudo docker exec -it prod01 /bin/bash
(you can see the shell prompt change once inside)
Generate the certificate authority:
./bin/elasticsearch-certutil ca
Press Enter to accept the default output path (the current directory) and choose a password; to keep things simple, use the same password everywhere. This produces the file elastic-stack-ca.p12.
Generate the certificate:
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
Enter the CA password, press Enter to accept the default output path, and finally set the certificate password.
cp elastic-certificates.p12 config/
Copy it into the config directory, then edit the configuration:
cd config/
vi elasticsearch.yml
(press i to enter insert mode, then ESC followed by :wq to save and quit; look up the vi basics yourself if needed)
Add the following settings:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
path.logs: /usr/share/elasticsearch/logs
xpack.security.enabled: true
xpack.security.authc.accept_default_password: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
After editing, run exit to leave the container. On the host, cd into the local config directory and you will see the certificate file. This file is shared by the whole cluster, so it only needs to be generated on one host. Sync it to the other hosts (you will be prompted for the remote host's password):
scp elastic-certificates.p12 <other-ip>:/opt/dockers/elasticsearch/elastic-es/config/
Because the container paths are bind-mounted from the host, you can edit elasticsearch.yml directly in
/opt/dockers/elasticsearch/elastic-es/config/
vi elasticsearch.yml
Then restart Elasticsearch on each host:
docker restart prod01
docker restart prod02
Check the logs:
docker logs -f prod01
Startup failed, apparently a permission problem. In the local config/ directory, grant access to the certificate:
chmod 777 elastic-certificates.p12
Restart:
docker restart prod01
Still failing:
ElasticsearchSecurityException[failed to load SSL configuration [xpack.security.transport.ssl]]; nested: ElasticsearchException[failed to initialize SSL TrustManager]; nested: IOException[keystore password was incorrect]; nested: UnrecoverableKeyException[failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.];
Likely root cause: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2118)
at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:222)
at java.base/java.security.KeyStore.load(KeyStore.java:1472)
at org.elasticsearch.xpack.core.ssl.TrustConfig.getStore(TrustConfig.java:97)
at org.elasticsearch.xpack.core.ssl.StoreTrustConfig.createTrustManager(StoreTrustConfig.java:65)
at org.elasticsearch.xpack.core.ssl.SSLService.createSslContext(SSLService.java:427)
at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1138)
at org.elasticsearch.xpack.core.ssl.SSLService.loadConfiguration(SSLService.java:521)
at org.elasticsearch.xpack.core.ssl.SSLService.loadSSLConfigurations(SSLService.java:501)
at org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:142)
at org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:130)
at org.elasticsearch.xpack.core.XPackPlugin.createComponents(XPackPlugin.java:259)
at org.elasticsearch.node.Node.lambda$new$9(Node.java:456)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1621)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at org.elasticsearch.node.Node.<init>(Node.java:459)
at org.elasticsearch.node.Node.<init>(Node.java:257)
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:221)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125)
at org.elasticsearch.cli.Command.main(Command.java:90)
<<<truncated>>>
For complete error details, refer to the log at /usr/share/elasticsearch/logs/prod-es-cluster.log
The keystore password is wrong: the certificate passwords were never added to the Elasticsearch keystore. No need to panic.
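This failure mode can be reproduced outside Elasticsearch: a PKCS#12 file simply refuses to open under the wrong password. A throwaway demonstration with openssl (self-signed certificate, hypothetical paths and passwords):

```shell
dir=$(mktemp -d)
# Self-signed key+cert, bundled into a PKCS#12 protected by "secret"
openssl req -x509 -newkey rsa:2048 -keyout "$dir/key.pem" -out "$dir/cert.pem" \
  -days 1 -nodes -subj "/CN=test" 2>/dev/null
openssl pkcs12 -export -inkey "$dir/key.pem" -in "$dir/cert.pem" \
  -out "$dir/certs.p12" -passout pass:secret
# The right password opens it; the wrong one fails the same way ES did above
openssl pkcs12 -in "$dir/certs.p12" -passin pass:secret -nokeys \
  >/dev/null 2>&1 && echo "password ok"
openssl pkcs12 -in "$dir/certs.p12" -passin pass:wrong -nokeys \
  >/dev/null 2>&1 || echo "password rejected"
```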
Edit elasticsearch.yml in the local directory:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
path.logs: /usr/share/elasticsearch/logs
#xpack.security.enabled: true
#xpack.security.authc.accept_default_password: true
#xpack.security.transport.ssl.enabled: true
#xpack.security.transport.ssl.verification_mode: certificate
#xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
#xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
Comment out the security settings as above, then restart prod01 and enter the container again:
docker restart prod01
sudo docker exec -it prod01 /bin/bash
Add the certificate passwords to the Elasticsearch keystore:
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
Enter the certificate password (the same one chosen earlier is recommended).
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
Enter it again for the truststore. Then uncomment the security settings in the configuration file.
TODO: write up X-Pack authentication between Kibana and Elasticsearch when time allows
- also the Java client configuration for the ES cluster
Run exit to leave the container, then docker restart prod01. Perform the same steps on every other node. Once all nodes have restarted, enter the container again:
sudo docker exec -it prod01 /bin/bash
Set the passwords for the built-in users:
./bin/elasticsearch-setup-passwords interactive
This prompts for a password for each built-in user (elastic, kibana, and the others); these passwords are critical, so be sure to remember them. Afterwards, exit the container and run:
curl http://localhost:9200
This returns an error:
{
"error": {
"root_cause": [{
"type": "security_exception",
"reason": "missing authentication credentials for REST request [/]",
"header": {
"WWW-Authenticate": "Basic realm=\"security\" charset=\"UTF-8\""
}
}],
"type": "security_exception",
"reason": "missing authentication credentials for REST request [/]",
"header": {
"WWW-Authenticate": "Basic realm=\"security\" charset=\"UTF-8\""
}
},
"status": 401
}
Once X-Pack authentication is enabled, requests must carry credentials:
curl -u elastic localhost:9200
Enter the password and the cluster information appears. A wrong password produces an error:
{
"error": {
"root_cause": [{
"type": "security_exception",
"reason": "failed to authenticate user [elastic]",
"header": {
"WWW-Authenticate": "Basic realm=\"security\" charset=\"UTF-8\""
}
}],
"type": "security_exception",
"reason": "failed to authenticate user [elastic]",
"header": {
"WWW-Authenticate": "Basic realm=\"security\" charset=\"UTF-8\""
}
},
"status": 401
}
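curl's -u flag simply base64-encodes user:password into an HTTP Basic Authorization header, matching the Basic realm advertised in the 401 responses above. The header can also be built by hand; the elastic:changeme pair here is only an example, not a real credential:

```shell
creds='elastic:changeme'   # example credentials, not the real ones
printf 'Authorization: Basic %s\n' "$(printf '%s' "$creds" | base64)"
# → Authorization: Basic ZWxhc3RpYzpjaGFuZ2VtZQ==
# The same header works with: curl -H "Authorization: Basic ..." localhost:9200
```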
Kibana
kibana.yml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://ip1:9200/","http://ip2:9200/","http://ip3:9200/" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "<the elastic password>"
docker-compose.yml
version: "2.2"
services:
  kibana:
    image: kibana:7.6.2
    container_name: kibana
    hostname: kibana
    environment:
      SERVER_NAME: kibana
      SERVER_HOST: "0.0.0.0"
      I18N_LOCALE: zh-CN
    volumes:
      - /opt/dockers/elasticsearch/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    extra_hosts:
      - "prod-gateway:ip1"
      - "prod01:ip2"
      - "prod02:ip3"
    ports:
      - 5601:5601
    networks:
      - elastic-prod
networks:
  elastic-prod:
    driver: bridge
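Optionally, a healthcheck can be added under the kibana service so docker ps shows when the UI is actually up; this fragment is a sketch against the compose file above (the interval, timeout, and retry values are arbitrary):

```yaml
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:5601/api/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
```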
Then simply run:
docker-compose up -d
and you are done.