1. Configure the Elasticsearch cluster
Create three directories on the host, e.g. /mydata/elasticsearch1/conf/, /mydata/elasticsearch2/conf/ and /mydata/elasticsearch3/conf/,
and grant permissions on each directory, e.g. chmod 777 /mydata/elasticsearch1/conf/.
Then create three files: es1.yml, es2.yml and es3.yml.
With no spare virtual machines for now, the cluster runs on a single host, configured as follows:
es1.yml
cluster.name: elasticsearch
node.name: es1
network.bind_host: 0.0.0.0
network.publish_host: 192.168.134.189
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.134.189:9300","192.168.134.189:9301","192.168.134.189:9302"]
discovery.zen.minimum_master_nodes: 2
es2.yml
cluster.name: elasticsearch
node.name: es2
network.bind_host: 0.0.0.0
network.publish_host: 192.168.134.189
http.port: 9201
transport.tcp.port: 9301
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.134.189:9300","192.168.134.189:9301","192.168.134.189:9302"]
discovery.zen.minimum_master_nodes: 2
es3.yml
cluster.name: elasticsearch
node.name: es3
network.bind_host: 0.0.0.0
network.publish_host: 192.168.134.189
http.port: 9202
transport.tcp.port: 9302
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.134.189:9300","192.168.134.189:9301","192.168.134.189:9302"]
discovery.zen.minimum_master_nodes: 2
Configuration note: the cluster name (cluster.name) must be identical on every node.
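The value of discovery.zen.minimum_master_nodes follows the usual quorum rule for master-eligible nodes, (N / 2) + 1. A quick sketch (the quorum function below is only an illustration, not part of Elasticsearch):

```shell
# Quorum rule behind discovery.zen.minimum_master_nodes: (N / 2) + 1,
# where N is the number of master-eligible nodes.
quorum() {
  echo $(( $1 / 2 + 1 ))
}
quorum 3   # 3 master-eligible nodes -> 2, matching the configs above
```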
Run the ES cluster with Docker:
docker run -e ES_JAVA_OPTS="-Xms256m -Xmx256m" -d -p 9200:9200 -p 9300:9300 -v /mydata/elasticsearch1/conf/es1.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /mydata/elasticsearch1/data:/usr/share/elasticsearch/data --name es1 elasticsearch:6.8.1
docker run -e ES_JAVA_OPTS="-Xms256m -Xmx256m" -d -p 9201:9201 -p 9301:9301 -v /mydata/elasticsearch2/conf/es2.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /mydata/elasticsearch2/data:/usr/share/elasticsearch/data --name es2 elasticsearch:6.8.1
docker run -e ES_JAVA_OPTS="-Xms256m -Xmx256m" -d -p 9202:9202 -p 9302:9302 -v /mydata/elasticsearch3/conf/es3.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /mydata/elasticsearch3/data:/usr/share/elasticsearch/data --name es3 elasticsearch:6.8.1
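The three docker run commands differ only in the node suffix, the published ports, and the host paths. The pattern can be summarized as follows (container names and paths as used above):

```shell
# Print the per-node parameters used by the three docker run commands above.
for i in 1 2 3; do
  http=$(( 9199 + i ))   # 9200, 9201, 9202
  tcp=$(( 9299 + i ))    # 9300, 9301, 9302
  echo "es$i http=$http transport=$tcp conf=/mydata/elasticsearch$i/conf/es$i.yml"
done
```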
Note: when the ES containers start, Docker prints a warning line (typically that IPv4 forwarding is disabled). To stop it from appearing on future runs, apply the following setting:
vim /usr/lib/sysctl.d/00-system.conf
Add this line:
net.ipv4.ip_forward=1
Then restart the network service:
systemctl restart network
Configure the ik analyzer
Download the analyzer plugin and copy the archive into each of the three ES containers:
docker cp elasticsearch-analysis-ik-6.8.1.zip cc0116a84ff6:/usr/share/elasticsearch/plugins
Here cc0116a84ff6 is the container's CONTAINER ID; repeat for each container.
Enter the container with docker exec -it es1 /bin/bash and change to the plugins directory:
cd plugins/
Unzip the archive into the current directory and rename it:
unzip elasticsearch-analysis-ik-6.8.1.zip -d ik-analyzer
When this is done, restart each ES node for the plugin to take effect.
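Since the copy/unzip/restart has to be repeated per node, the steps can be looped over the three containers (names es1..es3 from the docker run commands above). The sketch below only prints the commands so they can be reviewed first; pipe the output to bash to execute them:

```shell
# Generate the per-node ik install commands; pipe to `bash` to run them.
for c in es1 es2 es3; do
  echo "docker cp elasticsearch-analysis-ik-6.8.1.zip $c:/usr/share/elasticsearch/plugins"
  echo "docker exec $c bash -c 'cd /usr/share/elasticsearch/plugins && unzip -o elasticsearch-analysis-ik-6.8.1.zip -d ik-analyzer'"
  echo "docker restart $c"
done
```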
Test
GET _analyze
{
"analyzer": "ik_smart",
"text": "我是中国人"
}
GET _analyze
{
"analyzer": "ik_max_word",
"text": "我是中国人"
}
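The same checks can be issued with curl instead of the Kibana console. The loop below prints the two commands (host 192.168.134.189:9200 is assumed from the configs above); ik_max_word typically returns more, overlapping tokens than ik_smart:

```shell
# Print curl equivalents of the two _analyze requests above.
for analyzer in ik_smart ik_max_word; do
  echo "curl -s -H 'Content-Type: application/json' -X GET 'http://192.168.134.189:9200/_analyze' -d '{\"analyzer\": \"$analyzer\", \"text\": \"我是中国人\"}'"
done
```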
2. Configure Kibana
Next, configure Kibana. As in the steps above, create the directory /mydata/kibana, grant permissions on it, then create kibana.yml:
kibana.yml
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://192.168.134.189:9200", "http://192.168.134.189:9201", "http://192.168.134.189:9202"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
Run Kibana with Docker:
docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name kibana -p 5601:5601 -v /mydata/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:6.8.1
3. Configure Logstash
1) Create a logstash.conf file in /mydata/logstash with the following content:
input {
  tcp {
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["192.168.134.189:9200"]
    index => "applog"
  }
  stdout { codec => rubydebug }
}
Note:
never use 127.0.0.1 or localhost in hosts: inside the Docker container there is no ES instance on 127.0.0.1, so Logstash cannot connect.
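The tcp input with the json_lines codec expects one JSON object per line. Once Logstash is running, an event can be pushed from the shell (nc and the host IP are assumptions here; uncomment the pipe to actually send):

```shell
# Build a json_lines event: a single JSON document terminated by a newline.
printf '%s\n' '{"message": "hello from shell", "level": "INFO"}' # | nc 192.168.134.189 4560
```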
docker run -d -p 4560:4560 \
-v /mydata/logstash/logstash.conf:/etc/logstash.conf \
--link es1:elasticsearch \
--name logstash logstash:6.8.1 \
logstash -f /etc/logstash.conf
Install the plugin:
https://github.com/logstash-plugins
docker exec -it logstash /bin/bash  (enter the container)
cd /usr/share/logstash/bin  (whereis logstash will show this location)
logstash-plugin install logstash-codec-json_lines