Download the ELK images
docker pull elasticsearch:7.2.0
docker pull mobz/elasticsearch-head:5
docker pull kibana:7.2.0
docker pull logstash:7.2.0
1. Install Elasticsearch
mkdir -p /home/elasticsearch/config
mkdir -p /home/elasticsearch/data
echo "http.host: 0.0.0.0" >> /home/elasticsearch/config/elasticsearch.yml
chmod -R 777 /home/elasticsearch/
Run the container
docker run --privileged=true --name myelasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx128m" -v /home/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/elasticsearch/data:/usr/share/elasticsearch/data -v /home/elasticsearch/plugins:/usr/share/elasticsearch/plugins -d elasticsearch:7.2.0
Edit the /home/elasticsearch/config/elasticsearch.yml file
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Restricts which IPs may connect; 0.0.0.0 means unrestricted. Set a fixed IP in production.
transport.host: 0.0.0.0
Elasticsearch node name
node.name: node-1
Initial master-eligible nodes for cluster bootstrapping
cluster.initial_master_nodes: ["node-1"]
The following settings allow cross-origin requests (optional, but required for elasticsearch-head to connect)
http.cors.enabled: true
http.cors.allow-origin: "*"
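Putting the edits above together with the http.host line appended earlier, the full /home/elasticsearch/config/elasticsearch.yml should look like this:

```yaml
http.host: 0.0.0.0
cluster.name: "docker-cluster"
network.host: 0.0.0.0
transport.host: 0.0.0.0
node.name: node-1
cluster.initial_master_nodes: ["node-1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
```

Because the file is bind-mounted into the container, editing it on the host is enough; the changes take effect on the next restart.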
Restart Elasticsearch
docker restart myelasticsearch
Verify the result (a JSON document describing the node confirms it is running)
http://ip:9200/
Install the ik analyzer
Download the ik analyzer:
https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.2.0/elasticsearch-analysis-ik-7.2.0.zip
Unzip it and upload the resulting folder to the server via FTP.
Copy the folder into the elasticsearch container:
docker cp /home/silence/elk/ik myelasticsearch:/usr/share/elasticsearch/plugins
Check that the copy succeeded by entering elasticsearch/plugins/ik:
docker exec -it myelasticsearch /bin/bash
cd /usr/share/elasticsearch/plugins
ls
cd ik
ls
Restart the elasticsearch container so the plugin is loaded (you can then verify it with the _analyze API using the "ik_max_word" analyzer)
docker restart myelasticsearch
Run the head container (it relies on the CORS settings added to elasticsearch.yml above)
docker run -d --restart=always --name myelasticsearch_head -p 9100:9100 mobz/elasticsearch-head:5
2. Install Kibana
docker run --name mykibana --link myelasticsearch:elasticsearch -p 5601:5601 -d kibana:7.2.0
Configure the Chinese (zh-CN) locale for Kibana
Enter the container
docker exec -it mykibana /bin/bash
Enter the config directory
cd config
Edit the configuration file
vi kibana.yml
Update the Elasticsearch connection (if deployed on the same machine, use the internal/LAN IP rather than localhost)
elasticsearch.hosts: [ "http://<elasticsearch-ip>:<port>" ]
Add the Chinese locale setting (restart Kibana after saving)
i18n.locale: "zh-CN"
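Taken together, the edited section of kibana.yml should look like the sketch below (the host is a placeholder; substitute your Elasticsearch address, e.g. port 9200 as mapped above):

```yaml
elasticsearch.hosts: [ "http://<elasticsearch-ip>:9200" ]
i18n.locale: "zh-CN"
```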
Restart Kibana
docker restart mykibana
Verify the result
http://ip:5601/
3. Install Logstash
Run a container (temporarily, just so we can copy out its default configuration)
docker run --name mylogstash -d -p 5044:5044 -p 9600:9600 logstash:7.2.0
Create the directories and set permissions (UID 1000 is the logstash user inside the container)
mkdir -p /home/elk/logstash/config;
mkdir -p /home/elk/logstash/pipeline;
chown -R 1000 /home/elk/logstash;
Copy the default configuration out of the container
docker cp mylogstash:/usr/share/logstash/config /home/elk/logstash/config;
docker cp mylogstash:/usr/share/logstash/pipeline /home/elk/logstash/pipeline;
Remove the temporary container
docker rm -f mylogstash
Run the container again, now with the config and pipeline directories mounted
docker run --name mylogstash -d -p 5044:5044 -p 9600:9600 -v /home/elk/logstash/config/config:/usr/share/logstash/config -v /home/elk/logstash/pipeline/pipeline:/usr/share/logstash/pipeline logstash:7.2.0
Edit /home/elk/logstash/config/config/logstash.yml so that it contains:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.13.95:9200" ]
To receive log events from applications and forward them to ES, edit /home/elk/logstash/pipeline/pipeline/logstash.conf (note the tcp/json_lines input below suits application-side log appenders; Filebeat would instead need a beats input):
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5044
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.13.95:9200"]
    index => "springboot-logstash-test-%{+YYYY.MM.dd}"
    #user => "elastic"
  }
}
Restart Logstash after saving
docker restart mylogstash
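On the application side, a Spring Boot service can ship logs to the tcp input above. The sketch below assumes the logstash-logback-encoder dependency is on the classpath (an assumption, not part of the original setup); its LogstashEncoder emits one JSON object per line, matching the json_lines codec:

```xml
<!-- logback-spring.xml: sketch assuming the logstash-logback-encoder dependency -->
<configuration>
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- the Logstash tcp input exposed on port 5044; replace the host with yours -->
    <destination>192.168.13.95:5044</destination>
    <!-- one JSON event per line, matching codec => json_lines -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>
```

With this in place, log events should appear in Elasticsearch under the springboot-logstash-test-* indices and can be browsed from Kibana.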