Prepare the images: Elastic maintains official images (https://www.docker.elastic.co/).
Image list:
docker pull docker.elastic.co/beats/filebeat:7.4.2
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.4.2
docker pull docker.elastic.co/logstash/logstash:7.4.2
docker pull docker.elastic.co/kibana/kibana:7.4.2
The official image names are long, so I re-tag them with shorter names:
docker tag docker.elastic.co/beats/filebeat:7.4.2 filebeat:7.4.2
docker tag docker.elastic.co/elasticsearch/elasticsearch:7.4.2 elasticsearch:7.4.2
docker tag docker.elastic.co/logstash/logstash:7.4.2 logstash:7.4.2
docker tag docker.elastic.co/kibana/kibana:7.4.2 kibana:7.4.2
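The four pull/re-tag pairs above can be scripted. A small sketch that only prints the commands (pipe its output to `sh` to actually run them):

```shell
# Print the pull/re-tag command for every component (dry run:
# pipe the output to `sh` to execute).
short_tag() {
  # "docker.elastic.co/beats/filebeat:7.4.2" -> "filebeat:7.4.2"
  printf '%s\n' "${1##*/}"
}

for img in \
    docker.elastic.co/beats/filebeat:7.4.2 \
    docker.elastic.co/elasticsearch/elasticsearch:7.4.2 \
    docker.elastic.co/logstash/logstash:7.4.2 \
    docker.elastic.co/kibana/kibana:7.4.2; do
  echo "docker pull $img"
  echo "docker tag $img $(short_tag "$img")"
done
```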
Install the Docker version of Elasticsearch
Official reference: https://www.elastic.co/guide/en/elasticsearch/reference/7.4/docker.html
The official docs note that vm.max_map_count must be set to 262144 in /etc/sysctl.conf:
[root@c721v198 ~]# grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
Apply it on the running system: sysctl -w vm.max_map_count=262144
To start a single-node instance, use:
docker run -p 9200:9200 -p 9300:9300 --name elasticsearch -e "discovery.type=single-node" -d elasticsearch:7.4.2
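Elasticsearch takes a little while to boot. A sketch that polls the health endpoint until the node answers (assumes port 9200 is mapped as above):

```shell
# Poll Elasticsearch until the cluster reports a usable status.
es_ready() {
  # green or yellow means the node is serving requests
  case "$1" in green|yellow) return 0 ;; *) return 1 ;; esac
}

wait_for_es() {
  for _ in $(seq 1 30); do
    status=$(curl -s 'http://127.0.0.1:9200/_cat/health?h=status')
    if es_ready "$status"; then
      echo "elasticsearch is up (status: $status)"
      return 0
    fi
    sleep 2
  done
  echo "timed out waiting for elasticsearch" >&2
  return 1
}

# usage: wait_for_es && curl -s http://127.0.0.1:9200
```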
Or start it via docker-compose.yml:
version: '2.2'
services:
  es01:
    image: elasticsearch:7.4.2
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  es02:
    image: elasticsearch:7.4.2
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
Note: this file uses the named docker volumes esdata01 and esdata02. They do not have to exist before deployment (Compose creates them), but they can also be managed manually:
docker volume create esdata01
docker volume create esdata02
List: docker volume ls
Inspect: docker volume inspect esdata02
Remove: docker volume rm esdata02
Start: docker-compose up
Stop: docker-compose down
To also remove the data volumes: docker-compose down -v
Verify:
[root@c721v198 ~]# curl http://127.0.0.1:9200/_cat/health
1573119694 09:41:34 docker-cluster green 1 1 0 0 0 0 0 0 - 100.0%
It can also be checked from a browser at http://127.0.0.1:9200/.
Install the Docker version of Kibana
Reference: https://www.elastic.co/guide/en/kibana/7.4/docker.html
docker run --link YOUR_ELASTICSEARCH_CONTAINER_NAME_OR_ID:elasticsearch -p 5601:5601 {docker-repo}:{version}
To connect Kibana to the Elasticsearch container deployed above, the key parameter is --link, e.g.:
docker run -d -p 5601:5601 --link elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibana:7.4.2
Verify: open http://localhost:5601 in a browser.
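To check Kibana from the command line instead of the browser, its /api/status endpoint can be probed (a sketch; assumes port 5601 is mapped as above):

```shell
# Probe Kibana's status endpoint; HTTP 200 means the UI is ready.
kibana_ok() {
  [ "$1" = "200" ]
}

check_kibana() {
  code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:5601/api/status)
  if kibana_ok "$code"; then
    echo "kibana is up"
  else
    echo "kibana not ready yet (HTTP $code)"
  fi
}

# usage: check_kibana
```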
Install Logstash
touch ~/elk/yaml/logstash.conf
vi ~/elk/yaml/logstash.conf
input {
  beats {
    # bind on all interfaces so filebeat can reach us over the docker link
    host => "0.0.0.0"
    port => "5043"
  }
}
filter {
  if [fields][doc_type] == 'order' {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{JAVALOGMESSAGE:msg}" }
    }
  }
  # The two grok blocks here are identical, only to illustrate handling
  # several different log formats; if the formats really are the same,
  # the second block can be omitted.
  if [fields][doc_type] == 'customer' {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{JAVALOGMESSAGE:msg}" }
    }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    # "elasticsearch" resolves via the --link in the docker run below
    hosts => [ "elasticsearch:9200" ]
    index => "%{[fields][doc_type]}-%{+YYYY.MM.dd}"
  }
}
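For reference, here is the shape of log line the grok pattern above expects: an ISO8601 timestamp, a level, then the rest of the message. A rough shell approximation of the timestamp part, for a quick sanity check (the sample line is illustrative):

```shell
# A line in the shape the grok pattern expects:
sample='2019-11-07 09:41:34,123 INFO com.example.OrderService - order created'

# Rough equivalent of %{TIMESTAMP_ISO8601} anchored at the start of the line:
iso8601='^[0-9]{4}-[0-9]{2}-[0-9]{2}[ T][0-9]{2}:[0-9]{2}:[0-9]{2}'

if echo "$sample" | grep -Eq "$iso8601"; then
  echo "timestamp matches"
fi
```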
docker run -d --name logstash --link elasticsearch -v ~/elk/yaml/logstash.conf:/usr/share/logstash/pipeline/logstash.conf logstash:7.4.2
Install Filebeat
touch ~/elk/yaml/filebeat.yml
vi ~/elk/yaml/filebeat.yml
filebeat.prospectors:
  - paths:
      # container-side paths: ~/elk/logs/ on the host is mounted at /home/logs/
      - /home/logs/order/*.log
    multiline:
      pattern: ^\d{4}
      negate: true
      match: after
    fields:
      doc_type: order
  - paths:
      - /home/logs/customer/*.log
    multiline:
      pattern: ^\d{4}
      negate: true
      match: after
    fields:
      doc_type: customer
output.logstash: # where to send the events
  hosts: ["logstash:5043"]
docker run --name filebeat -d --link logstash -v ~/elk/yaml/filebeat.yml:/usr/share/filebeat/filebeat.yml -v ~/elk/logs/:/home/logs/ filebeat:7.4.2
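Once everything is running, one way to smoke-test the pipeline is to append a line in the expected format to the watched directory and then look for the daily index in Elasticsearch. A sketch (paths and index name assume the configs above):

```shell
# Build one test line in the format the grok filter expects.
line="$(date '+%Y-%m-%d %H:%M:%S,000') INFO pipeline smoke test"
echo "test line: $line"

# Append it where filebeat is watching, then check for the index, e.g.:
#   echo "$line" >> ~/elk/logs/order/smoke.log
#   sleep 10
#   curl -s 'http://127.0.0.1:9200/_cat/indices/order-*?v'
```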
Troubleshooting:
Exiting: 1 error: setting 'filebeat.prospectors' has been removed
Fix: in filebeat.yml, replace filebeat.prospectors with filebeat.inputs.
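The rename can be applied with a one-line sed (keeps a .bak backup; the path assumes the layout used above):

```shell
CONF=~/elk/yaml/filebeat.yml

rename_prospectors() {
  # Replace the removed 7.x key with its new name, keeping a .bak backup.
  sed -i.bak 's/^filebeat\.prospectors:/filebeat.inputs:/' "$1"
}

[ -f "$CONF" ] && rename_prospectors "$CONF" || true
```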