Deploying ELK with Docker
ElasticSearch
Deployment and Installation
Pull the image
[root@localhost ~]# docker pull elasticsearch:7.14.0
7.14.0: Pulling from library/elasticsearch
7a0437f04f83: Pull complete
7718d2f58c47: Pull complete
cc5c16bd8bb9: Pull complete
e3d829b4b297: Pull complete
1ad944c92c79: Pull complete
373fb8fbaf74: Pull complete
5908d3eb2989: Pull complete
Digest: sha256:81c126e4eddbc5576285670cb3e23d7ef7892ee5e757d6d9ba870b6fe99f1219
Status: Downloaded newer image for elasticsearch:7.14.0
docker.io/library/elasticsearch:7.14.0
Create directories
[root@localhost service]# mkdir -p /home/gzga/docker/elasticsearch
[root@localhost elasticsearch]# mkdir -p {config,data,plugins}
Create a temporary container
[root@localhost config]# docker run --name elasticsearch -d -e ES_JAVA_OPTS="-Xms128m -Xmx512m" -e "discovery.type=single-node" -p 9200:9200 -p 9300:9300 elasticsearch:7.14.0
Copy the configuration file
[root@localhost config]# docker cp elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml /home/gzga/docker/elasticsearch/config
Remove the container
[root@localhost config]# docker stop elasticsearch
elasticsearch
[root@localhost config]# docker rm elasticsearch
elasticsearch
Start ES
[root@localhost config]# docker run -d \
--name elasticsearch \
-p 9200:9200 \
-p 9300:9300 \
-e "discovery.type=single-node" \
-e TZ=Asia/Shanghai \
-e ES_JAVA_OPTS="-Xms128m -Xmx1024m" \
-v /home/gzga/docker/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /home/gzga/docker/elasticsearch/data:/usr/share/elasticsearch/data \
-v /home/gzga/docker/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
elasticsearch:7.14.0
Notes
Be sure to grant write permissions on these directories, otherwise the container may fail to start and exit immediately.
chmod 777 config && chmod 777 data && chmod 777 plugins
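The directory creation and permission steps above can be combined into a single sketch. BASE here is a scratch directory for illustration only; the guide's actual path is /home/gzga/docker/elasticsearch.

```shell
# Sketch: create the bind-mount directories and open up permissions in one go.
# BASE is a scratch directory for illustration; the guide's real path is
# /home/gzga/docker/elasticsearch.
BASE=$(mktemp -d)
mkdir -p "$BASE/config" "$BASE/data" "$BASE/plugins"
# The process inside the container must be able to write to these paths;
# chmod 777 is the blunt fix described in the note above.
chmod 777 "$BASE/config" "$BASE/data" "$BASE/plugins"
stat -c %a "$BASE/data"   # 777
```

A narrower alternative is to chown the directories to the uid the container process runs as, but 777 matches what this guide does.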
Set up users
Enable X-Pack security
# Edit elasticsearch.yml and append the following at the end of the file
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
# To allow cross-origin authentication
# http.cors.allow-headers: Authorization
# Restart elasticsearch
[root@localhost config]# docker restart elasticsearch
Set usernames and passwords
[root@localhost config]# docker exec -it elasticsearch bash
[root@bf3f72293795 elasticsearch]# bin/elasticsearch-setup-passwords interactive
[root@localhost config]# docker restart elasticsearch
elasticsearch
[root@localhost config]# docker exec -it elasticsearch bash
# Set the passwords manually
[root@bf3f72293795 elasticsearch]# bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
# All passwords were set to a12345 here
# To generate passwords automatically instead
[root@bf3f72293795 elasticsearch]# bin/elasticsearch-setup-passwords auto
# To change a password later
curl -H "Content-Type:application/json" -XPOST -u elastic 'http://127.0.0.1:9200/_xpack/security/user/elastic/_password' -d '{ "password" : "xxxx" }'
Common errors
Startup errors
TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark
curl -X PUT "http://localhost:9200/esbizlog/_settings" -H "Content-Type: application/json" -u "username:password" -d '{"index.blocks.read_only_allow_delete": null}'
Kibana
Deployment and Installation
Pull the image
[root@localhost kibana]# docker run -d --name kibana --link elasticsearch:elasticsearch -p 5601:5601 kibana:7.14.0
Unable to find image 'kibana:7.14.0' locally
7.14.0: Pulling from library/kibana
7a0437f04f83: Already exists
d92a27ccb611: Pull complete
5c7ecfbf36a1: Pull complete
8d9081d817c4: Pull complete
a530cdbe89f1: Pull complete
ccffe653ddc0: Pull complete
af29ddcbaa8e: Pull complete
8e4704d6a270: Pull complete
458a467a0651: Pull complete
4fcc78271e5a: Pull complete
d86420aa7083: Pull complete
50e699604220: Pull complete
d53c69cf1db7: Pull complete
Digest: sha256:7188839aee88057c1f92aaff12d6ca4f54f5f89c1a07caedbc0247c4ec041392
Status: Downloaded newer image for kibana:7.14.0
7e7f7b92ec98906d4300b778b101ac9db162914e6f0a28bfe99de0081e74f667
Modify the configuration
# Copy the configuration file
[root@localhost kibana]# docker cp kibana:/usr/share/kibana/config/kibana.yml /home/gzga/docker/kibana/config
[root@localhost kibana]# cd /home/gzga/docker/kibana/config
[root@localhost config]# vi kibana.yml
# Change the value of elasticsearch.hosts
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.host: "0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://192.168.1.206:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
# Chinese localization
i18n.locale: "zh-CN"
Start the container
[root@localhost config]# docker stop kibana
kibana
[root@localhost config]# docker rm kibana
kibana
[root@localhost kibana]# docker run -d --name kibana --link elasticsearch:elasticsearch -p 5601:5601 -v /home/gzga/docker/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:7.14.0
Access Kibana
Set the Kibana password
# Edit the configuration file
[root@localhost kibana]# cd /home/gzga/docker/kibana/config
[root@localhost config]# vi kibana.yml
# Add the following two lines, then restart
elasticsearch.username: "kibana_system"
elasticsearch.password: "xxx-password"
# Restart
[root@localhost config]# docker restart kibana
Usage Notes
Usage
Using Discover
- Create an index pattern under Management
- Select the corresponding index pattern to run queries
Set the default index
curl -X POST -H "kbn-xsrf:true" -H "Content-Type: application/json" -d "{\"changes\":{\"defaultIndex\":\"nk-*\"}}" http://192.168.1.206:5601/api/kibana/settings
Logstash
Deployment and Installation
Pull the image
[root@localhost tensorflow]# docker pull logstash:7.14.0
7.14.0: Pulling from library/logstash
2d473b07cdd5: Pull complete
d3a7759e9ad2: Pull complete
42f451c261d7: Pull complete
fa33a67e94b1: Pull complete
34d1ee3f4428: Pull complete
983fac569dd3: Pull complete
61ad6cb97e4c: Pull complete
b9f4df95ea5b: Pull complete
5b2018eb0e9f: Pull complete
240a339160a2: Pull complete
b02655e33c0a: Pull complete
Digest: sha256:cda21243aa471c4bef46b89aebe6a51c6e2a2f6e96e16bd08f47e8035176eb07
Status: Downloaded newer image for logstash:7.14.0
docker.io/library/logstash:7.14.0
Copy the configuration
[root@localhost docker]# docker run -d -P --name logstash logstash:7.14.0
# Copy the data out of the container
[root@localhost docker]# docker cp logstash:/usr/share/logstash/config logstash/
[root@localhost docker]# docker cp logstash:/usr/share/logstash/data logstash/
[root@localhost docker]# docker cp logstash:/usr/share/logstash/pipeline logstash/
# Grant permissions on the directories
[root@localhost docker]# chmod -R 777 logstash/
Modify the configuration
- Edit the logstash.yml file under logstash/config; the main change is the ES address:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.1.145:9200" ]
- Edit the logstash.conf file under logstash/pipeline:
- File before the change
input {
  beats {
    port => 5044
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]     # ElasticSearch address and port
    index => "%{[serviceName]}-%{+YYYY.MM.dd}" # index name
    codec => "json"
  }
}
- File after the change
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5044
    codec => json_lines
  }
}
# Added to correct the eight-hour offset caused by the ELK stack using UTC
filter {
  ruby {
    code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)"
  }
  ruby {
    code => "event.set('@timestamp',event.get('timestamp'))"
  }
  mutate {
    remove_field => ["timestamp"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["http://192.168.1.206:9200"]     # ElasticSearch address and port
    index => "%{[serviceName]}-%{+YYYY.MM.dd}" # index name
    #codec => "json"
  }
}
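The index option expands per event: %{[serviceName]} is replaced by the event's serviceName field and %{+YYYY.MM.dd} by the event's @timestamp in Joda date format. A quick sketch of the resulting index name (the service name nk-user and the date are made up for illustration; GNU date stands in for Logstash's formatter):

```shell
# Hypothetical event values, purely for illustration
serviceName="nk-user"
event_date="2023-10-09"
# Logstash joins the serviceName field and the event date (Joda pattern
# YYYY.MM.dd); GNU date reproduces the same dotted Y.m.d formatting here
index_name="${serviceName}-$(date -u -d "$event_date" +%Y.%m.%d)"
echo "$index_name"   # nk-user-2023.10.09
```

One index per service per day keeps the Kibana index pattern (e.g. nk-*) simple and lets old days be deleted wholesale.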
Start the container with mounts
# Remove the previous container first
docker stop logstash
docker rm -f logstash
# Start the container with the mounts
docker run -d --name logstash \
--privileged=true \
-p 5044:5044 -p 9600:9600 \
-v $PWD/logstash/data/:/usr/share/logstash/data \
-v $PWD/logstash/config/:/usr/share/logstash/config \
-v $PWD/logstash/pipeline/:/usr/share/logstash/pipeline \
-e TZ=Asia/Shanghai \
logstash:7.14.0
Connecting to Elastic with a password
Modify the pipeline configuration file
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5044
    codec => json_lines
  }
}
# Added to correct the eight-hour offset caused by the ELK stack using UTC
filter {
  ruby {
    code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)"
  }
  ruby {
    code => "event.set('@timestamp',event.get('timestamp'))"
  }
  mutate {
    remove_field => ["timestamp"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["http://192.168.1.206:9200"]     # ElasticSearch address and port
    index => "%{[serviceName]}-%{+YYYY.MM.dd}" # index name
    #codec => "json"
    # Add the following two lines to authenticate against Elastic;
    # the account must have index write permission
    user => "logstash_system"
    password => "xxx-password"
  }
}
Modify the configuration file under config
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.1.145:9200" ]
# Add the following two lines
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "xxx-password"
Install the json_lines codec plugin
# Enter the logstash container (the working directory is /usr/share/logstash)
docker exec -it logstash /bin/bash
# Enter the bin directory
cd bin/
# Install the plugin
logstash-plugin install logstash-codec-json_lines
# Exit the container
exit
# Restart the logstash service
docker restart logstash
Important configuration parameters
Configuration files
Logstash's own configuration files are located in the config directory under the install path and include the following:
- logstash.yml: controls the startup and running of Logstash itself. Parameter values specified on the command line when starting Logstash override the same parameters in this file.
- pipelines.yml: framework and instruction configuration for running multiple pipelines in a single Logstash instance.
- jvm.options: JVM configuration.
- log4j2.properties: default configuration for the log4j2 library.
- startup.options: Logstash itself does not read this file, but when Logstash is installed from a Debian or RPM package, the $LS_HOME/bin/system-install program reads its settings to create the systemd (or upstart) startup script for Logstash. If you change this file, rerun system-install for the new configuration to take effect.
Startup parameters
# Test a configuration file
/usr/share/logstash/bin/logstash -t -f logstash-simple.conf
# Start with the given configuration file
./bin/logstash -f logstash-simple.conf
Multiple pipeline configuration
To run multiple pipelines in the same process, Logstash provides a configuration file named pipelines.yml. It must be placed in the path.settings directory (/etc/logstash, or the config directory under the install path) and follow this structure:
- pipeline.id: my-pipeline_1
  path.config: "/etc/path/to/p1.config"
  pipeline.workers: 3
- pipeline.id: my-other-pipeline
  path.config: "/etc/different/path/p2.cfg"
  queue.type: persisted
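Applied to this guide's container layout, a hypothetical pipelines.yml entry for the single TCP pipeline could look like the fragment below; the id and worker count are assumptions for illustration, not values from the guide.

```yaml
# Hypothetical entry for the TCP pipeline defined in logstash.conf above
- pipeline.id: tcp-ingest
  path.config: "/usr/share/logstash/pipeline/logstash.conf"
  pipeline.workers: 2
```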
Firewall settings
elastic
- Port 9200: used for all API calls over HTTP, including search, aggregations, monitoring, and any other HTTP request. All client libraries use this port to interact with ElasticSearch.
- Port 9300: a custom binary protocol used for communication between cluster nodes, for things such as cluster state changes, master election, nodes joining or leaving, and shard allocation.
# Open the elasticsearch ports; for external access, 9200 is usually enough
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=9300/tcp --permanent
# Open the kibana port
firewall-cmd --zone=public --add-port=5601/tcp --permanent
# Open the logstash ports
firewall-cmd --zone=public --add-port=5044/tcp --permanent
firewall-cmd --zone=public --add-port=9600/tcp --permanent
# Reload the firewall
firewall-cmd --reload
Common issues
Logstash timestamps are eight hours off
- Edit the logstash.conf configuration file
# Added to correct the eight-hour offset caused by the ELK stack using UTC
filter {
  ruby {
    code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)"
  }
  ruby {
    code => "event.set('@timestamp',event.get('timestamp'))"
  }
  mutate {
    remove_field => ["timestamp"]
  }
}
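The 8*60*60 in the filter corresponds to the fixed UTC+8 offset of Asia/Shanghai, which observes no daylight saving. A small sketch confirming the number (assumes GNU date and tzdata are available):

```shell
# Asia/Shanghai is a fixed +08:00 offset, which is where the
# 8*60*60 = 28800 seconds in the filter comes from
offset=$(TZ=Asia/Shanghai date -d @0 +%z)
echo "$offset"    # +0800
hours=$(echo "$offset" | cut -c2-3)
minutes=$(echo "$offset" | cut -c4-5)
# strip a leading zero so the shell does not parse "08" as octal
seconds=$(( ${hours#0} * 3600 + ${minutes#0} * 60 ))
echo "$seconds"   # 28800
```

Because the offset is fixed, hard-coding it in the filter is safe here; for zones with daylight saving, a constant shift like this would be wrong for part of the year.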
Kibana log times display ahead of the actual time
- The Kibana container's timezone may differ from the Elasticsearch container's timezone
# Check the container timezones
[root@localhost ~]# docker exec -it kibana bash
bash-4.4$ date
Mon Oct 9 14:55:59 UTC 2023
[root@localhost ~]# docker exec -it elasticsearch bash
[root@bf3f72293795 elasticsearch]# date
Mon Oct 9 22:48:52 CST 2023
# The timezones differ, so change the kibana timezone
[root@localhost ~]# docker cp /etc/localtime kibana:/etc/localtime
# If the command above fails, use the following instead
[root@localhost ~]# docker cp /usr/share/zoneinfo/Asia/Shanghai kibana:/etc/localtime
# Check the timezone again
[root@localhost ~]# docker exec -it kibana bash
bash-4.4$ date
Mon Oct 9 23:06:26 CST 2023
# Keep the logstash, kibana, and elasticsearch timezones consistent