I. Introduction to ELK
1. What is ELK
Most systems today run in distributed environments, with machines spread across many hosts. Inspecting logs the old way, by logging into each machine one at a time, is slow and inefficient, so a centralized log storage and analysis system is needed. Such a system should have the following characteristics:
- Collection: able to gather log data from multiple sources
- Transport: able to reliably ship log data to a central system
- Storage: a way to store the log data
- Analysis: support for UI-based analysis
- Alerting: error reporting and monitoring mechanisms
Splunk meets all of the above and does it very well, but it is commercial, paid software, which puts many people off. ELK filled the gap for an open-source centralized logging solution. Besides ELK there are many other open-source options, such as Facebook's Scribe, Apache Chukwa, LinkedIn's Kafka, and Treasure Data's Fluentd.
The most widely adopted in the industry, however, is still ELK. In China, Sina, Tencent, Huawei, Meituan, and Ele.me use it, as do companies such as IBM abroad. As for why you should use ELK, I think its adoption by these large companies speaks for itself.
2. The ELK Stack
To be clear, ELK is not a single piece of software but a solution stack. ELK stands for ElasticSearch, Logstash, and Kibana, three tools that are usually deployed together, often alongside Filebeat. The stack is shown in the figure below:

The relationship between these components is shown in the following flow diagram:

Data can be collected by Filebeat (or a similar shipper) or directly by Logstash; the data is sent to Logstash for filtering, then written into ElasticSearch, which indexes it; finally Kibana analyzes the data and presents it as charts.
(1) ElasticSearch
ElasticSearch is an enterprise-grade open-source search engine built on Lucene. It is written in Java and uses Lucene as its core for all indexing and search functionality, providing a distributed full-text search engine. Its main features:
- Real-time analytics
- Distributed document storage, with every field indexable
- Document-oriented: every object is a document
- High availability and easy scaling, with clustering, sharding, and replication
- A friendly RESTful API (see the curl sketch below)
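For example, you can index and search documents with nothing but curl against a running node. A minimal sketch; the index name "logs-demo" and the document body here are made up for illustration:

curl -X PUT 'localhost:9200/logs-demo/doc/1' \
  -H 'Content-Type: application/json' \
  -d '{"level": "ERROR", "message": "disk full"}'    # index a document, creating the index on the fly

curl 'localhost:9200/logs-demo/_search?q=message:disk&pretty'    # full-text search for it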
A typical cluster layout looks like this:

(2) Logstash
Logstash is a data-collection engine with real-time pipelining, written in Ruby. It can simultaneously ingest data from multiple sources, transform it, and send it to a storage engine such as ElasticSearch.

It consists of three roles (a minimal pipeline sketch follows the list):
- Shipper: sends the log data
- Broker: collects and buffers the data
- Indexer: writes the data to storage
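In a simple single-process deployment these roles map onto the input, filter, and output sections of a Logstash pipeline. A purely illustrative sketch that reads stdin, tags each event, and prints it back out:

input {
  stdin {}                          # shipper side: where events enter
}
filter {
  mutate { add_tag => ["demo"] }    # transformation stage
}
output {
  stdout { codec => rubydebug }     # indexer side: swap in elasticsearch {} in practice
}

Save it to a file, run /usr/share/logstash/bin/logstash -f that file, and type a line: the tagged event is printed back.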

(3) Kibana
Kibana, written in JavaScript, provides a web platform for analyzing and visualizing ElasticSearch data. It lets you search and interact with data in ElasticSearch indices and build charts across many dimensions, as shown below:

(4) Filebeat
Filebeat is the newest member of the ELK stack: a lightweight, open-source log file shipper developed from the Logstash-Forwarder codebase as its replacement.
Install Filebeat on each server whose logs you want to collect and point it at the log directories; Filebeat reads the data and sends it to Logstash for filtering and parsing, or ships it directly to a storage engine such as ElasticSearch for centralized storage and analysis.
The figure below shows Filebeat's workflow. When the filebeat service starts, it launches one or more prospectors that watch the log directories or files you specify. For each log file a prospector finds, filebeat starts a harvester; each harvester reads the new content of one log file and sends it to the spooler, which aggregates the events. Finally, filebeat ships the aggregated data to the destination you configured, such as Logstash or Elasticsearch.
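In configuration terms, each prospector is one entry under filebeat.prospectors, and every file matched by its paths glob gets its own harvester. A minimal sketch; the path and output port are examples only:

filebeat.prospectors:
- type: log
  paths:
    - /var/log/app/*.log        # one harvester per matched file
output.logstash:
  hosts: ["localhost:5044"]     # spooled events are shipped here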

II. Setting Up ELK
Note: all of the installation steps below were performed on CentOS 7.2 x86_64.
1. Install Java
yum -y install java-1.8.0-openjdk-devel.x86_64
The following output indicates a successful installation:
[root@VM_16_17_centos ~]# java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
2. Install ElasticSearch
(1) Download and install ElasticSearch
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.rpm
rpm -ivh elasticsearch-6.2.4.rpm
(2) Configure ElasticSearch
vim /etc/elasticsearch/elasticsearch.yml
Uncomment the following settings:
bootstrap.memory_lock: true
network.host: localhost
http.port: 9200
With this configuration, elasticsearch listens on localhost on the default port 9200.
vim /etc/sysconfig/elasticsearch
Uncomment the following setting:
MAX_LOCKED_MEMORY=unlimited
(3) Start ElasticSearch and enable it at boot
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
With the command below we can see that ElasticSearch is listening on port 9200:

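One way to check from the shell (ss works equally well; exact output varies by system):

netstat -lntp | grep 9200    # expect a java process LISTENing on 127.0.0.1:9200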
We can also verify the service with curl localhost:9200; if it is running correctly, it returns output like this:
{
  "name" : "Ibmm5BR",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_azcjoJxR3Guci8DhOMhdA",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
(4) Allow external access to ElasticSearch
With the configuration above, ElasticSearch cannot be reached from outside the machine, because it listens only on 127.0.0.1. To allow external access, change the bind address to 0.0.0.0.
vim /etc/elasticsearch/elasticsearch.yml
Change the setting to:
network.host: 0.0.0.0
Then restart elasticsearch:
systemctl restart elasticsearch
Now, however, nothing is listening on port 9200 anymore, and curl localhost:9200 fails with:
curl: (7) Failed connect to localhost:9200; Connection refused
What happened? Let's check the elasticsearch log file:
vim /var/log/elasticsearch/elasticsearch.log
[2019-01-19T23:17:02,110][WARN ][o.e.b.JNANatives ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2019-01-19T23:17:02,111][WARN ][o.e.b.JNANatives ] This can result in part of the JVM being swapped out.
[2019-01-19T23:17:02,111][WARN ][o.e.b.JNANatives ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2019-01-19T23:17:02,111][WARN ][o.e.b.JNANatives ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
[2019-01-19T23:17:02,111][WARN ][o.e.b.JNANatives ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2019-01-19T23:17:02,345][INFO ][o.e.n.Node ] [] initializing ...
......
[2019-01-19T23:17:08,384][ERROR][o.e.b.Bootstrap ] [Ibmm5BR] node validation exception
[1] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked
[2019-01-19T23:17:08,409][INFO ][o.e.n.Node ] [Ibmm5BR] stopping ...
[2019-01-19T23:17:08,501][INFO ][o.e.n.Node ] [Ibmm5BR] stopped
[2019-01-19T23:17:08,501][INFO ][o.e.n.Node ] [Ibmm5BR] closing ...
[2019-01-19T23:17:08,536][INFO ][o.e.n.Node ] [Ibmm5BR] closed
The log contains several WARN messages and an ERROR. Why does ElasticSearch run fine when bound to 127.0.0.1 but fail after switching to 0.0.0.0?
When ElasticSearch is configured to accept external connections, it treats the machine as a production node and strictly enforces its bootstrap checks, so what were mere warnings now become fatal errors.
Let's fix them one by one.
(5) Fixing the ElasticSearch startup failure
The error log above already tells us what to do:
......
[2019-01-19T23:17:02,111][WARN ][o.e.b.JNANatives ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
......
vim /etc/security/limits.conf
Add the following two lines:
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
We also need a regular user to run ElasticSearch; it refuses to start as root, failing with the error below.
[root@bogon ~]# /usr/share/elasticsearch/bin/elasticsearch
[2019-01-19T23:29:44,863][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.4.jar:6.2.4]
... 6 more
Create a dedicated elasticsearch user to run the service:
adduser elasticsearch    # create the elasticsearch user
passwd elasticsearch     # set its password
Switching to it with su elasticsearch then fails with:
This account is currently not available.
That is because the account has no login shell yet; assign one with usermod:
usermod -s /bin/bash elasticsearch
Now the switch succeeds, and starting ElasticSearch no longer complains about root, but two new errors appear:
......
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max number of threads [3798] for user [elasticsearch] is too low, increase to at least [4096]
......
The first error is due to Linux capping the number of files a process may open; the second caps the number of threads a user may create. Since we start the service as the elasticsearch user, raise both limits as root:
vim /etc/security/limits.conf
Add the following:
elasticsearch - nofile 65536

* soft nproc 4096
* hard nproc 4096
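After logging in again as the elasticsearch user, a quick sanity check confirms the new limits took effect:

su - elasticsearch -c 'ulimit -n; ulimit -u'
# should print 65536 (open files) and 4096 (max user processes)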
Start ElasticSearch again; output like the following means it came up successfully.
To run it in the background, just add the -d flag.
......
[2019-01-19T23:59:03,476][INFO ][o.e.n.Node ] initialized
[2019-01-19T23:59:03,477][INFO ][o.e.n.Node ] [Ibmm5BR] starting ...
[2019-01-19T23:59:04,715][INFO ][o.e.t.TransportService ] [Ibmm5BR] publish_address {192.168.88.128:9300}, bound_addresses {[::]:9300}
[2019-01-19T23:59:04,726][INFO ][o.e.b.BootstrapChecks ] [Ibmm5BR] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-01-19T23:59:07,881][INFO ][o.e.c.s.MasterService ] [Ibmm5BR] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {Ibmm5BR}{Ibmm5BRcQkWYM7ce6ovIyQ}{H00dVdtzQHSYE3VZF-NdGg}{192.168.88.128}{192.168.88.128:9300}
[2019-01-19T23:59:07,885][INFO ][o.e.c.s.ClusterApplierService] [Ibmm5BR] new_master {Ibmm5BR}{Ibmm5BRcQkWYM7ce6ovIyQ}{H00dVdtzQHSYE3VZF-NdGg}{192.168.88.128}{192.168.88.128:9300}, reason: apply cluster state (from master [master {Ibmm5BR}{Ibmm5BRcQkWYM7ce6ovIyQ}{H00dVdtzQHSYE3VZF-NdGg}{192.168.88.128}{192.168.88.128:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-01-19T23:59:07,998][INFO ][o.e.h.n.Netty4HttpServerTransport] [Ibmm5BR] publish_address {192.168.88.128:9200}, bound_addresses {[::]:9200}
[2019-01-19T23:59:07,998][INFO ][o.e.n.Node ] [Ibmm5BR] started
[2019-01-19T23:59:08,000][INFO ][o.e.g.GatewayService ] [Ibmm5BR] recovered [0] indices into cluster_state
Port 9200 is now bound on all addresses, so the ElasticSearch service is reachable from outside:
[root@bogon Desktop]# netstat -anp|grep 9200
tcp6 0 0 :::9200 :::* LISTEN 18037/java
tcp6 0 0 ::1:48854 ::1:9200 TIME_WAIT -
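We can also ask ElasticSearch itself whether the memory-lock fix took effect; the nodes API reports it under process.mlockall (response abridged, shown as a comment):

curl 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
# ... "process" : { "mlockall" : true } ...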
Note: binding to 0.0.0.0 is not recommended, as it creates a security risk. It is better to bind an internal IP and use iptables to restrict which external hosts and networks may connect.
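A sketch of that kind of iptables restriction, assuming only the internal 192.168.88.0/24 network should reach port 9200 (the addresses are examples):

iptables -A INPUT -p tcp --dport 9200 -s 192.168.88.0/24 -j ACCEPT   # allow the internal network
iptables -A INPUT -p tcp --dport 9200 -j DROP                        # drop everyone else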
3. Install Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-x86_64.rpm
rpm -ivh kibana-6.2.4-x86_64.rpm
Configure Kibana:
vim /etc/kibana/kibana.yml
Uncomment the following lines:
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
Note: to allow external access, change server.host to "0.0.0.0".
Start Kibana:
systemctl enable kibana
systemctl start kibana
Output like the following shows that Kibana is up:
[root@bogon Desktop]# curl localhost:5601
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';

var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}</script>
Now open http://localhost:5601 in a browser to reach the management UI:

4. Install Logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.rpm
rpm -ivh logstash-6.2.4.rpm
Start it:
systemctl restart logstash
systemctl enable logstash
It can also be started directly from the command line:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-filebeat-nginx.conf
Note: the Logstash pipeline files themselves are covered in the hands-on examples later on.
5. Install Filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-x86_64.rpm
rpm -ivh filebeat-6.2.4-x86_64.rpm
Reload systemd and enable the service at boot:
systemctl daemon-reload
systemctl enable filebeat
That completes the installation of the whole ELK + Filebeat suite. Next, let's walk through a real use case to get more familiar with it.
III. ELK and Filebeat in Practice
1. Collecting nginx logs
The plan here: Logstash collects the logs, ElasticSearch stores the data, and Kibana displays it. The nginx server's log files serve as the data source.

First, change the nginx configuration so that its access log is written as JSON (this is optional, but it makes later analysis much easier). In /etc/nginx/nginx.conf, change log_format to:
log_format access_json '{"@timestamp":"$time_iso8601",'
    '"host":"$server_addr",'
    '"clientip":"$remote_addr",'
    '"size":"$body_bytes_sent",'
    '"responsetime":"$request_time",'
    '"user_agent":"$http_user_agent",'
    '"request":"$request",'
    '"uri":"$uri",'
    '"domain":"$host",'
    '"xff":"$http_x_forwarded_for",'
    '"referer":"$http_referer",'
    '"status":"$status"}';
access_log /var/log/nginx/access.log access_json;
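After saving the change, validate the configuration and reload nginx so the new format takes effect:

nginx -t                    # check the configuration for syntax errors
systemctl reload nginx      # apply it without dropping connections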
Request the nginx welcome page again; the entries in /var/log/nginx/access.log are now JSON:
1. {"@timestamp":"2019-01-22T13:48:13-08:00","host":"::1","clientip":"::1","size":"0","responsetime":"0.000","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0","request":"GET /nginx-logo.png HTTP/1.1","uri":"/nginx-logo.png","domain":"localhost","xff":"-","referer":"http://localhost/","status":"304"}
2. {"@timestamp":"2019-01-22T13:48:13-08:00","host":"::1","clientip":"::1","size":"0","responsetime":"0.000","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0","request":"GET /poweredby.png HTTP/1.1","uri":"/poweredby.png","domain":"localhost","xff":"-","referer":"http://localhost/","status":"304"}
Configure Logstash to collect the nginx access log:
vim /etc/logstash/conf.d/nginx.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "end"
    codec => "json"
    type => "nginx-accesslog"
  }
}

filter {}

output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }
}
Verify that the configuration file is valid:
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
Output like this means the file is correct:
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
To watch the collected events live in the terminal, add stdout {} to the output section of /etc/logstash/conf.d/nginx.conf:
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "end"
    codec => "json"
    type => "nginx-accesslog"
  }
}

filter {}

output {
  stdout {}
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }
}
Then start it with:
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf
Hit the nginx server again; JSON output like the following means everything is working.
[root@bogon bin]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
    "host" => "::1",
    "size" => "0",
    "responsetime" => "0.000",
    "@timestamp" => 2019-01-22T21:54:56.000Z,
    "referer" => "-",
    "uri" => "/index.html",
    "domain" => "localhost",
    "@version" => "1",
    "xff" => "-",
    "clientip" => "::1",
    "user_agent" => "Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0",
    "path" => "/var/log/nginx/access.log",
    "status" => "304",
    "request" => "GET / HTTP/1.1",
    "type" => "nginx-accesslog"
}
{
    "host" => "::1",
    "size" => "0",
    "responsetime" => "0.000",
    "@timestamp" => 2019-01-22T21:54:57.000Z,
    "referer" => "http://localhost/",
    "uri" => "/nginx-logo.png",
    "domain" => "localhost",
    "@version" => "1",
    "xff" => "-",
    "clientip" => "::1",
    "user_agent" => "Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0",
    "path" => "/var/log/nginx/access.log",
    "status" => "304",
    "request" => "GET /nginx-logo.png HTTP/1.1",
    "type" => "nginx-accesslog"
}
Alternatively, just restart it with systemctl restart logstash.
Next, query http://localhost:9200/_cat/indices?v and confirm that the index was created:
[root@bogon Desktop]# curl http://localhost:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open nginx-accesslog-2019.01.22 nXAZ4gHZT-uB4USTDRB_YA 5 1 9 0 71.9kb 71.9kb
[root@bogon Desktop]#
The last step is to configure an index pattern in Kibana and browse the logs.
Open localhost:5601:

That's a simple log-collection pipeline up and running. To collect several log sources in Logstash, for example nginx's error log as well, extend the configuration as follows and repeat the steps above.
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "end"
    codec => "json"
    type => "nginx-accesslog"
  }

  file {
    path => "/var/log/nginx/error.log"
    start_position => "end"
    codec => "json"
    type => "nginx-errorlog"
  }
}

filter {}

output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-errorlog-%{+YYYY.MM.dd}"
    }
  }
}
The type field is what distinguishes the two kinds of log files.
2. Filebeat collection, Logstash filtering, ES storage, Kibana display
First stop Logstash and Filebeat:
systemctl stop logstash
systemctl stop filebeat
Delete all of the index data generated above:
curl -XDELETE http://localhost:9200/_all
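Deleting _all wipes every index in the cluster; on anything but a throwaway test machine you would target just the test indices, e.g. with a wildcard (illustrative):

curl -XDELETE 'http://localhost:9200/nginx-*'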
Edit the Filebeat configuration (commenting out output.elasticsearch):
vim /etc/filebeat/filebeat.yml
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    service: filebeat-nginx-accesslog
  scan_frequency: 10s

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
    service: filebeat-nginx-errorlog
  scan_frequency: 10s

output.logstash:
  hosts: ["localhost:10515"]
The fields/service values are what we use in Logstash to tell the two log types apart (see the Logstash configuration below).
Configure the Logstash pipeline:
vim /etc/logstash/conf.d/logstash-filebeat-nginx.conf
input {
  beats {
    port => 10515
    client_inactivity_timeout => "1200"
  }
}

filter {}

output {
  if [fields][service] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }

  if [fields][service] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-errorlog-%{+YYYY.MM.dd}"
    }
  }
}
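As with the earlier pipeline, it is worth validating the file before starting the service:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/logstash-filebeat-nginx.conf --config.test_and_exit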
With that, filebeat and logstash are both configured.
Restart them:
systemctl restart logstash
systemctl restart filebeat
Of course, to watch the output of logstash and filebeat directly (the Logstash config must include stdout {} in its output), you can also start them manually from the command line:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-filebeat-nginx.conf
/usr/bin/filebeat -e -c /etc/filebeat/filebeat.yml
Now let's check whether Filebeat is shipping correctly (note: request the nginx server several times first so that logs are generated):
[root@bogon Desktop]# curl http://localhost:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open nginx-accesslog-2019.01.26 qak_Mi16RAGr1vo4ZVyW9g 5 1 9 0 60.7kb 60.7kb
Once the index shows up, configure Kibana as described earlier to browse the logs.
3. Filebeat collection, ES storage, Kibana display
In the previous setup, Logstash was there mainly for filtering, and Logstash itself is fairly resource-hungry. If you only need to collect logs without any filtering, you can drop Logstash entirely: have Filebeat collect the logs and send them straight to ElasticSearch, with Kibana for display. Filebeat's own footprint is small, which is why it is the recommended dedicated log collector.
This setup only needs the filebeat configuration file:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    service: filebeat-nginx-accesslog
  scan_frequency: 10s

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
    service: filebeat-nginx-errorlog
  scan_frequency: 10s

setup.template.name: "index-%{[beat.version]}"
setup.template.pattern: "index-%{[beat.version]}"

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "index-%{[beat.version]}-%{[fields.service]:other}-%{+yyyy.MM.dd}"
Here fields.service is what separates access.log events from error.log events.
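Once events are flowing, you can spot-check the routing with a quick query (illustrative; adjust the index pattern if yours differs):

curl 'http://localhost:9200/index-*/_search?q=fields.service:filebeat-nginx-accesslog&size=1&pretty'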
As before, clear out the previously generated indices first:
curl -XDELETE http://localhost:9200/_all
Stop Logstash, restart the filebeat service, and again hit nginx a few times to generate logs.
With curl we can see the index being created (note: it can take a little while to appear, not immediately):
[root@bogon Desktop]# curl http://localhost:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open index-6.5.4-other-2019.01.25 wKM4QEWaRN63bJz-VjZrJQ 5 1 6 0 67.4kb 67.4kb
And that completes the ELK + Filebeat setup. This article is only an introduction, covering the basic features and environment setup; ELK has many more advanced capabilities, enough to fill a book, and interested readers should dig into the official documentation.