Building a filebeat + redis + logstash + elasticsearch + kibana log monitoring stack


EFK log collection
See the official documentation: https://www.elastic.co/guide/en/beats/filebeat/current/index.html

Elasticsearch: database, stores the data 		//requires a Java environment
	logstash: log collection, data filtering		//requires a Java environment
	kibana:	analysis, filtering, visualization		//requires a Java environment
	filebeat: collects logs, ships them to ES or logstash   //written in Go
	nginx            //test site that generates access logs

The overall architecture can be understood like this:
filebeat collects the logs first and pushes them into the redis buffer; logstash then pulls the events back out of redis (so redis never keeps data permanently) and hands them to elasticsearch, where refreshing shows the generated indices; finally they can be added to kibana for more detailed inspection.
############################################
First set up the EFK environment (elasticsearch + filebeat + kibana).
All of these services can be installed on one host:
192.168.131.210 (4 GB RAM)

Install elasticsearch

Prerequisite: a Java environment (jdk-1.8.0)
1. Install via rpm

[root@elfk redis]# rpm -ivh elasticsearch-6.6.0.rpm

2. Edit the configuration file

[root@elfk redis]# vim /etc/elasticsearch/elasticsearch.yml
node.name: node-1
#node name
path.data: /data/elasticsearch
#data directory
path.logs: /var/log/elasticsearch
#service log directory
network.host: 192.168.131.210,127.0.0.1
#IP addresses the service binds to
http.port: 9200
#port number

3. Create the data directory and fix its ownership (it must match the data path in the main configuration file)

[root@elfk src]# mkdir -p /data/elasticsearch
[root@elfk src]# chown -R elasticsearch.elasticsearch /data/elasticsearch/

4. Memory locking has not been configured, so just start the service directly

[root@elfk src]# systemctl daemon-reload 
[root@elfk src]# systemctl restart elasticsearch.service 

Install kibana

1. Install kibana via rpm

[root@elfk ~]# rpm -ivh kibana-6.6.0-x86_64.rpm

2. Edit the configuration file

[root@elfk ~]# vim /etc/kibana/kibana.yml
server.port: 5601   #port number
server.host: "192.168.131.210"      #IP address of the server kibana runs on
server.name: "db01" 	#hostname of this machine
elasticsearch.hosts: ["http://192.168.131.210:9200"]   #address of the es server to pull log data from

3. Start kibana

[root@elfk ~]# systemctl start kibana

Install filebeat

1. Install filebeat via rpm

[root@elfk ~]# rpm -ivh filebeat-6.6.0-x86_64.rpm

2. Edit the configuration file

[root@elfk ~]# vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
output.elasticsearch:
  hosts: ["192.168.131.210:9200"]

3. Start filebeat

[root@elfk ~]# systemctl start filebeat

Install nginx

1. Configure the yum repository, then install nginx and httpd-tools (which provides the ab load-testing tool)

[root@elfk ~]# yum -y install epel-release
[root@elfk ~]# yum -y install nginx httpd-tools

2. Start nginx

[root@elfk ~]# systemctl start nginx

3. Run a load test with the ab tool

[root@elfk ~]# ab -n 10000 -c 20 http://192.168.131.210/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.131.210 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/1.16.1
Server Hostname:        192.168.131.210
Server Port:            80

Document Path:          /
Document Length:        4833 bytes

Concurrency Level:      20
Time taken for tests:   0.781 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      50680000 bytes
HTML transferred:       48330000 bytes
Requests per second:    12798.13 [#/sec] (mean)
Time per request:       1.563 [ms] (mean)
Time per request:       0.078 [ms] (mean, across all concurrent requests)
Transfer rate:          63340.76 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0       4
Processing:     0    1   1.5      1      34
Waiting:        0    1   1.5      1      31
Total:          0    2   1.6      1      34

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      2
  75%      2
  80%      2
  90%      2
  95%      2
  98%      4
  99%     11
 100%     34 (longest request)
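As a sanity check on the ab summary above, the mean throughput and per-request latency follow directly from the totals (10000 requests, 0.781 s, concurrency 20). A quick recomputation (assumes python3 is installed; ab itself uses a more precise elapsed time internally, so the last digits differ slightly):

```shell
# Recompute ab's summary figures from the raw totals reported above.
throughput=$(python3 -c "print(f'{10000 / 0.781:.2f}')")
per_request=$(python3 -c "print(f'{0.781 / 10000 * 20 * 1000:.3f}')")
echo "$throughput req/s (ab reports 12798.13)"
echo "$per_request ms per request, mean (ab reports 1.563)"
```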

4. Refresh the es browser page and the new data will appear (see the earlier article for how to browse es).
5. Add an index pattern in kibana for a more intuitive, graphical view:
select Management → Create index pattern, then Discover (top-right corner).
The detailed log entries can now be inspected.
6. Change the nginx log format, i.e. the format of each line written to the log file

[root@elfk ~]# vim /etc/nginx/nginx.conf
#add inside the http{} block
log_format log_json '{ "@timestamp": "$time_local", '
'"remote_addr": "$remote_addr", '
'"referer": "$http_referer", '
'"request": "$request", '
'"status": $status, '
'"bytes": $body_bytes_sent, '
'"agent": "$http_user_agent", '
'"x_forwarded": "$http_x_forwarded_for", '
'"up_addr": "$upstream_addr",'
'"up_host": "$upstream_http_host",'
'"up_resp_time": "$upstream_response_time",'
'"request_time": "$request_time"'
' }';
    access_log  /var/log/nginx/access.log  log_json;
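To confirm that the log_format above really emits valid JSON, a sample line can be piped through a JSON parser. The line below is a hypothetical example of what nginx would write with this format; python3's stdlib json.tool serves as the validator. Note that nginx does not JSON-escape variable values, so a double quote in, say, the user agent can still produce a broken line.

```shell
# Hypothetical sample line in the log_json format defined above.
line='{ "@timestamp": "01/Jan/2024:00:00:00 +0800", "remote_addr": "192.168.131.1", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "request_time": 0.001 }'
# json.tool exits non-zero on invalid JSON, so this only prints on success.
echo "$line" | python3 -m json.tool > /dev/null && echo "valid JSON"
```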

7. Update the filebeat configuration to parse the JSON logs

[root@elfk ~]# vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true

output.elasticsearch:
  hosts: ["192.168.131.210:9200"]
  index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
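For reference, the index pattern above is expanded by filebeat itself: %{[beat.version]} becomes the filebeat version (6.6.0 here) and %{+yyyy.MM} comes from the event timestamp. A rough sketch of that expansion using date(1), for illustration only:

```shell
# Mimic filebeat's index-name expansion; the real substitution happens
# inside filebeat, this just shows what the resulting name looks like.
beat_version="6.6.0"
index="nginx-${beat_version}-$(date +%Y.%m)"
echo "$index"
```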

8. After these changes, first empty nginx's access log (/var/log/nginx/access.log),
then restart filebeat and nginx, run the load test again, and finally refresh the es page to check.
The index name now follows the format configured in the filebeat file.
9. Separate access.log and error.log. In the indices generated above, both access and error entries appear under the single nginx-... index.

[root@elfk ~]# vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
  
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

output.elasticsearch:
  hosts: ["192.168.131.210:9200"]
  #index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
  indices:
    - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true

After the changes, restart both services, run the load test again, and check the es page.

The access logs and error logs are now successfully separated.

Install redis

1. Create the installation directories

[root@elfk ~]# mkdir -p /opt/redis_cluster/redis_6379/{conf,logs,pid}

2. Extract the redis tarball into redis_cluster

[root@elfk src]# tar xf redis-5.0.7.tar.gz -C /opt/redis_cluster/
[root@elfk src]# cd /opt/redis_cluster/
[root@elfk redis_cluster]# ls
redis-5.0.7  redis_6379
[root@elfk redis_cluster]# 

3. Symlink the extracted directory, then enter it and compile and install

[root@elfk redis_cluster]# ln -s /opt/redis_cluster/redis-5.0.7 /opt/redis_cluster/redis
[root@elfk redis_cluster]# cd /opt/redis_cluster/redis
[root@elfk redis]# make && make install

4. Write the configuration file

[root@elfk redis]# vim /opt/redis_cluster/redis_6379/conf/6379.conf

Add:

bind 127.0.0.1 192.168.131.210
port 6379
daemonize yes
pidfile /opt/redis_cluster/redis_6379/pid/redis_6379.pid
logfile /opt/redis_cluster/redis_6379/logs/redis_6379.log
databases 16
dbfilename redis.rdb
dir /opt/redis_cluster/redis_6379

5. Start the redis service

[root@elfk redis]# redis-server /opt/redis_cluster/redis_6379/conf/6379.conf

6. Update the filebeat configuration (reference: https://www.elastic.co/guide/en/beats/filebeat/6.6/index.html)
Point the filebeat output at redis.

[root@elfk redis]# vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.redis:
  hosts: ["192.168.131.210"]
  key: "filebeat"
  db: 0
  timeout: 5

7. Log in to redis and inspect the key:
redis-cli #connect
keys * #list all keys
type filebeat #filebeat is the key name
LLEN filebeat #length of the list
LRANGE filebeat 0 -1 #show the entire list

[root@elfk redis]# redis-cli 
127.0.0.1:6379> LLEN filebeat
(integer) 90000
127.0.0.1:6379> keys *
1) "filebeat"
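Redis is acting only as a FIFO buffer here: filebeat RPUSHes each event onto the "filebeat" list and logstash later pops events off, which is why the list eventually drains to zero. A toy shell sketch of that list behaviour, with a temp file standing in for the key (illustration only, not real redis):

```shell
buf=$(mktemp)                                        # stands in for the "filebeat" list
echo '{"tags":["access"],"status":200}' >> "$buf"    # like: RPUSH filebeat <event>
echo '{"tags":["error"]}'               >> "$buf"    # like: RPUSH filebeat <event>
llen=$(wc -l < "$buf" | tr -d ' ')                   # like: LLEN filebeat
echo "LLEN filebeat -> $llen"
head -n 1 "$buf"                                     # like: LRANGE filebeat 0 0
rm -f "$buf"
```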

Install logstash

1. Install via rpm

[root@elfk redis]# rpm -ivh logstash-6.6.0.rpm

2. Configure logstash: edit the pipeline file so that the access and error logs stay separate

[root@elfk redis]# vim /etc/logstash/conf.d/redis.conf
input {
  redis {
    host => "192.168.131.210"
    port => "6379"
    db => "0"
    key => "filebeat"
    data_type => "list"
  }
}

filter {
  mutate {
    convert => ["up_resp_time","float"]
    convert => ["request_time","float"]
  }
}

output {
  stdout {}
   if "access" in [tags] {
    elasticsearch {
      hosts => ["http://192.168.131.210:9200"]
      index => "nginx_access-%{+YYYY.MM.dd}"
      manage_template => false
    }
   }
   if "error" in [tags] {
    elasticsearch {
      hosts => ["http://192.168.131.210:9200"]
      index => "nginx_error-%{+YYYY.MM.dd}"
      manage_template => false
    }
   }
}
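The routing logic in the output block is straightforward: every event carries the tags set in filebeat, and each elasticsearch block fires only when its tag matches. A minimal sketch of that branching, simulated with a shell function (illustration only; logstash evaluates its own conditionals internally):

```shell
# Pick a target index from an event's tag, mirroring the two if-blocks above.
route() {
  case "$1" in
    *access*) echo "nginx_access-$(date +%Y.%m.%d)" ;;
    *error*)  echo "nginx_error-$(date +%Y.%m.%d)" ;;
    *)        echo "no matching output" ;;
  esac
}
route '["access"]'
route '["error"]'
```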

3. Start logstash

[root@elfk redis]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf

Test: once logstash is running, the length of the list in redis keeps dropping until it reaches 0, and new indices appear on the es page; that means the pipeline works.
