ELK Log Collection System (Part 2)

11: filebeat modules

Note: logs collected through filebeat modules can only be sent to Elasticsearch; other outputs such as Redis are not supported.

[root@node01 ~]# rpm -qc filebeat
/etc/filebeat/filebeat.yml
/etc/filebeat/modules.d/apache2.yml.disabled
.................................

Module-related settings in filebeat:

# Without the following settings in the config file, running filebeat modules list will throw an error
[root@node01 filebeat]# cat filebeat.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
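
With these settings in place, the module configs will load. As a quick sanity check before restarting, filebeat ships a test subcommand (available in 6.x) that validates the configuration:

[root@node01 filebeat]# filebeat test config
Config OK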

List enabled and disabled modules:

[root@node01 filebeat]# filebeat modules list
Enabled:

Disabled:
apache2
...........

Enable the nginx module:

filebeat modules enable nginx		# all this does is strip the .disabled suffix from the module file name
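
You can confirm the rename on disk:

[root@node01 ~]# ls /etc/filebeat/modules.d/ | grep nginx
nginx.yml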

Earlier we switched the nginx log to JSON format; now switch it back to the default main format:

[root@node01 ~]# grep access_log /etc/nginx/nginx.conf
    access_log  /var/log/nginx/access.log  main;
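
For reference, the main format as defined in the stock nginx.conf looks like this (your build's definition may differ slightly):

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';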

Then restart nginx, send it a few requests to generate fresh log entries, and confirm the log format.
Edit the nginx module configuration file:

[root@node01 ~]# grep -Ev '#|^$' /etc/filebeat/modules.d/nginx.yml
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log"]

The filebeat configuration file:

[root@node01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
setup.kibana:
  host: "172.17.2.239:5601"
output.elasticsearch:
  hosts: ["172.17.2.239:9200"]
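
Before going further, filebeat can verify connectivity to the configured output, and optionally load the nginx module's bundled sample dashboards into Kibana (both subcommands ship with filebeat 6.x):

[root@node01 ~]# filebeat test output        # checks that the Elasticsearch hosts are reachable
[root@node01 ~]# filebeat setup --dashboards # loads the sample dashboards into Kibana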

The nginx module also requires two Elasticsearch plugins:

# Online installation
[root@node01 ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent
[root@node01 ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
# Installing the ingest-geoip plugin prompts a warning; just type y and press Enter
# Offline download and installation
[root@node01 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch-plugins/ingest-user-agent/ingest-user-agent-6.6.0.zip
[root@node01 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch-plugins/ingest-geoip/ingest-geoip-6.6.0.zip
[root@node01 ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin install file:///root/ingest-geoip-6.6.0.zip 
[root@node01 ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin install file:///root/ingest-user-agent-6.6.0.zip 
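
Verify that both plugins are installed:

[root@node01 ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin list
ingest-geoip
ingest-user-agent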

After the plugins are installed, restart Elasticsearch so they take effect, then restart filebeat.
With the two plugins in place, the overall ELK log-collection flow looks like the figure below:
[Figure: overall ELK log-collection flow]

A filebeat configuration that routes the nginx access and error logs into separate indices:

[root@node01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
setup.kibana:
  host: "172.17.2.239:5601"
output.elasticsearch:
  hosts: ["172.17.2.239:9200"]
  indices:
    - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        fileset.name: "access"
    - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        fileset.name: "error"

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
Restart both services:

systemctl restart elasticsearch
systemctl restart filebeat

After the restart, the nginx indices appear in the es-head page, as shown:
[Figure: nginx indices in es-head]
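
The same check works from the command line (assuming ES is listening on 172.17.2.239:9200 as configured above):

[root@node01 ~]# curl -s 'http://172.17.2.239:9200/_cat/indices/nginx-*?v'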
When creating the Kibana index pattern for the nginx error log, note the selection below:
[Figure: creating the index pattern in Kibana]

12: Charting filebeat data in Kibana

The filebeat configuration file:

[root@node01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
setup.kibana:
  host: "172.17.2.239:5601"
output.elasticsearch:
  hosts: ["172.17.2.239:9200"]
  indices:
    - index: "nginx_access-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        tags: "access"
    - index: "nginx_error-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        tags: "error"
setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true

Switch the nginx log back to JSON format, then generate some access-log entries:

[root@node01 filebeat]# for i in {0..10};do curl -I http://172.17.2.239/login.html; done
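
The exact JSON log_format from the previous part isn't reproduced here; a minimal sketch of the idea, with illustrative field names, would look like this in nginx.conf:

log_format json '{"@timestamp":"$time_iso8601","clientip":"$remote_addr",'
                '"status":$status,"bytes":$body_bytes_sent,'
                '"request":"$request","request_time":$request_time,'
                '"http_user_agent":"$http_user_agent"}';
access_log  /var/log/nginx/access.log  json;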

The chart-building steps are shown in the screenshots below:
[Figures: creating the visualization in Kibana, step by step]
The IP addresses display vertically; adjust them to display horizontally:
[Figure: axis labels rotated horizontal]
Save the chart:
[Figure: saving the visualization]
Draw a pie chart:
[Figure: pie chart]
Draw a data table:
[Figure: data table]

13: Using Redis as a buffer for log collection

13.1 Start a Redis container

[Figure: architecture with Redis buffering between filebeat and Logstash]
The benefit: even if Logstash and Elasticsearch both go down, the data is preserved in Redis, and whenever Logstash comes back up it resumes reading from Redis.
Note: filebeat can only ship data to a single Redis node; Redis Cluster is not supported.

# Start a Redis container; this host's IP is 172.17.2.117
docker run -d --name redis \
-h redis \
-p 10009:6379 \
-v /data/redis_docker/data:/data \
redis:latest --appendonly yes --port 6379 --requirepass "test"
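
A quick connectivity check against the container (redis-cli must be installed on the client host; the password is the one set above):

[root@node01 ~]# redis-cli -h 172.17.2.117 -p 10009 -a test ping
PONG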

13.2 filebeat configuration

The filebeat configuration file:

[root@node01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.kibana:
  host: "172.17.2.239:5601"

output.redis:
  hosts: ["172.17.2.117:10009"]
  password: "test"
  key: "filebeat"
  db: 0
  timeout: 5
  keys:
    - key: "nginx_access"
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"

Restart filebeat and generate some nginx access traffic. If everything is working, records will show up in Redis:
[Figure: events buffered in Redis]
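
You can also inspect the two lists directly; filebeat's redis output pushes each event onto a Redis list, so LLEN shows how many events are buffered and LRANGE 0 0 peeks at the first one (a JSON document):

[root@node01 ~]# redis-cli -h 172.17.2.117 -p 10009 -a test
172.17.2.117:10009> LLEN nginx_access
172.17.2.117:10009> LRANGE nginx_access 0 0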

13.3 Logstash installation and configuration

yum localinstall -y logstash-6.6.0.rpm

The Logstash configuration file:

[root@node01 tools]# cat /etc/logstash/conf.d/redis.conf
input {
  redis {
    host => "172.17.2.117"
    port => "10009"
    password => "test"
    db => "0"
    key => "nginx_access"
    data_type => "list"
  }
  redis {
    host => "172.17.2.117"
    port => "10009"
    password => "test"
    db => "0"
    key => "nginx_error"
    data_type => "list"
  }
}

filter {
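  # these numeric fields come from the nginx JSON access log;
  # cast them to float so Elasticsearch/Kibana can aggregate on them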
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
    stdout {}
    if "access" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM.dd}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM.dd}"
      }
    }
}

Start Logstash in the foreground:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf
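
To validate the pipeline syntax without actually starting it, add --config.test_and_exit:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf --config.test_and_exit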

13.4 Optimizing the configuration files

The filebeat configuration above writes the collected logs into two separate Redis keys, and Logstash reads each key on its own.
[Figure: two-key layout]
Alternatively, filebeat can write all logs into a single key:

[root@node01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.kibana:
  host: "172.17.2.239:5601"

output.redis:
  hosts: ["172.17.2.117:10009"]
  password: "test"
  key: "nginx"
  db: 0
  timeout: 5

Logstash then tells the different log types in the single key apart by their tags:

[root@node01 ~]# cat /etc/logstash/conf.d/redis.conf
input {
  redis {
    host => "172.17.2.117"
    port => "10009"
    password => "test"
    db => "0"
    key => "nginx"
    data_type => "list"
  }
}

filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
    stdout {}
    if "access" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM.dd}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM.dd}"
      }
    }
}
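
Once Logstash keeps up with the incoming stream, the single nginx list in Redis should hover near zero; watching its length is a quick health check:

[root@node01 ~]# redis-cli -h 172.17.2.117 -p 10009 -a test llen nginx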

14: Kibana x-pack monitoring

[Figure: the x-pack Monitoring page in Kibana]
