Installing the Filebeat Component in ELK - 04

Filebeat is a lightweight log shipper.


My ELK architecture here consists of two Elasticsearch nodes plus Logstash, Kibana, and Filebeat.

I have already written about installing and using the three components Elasticsearch, Logstash, and Kibana, so this post only covers deploying and using the Filebeat component.

Client-side operations

Apt

#wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
#sudo apt-get install apt-transport-https
#echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
#sudo apt-get update && sudo apt-get install filebeat
#/etc/init.d/filebeat start
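
If you want to confirm that the package installed and the service actually started, a quick check like the following should work (assuming the standard Debian/Ubuntu tools on the client):

#dpkg -l filebeat
#service filebeat status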

yum

#sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
#vi /etc/yum.repos.d/filebeat.repo
   [elastic-5.x]
   name=Elastic repository for 5.x packages
   baseurl=https://artifacts.elastic.co/packages/5.x/yum
   gpgcheck=1
   gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
   enabled=1
   autorefresh=1
   type=rpm-md
#sudo yum install filebeat
#sudo chkconfig --add filebeat
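
The chkconfig command above only registers Filebeat for startup; to start it right away on a SysV-init host (matching the chkconfig usage above), something like this should do:

#sudo /etc/init.d/filebeat start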

deb

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.3-amd64.deb
sudo dpkg -i filebeat-5.6.3-amd64.deb

rpm

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.3-x86_64.rpm
sudo rpm -vi filebeat-5.6.3-x86_64.rpm
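
If the RPM host runs systemd rather than SysV init, the service can be enabled and started like this (a sketch; use whichever init commands match your distribution):

sudo systemctl enable filebeat
sudo systemctl start filebeat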

Configuration file

# vi /etc/filebeat/filebeat.yml 

filebeat.prospectors:
- input_type: log
# Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*/*.log    # paths of the log files to collect; here all logs under /var/log are collected
    #- /var/log/keystone/*.log
    #- c:\programdata\elasticsearch\logs\*


#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:  # direct output to Elasticsearch (disabled here)
#  hosts: ["192.168.96.208:9200"]


#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.96.209:5044"]  # output to Logstash

Note: since Logstash is installed on the server side, I chose the Logstash output here. Filebeat only allows one output to be enabled at a time, which is why the Elasticsearch output above is commented out.

After editing the configuration file, you can test whether it is correct:

root@controller:/usr/bin# ./filebeat.sh -configtest -e
2017/10/13 05:28:47.807772 beat.go:297: INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017/10/13 05:28:47.807831 beat.go:192: INFO Setup Beat: filebeat; Version: 5.6.3
2017/10/13 05:28:47.807970 logstash.go:90: INFO Max Retries set to: 3
2017/10/13 05:28:47.807970 metrics.go:23: INFO Metrics logging every 30s
2017/10/13 05:28:47.808113 outputs.go:108: INFO Activated logstash as output plugin.
2017/10/13 05:28:47.808262 publish.go:300: INFO Publisher name: controller
2017/10/13 05:28:47.809109 async.go:63: INFO Flush Interval set to: 1s
2017/10/13 05:28:47.809133 async.go:64: INFO Max Bulk Size set to: 2048
Config OK
root@controller:/usr/bin# pwd
/usr/bin
root@controller:/usr/bin# 
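
Besides -configtest, the same wrapper can run Filebeat in the foreground with debug output, which is handy for checking whether events are actually being read and published (a sketch based on the 5.x package layout shown above; stop it with Ctrl+C):

root@controller:/usr/bin# ./filebeat.sh -e -d "publish"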

Restart the service

#/etc/init.d/filebeat restart
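
After the restart, Filebeat's own log can be watched to confirm that it is harvesting files and connecting to Logstash (the log path comes from the config test output above):

#tail -f /var/log/filebeat/filebeat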

Operations on the Logstash side

[root@ELK-ncnode02 logstash]# pwd
/usr/share/logstash
[root@ELK-ncnode02 logstash]# bin/logstash -f /root/logstash.conf    # logstash.conf is my custom pipeline config file

My pipeline config file here is as follows:

input {
  beats {
    port => 5044
  }
}

# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "192.168.96.208:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
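
Before starting Logstash with this pipeline, the file can be syntax-checked first (a quick sanity check, assuming the Logstash 5.x CLI):

[root@ELK-ncnode02 logstash]# bin/logstash -f /root/logstash.conf --config.test_and_exit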

The logs collected by Filebeat are parsed by Logstash and shipped to Elasticsearch, where they can be viewed in Kibana.
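
To double-check that the data really reached Elasticsearch, the Filebeat indices can be listed directly against the Elasticsearch host from the config above:

#curl 'http://192.168.96.208:9200/_cat/indices?v'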

(Kibana screenshots: logs shipped by Filebeat showing up in Elasticsearch/Kibana)

Kibana also provides filtering, as shown below:

(Kibana screenshot: filtering the log entries)

Corrections are welcome if anything here is wrong.
