Integrating filebeat + kafka + logstash + ES + Kibana

Environment
  • JDK 1.8
  • filebeat 6.4.1
  • kafka 0.10.2.0
  • logstash 6.4.1
  • elasticsearch 6.4.1
  • Kibana 6.4.1
kafka
  • Create the topic
kafka-topics --zookeeper 192.168.23.121,192.168.23.122,192.168.23.123 --create --partitions 3 --replication-factor 3 --topic nginx-data001
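To confirm the topic was created with the expected partition and replica counts, it can be described afterwards (a quick sanity check, reusing the same ZooKeeper hosts):
kafka-topics --zookeeper 192.168.23.121,192.168.23.122,192.168.23.123 --describe --topic nginx-data001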
filebeat
  • Edit the configuration file
cd /etc/filebeat/
vi filebeat.yml
  • Add the following configuration to filebeat.yml; the input is the Nginx access log path and the output is Kafka
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
   - /data01/datadir_test/nginx_logs/dataLKOne/access.log
output.kafka:
  enabled: true
  # Kafka brokers to publish to
  hosts: ["dsgcd4121:9092","dsgcd4122:9092","dsgcd4123:9092"]
  topic: 'nginx-data001'
  # protocol version the brokers are running
  version: '0.10.2.0'
  • Start filebeat
service filebeat start
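Before trusting the pipeline end to end, the Filebeat side can be checked on its own. The test subcommands below are built into Filebeat 6.x; the consumer command assumes the Kafka CLI tools are on the PATH of one of the broker hosts:
filebeat test config      # validate filebeat.yml syntax
filebeat test output      # check connectivity to the Kafka brokers
kafka-console-consumer --bootstrap-server dsgcd4121:9092 --topic nginx-data001 --from-beginning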
logstash
  • Define custom logstash patterns
mkdir -p /usr/local/logstash/patterns
vi /usr/local/logstash/patterns/nginx
  • The nginx patterns file contains:
QS1 (.*?)
NGINXACCESS %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" (%{IPORHOST:http_host}.%{WORD:http_port}) %{NUMBER:response_status} %{NUMBER:response_length} (?:%{NUMBER:bytes_read}|-) %{QS1:referrer} %{QS1:agent}  %{NUMBER:request_time:float} %{NUMBER:upstream_response_time:float}
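For reference, here is a hypothetical access-log line (not from the original post) that NGINXACCESS should parse, assuming an nginx log_format that appends the proxied host:port plus the request and upstream response times:
192.168.23.200 - - [28/Sep/2018:10:23:45 +0800] "GET /index.html HTTP/1.1" 192.168.23.121:80 200 612 - "http://example.com/" "Mozilla/5.0"  0.056 0.054
Note the two consecutive spaces before the request_time value; the pattern expects them.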
  • Create the pipeline configuration file
cd /etc/logstash/conf.d/
vi nginx_datalkone.conf
  • nginx_datalkone.conf contains:
input {
  kafka {
        enable_auto_commit => true
        auto_commit_interval_ms => "1000"
        codec => "json"
        bootstrap_servers => "192.168.23.121:9092,192.168.23.122:9092,192.168.23.123:9092"
        topics => ["nginx-data001"]
  }
}
filter {
     grok {
        patterns_dir => "/usr/local/logstash/patterns"
        match => { "message" => "%{NGINXACCESS}" }
        remove_field => ["message"]
     }
     urldecode {
          all_fields => true
     }
     geoip {
        source => "clientip"
     }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "nginx-data-%{+YYYY.MM.dd}"
    }
}
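Before starting the service, the pipeline definition can be validated offline with Logstash's --config.test_and_exit flag; the binary path below assumes a package (RPM/DEB) install, so adjust it if Logstash lives elsewhere:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx_datalkone.conf --config.test_and_exit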
  • Start logstash
initctl start logstash
Elasticsearch
  • Run curl 'localhost:9200/_cat/indices?v' to check whether data is being written
# curl 'localhost:9200/_cat/indices?v'
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana               iLXOPq3wTc2PECVeXtXzOw   1   1          2            0      7.7kb          7.7kb
yellow open   nginx-data-2018.09.28 ORbbgojRTEGJVSl_jN2MXg   5   1          0            0       401b           401b
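Once docs.count starts climbing, a single parsed document can be pulled back to confirm the grok and geoip fields are present (the index name follows the nginx-data-%{+YYYY.MM.dd} pattern from the Logstash output):
curl 'localhost:9200/nginx-data-2018.09.28/_search?size=1&pretty'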
Configure the index pattern in Kibana: under Management → Index Patterns, create a pattern such as nginx-data-* with @timestamp as the time field, after which the documents become searchable in Discover.
