Collecting Logs with Filebeat in ELK


1. Deploying Filebeat

Install Filebeat on 10.0.0.37:

apt install -y openjdk-8-jdk

# Copy the filebeat-7.12.1-amd64.deb package to /usr/local/src, then install it
dpkg -i /usr/local/src/filebeat-7.12.1-amd64.deb

2. Filebeat -> Redis -> Logstash -> ES

2.1 Filebeat configuration: ship Nginx logs to Redis

root@web1:/usr/local/src# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    type: syslog
- type: log
  enabled: true
  paths:
    - /apps/nginx/logs/error.log
  fields:
    service: nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
#output.elasticsearch:
#  hosts: ["10.0.0.31:9200"]

output.redis:
  # note: the option names are hosts/password (not host/passwd); the host
  # and db here must match the Logstash redis input in section 2.2
  hosts: ["10.0.0.35"]
  password: "123456"
  key: "lck-nginx"
  db: 0
  timeout: 5
  
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# Restart the service
systemctl restart filebeat
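The Redis output behaves as a simple queue: Filebeat RPUSHes each event, serialized as a JSON document, onto the list stored at key `lck-nginx`, and Logstash later pops events off the same list. A minimal Python sketch of these queue semantics, using an in-memory deque as a stand-in for a real Redis server (the event envelope is simplified for illustration):

```python
import json
from collections import deque

# Stand-in for the Redis list at key "lck-nginx" (a real deployment would
# use a Redis client's RPUSH/LPOP against the server configured above).
redis_list = deque()

def ship_event(message, custom_fields):
    """Filebeat side: wrap a log line in a simplified event envelope
    and push it onto the tail of the list (RPUSH)."""
    event = {"message": message, "fields": custom_fields}
    redis_list.append(json.dumps(event))

def consume_event():
    """Logstash side: pop the oldest event from the head of the list (LPOP)."""
    return json.loads(redis_list.popleft())

ship_event("2021/09/01 10:00:00 [error] open() failed",
           {"service": "nginx-errorlog"})
event = consume_event()
print(event["fields"]["service"])  # -> nginx-errorlog
```

Because Redis only buffers the events, Filebeat and Logstash are decoupled: a Logstash restart does not lose logs as long as Redis has memory for the backlog.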

2.2 Logstash configuration: pull from Redis into Elasticsearch

root@ubuntu1804:~# vim /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "lck-nginx"
    host => "10.0.0.34"
    port => "6379"
    db => "1"
    password => "123456"
    threads => "4"
  }
}

output {
  if [fields][service] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["10.0.0.31:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
    }
  }
  if [fields][type] == "syslog" {
    elasticsearch {
      hosts => ["10.0.0.31:9200"]
      index => "filebeat-nginx-syslog-%{+YYYY.MM.dd}"
    }
  }
}

# Restart the service
systemctl restart logstash
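The output block routes each event by the custom fields attached in Filebeat, and `%{+YYYY.MM.dd}` expands to the event's date, producing one index per day. A rough Python equivalent of that routing logic (index names copied from the config above; date handling is simplified to a plain `date` argument):

```python
from datetime import date

def target_index(event, day):
    """Mimic the Logstash conditionals: pick an index name from the
    event's custom fields, suffixed with the %{+YYYY.MM.dd} date."""
    suffix = day.strftime("%Y.%m.%d")
    fields = event.get("fields", {})
    if fields.get("service") == "nginx-errorlog":
        return f"filebeat-nginx-errorlog-{suffix}"
    if fields.get("type") == "syslog":
        return f"filebeat-nginx-syslog-{suffix}"
    return None  # no conditional matched: the event is not indexed

print(target_index({"fields": {"type": "syslog"}}, date(2021, 9, 1)))
# -> filebeat-nginx-syslog-2021.09.01
```

Note that an event matching neither conditional is silently dropped, so every `fields` value set in filebeat.yml needs a corresponding branch here.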

3. Filebeat -> Kafka -> Logstash -> ES

3.1 Filebeat configuration: ship Nginx logs to Kafka

vim /etc/filebeat/filebeat.yml
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    type: syslog

- type: log
  enabled: true
  paths:
    - /apps/nginx/logs/error.log
  fields:
    service: nginx-errorlog

- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    service: nginx-accesslog

output.kafka:
  hosts: ["10.0.0.40:9092","10.0.0.41:9092","10.0.0.42:9092"]
  # %{[fields.log_topic]} expands to each event's fields.log_topic value,
  # so every input above must also set log_topic under fields
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

# Restart the service
systemctl restart filebeat
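`topic: '%{[fields.log_topic]}'` is a format string: at publish time Filebeat substitutes the event's `fields.log_topic` value, so each input can steer its events to a different Kafka topic. A toy Python sketch of that substitution (the resolver below is illustrative, not Filebeat's internal implementation):

```python
def resolve_topic(event, template="%{[fields.log_topic]}"):
    """Toy resolver for a Filebeat-style %{[a.b]} reference: walk the
    dotted path into the event dict and return the value found there."""
    path = template[len("%{["):-len("]}")].split(".")  # ["fields", "log_topic"]
    node = event
    for key in path:
        if not isinstance(node, dict) or key not in node:
            raise KeyError(f"unresolved reference in topic template: {key}")
        node = node[key]
    return node

event = {"message": "GET / 200",
         "fields": {"log_topic": "lck-nginx-accesslog"}}
print(resolve_topic(event))  # -> lck-nginx-accesslog
```

If an event lacks `fields.log_topic`, the reference cannot resolve and the event fails to publish, which is why the field must be set on every input that uses this output.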

3.2 Logstash configuration: pull from Kafka into Elasticsearch

root@ubuntu1804:~# vim /etc/logstash/conf.d/kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "10.0.0.40:9092,10.0.0.41:9092,10.0.0.42:9092"
    topics => ["lck-nginx-accesslog","lck-nginx-errorlog"]
    codec => "json"
  }
}

output {
  if [fields][service] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["10.0.0.31:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
    }
  }
  if [fields][type] == "syslog" {
    elasticsearch {
      hosts => ["10.0.0.31:9200"]
      index => "filebeat-nginx-syslog-%{+YYYY.MM.dd}"
    }
  }
}

4. Filebeat -> Logstash -> Redis -> Logstash -> ES

Because Filebeat cannot process JSON-formatted data itself, we add a Logstash layer in front of Redis to parse the JSON.
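What that extra Logstash layer does with its `json` codec is parse each forwarded line into structured event fields. A minimal sketch, assuming a JSON-formatted Nginx access-log line (the exact keys depend on your nginx `log_format` definition and are illustrative here):

```python
import json

# One JSON-formatted Nginx access-log line, as Filebeat would forward it.
raw = '{"clientip": "10.0.0.1", "status": 200, "uri": "/index.html"}'

# The json codec turns the raw string into event fields that later stages
# (conditionals, index routing, geoip lookups) can address directly.
event = json.loads(raw)
print(event["clientip"], event["status"])  # -> 10.0.0.1 200
```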

4.1 Filebeat configuration: ship Nginx logs to Logstash

vim /etc/filebeat/filebeat.yml
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    type: syslog

- type: log
  enabled: true
  paths:
    - /apps/nginx/logs/error.log
  fields:
    service: nginx-errorlog

- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    service: nginx-accesslog

output.logstash:
  hosts: ["10.0.0.36:5044"]

# Restart the service
systemctl restart filebeat

4.2 Logstash configuration: forward the data received from Filebeat to Redis

vim /etc/logstash/conf.d/filebeat-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
}

output {
  if [fields][type] == "syslog" {
    redis {
      data_type => "list"
      key => "lck-syslog"
      host => "10.0.0.34"
      port => "6379"
      db => "1"
      password => "123456"
    }
  }
  if [fields][service] == "nginx-errorlog" {
    redis {
      data_type => "list"
      key => "lck-nginx-errorlog"
      host => "10.0.0.34"
      port => "6379"
      db => "1"
      password => "123456"
    }
  }
  if [fields][service] == "nginx-accesslog" {
    redis {
      data_type => "list"
      key => "lck-nginx-accesslog"
      host => "10.0.0.34"
      port => "6379"
      db => "1"
      password => "123456"
    }
  }
}

# Check that the configuration file syntax is valid
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat-to-redis.conf -t

# Restart the service
systemctl restart logstash

# Inspect the data in Redis
redis-cli
select 1

4.3 Logstash configuration: forward the Redis data to Elasticsearch

root@ubuntu1804:~# vim /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "lck-syslog"
    host => "10.0.0.34"
    port => "6379"
    db => "1"
    password => "123456"
    threads => "4"
  }
  redis {
    data_type => "list"
    key => "lck-nginx-accesslog"
    host => "10.0.0.34"
    port => "6379"
    db => "1"
    password => "123456"
    threads => "4"
  }
  redis {
    data_type => "list"
    key => "lck-nginx-errorlog"
    host => "10.0.0.34"
    port => "6379"
    db => "1"
    password => "123456"
    threads => "4"
  }
}

output {
  if [fields][type] == "syslog" {
    elasticsearch {
      hosts => ["10.0.0.31:9200"]
      index => "logstash-nginx-syslog-%{+YYYY.MM.dd}"
    }
  }
  if [fields][service] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["10.0.0.31:9200"]
      index => "logstash-nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }
  if [fields][service] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["10.0.0.31:9200"]
      index => "logstash-nginx-errorlog-%{+YYYY.MM.dd}"
    }
  }
}

# Note: index names starting with "logstash-" match the default Logstash index
# template, which maps geoip fields so client IP locations can be shown on a map in ES/Kibana

# Check that the configuration file syntax is valid
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-to-es.conf -t

# Restart the service
systemctl restart logstash

5. Filebeat -> Logstash -> Kafka -> Logstash -> ES


5.1 Filebeat configuration: ship Nginx logs to Logstash

vim /etc/filebeat/filebeat.yml
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    type: syslog

- type: log
  enabled: true
  paths:
    - /apps/nginx/logs/error.log
  fields:
    service: nginx-errorlog

- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    service: nginx-accesslog

output.logstash:
  hosts: ["10.0.0.36:5044","10.0.0.36:5045"]
  enabled: true
  worker: 1
  compression_level: 3
  loadbalance: true

# Restart the service
systemctl restart filebeat

# Verify that the service is running
systemctl status filebeat.service
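With `loadbalance: true`, Filebeat spreads event batches across all listed Logstash endpoints instead of pinning to a single one. A simplified Python sketch of round-robin balancing over the two endpoints from the config above (real Filebeat also tracks endpoint health and retries, which is omitted here):

```python
from itertools import cycle

# The two Logstash endpoints from output.logstash.hosts above.
endpoints = cycle(["10.0.0.36:5044", "10.0.0.36:5045"])

def send_batch(batch):
    """Pick the next endpoint in rotation for each batch of events."""
    target = next(endpoints)
    # a real client would open a connection to `target` and send the batch here
    return target

targets = [send_batch([f"event-{i}"]) for i in range(4)]
print(targets)
# -> ['10.0.0.36:5044', '10.0.0.36:5045', '10.0.0.36:5044', '10.0.0.36:5045']
```

Both ports live on the same host (10.0.0.36) in this lab, so this balances across two Logstash processes rather than two machines.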

5.2 Logstash configuration: forward the data received from Filebeat to Kafka

vim /etc/logstash/conf.d/filebeat-to-kafka.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
}

output {
  if [fields][type] == "syslog" {
    kafka {
      bootstrap_servers => "10.0.0.40:9092,10.0.0.41:9092,10.0.0.42:9092"
      topic_id => "lck-50-syslog"
      codec => "json"
    }
  }
  if [fields][service] == "nginx-errorlog" {
    kafka {
      bootstrap_servers => "10.0.0.40:9092,10.0.0.41:9092,10.0.0.42:9092"
      topic_id => "lck-50-nginx-errorlog"
      codec => "json"
    }
  }
  if [fields][service] == "nginx-accesslog" {
    kafka {
      bootstrap_servers => "10.0.0.40:9092,10.0.0.41:9092,10.0.0.42:9092"
      topic_id => "lck-50-nginx-accesslog"
      codec => "json"
    }
  }
}

# Check that the configuration file syntax is valid
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat-to-kafka.conf -t

# Restart the service
systemctl restart logstash

Check whether the data has arrived in Kafka.

5.3 Logstash configuration: forward the Kafka data to Elasticsearch

root@ubuntu1804:~# vim /etc/logstash/conf.d/kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "10.0.0.40:9092,10.0.0.41:9092,10.0.0.42:9092"
    topics => ["lck-50-syslog","lck-50-nginx-errorlog","lck-50-nginx-accesslog"]
    codec => "json"
  }
}

output {
  if [fields][type] == "syslog" {
    elasticsearch {
      hosts => ["10.0.0.31:9200"]
      index => "kafka-0901-syslog-%{+YYYY.MM.dd}"
    }
  }
  if [fields][service] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["10.0.0.31:9200"]
      index => "kafka-0901-nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }
  if [fields][service] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["10.0.0.31:9200"]
      index => "kafka-0901-nginx-errorlog-%{+YYYY.MM.dd}"
    }
  }
}

# Note: only index names starting with "logstash-" match the default Logstash
# template that maps geoip fields for map views; the kafka-* indices above
# would need their own template for that

# Check that the configuration file syntax is valid
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka-to-es.conf -t

# Restart the service
systemctl restart logstash

Verify that the data has been written to Elasticsearch.
