Filebeat + Kafka + ELK architecture: collecting multiple logs

This article walks through deploying each component of the ELK stack: downloading, configuring, and starting ZooKeeper; installing, configuring, and starting Kafka; configuring and starting Filebeat to collect Nginx log data; processing those logs with Logstash; and finally shipping the data to Elasticsearch. The whole process covers the complete log collection, transport, and processing pipeline.

Architecture Diagram

ELK Deployment

See the previous article: ELK 7.0 Deployment

ZooKeeper Deployment

1. Download

Download address: https://zookeeper.apache.org/releases.html

2. Extract the package
tar -zxf apache-zookeeper-3.7.1-bin.tar.gz -C /usr/local/
mv /usr/local/apache-zookeeper-3.7.1-bin/ /usr/local/zookeeper-3.7.1/
3. Copy the configuration file
cp /usr/local/zookeeper-3.7.1/conf/zoo_sample.cfg /usr/local/zookeeper-3.7.1/conf/zoo.cfg
4. Modify the configuration file
# Add one line to the config file so ZooKeeper listens on the local IP
clientPortAddress=10.0.5.163
ZooKeeper's AdminServer occupies port 8080 by default. If a service on this host is already using 8080, add the parameter below to zoo.cfg to set a custom port:
admin.serverPort=8001
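After the edits above, a minimal zoo.cfg might look like the following (a sketch based on the defaults in zoo_sample.cfg; dataDir and the timing parameters are assumptions to adapt to your environment):

```
# Basic time unit in milliseconds
tickTime=2000
initLimit=10
syncLimit=5
# Snapshot directory (default from zoo_sample.cfg; adjust for production)
dataDir=/tmp/zookeeper
# Client port and the IP to bind to
clientPort=2181
clientPortAddress=10.0.5.163
# Move the AdminServer off 8080 if that port is already taken
admin.serverPort=8001
```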
5. Start ZooKeeper
/usr/local/zookeeper-3.7.1/bin/zkServer.sh start
6. Check whether the port is listening
netstat -lntp |grep 2181
If the service is not listening, inspect the log to troubleshoot
more zookeeper-root-server-VM-5-163-centos.out

Kafka Deployment

1. Download

Download address: https://kafka.apache.org/downloads

2. Extract the package
tar -zxf kafka_2.12-3.4.0.tgz -C /usr/local/
3. Modify the Kafka configuration
vim /usr/local/kafka_2.12-3.4.0/config/server.properties
# Set the ZooKeeper IP
zookeeper.connect=10.0.5.163:2181

# Set the listen address
listeners=PLAINTEXT://10.0.5.163:9092
4. Start Kafka
nohup /usr/local/kafka_2.12-3.4.0/bin/kafka-server-start.sh /usr/local/kafka_2.12-3.4.0/config/server.properties >/tmp/kafka.log 2>&1 &
5. Check whether the port is listening
netstat -lntp |grep 9092

Filebeat Deployment

1. Download
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.16.1-linux-x86_64.tar.gz
2. Extract the binary package
tar zxf filebeat-7.16.1-linux-x86_64.tar.gz -C /usr/local/
mv /usr/local/filebeat-7.16.1-linux-x86_64/ /usr/local/filebeat-7.16.1
3. Create the Filebeat configuration file
# Back up the template file
mv /usr/local/filebeat-7.16.1/filebeat.yml /usr/local/filebeat-7.16.1/filebeat.yml.bak
# Create the configuration file
cat > /usr/local/filebeat-7.16.1/filebeat.yml << "EOF"
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
      - /var/log/nginx/access.json.log
  fields:
    type: access
  fields_under_root: true
- type: log
  tail_files: true
  backoff: "1s"
  paths:
      - /var/log/messages
  fields:
    type: messages
  fields_under_root: true
output:
  kafka:
    hosts: ["10.0.5.163:9092"]
    topic: hosts_10-0-5-163
EOF
4. Start Filebeat
# If a process already exists, stop it
ps -ef |grep filebeat |grep -v grep |awk '{print $2}' |xargs kill -9

# Start Filebeat
nohup /usr/local/filebeat-7.16.1/filebeat  -e -c /usr/local/filebeat-7.16.1/filebeat.yml >/tmp/filebeat.log 2>&1 &

# Check the process
ps -ef |grep filebeat

# Check whether a connection to Kafka (port 9092) has been established
netstat -ntp |egrep -w '9092|filebeat'

Logstash Configuration

1. Modify the Logstash configuration file (the output below prints logs locally, so you can check whether logs are being collected and whether the log format is correct)
cat > /usr/local/logstash-7.16.1/config/logstash.conf << "EOF"
input {
  kafka {
    bootstrap_servers => "10.0.5.163:9092"
    topics => ["hosts_10-0-5-163"]
    group_id => "test"
    codec => "json"
  }
}

filter {
  if [type] == "access" {
    json {
      source => "message"
      remove_field => ["message","@version","path","beat","input","log","offset","prospector","source","tags"]
    }
  }
}

output {
  stdout {
    codec=>rubydebug
  }
}
EOF
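The json filter above parses the JSON string carried in the message field into top-level event fields and then drops the bookkeeping fields added along the pipeline. A minimal Python sketch of that behavior (the sample event and its field values are hypothetical):

```python
import json

# Fields the Logstash filter removes after parsing
REMOVE_FIELDS = ["message", "@version", "path", "beat", "input", "log",
                 "offset", "prospector", "source", "tags"]

def apply_json_filter(event: dict) -> dict:
    """Mimic Logstash's json filter: parse `message` as JSON, merge the
    result into the event, then drop the bookkeeping fields."""
    parsed = json.loads(event["message"])
    event.update(parsed)
    for field in REMOVE_FIELDS:
        event.pop(field, None)  # absent fields are ignored
    return event

# Hypothetical event as it might arrive from the Kafka input
event = {
    "type": "access",
    "message": '{"remote_addr": "1.2.3.4", "status": "200", "request": "GET / HTTP/1.1"}',
    "@version": "1",
    "tags": ["beats_input_codec_json_applied"],
}
print(apply_json_filter(event))
# {'type': 'access', 'remote_addr': '1.2.3.4', 'status': '200', 'request': 'GET / HTTP/1.1'}
```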
2. Start Logstash in the foreground
# If a process already exists, stop it
ps -ef |grep logstash |grep -v grep |awk '{print $2}' |xargs kill -9

# Start Logstash
/usr/local/logstash-7.16.1/bin/logstash -f /usr/local/logstash-7.16.1/config/logstash.conf

My Nginx logs are in JSON format; output like this indicates everything is working.
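For reference, JSON-formatted access logs can be produced with an nginx log_format directive like the sketch below (the format name and field selection are assumptions; they must match whatever fields your downstream consumers expect):

```
# In the http {} block of nginx.conf (escape=json requires nginx >= 1.11.8)
log_format json_log escape=json
  '{"remote_addr":"$remote_addr",'
  '"time_local":"$time_local",'
  '"request":"$request",'
  '"status":"$status",'
  '"body_bytes_sent":"$body_bytes_sent",'
  '"http_user_agent":"$http_user_agent"}';

access_log /var/log/nginx/access.json.log json_log;
```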

3. Check Kafka group and queue information
# Enter the Kafka installation directory
cd /usr/local/kafka_2.12-3.4.0/bin
# List all topics
./kafka-topics.sh --bootstrap-server 10.0.5.163:9092 --list
# List consumer groups
./kafka-consumer-groups.sh --bootstrap-server 10.0.5.163:9092 --list
# Describe the queue
./kafka-consumer-groups.sh --bootstrap-server 10.0.5.163:9092 --group test --describe
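In the describe output, LAG for each partition is simply LOG-END-OFFSET minus CURRENT-OFFSET; a lag that keeps growing means Logstash is not consuming fast enough. A small Python sketch with hypothetical offset numbers:

```python
# Hypothetical per-partition offsets as reported by kafka-consumer-groups.sh
partitions = [
    # (partition, current_offset, log_end_offset)
    (0, 1500, 1520),
    (1, 980, 980),
]

for partition, current, end in partitions:
    lag = end - current  # messages produced but not yet consumed
    print(f"partition={partition} lag={lag}")

total_lag = sum(end - current for _, current, end in partitions)
print(f"total lag: {total_lag}")
# total lag: 20
```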

4. Modify the configuration file so that the output writes logs to Elasticsearch
cat > /usr/local/logstash-7.16.1/config/logstash.conf << "EOF"
input {
  kafka {
    bootstrap_servers => "10.0.5.163:9092"
    topics => ["hosts_10-0-5-163"]
    group_id => "test"
    codec => "json"
  }
}
filter {
  if [type] == "access" {
    json {
      source => "message"
      remove_field => ["message","@version","path","beat","input","log","offset","prospector","source","tags"]
    }
  }
}

output{
  if [type] == "access" {
    elasticsearch {
      hosts => ["http://127.0.0.1:9200"]
      user => "elastic"
      password => "elk@2023"
      index => "access-%{+YYYY.MM.dd}"
    }
  }
  else if [type] == "messages" {
    elasticsearch {
      hosts => ["http://127.0.0.1:9200"]
      user => "elastic"
      password => "elk@2023"
      index => "messages-%{+YYYY.MM.dd}"
    }
  }
}
EOF
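The %{+YYYY.MM.dd} in the index option makes Logstash write to one index per day, named from the event's @timestamp. The equivalent naming logic in Python (a sketch; the timestamp is hypothetical):

```python
from datetime import datetime, timezone

def daily_index(prefix: str, ts: datetime) -> str:
    """Build an index name the way Logstash's %{+YYYY.MM.dd} pattern does."""
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

# Hypothetical event timestamp
ts = datetime(2023, 5, 1, 12, 0, tzinfo=timezone.utc)
print(daily_index("access", ts))    # access-2023.05.01
print(daily_index("messages", ts))  # messages-2023.05.01
```

Daily indices keep each index small and make retention easy: old days can be dropped by deleting whole indices.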

5. Start Logstash in the background
# If a process already exists, stop it
ps -ef |grep logstash |grep -v grep |awk '{print $2}' |xargs kill -9

# Start Logstash
nohup /usr/local/logstash-7.16.1/bin/logstash -f /usr/local/logstash-7.16.1/config/logstash.conf >/tmp/logstash.log 2>&1 &

Check whether the service logs are normal

Check whether ERROR lines keep appearing in the log
tail -f /tmp/logstash.log

# Check whether the Logstash port is listening
netstat -lntp |grep 9600
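Instead of nohup, the long-running processes can be managed by systemd so they restart on failure and survive reboots. A sketch of a unit file for Filebeat (paths match the install locations above; save it as /etc/systemd/system/filebeat.service, then run systemctl daemon-reload and systemctl enable --now filebeat):

```
[Unit]
Description=Filebeat log shipper
After=network.target

[Service]
ExecStart=/usr/local/filebeat-7.16.1/filebeat -c /usr/local/filebeat-7.16.1/filebeat.yml
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

An analogous unit can be written for Logstash.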