Log pipeline: Filebeat --> Logstash --> Elasticsearch --> Kibana

This post walks through collecting logs with Filebeat, processing them in Logstash, shipping the results to Elasticsearch, and visualizing them in Kibana. It covers Filebeat's syslog input, Logstash filter and transform rules, and verification of the Elasticsearch and Kibana setup.

1. Ship logs to Filebeat. cat /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: syslog
  format: rfc3164
  protocol.udp:
    host: "0.0.0.0:514"


output.logstash:
  hosts: ["localhost:5044"]

Verification: tcpdump -i <interface-name> udp port 514
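Besides watching with tcpdump, you can push a hand-crafted test message at the listener. Below is a minimal Python sketch; the host, port, and message contents are assumptions for illustration, so adjust them to your environment (note that strict RFC 3164 space-pads the day-of-month, but Filebeat's rfc3164 parser accepts zero-padding too):

```python
import socket
from datetime import datetime

def rfc3164_message(hostname: str, tag: str, text: str, pri: int = 14) -> bytes:
    """Build a minimal RFC 3164 syslog line, e.g. <14>Jan 05 12:00:00 host app: text."""
    ts = datetime.now().strftime("%b %d %H:%M:%S")
    return f"<{pri}>{ts} {hostname} {tag}: {text}".encode()

msg = rfc3164_message("testhost", "demo", "hello filebeat")

# Send to the Filebeat syslog input (assumed to listen on UDP 514, per filebeat.yml above)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 514))
sock.close()
```

If the tcpdump capture shows the datagram arriving on port 514, the Filebeat input side is working.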

2. Logstash

docker run -d \
  --name=logstash_xx \
  --restart=always \
  -p 5044:5044 \
  -p 9600:9600 \
  -v /data/logstash_xx/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
  docker.elastic.co/logstash/logstash:8.10.3

docker run -d \
  --name=logstash_fw_16.5-t2 \
  --restart=always \
  -p 15045:5044/udp \
  -p 15045:5044 \
  -p 19605:9600 \
  -v /data/logstash_fw_16.5/config/logstash.yml:/usr/share/logstash/config/logstash.yml \
  -v /data/logstash_fw_16.5/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
  -v /data/logstash_fw_16.5/config/jvm.options:/usr/share/logstash/config/jvm.options \
  docker.elastic.co/logstash/logstash:8.10.3
##

 cat   /data/logstash_fw_16.5/config/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: ["esip1:9200","esip2:9200","esip3:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: 123qwe
##

# Read events pushed from Kafka
input {
  # Repeat one kafka block per topic; logs of the same type can share a topic
  kafka {
    type => "zb-zhuanxianfw-10-84-16-5"
    topics => "zb-zhuanxianfw-10-84-16-5-topic"
    bootstrap_servers => "kafka_ip1:9092,kafka_ip2:9092,kafka_ip3:9092"
    auto_offset_reset => "latest"
    codec => json
  }
}

filter {
  if [fields][device_model] == "zb-zhuanxianfw-10-84-16-5" {
    grok {
      match => { "message" => "(?<时间>\d{1,4}-\d{1,2}-\d{1,2}\s{1,2}\d{1,2}:\d{1,2}:\d{1,2}) (?<设备名称>[^ ]+) (?<日志类型>[^:]+): (?<系统>[^,]+), (?<协议>[^,]+), (?<源地址>[^,]+), (?<源端口>[^,]+), (?<目的地址>[^,]+), (?<目的端口>[^,]+), (?<时间2>[^,]+), (?<源zone>[^,]+), (?<目的zone>[^,]+), (?<应用名>[^,]+), (?<策略名称>[^.]+)."}
    }
  }

  # Shift the timestamp forward by 8 hours (UTC -> CST), if needed
  #ruby {
  #  code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)"
  #}
  #mutate {
  #  remove_field => ["@timestamp"]
  #}

  # Drop metadata fields we do not need downstream
  mutate {
    remove_field => ["agent", "ecs", "@version", "host", "path", "[log][offset]", "tags", "input", "offset", "_score"]
  }
}

# Print events to the console (debugging only)
#output {
#  stdout {}
#}

output {
  elasticsearch {
    user => "elastic"
    password => "123qwe"
    hosts => ["esip1:9200","esip2:9200","esip3:9200"]
    index => "%{[fields][device_model]}-%{+YYYY.MM.dd}"
  }
}

##
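The custom grok capture above is plain Oniguruma-style regex, so it can be sanity-checked offline before loading it into Logstash. A sketch using Python's re module (Python spells named groups `(?P<name>...)` instead of `(?<name>...)`; the sample log line below is an invented placeholder shaped like the firewall log this filter expects, not real data):

```python
import re

# Same pattern as the grok match above, rewritten with Python's (?P<name>...) groups.
PATTERN = re.compile(
    r"(?P<时间>\d{1,4}-\d{1,2}-\d{1,2}\s{1,2}\d{1,2}:\d{1,2}:\d{1,2}) "
    r"(?P<设备名称>[^ ]+) (?P<日志类型>[^:]+): (?P<系统>[^,]+), (?P<协议>[^,]+), "
    r"(?P<源地址>[^,]+), (?P<源端口>[^,]+), (?P<目的地址>[^,]+), (?P<目的端口>[^,]+), "
    r"(?P<时间2>[^,]+), (?P<源zone>[^,]+), (?P<目的zone>[^,]+), "
    r"(?P<应用名>[^,]+), (?P<策略名称>[^.]+)\."
)

# Invented sample line for testing the pattern shape.
sample = ("2024-01-01 12:00:00 fw01 traffic: sys, tcp, 10.0.0.1, 1234, "
          "10.0.0.2, 80, 2024-01-01, trust, untrust, http, permit-policy.")

m = PATTERN.match(sample)
assert m is not None
print(m.group("协议"), m.group("源地址"), m.group("策略名称"))
```

If the pattern fails against a real log line from your device, adjust the field delimiters here first; it is much faster than redeploying the Logstash container to find out.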

 cat /data/logstash_fw_16.5/config/jvm.options


## Logstash recommends setting the min and max heap to the same value
-Xms4g
-Xmx4g


11-13:-XX:+UseConcMarkSweepGC
11-13:-XX:CMSInitiatingOccupancyFraction=75
11-13:-XX:+UseCMSInitiatingOccupancyOnly


-Djava.awt.headless=true

-Dfile.encoding=UTF-8


-Djruby.compile.invokedynamic=true


-XX:+HeapDumpOnOutOfMemoryError


-Djava.security.egd=file:/dev/urandom

-Dlog4j2.isThreadContextMapInheritable=true

Check Logstash status: http://logstash_ip:9600/_node/pipelines?pretty
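The _node/pipelines endpoint returns JSON keyed by pipeline id, so a quick script can list the pipelines that actually loaded. A sketch (the live query is commented out because it needs a reachable Logstash host; the sample response is abbreviated and hypothetical):

```python
import json
from urllib.request import urlopen  # stdlib, used for the live query below

def pipeline_names(node_pipelines_json: str) -> list[str]:
    """Extract pipeline ids from a _node/pipelines response body."""
    doc = json.loads(node_pipelines_json)
    return sorted(doc.get("pipelines", {}).keys())

# Live query (uncomment and point at your Logstash host):
# with urlopen("http://logstash_ip:9600/_node/pipelines?pretty") as resp:
#     print(pipeline_names(resp.read().decode()))

# Abbreviated, hypothetical response shape for illustration:
sample = '{"host": "lg01", "pipelines": {"main": {"workers": 4}}}'
print(pipeline_names(sample))
```

An empty list (or a missing pipeline id) usually means the pipeline config failed to parse; check the container logs with `docker logs`.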

2.1 Logstash pipeline configuration: logstash.conf

input {
    beats {
        port => 5044
        host => "0.0.0.0"  # or "localhost"
    }
}
filter {
    if "SZZB-A2F_C06-30U-USG6550_FMS-02_xx" in [message] {
        mutate {
            add_field => { "logfrom" => "huawei-log" }
        }
    }
    else if "10.84.x.x" in [log][source][address] {
        mutate {
            add_field => { "logfrom" => "caiwu_huawei" }
        }
    }
    else {
        mutate {
            add_field => { "logfrom" => "other" }
        }
    }

    if [logfrom] == "huawei-log" {
        grok {
            # The GREEDYDATA capture is a placeholder; substitute your own pattern here
            match => [
                "message", "<%{BASE10NUM:syslog_pri}>%{GREEDYDATA:syslog_message}"
            ]
        }
    }
}


##

# Debug output to the console (enable only while troubleshooting)
#output {
#   stdout {}
#}

output {
    elasticsearch {
        user => "elastic"
        password => "<password>"
        hosts => ["https://xxxip:9200"]
        ssl_certificate_verification => false
        index => "%{[logfrom]}-%{+YYYY.MM.dd}"
    }
}
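The index option above produces one index per logfrom value per day; the `%{+YYYY.MM.dd}` suffix is resolved from each event's @timestamp (in UTC). For illustration only, the equivalent naming logic in Python:

```python
from datetime import datetime, timezone

def daily_index(logfrom: str, ts: datetime) -> str:
    """Mirror the "%{[logfrom]}-%{+YYYY.MM.dd}" index naming used above (sketch)."""
    return f"{logfrom}-{ts.strftime('%Y.%m.%d')}"

print(daily_index("huawei-log", datetime(2024, 1, 5, tzinfo=timezone.utc)))
# -> huawei-log-2024.01.05
```

Daily indices keep retention simple (delete whole indices by date), but watch the shard count if you have many logfrom values.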

2.2 Verification:

tcpdump -i docker0 tcp port 5044  # traffic on the docker bridge means Beats data is reaching Logstash

3. Kibana configuration: verified working.
