ELK / EFK Log Search Platform: Filebeat, Kafka, Logstash, Elasticsearch (ES), Kibana

Introduction

ELK is currently one of the mainstream distributed log collection and processing stacks.
A common collection pipeline: Filebeat → Kafka cluster → Logstash → ES → Kibana
Grafana (a visualization and monitoring tool) can be configured with ES as a data source for real-time monitoring.

Deployment approach:
Filebeat runs on the application servers (it only reads logs and forwards them to Logstash, keeping CPU overhead low so it does not compete with the application for resources). Logstash, ES, and Kibana run on a separate server (Logstash handles log filtering, which consumes some CPU; consider optimizing the filter logic to reduce that load).

Architecture diagram: (image not included)

Mainstream Architectures

1. Filebeat + Elasticsearch + Kibana

Filebeat outputs directly to ES; Kibana provides search and display.

2. Filebeat + Kafka + Logstash + Elasticsearch + Kibana

N Filebeat instances output to a Kafka cluster.
1 to 3 Logstash instances consume the logs from Kafka and output them to the ES cluster; if one Logstash instance goes down, Kafka keeps receiving logs.
Kibana provides search and display.

Adding a message queue in the middle avoids data loss: if Logstash fails, the logs stay in the queue, and when Logstash starts again it reads the backlog from the queue.
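In architecture 2, Filebeat ships to Kafka instead of ES (Filebeat supports only one output at a time). A minimal sketch of the Kafka output in filebeat.yml, reusing the broker address and topic from the Logstash example later in this post:

output.kafka:
  hosts: ["192.168.1.100:9092"]   # Kafka broker address (illustrative)
  topic: "kafkaTopic"             # must match the topic Logstash consumes
  required_acks: 1                # wait for the partition leader's ack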

Filebeat

Filebeat, a lightweight log collection component: explanation and hands-on practice

Deploying Filebeat with Docker, shipping directly to ES

Note: the Filebeat version must match the ES version.
The log format configured in logback.xml is JSON.
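A minimal logback.xml appender sketch that writes each event as a single JSON line for Filebeat to pick up, assuming the logstash-logback-encoder dependency is on the classpath (appender name and file path are illustrative; the path should sit under a directory Filebeat watches, e.g. /data0, which is mounted into the Filebeat container as /home/logs below):

<appender name="JSON_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>/data0/bop-fms-api/logs/app.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>/data0/bop-fms-api/logs/app.%d{yyyy-MM-dd}.log</fileNamePattern>
  </rollingPolicy>
  <!-- one JSON object per line; Filebeat's json/decode_json_fields settings parse it -->
  <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>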

docker run --privileged --name filebeat --net=host -d -m 1000M \
      --log-driver json-file --log-opt max-size=1024m \
      -v /data0/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
      -v /data0/filebeat/logs:/root \
      -v /data0/filebeat/data:/data \
      -v /data0:/home/logs \
      registry.api.ww.com/bop_ci/filebeat:6.6.0

vi filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/logs/bop-fms-account-info/logs/*.log
    - /home/logs/bop-fms-advertiser-info/logs/*.log
    - /home/logs/bop-fms-agent-web/logs/*.log
    - /home/logs/bop-fms-api/logs/*.log
    - /home/logs/bop-fms-config/logs/*.log
    - /home/logs/bop-fms-vip-api/logs/*.log
  ignore_older: 12h
  clean_inactive: 14h
  tags: ["fms-log"]

- type: log
  enabled: true
  paths:
    - /home/logs/bop-cmc-strategy/logs/*.log
    - /home/logs/qualification/logs/*.log
    - /home/logs/bop-cmc-customer/logs/*.log
    - /home/logs/bop-mdm-cmc-diplomat/logs/*.log
    - /home/logs/bop-asm-api/logs/*.log
    - /home/logs/bop-asm-message/logs/*.log
    - /home/logs/bop-asm-notice/logs/*.log
  ignore_older: 12h
  clean_inactive: 14h
  tags: ["others-log"]

json.keys_under_root: true
json.overwrite_keys: true


setup.ilm.enabled: false
setup.template.name: "bop-log"
setup.template.pattern: "bop-log-*"
setup.template.enabled: false
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 0
  index.codec: best_compression


output.elasticsearch:
  hosts: ["10.13.177.206:9201"]
  #index: "bop-log-%{+yyyy.MM.dd}"
  pipeline: "test-news-server-online"
  indices:
    - index: "bop-log-fms-%{+yyyy.MM.dd}"
      when.contains:
        tags: "fms-log"
    - index: "bop-log-others-%{+yyyy.MM.dd}"
      when.contains:
        tags: "others-log"

processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
  - rename:
      fields:
        - from: "error"
          to: "run_error"
  - drop_fields:
      fields: ["input_type", "log.offset","log.file.path","beat.version","prospector.type","beat.name", "host.name", "input.type", "agent.hostname"]
      #ignore_missing: false

A second example of output to ES:

# output to ES
output.elasticsearch:
  #username: "elastic"
  #password: "xxxxxxxxxxx"
  #worker: 1
  #bulk_max_size: 1500
  #pipeline: "timestamp-pipeline-id" # @timestamp handling
  hosts: ["elasticsearch1:9200"]
  index: "pb-%{[fields.index_name]}-*"
  indices:
    - index: "pb-nginx-%{+yyyy.MM.dd}"
      when.equals:
        fields.index_name: "nginx_log"
    - index: "pb-log4j-%{+yyyy.MM.dd}"
      when.equals:
        fields.index_name: "log4j_log"
    - index: "pb-biz-%{+yyyy.MM.dd}"
      when.equals:
        fields.index_name: "biz_log"
  

Multiline merging

Merging multi-line exception stack traces
Add the following properties under the - type: log input:
multiline:
  # continuation lines of a stack trace: indented "at ..." / "..." frames, or "Caused by:" lines
  pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
  negate:  false
  match:   after
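With negate: false and match: after, every line matching the pattern (indented "at ..." / "..." frames and "Caused by:" lines) is appended to the preceding non-matching line, so the frames of a plain-text stack trace such as the following are folded into the event that begins with the exception line:

java.lang.IllegalStateException: boom
    at com.example.Demo.handle(Demo.java:42)
    at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.NullPointerException
    at com.example.Demo.load(Demo.java:17)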

Supporting custom field mappings for the ES index

setup.template.json.enabled: true
setup.template.json.path: "/usr/share/filebeat/logs_template.json"
setup.template.json.name: "logs_template"

Add this volume to the docker run command:
-v /data0/filebeat/logs_template.json:/usr/share/filebeat/logs_template.json
Create logs_template.json:

{
    "index_patterns": [
        "bop-log-*"
    ],
    "mappings": {
        "doc": {
            "dynamic_templates": [
                {
                    "strings_as_keyword": {
                        "mapping": {
                            "type": "text",
                            "analyzer": "standard",
                            "fields": {
                                "keyword": {
                                    "type": "keyword"
                                }
                            }
                        },
                        "match_mapping_type": "string",
                        "match": "*"
                    }
                }
            ],
            "properties": {
                "httpmethod": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "ignore_above": 256,
                            "type": "keyword"
                        }
                    }
                },
                "responseheader": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "ignore_above": 256,
                            "type": "keyword"
                        }
                    }
                },
                "function": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "ignore_above": 256,
                            "type": "keyword"
                        }
                    }
                },
                
                "servicename": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "ignore_above": 256,
                            "type": "keyword"
                        }
                    }
                },
                "serviceuri": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "ignore_above": 256,
                            "type": "keyword"
                        }
                    }
                },
                "serviceurl": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "ignore_above": 256,
                            "type": "keyword"
                        }
                    }
                },
                "responsebody": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "ignore_above": 256,
                            "type": "keyword"
                        }
                    }
                },
                "args": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "ignore_above": 256,
                            "type": "keyword"
                        }
                    }
                },
                "requestheader": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "ignore_above": 256,
                            "type": "keyword"
                        }
                    }
                }
            }
        }
    }
}

Handling the Filebeat @timestamp date

In Kibana Management → Advanced Settings, search for "Date format" and set it to: yyyy-MM-dd HH:mm:ss.SSS
or
Write the following pipeline in the Kibana Dev Tools console and execute it.
To inspect it afterwards: GET _ingest/pipeline/timestamp-pipeline-id

PUT _ingest/pipeline/timestamp-pipeline-id
{
  "description": "timestamp-pipeline-id",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{TIMESTAMP_ISO8601:timestamp}"
        ],
        "ignore_failure": true
      },
      "date": {
        "field": "timestamp",
        "timezone": "Asia/Shanghai",
        "formats": [
          "yyyy-MM-dd HH:mm:ss.SSS"
        ],
        "ignore_failure": true
      }
    }
  ]
}
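Before wiring the pipeline into the Filebeat output (pipeline: "timestamp-pipeline-id"), it can be checked against a sample document with the simulate API; the message value below is illustrative:

POST _ingest/pipeline/timestamp-pipeline-id/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "2024-01-01 12:00:00.123 INFO com.example.Demo - started"
      }
    }
  ]
}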

Collecting files into ES with Filebeat

Deploying Filebeat to ES with Docker

Shipping logs to ES with Filebeat

Filebeat 7.7.0 detailed configuration overview

Filebeat 7.7.0 detailed configuration overview - processors

Collecting Docker logs with Filebeat

Deploying Filebeat to Kafka with Docker

Collecting multiple logs with Filebeat (pushing to ES or Logstash)

Collecting clean business logs with Filebeat + Elasticsearch

Logstash

Deploying Logstash with Docker

mkdir /data0/logstash/log -p
cd /data0/logstash

vi logstash.conf

input {
  kafka {
    topics => "kafkaTopic"                        # Kafka topic
    bootstrap_servers => ["192.168.1.100:9092"]   # Kafka broker address
    codec => "json"                               # consume the data as JSON
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.110:9009"]   # ES address
    index => "errorlog"               # ES index name, must be lowercase
    user => "elastic"                 # using the built-in elastic user is recommended
    password => "**********"
  }
}
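The pipeline above only forwards events. If filtering or parsing is needed on the Logstash side (as mentioned in the deployment notes), a filter block goes between input and output; a minimal sketch, where the "level" field name is an assumption about the log format:

filter {
  # parse the JSON payload carried in the message field (if the codec has not already decoded it)
  json {
    source => "message"
  }
  # drop DEBUG events to reduce the write load on ES
  if [level] == "DEBUG" {
    drop {}
  }
}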

vi logstash.yml

http.host: "0.0.0.0"
# ES address
xpack.monitoring.elasticsearch.hosts: ["http://192.168.1.110:9009"]
xpack.monitoring.enabled: true
# built-in ES account and password, configured on the ES side
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: *****************

docker pull logstash:6.7.0

docker run --name logstash --privileged=true -p 9007:9600 -d \
  -v /data0/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
  -v /data0/logstash/log/:/home/public/ \
  -v /data0/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml \
  logstash:6.7.0

Kibana

Deploying Kibana with Docker

Note: the Kibana version must match the ES version, otherwise they are incompatible.

cd /data0
mkdir kibana
cd kibana

docker run --name kibana  -p 5601:5601 -d kibana:6.6.0
docker cp kibana:/usr/share/kibana/config/kibana.yml .

Write the following into kibana.yml, then save and exit (:wq):

server.name: kibana
server.host: "0"
#elasticsearch.hosts: [ "http://elasticsearch:9200" ]
elasticsearch.hosts: [ "http://<your-elasticsearch-ip>:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
# display the Kibana UI in Chinese
i18n.locale: zh-CN

Restart:

docker rm -f kibana
docker run --name kibana -p 5601:5601 -d -v /data0/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:6.6.0
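After the restart, Kibana should answer on port 5601; a quick check from the host:

curl -s http://localhost:5601/api/status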

Automatically associating ES indexes in Kibana

vi auto_add_index.sh

#!/bin/bash
today=`date +%Y.%m.%d`
yesterday=`date -d "1 days ago" +%Y.%m.%d`
pattern='bop-log-'${today}
old_pattern='bop-log-'${yesterday}
index='bop-log-'${today}
echo ${pattern} ${old_pattern}

# create today's index pattern
curl -f -XPOST -H 'Content-Type: application/json' -H 'kbn-xsrf: anything' \
     "http://localhost:5601/api/saved_objects/index-pattern/${pattern}" -d"{\"attributes\":{\"title\":\"${index}\",\"timeFieldName\":\"@timestamp\"}}"

# set it as the default index pattern
curl -f -XPOST -H 'Content-Type: application/json' -H 'kbn-xsrf: anything' http://localhost:5601/api/kibana/settings/defaultIndex -d "{\"value\":\"${pattern}\"}"

# delete yesterday's index pattern
curl -XDELETE "http://localhost:5601/api/saved_objects/index-pattern/${old_pattern}" -H 'kbn-xsrf: true'
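The script is meant to run once a day, e.g. from cron shortly after midnight (script and log paths are illustrative):

# crontab entry: create the new day's index pattern at 00:05
5 0 * * * /bin/bash /data0/kibana/auto_add_index.sh >> /data0/kibana/auto_add_index.log 2>&1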

Scheduled index deletion

Run in the Kibana Dev Tools console.
Delete indexes after 30 days:

PUT _ilm/policy/logs_policy
{
  "policy": {
    "phases": {
      
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
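Creating the policy by itself deletes nothing: it only applies to indexes that reference it (and with setup.ilm.enabled: false in filebeat.yml it is not attached automatically). One way to attach it to the existing daily indexes is:

PUT bop-log-*/_settings
{
  "index.lifecycle.name": "logs_policy"
}

New daily indexes would need the same setting, e.g. through an index template.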

Automatically creating index patterns in Kibana

Kibana automatically associating ES indexes

ELK series (2) - How to change the Date format in Kibana

iLogtail

iLogtail User Manual

ilogtail -> kafka -> logstash -> elasticsearch

References

Building an ELK logging system with Docker

Visual log analysis with Kibana

Building a log collection system with Docker

An example of integrating Spring Boot + logback with ELK for log handling

Kibana + Elasticsearch + Logstash + Filebeat

Three approaches to distributed real-time log collection and analysis with Spring Cloud

Deploying an ELK log collection system in 10 minutes

How to quickly collect and analyze platform logs, and display and monitor them?

A step-by-step guide to setting up ELK

Troubleshooting abnormal Filebeat data reporting

Filebeat holding file handles until the disk fills up

Common Filebeat issues

Centrally managing Spring Boot application logs with ELK
