4 - Logging Platform - Log Parsing

Log collection:

Process

What needs to be done:

Log standards: fixed field definitions, log format
Log collection: disk-flush rules, file rolling policy, collection method
Log transport: message queue, consumption model, topic naming conventions, retention period
Log parsing: sampling, filtering, custom formats
Log search: index splitting, shard settings, search optimization, access control, retention period
Log pipeline monitoring: collection anomalies, transport anomalies, search anomalies, non-conforming logs, alerting

Different log persistence rules:

Parsing side:

1. Randomly sample logs below INFO level; sampling runs in the log agent on the collection side, with the sampling rate pushed down from the config center.
2. Distributed-processing issues.
3. Comparison of Logstash, Filebeat, Fluentd, and Logagent.
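The sampling rule in point 1 can be sketched as follows. This is a minimal illustration: the config-center integration is replaced by a plain dict, and the rate values are made up.

```python
import random

# Sampling rates for levels below INFO; in the real agent these values
# would be pushed down from the config center (this dict is a stand-in).
SAMPLE_RATES = {"TRACE": 0.01, "DEBUG": 0.1}

def should_ship(level: str) -> bool:
    """Decide whether the agent forwards this log line to the transport layer."""
    rate = SAMPLE_RATES.get(level.upper())
    if rate is None:       # INFO and above are never sampled away
        return True
    return random.random() < rate
```

Making the decision in the agent means dropped lines never consume Kafka or Elasticsearch capacity downstream.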

Logstash parameter configuration

What each option does, as used in the logging platform:
input: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html#plugins-inputs-kafka-consumer_threads

filter:
grok: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-tag_on_failure
json: https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html

output: https://www.elastic.co/guide/en/logstash/5.6/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-template_name
manage_template => true    Enables Logstash's automatic template management; the default is already true. If set to false, Logstash will not call the Elasticsearch API to create the template.
template => "/xxx/logstash/templates/XXX.json"    Path to the template file.
template_name => "template_name"    Name of the mapping template; if not specified, it defaults to "logstash".
template_overwrite => true    Whether to overwrite an existing template; when true, a template with a higher order that matches the same conditions overwrites the one with a lower order.

Previously gohangout was used for this.

Logging platform configuration

# Logstash configuration for the
# Kafka -> Logstash -> Elasticsearch pipeline.

input {
  kafka {
    bootstrap_servers => ["10.10.106.3:9092,10.10.106.3:9092"]
    group_id => "logging_logstash_ES"
    consumer_threads => 4
    enable_auto_commit => true
    topics_pattern => "logging_.*"
    auto_offset_reset => "earliest"
    decorate_events => true
    codec => json
    # type => "logging_filebeat"
  }
}

filter{
  mutate{
    # gsub => ["message", "\\x", "\\\x"]
    remove_field => ["@timestamp"]
    remove_field => ["@metadata"]
    remove_field => ["@version"]
    remove_field => ["host"]
    remove_field => ["agent"]
    remove_field => ["ecs"]
    remove_field => ["container"]
    remove_field => ["tags"]
    remove_field => ["input"]
    add_field => { "@log" => "%{log}" }
    add_field => { "@fields" => "%{fields}" }
  }
  json {
    source => "@log"
    remove_field => ["flags","offset","log","@log"]
    add_field => { "@file" => "%{file}" }
  }
  json {
    source => "@file"
    remove_field => ["file","@file"]
  }
  json {
    source => "@fields"
    remove_field => ["fields","@fields"]
  }
  grok {
    # patterns_dir => ["./patterns"]
    match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:timestamp}\s*\|%{DATA:threadName}\s*\|%{LOGLEVEL:logLevel}\s*\|%{DATA:loggerName}\s*\|%{NOTSPACE:classMethod}\s*\|%{NOTSPACE:traceId}\s*\|%{GREEDYDATA:message}" }
    overwrite => ["message"]
  }
}

output {
  if "_grokparsefailure" in [tags] {
    # drop {}
    elasticsearch {
      hosts => ["your_ES_address"]
      action => "index"
      index => "irregular_logging_%{project}_%{environment}_%{+YYYY.MM.dd}"
      manage_template => true
      template => "/data/tools/logstash-7.12.1/templates/irregular_logging_template.json"
      template_name => "irregular_logging_template"
      template_overwrite => true
      # user => "elastic"
      # password => "changeme"
    }
  } else {
    elasticsearch {
      hosts => ["your_ES_address"]
      action => "index"
      index => "logging_%{project}_%{environment}_%{+YYYY.MM.dd}"
      manage_template => true
      template => "/data/tools/logstash-7.12.1/templates/logging_template.json"
      template_name => "logging_template"
      template_overwrite => true
      # user => "elastic"
      # password => "changeme"
    }
  }
}
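To make explicit what line layout the grok pattern in the filter expects, here is a rough Python equivalent applied to a sample line. The regex pieces approximate `TIMESTAMP_ISO8601`, `DATA`, `LOGLEVEL`, `NOTSPACE` and `GREEDYDATA`; they are not grok's exact definitions, and the sample line is made up.

```python
import re

# Rough Python equivalent of the grok pattern used in the filter above:
# timestamp |thread |level |logger |class.method |traceId |message
LOG_PATTERN = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?)\s*\|"
    r"(?P<threadName>.*?)\s*\|"
    r"(?P<logLevel>[A-Za-z]+)\s*\|"
    r"(?P<loggerName>.*?)\s*\|"
    r"(?P<classMethod>\S+)\s*\|"
    r"(?P<traceId>\S+)\s*\|"
    r"(?P<message>.*)$",
    re.DOTALL,  # grok's (?m) lets the trailing message span multiple lines
)

# A made-up line in the format the pipeline expects.
line = ("2023-04-01 12:30:45.123 |main |INFO |com.example.OrderService "
        "|OrderService.create |a1b2c3d4 |order created successfully")
m = LOG_PATTERN.match(line)
print(m.group("logLevel"), m.group("traceId"))  # → INFO a1b2c3d4
```

Lines that do not match this shape get tagged `_grokparsefailure` and are routed to the `irregular_logging_*` indices by the output block.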

logging_template.json

{
  "index_patterns": ["logging_*"],
  "order":1,
  "settings": {
      "index.number_of_replicas": "1",
      "index.number_of_shards": "5",
      "index.refresh_interval" : "10s"
  },
  "mappings": {
    "dynamic": "strict",
    "properties": {
        "message": {
          "type": "text"
        },
        "@timestamp": {
          "type": "date"
        },
        "log_timestamp": {
          "type": "date",
          "format": "yyyy-MM-dd HH:mm:ss:SSS||yyyy-MM-dd||epoch_millis"
        },
        "project": {
          "type": "keyword"
        },
        "path": {
          "type": "keyword"
        },
        "topic": {
          "type": "keyword"
        },
        "logLevel": {
          "type": "keyword"
        },
        "host": {
          "type": "keyword"
        },
        "loggerName": {
          "type": "keyword"
        },
        "threadName": {
          "type": "keyword"
        },
        "classMethod": {
          "type": "keyword"
        },
        "environment": {
          "type": "keyword"
        },
        "traceId": {
          "type": "keyword"
        }
      }
  }
}
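Because the mapping declares `"dynamic": "strict"`, Elasticsearch rejects any document carrying a field not listed under `properties`. A useful pre-deployment sanity check is to compare the fields the pipeline emits against the template. Note that the grok filter above extracts `timestamp` while the template only maps `log_timestamp`, so that field would need to be renamed in the filter (or added to the template) before indexing. A sketch, with both field sets copied by hand from the config and template above:

```python
# Cross-check grok-extracted fields against the strict template mapping.
grok_fields = {
    "timestamp", "threadName", "logLevel", "loggerName",
    "classMethod", "traceId", "message",
}
template_fields = {
    "message", "@timestamp", "log_timestamp", "project", "path", "topic",
    "logLevel", "host", "loggerName", "threadName", "classMethod",
    "environment", "traceId",
}
missing = grok_fields - template_fields
# Fields the pipeline emits that the strict mapping would reject:
print(sorted(missing))  # → ['timestamp']
```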

irregular_logging_template.json

The default template is fine here.
