EFK Log Collection System

The logging system is built with Elasticsearch, Filebeat, and Kibana, all on version 6.8. Filebeat handles collection: it defines the log file collection format, and its output is configured to point at an ingest pipeline in ES that preprocesses each log entry, extracting dedicated fields so logs can be searched by traceId, appName, and so on.
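Once those fields are extracted, a typical Kibana Dev Tools query filters on them directly. A minimal sketch, assuming an application named consumer and a hypothetical traceId value (appName is mapped as keyword, so an exact term filter applies; traceId is mapped as text, so match is used):

GET paas-cloud-log-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term":  { "appName": "consumer" } },
        { "match": { "traceId": "f3a1b2c3d4e5" } }
      ]
    }
  }
}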

I. Elasticsearch

1. Create the index template

PUT _template/paas-cloud-log-template
{
  "index_patterns": ["paas-cloud-log*"],
  "settings": {
    "number_of_shards": 3,  //根据es集群的节点数
    "index.lifecycle.name": "paas-cloud-log" //es中的索引周期策略管控,用于对创建滚动日志索引,定期清除索引。
  },
  "mappings": {
    "_doc": {
     "_meta": {
        "version": "6.8.23" //filebeat 版本(后期如果升级时,不同版本的filebeat可能索引字段不一样)
      },
      "_source": {
        "enabled": true 
      },
      "dynamic_templates": [  //filebeat动态字段模板
        {
          "fields": {
            "path_match": "fields.*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        },
        {
          "docker.container.labels": {
            "path_match": "docker.container.labels.*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        },
        {
          "strings_as_keyword": {
            "match_mapping_type": "string",
            "mapping": {
              "ignore_above": 1024,
              "type": "keyword"
            }
          }
        },
        {
          "message_full": {
            "match": "message_full",
            "mapping": {
              "fields": {
                "keyword": {
                  "ignore_above": 2048,
                  "type": "keyword"
                }
              },
              "type": "text"
            }
          }
        },
        {
          "message": {
            "match": "message",
            "mapping": {
              "type": "text"
            }
          }
        },
        {
          "strings": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        }
      ],
      "properties": {   //自定义字段,这些字段是在es中pipeline根据paas-cloud日志文件输出格式解析出来的,帮助用户进行检索
        "appName": {
          "type": "keyword"
        },
        "port": {
          "type": "integer"
        },
        "userId": {
          "type": "text"
        },
        "tid": {
          "type": "text"
        },
        "pid": {
          "type": "text"
        },
        "loglevel": {
          "type": "keyword"
        },
        "traceId": {
          "type": "text"
        },
        "time": {
          "type": "keyword"
        }
      }
    }
  }
}
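The template above references an ILM policy named paas-cloud-log, which must be created separately. Because filebeat (configured below) already writes one index per day, a delete-only policy is enough; a hot-phase rollover would additionally require index.lifecycle.rollover_alias. A minimal sketch, assuming a 7-day retention (the retention period is an assumption; tune it to your log volume):

PUT _ilm/policy/paas-cloud-log
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}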

2. Create the ingest pipeline

PUT _ingest/pipeline/paas-cloud-log // the pipeline is referenced in the filebeat output configuration; it preprocesses documents before they are stored in ES
{ 
  "description" : "paas-cloud-log",
  "processors": [
    {
      "grok": { //grok表达式,es内置支持的详见http://grokdebug.herokuapp.com/patterns#,对验证grok表达式可以使用grokdebug,也可以使用kabana自定的tool助手支持debug检验grok表达式
        "field": "message",
        "patterns": ["%{appName:appName}:%{port:port}-%{pid:pid}-%{tid:tid}-%{userId:userId}%{placeholder1}%{TIMESTAMP_ISO8601:time}%{placeholder}%{LOGLEVEL:loglevel}%{placeholder1}%{placeholder}%{traceId:traceId}"], //根据paas-cloud日志采集格式提取相关字段,方便检索信息
        "pattern_definitions" : { //自定义的grok表达式,跟上面patterns中解析规则配合使用
          "placeholder" : """(\[)\s*""",
          "placeholder1" : """(\])\s*""",
          "appName" : """(?<=^\[)[A-Za-z0-9\-_]{0,64}+(?=:)""",
          "port" :"""%{POSINT}{0,1}""",
          "tid" : "%{port}",
          "pid" :"%{port}",
          "userId" : """[A-Za-z0-9\-_]{0,64}(?=\])""",
          "traceId" : """(?<=\[)\S*(?=\])"""
        },
        "ignore_failure" : true 
      }
    }
  ]
 }
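Before wiring the pipeline into filebeat, it can be exercised with the _simulate API. The sample log line below is an assumption reconstructed from the grok pattern above (note that, per the placeholder definitions, the level bracket follows the timestamp with no space):

POST _ingest/pipeline/paas-cloud-log/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "[consumer:8080-1234-5678-user01] 2022-05-01T10:00:00.123[INFO] [f3a1b2c3d4e5] order created"
      }
    }
  ]
}

The response should show appName, port, pid, tid, userId, time, loglevel, and traceId as separate fields; since ignore_failure is true, lines that do not match are indexed unparsed rather than rejected.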

II. Filebeat

1. Filebeat configuration

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/consumer/log.log   # log file collection path; consider whether collection paths should be standardized
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
   level: debug
   review: 1

  # ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation.
  # (Newer filebeat versions express the same thing as a `parsers` block on the
  # filestream input:)
  # parsers:
  #   - multiline:
  #       type: pattern
  #       pattern: '^\[[A-Za-z0-9\-_]+:'
  #       negate: true
  #       match: after

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  multiline.pattern: '^\[[A-Za-z0-9\-_]+:'  # multiline config: a new event starts with "[<appName>:"

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  multiline.match: after
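  # For example, assuming the log format above, the stack-trace lines below do not
  # match multiline.pattern, so they are appended to the preceding event and all
  # three lines reach ES as a single document (sample values are hypothetical):
  #   [consumer:8080-1234-5678-user01] 2022-05-01T10:00:00.123[ERROR] [f3a1b2c3d4e5] boom
  #   java.lang.NullPointerException
  #       at com.example.Foo.bar(Foo.java:42)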


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ====================== Index Lifecycle Management (ILM) ======================

# Configure index lifecycle management (ILM) to manage the backing indices
# of your data streams.

# Enable ILM support. Valid values are true, false.
# setup.ilm.enabled: true

# # Set the lifecycle policy name. The default policy name is
# # 'beatname'.
# setup.ilm.policy_name: "consumer-staging"

# The path to a JSON file that contains a lifecycle policy configuration. Used
# to load your own lifecycle policy.
#setup.ilm.policy_file:

# Disable the check for an existing lifecycle policy. The default is true. If
# you disable this check, set setup.ilm.overwrite: true so the lifecycle policy
# can be installed.
#setup.ilm.check_exists: true

# Overwrite the lifecycle policy at startup. The default is false.
#setup.ilm.overwrite: false

#==================== Elasticsearch template setting ==========================
setup.template.overwrite: false
setup.template.fields: "fields.yml"
# setup.template.name: "%{[fields.application-name]}-%{[fields.env]}"
setup.template.name: "paas-cloud-log-template" # the ES index template to use; loaded automatically when filebeat connects to ES
setup.template.pattern: "paas-cloud-log*"  # log index name prefix; ES currently only supports simple wildcards like test*, not full regular expressions
#setup.template.settings.index.lifecycle.rollover_alias: "consumer-staging"
# setup.template.settings.index.lifecycle.name: "consumer-staging"
#setup.template.pattern: "%{[fields.application-name]}-%{[fields.env]}-*"
# setup.template.settings:
#   index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
fields:   # common fields
 env: staging  # the deployment environment
 application-name: consumer # the application/service name
 paas-cloud-log: paas-cloud-log # field variable, referenced elsewhere (e.g. in the output index name)

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: "%{[fields.paas-cloud-log]}-%{[fields.application-name]}-%{[fields.env]}-%{+yyyy.MM.dd}" 
  pipeline: "paas-cloud-log" #es中已经定义好的

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "****"

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  # ilm.enabled: true

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  # - add_host_metadata: ~
  # - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: info
logging.to_files: false
logging.files:
  path: /Users/Desktop/doc/paas-cloud/日志系统/filebeat-6.8.23-darwin-x86_64/log


# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

For how to substitute system environment variables into filebeat configuration items, refer to the Filebeat official documentation.
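A minimal sketch of that substitution syntax, assuming hypothetical ES_HOSTS and APP_ENV environment variables; ${VAR} is expanded when the configuration is loaded, and ${VAR:default} falls back to the default when the variable is unset:

output.elasticsearch:
  # taken from $ES_HOSTS, falling back to localhost:9200 (variable name is an assumption)
  hosts: ['${ES_HOSTS:localhost:9200}']
fields:
  # environment tag taken from $APP_ENV (variable name is an assumption)
  env: '${APP_ENV:staging}'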
