Filebeat is a lightweight log shipper. In any environment, the risk of an application outage is always lurking. Filebeat reads and forwards log lines, and if an interruption occurs, it resumes from where it left off once everything is back to normal. Filebeat ships with modules for observability and security data sources that simplify collecting, parsing, and visualizing logs in common formats, down to a single command. This works because each module combines automatic default paths (which vary by operating system) with Elasticsearch ingest pipeline definitions and Kibana dashboards. On top of that, some Filebeat modules also ship with preconfigured Machine Learning jobs.
Download
https://www.elastic.co/cn/downloads/beats/filebeat
Extract and install
tar -xzvf filebeat-7.14.0-linux-x86_64.tar.gz -C /usr/local/
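The -C flag tells tar to change into the target directory before extracting, so the archive's top-level folder lands under /usr/local/. The same flow can be verified with a throwaway archive (demo.tar.gz, /tmp/demo-src, and /tmp/demo-dest are made-up names for illustration, not part of the Filebeat install):

```shell
# Build a dummy archive that mirrors the Filebeat tarball layout.
mkdir -p /tmp/demo-src/filebeat-demo /tmp/demo-dest
echo "config" > /tmp/demo-src/filebeat-demo/filebeat.yml
tar -czf /tmp/demo.tar.gz -C /tmp/demo-src filebeat-demo
# -C switches tar's working directory before extracting, so the archive's
# top-level folder lands directly under /tmp/demo-dest.
tar -xzvf /tmp/demo.tar.gz -C /tmp/demo-dest
ls /tmp/demo-dest/filebeat-demo/filebeat.yml
```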
Then edit the filebeat.yml file inside the extracted filebeat directory.
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# The input type: log (the default) or stdin.
- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  # Specify the logs to monitor; entries can be concrete files or glob patterns.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list. By default, all lines are exported.
  # include_lines is applied first, then exclude_lines.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering,
  # e.g. "level: debug", which makes it easy to group and count logs later.
  # By default, the added fields appear under a `fields` sub-dictionary in each
  # output event.
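Filebeat's documented filtering order (include_lines first, then exclude_lines) can be sketched in a few lines of Python. `filter_lines` is a hypothetical helper written for illustration; it is not part of Filebeat itself:

```python
import re

def filter_lines(lines, include=None, exclude=None):
    """Mimic Filebeat's line filtering: include_lines runs first (a line
    must match at least one pattern to survive), then exclude_lines drops
    any surviving line that matches. Hypothetical helper for illustration."""
    if include:
        lines = [l for l in lines if any(re.search(p, l) for p in include)]
    if exclude:
        lines = [l for l in lines if not any(re.search(p, l) for p in exclude)]
    return lines

logs = ["ERR disk full", "WARN slow query", "DBG cache miss", "INFO started"]
# Keep only ERR/WARN lines, as in include_lines: ['^ERR', '^WARN']
print(filter_lines(logs, include=[r"^ERR", r"^WARN"]))
# → ['ERR disk full', 'WARN slow query']
```

Combining both options shows the order: a line excluded by exclude_lines is dropped even if include_lines matched it first.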