ELK testing and batch deployment

Preface

The ELK system uses the officially recommended filebeat as the shipper to collect logs from remote machines;
to increase throughput, scalability and decoupling, kafka is used as the broker;
logstash moves the logs from kafka into the elasticsearch cluster, performing format matching and conversion along the way;
finally, kibana is used to search and display the logs.

The overall ELK architecture is as follows:

filebeat01 / filebeat02 / filebeat03 / filebeat...  ->  kafka cluster  ->  logstash cluster  ->  elasticsearch cluster  ->  kibana

1. Filebeat test process and standardization

Reference: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
The current latest version is 6.3.2.

(1) Configuration file finalized after testing:

#=========================== Filebeat inputs =============================

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/.operation.log
  fields:
    # the document_type parameter has been deprecated
    # set a field that is tied to the kafka topic and also serves as a search criterion
    log_topics: operation
  # fields_under_root: true
  close_renamed: true
  # if paths uses a wildcard pattern, this can be tuned
  scan_frequency: "10s"
  ignore_older: 24h
  decorate_events: false

- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    log_topics: messages
  # fields_under_root: true  (with the kafka output this version drops the event outright; with the file output it works fine)
  close_renamed: true
  ignore_older: 24h
  # this is not a filebeat parameter but a logstash one; it neither raises an error nor takes effect
  decorate_events: false

#============================= Filebeat modules ===============================

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false


#=========================== Filebeat outputs =================================
output.kafka:
  hosts: ['10.40.2.229:9092', '10.40.2.235:9092', '10.40.2.237:9092']  # kafka cluster
  topic: '%{[fields.log_topics]}'     # route to a topic based on the fields value
  required_acks: 1                    # can be set to 0 if log volume is extremely high
  compression: gzip                   # enable message compression to reduce network overhead
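
The finalized configuration can be sanity-checked before rollout with filebeat's built-in test subcommands (the config path below is an assumption for this environment):

# verify the configuration syntax
filebeat test config -c /etc/filebeat/filebeat.yml
# verify connectivity to the configured kafka output
filebeat test output -c /etc/filebeat/filebeat.yml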

(2) Service startup script

Refer to the template in the salt management package.

(3) Filebeat deployment script

Refer to the salt-managed state file.
Note: this is for batch deployment and still needs to be reworked later.

(4) Deployment notes

a. Main program
On both CentOS 6 and CentOS 7, filebeat relies on filebeat-god to run as a service;
filebeat-god is an executable bundled specifically in the rpm package (a rough sketch of how it is invoked follows below).
b. Service user
Because ordinary users lack permission to read some system log files, the service is started as root.
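
As a rough illustration of what the salt-managed startup script does, the init scripts bundled with the official rpm invoke filebeat through filebeat-god roughly as follows; the paths and flags here are assumptions based on the stock rpm layout, not the exact salt template:

# filebeat-god daemonizes filebeat and writes a pid file (paths/flags are assumptions)
/usr/share/filebeat/bin/filebeat-god -r / -n -p /var/run/filebeat.pid -- \
  /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat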

2. Kafka cluster deployment

(1) Kafka architecture

producer_cluster (producer01 / producer02 / producer...)  ->  kafka_cluster (kafka01 / kafka02 / kafka03)  ->  consumer_cluster (consumer01 / consumer02 / consumer...)
zk_cluster (zookeeper01 / zookeeper02 / zookeeper03) coordinates the kafka_cluster

Notes:
The distributed kafka cluster depends entirely on the zookeeper cluster for core features such as high availability and dynamic scaling;
producers and consumers can obtain kafka cluster metadata either from zookeeper or through the kafka API, as the check below illustrates.
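
A quick way to confirm that all brokers have registered with zookeeper is to query the /brokers/ids znode with the zkCli.sh tool that ships with zookeeper (the address is the one configured later in this document):

# list the broker ids currently registered in zookeeper
bin/zkCli.sh -server 10.40.2.230:22181 ls /brokers/ids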

(2) Zookeeper deployment

(installed with salt; omitted)

(3) Configure the pillar

cat /srv/pillar/prod/service/kafka/init.sls
services:
  kafka:
    elk:                 {# cluster name #}
      listenport: 9092
      hosts:
        10.40.2.235: 1   {# host: broker.id#}
        10.40.2.237: 2
        10.40.2.229: 3
      factor: 2          {# replication factor #}
      package: "kafka_2.11-1.1.0.tgz"    {# version to install #}
      zookeepers: "10.40.2.230:22181"    {# format: 192.168.7.100:12181,192.168.7.101:12181,192.168.7.107:1218 #}
      kafka_heap_opts: "-Xmx4G -Xms4G"   {# JVM settings; the default is 8G #}

(4) Apply the state file

(omitted; a hypothetical invocation is sketched below)
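
Although the actual run is omitted, applying the kafka state typically looks like the following; the minion target and state name are hypothetical and must match the local salt tree:

# apply the kafka state from the prod environment to the kafka minions (target and state name are assumptions)
salt 'kafka*' state.apply service.kafka saltenv=prod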

(5) Configuration file

Refer to the official documentation; the relevant caveats are covered below.

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

listeners=PLAINTEXT://:9092
port=9092
host.name=10.40.2.235
advertised.host.name=10.40.2.235
advertised.port=9092
num.network.threads=5
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600


############################# Log Basics #############################

log.dirs=/data/kafka/kafka-elk
num.partitions=3
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

log.flush.interval.messages=10000
log.flush.interval.ms=1000

############################# Log Retention Policy #############################

log.retention.hours=72
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
default.replication.factor=2
# logs are deleted once they are older than 3 days
log.cleanup.policy=delete

############################# Zookeeper #############################

zookeeper.connect=10.40.2.230:22181
zookeeper.connection.timeout.ms=6000

(6) Notes

Kafka version: 1.1.0; elk-6.3.2 was developed against this version.
advertised.host.name: configure an IP address; otherwise the hostname is broadcast by default, forcing every consumer to configure hostname resolution, which is not recommended. At best messages cannot be consumed; at worst intra-cluster communication breaks.
num.partitions: set to 3 by default here; it can be changed manually later, as shown below.
default.replication.factor: logs are kept in 2 replicas here; 3 is recommended for business data. Once a topic has been created this cannot be changed dynamically.
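
For reference, the partition count of an existing topic can be raised (it can never be lowered) with the stock kafka tooling, using the zookeeper address from this deployment and the operation topic from the filebeat section:

# raise the partition count of the operation topic to 6
bin/kafka-topics.sh --zookeeper 10.40.2.230:22181 --alter --topic operation --partitions 6
# inspect the partitions and replicas of the topic
bin/kafka-topics.sh --zookeeper 10.40.2.230:22181 --describe --topic operation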

(7) Service startup script

Refer to the salt-managed configuration template.

3. Logstash

(1) JVM configuration file

# cat jvm.options
## JVM configuration

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

# 8G is recommended in production
-Xms1g
-Xmx1g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## Locale
# Set the locale language
#-Duser.language=en

# Set the locale country
#-Duser.country=US

# Set the locale variant, if any
#-Duser.variant=

## basic

# set the I/O temp directory
#-Djava.io.tmpdir=$HOME

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
#-Djna.nosys=true

# Turn on JRuby invokedynamic
-Djruby.compile.invokedynamic=true
# Force Compilation
-Djruby.jit.threshold=0

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof

## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime

# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}

# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom

Notes:
Since the fairly modern CMS garbage collector is used by default, almost nothing needs changing apart from the heap size.
The heap size should be set according to the number of pipelines and the message processing rate.

(2) startup.options and log4j configuration

# cat startup.options
################################################################################
# These settings are ONLY used by $LS_HOME/bin/system-install to create a custom
# startup script for Logstash and is not used by Logstash itself. It should
# automagically use the init system (systemd, upstart, sysv, etc.) that your
# Linux distribution uses.
#
# After changing anything here, you need to re-run $LS_HOME/bin/system-install
# as root to push the changes to the init script.
################################################################################

# Override Java location
# Java binary location
JAVACMD=/usr/local/services/jdk1.8.0_91/bin/java

# Set a home directory
LS_HOME=/usr/local/services/logstash-6.3.2

# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR=/usr/local/services/logstash-6.3.2/config

# Arguments to pass to logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"

# Arguments to pass to java
LS_JAVA_OPTS=""

# pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
LS_PIDFILE=/usr/local/services/logstash-6.3.2/logstash.pid

# user and group id to be invoked as
LS_USER=user_00
LS_GROUP=users

# Enable GC logging by uncommenting the appropriate lines in the GC logging
# section in jvm.options
LS_GC_LOG_FILE=/usr/local/services/logstash-6.3.2/logs/gc.log

# Open file limit
LS_OPEN_FILES=16384

# Nice level
LS_NICE=19

# Change these to have the init script named and described differently
# This is useful when running multiple instances of Logstash on the same
# physical box or vm
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"

# If you need to run a command or script before launching Logstash, put it
# between the lines beginning with `read` and `EOM`, and uncomment those lines.
###
## read -r -d '' PRESTART << EOMm
## EOM

Notes:
Configuring the first 4 parameters is enough; leave the rest at their defaults. Logging-related settings are overridden by the log4j configuration.

(3) logstash.yml

# cat logstash.yml
# settings for testing and troubleshooting
node.name: test
config.debug: false
log.level: info

(4) Pipelines configuration

# cat pipelines.yml
# - pipeline.id: my-pipeline_1
#   path.config: "/usr/local/services/logstash-6.3.2/config/conf.d/test.conf"
#   pipeline.workers: 1
#
# - pipeline.id: "my-other-pipeline"
#   path.config: "/usr/local/services/logstash-6.3.2/config/conf.d/test2.conf"
#   queue.type: persisted
#
#- pipeline.id: kafkatest
#  path.config: "/usr/local/services/logstash-6.3.2/config/conf.d/kafka.conf"
#  pipeline.workers: 1

- pipeline.id: kafkatest2
  path.config: "/usr/local/services/logstash-6.3.2/config/conf.d/k2.conf"
  pipeline.workers: 1

# cat conf.d/k2.conf
input{
  kafka{
    bootstrap_servers => ["10.40.2.230:9092,10.40.2.230:9093"]
    client_id => "hsobc1"
    group_id => "hsobc"
    # on the first run, start consuming from the earliest offset of the topic
    auto_offset_reset => "earliest"
    # number of consumer threads
    consumer_threads => 2
    # which topic(s) to consume from
    topics => ["operation"]
    # consume as json; filebeat outputs json
    codec => "json"
    ## do not attach kafka metadata (message size, source topic, consumer group) to the output event
    decorate_events => "false"

    # connection tuning with kafka; older versions had zookeeper-related settings instead
    enable_auto_commit => "true"
    auto_commit_interval_ms => "2000"
    connections_max_idle_ms => "5000"
  }
}

filter {
  # grok regexes are complex, extremely slow and CPU-hungry; hence the dissect plugin, although it is less flexible
  dissect {
    # this could not be promoted to a top-level field in filebeat, so add a top-level field here
    add_field => { "log_topics" => "%{[fields][log_topics]}" }
    # drop some optional fields that filebeat adds automatically, to reduce network traffic
    remove_field => [ "beat","input","prospector","@metadata","fields","host" ]
    mapping => {
      "message" => '%{date} %{+date} %{+date} %{hostname} %{shellname}: %{?user}=%{&user}, %{?login}=%{&login}, %{?from}=%{&from}, %{?pwd}=%{&pwd}, command="%{date2} %{+date2} - %{command} %{args}"'
    }
    # on a successful match, message is removed; on failure the mapping aborts for this event, message is kept intact,
    # and a field "tags":["_dissectfailure"] is added
    remove_field => [ "message","date2" ]
  }
}

output {
  file {
    path => "/tmp/operation.txt"
  }
}

Note:
The k2.conf file is the actual pipeline implementation; there can be several, but each one must be explicitly configured in pipelines.yml.
There can also be multiple logstash instances.
The official dissect documentation states that the newline character \n is not supported; testing confirmed this.

Kafka offset reset behavior (committed offsets can be inspected with the command after this list):
earliest: if a partition has a committed offset, consume from that offset; if not, consume from the beginning.
latest: if a partition has a committed offset, consume from that offset; if not, consume only data newly produced to that partition.
none: if every partition has a committed offset, consume from after those offsets; if any partition lacks a committed offset, throw an exception.
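
The offsets a group has actually committed can be checked with the stock kafka tool, using the brokers and group_id from the pipeline above:

# describe committed offsets and lag for the hsobc consumer group
bin/kafka-consumer-groups.sh --bootstrap-server 10.40.2.230:9092 --describe --group hsobc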

(5) Logstash standardization and deployment

For templates and salt management, refer to the configuration on the salt master.

Configuration test:
bin/logstash -t -f config/pipelines.yml
The same configuration file initially tested fine on CentOS 6 but failed on CentOS 7; when retested later, CentOS 6 reported the same error:

# bin/logstash -t -f config/pipelines.yml
Sending Logstash's logs to /usr/local/services/logstash-6.3.2/logs which is now configured via log4j2.properties
[2018-08-15T11:26:12,883][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-15T11:26:13,213][FATAL][logstash.runner          ] The given configuration is invalid. Reason: Expected one of #, input, filter, output at line 2, column 1 (byte 2) after

[2018-08-15T11:26:13,227][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

The message says pipelines.yml is ignored when command-line options are given; a single pipeline configuration can be split out into its own file and tested directly, for example:
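
Testing the k2.conf pipeline from above on its own:

bin/logstash -t -f config/conf.d/k2.conf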

Use the following command to generate the two startup files:

bin/system-install /usr/local/services/logstash-6.3.2/config/startup.options systemd
cat /etc/default/logstash
JAVACMD="/usr/local/services/jdk1.8.0_91/bin/java"
LS_HOME="/usr/local/services/logstash-6.3.2"
LS_SETTINGS_DIR="/usr/local/services/logstash-6.3.2/config"
LS_PIDFILE="/usr/local/services/logstash-6.3.2/logstash.pid"
LS_USER="user_00"
LS_GROUP="users"
LS_GC_LOG_FILE="/usr/local/services/logstash-6.3.2/logs/gc.log"
LS_OPEN_FILES="16384"
LS_NICE="19"
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"

# cat /etc/systemd/system/logstash.service
[Unit]
Description=logstash

[Service]
Type=simple
User=user_00
Group=users
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/local/services/logstash-6.3.2/bin/logstash "--path.settings" "/usr/local/services/logstash-6.3.2/config"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target

With the two files above, the service still would not start: java could not be found.
So they were merged into the following:

# cat /etc/systemd/system/logstash.service
[Unit]
Description=logstash

[Service]
Type=simple
User=user_00
Group=users
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
Environment=JAVA_HOME=/usr/local/services/jdk1.8.0_91
Environment=LS_HOME=/usr/local/services/logstash-6.3.2
Environment=LS_SETTINGS_DIR=/usr/local/services/logstash-6.3.2/config
Environment=LS_PIDFILE=/usr/local/services/logstash-6.3.2/logstash.pid
Environment=LS_USER=user_00
Environment=LS_GROUP=users
Environment=LS_GC_LOG_FILE=/usr/local/services/logstash-6.3.2/logs/gc.log
Environment=LS_OPEN_FILES=16384
Environment=LS_NICE=19
Environment=SERVICE_NAME=logstash
Environment=SERVICE_DESCRIPTION=logstash
ExecStart=/usr/local/services/logstash-6.3.2/bin/logstash "--path.settings" "/usr/local/services/logstash-6.3.2/config"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target
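
After creating or editing the unit file, reload systemd and manage the service in the usual way:

systemctl daemon-reload
systemctl enable logstash
systemctl start logstash
systemctl status logstash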

(6) The logstash grok plugin

Put plainly, grok matches and processes an event (one log line) with regular expressions. This approach is extremely inefficient and is generally used for logs that are low-volume and have no fixed format; for production it is recommended to standardize the log format and then process it with the dissect plugin.
Reference: https://www.cnblogs.com/stozen/p/5638369.html
Patterns supported out of the box:
[logstash_home]/vendor/bundle/jruby/x.x/gems/logstash-patterns-core-x.x.x/patterns/grok-patterns
Regex parsing is error-prone; it is strongly recommended to debug with the Grok Debugger: http://grokdebug.herokuapp.com/

Here is an nginx example from an earlier setup:

log_format  pclog 'PC $remote_addr - $remote_user [$time_iso8601] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" $cookie_DYN_USER_ID [$cookie_JSESSIONID] [$uri] [$arg_productId] [$arg_brandNum] [$arg_shopId]';
log_format  applog 'APP $remote_addr - $remote_user [$time_iso8601] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" $cookie_DYN_USER_ID [$cookie_JSESSIONID]';
log_format  waplog  'WAP $remote_addr - $remote_user [$time_iso8601] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" $cookie_DYN_USER_ID [$cookie_JSESSIONID]';

A pattern the developers wrote at the time:

cat /opt/logstash/patterns/nginx
NGINXLOG %{WORD:channel} %{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{DATA:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?:HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{QS:http_x_forwarded_for} %{USER:userId} \[%{DATA:sessionId}\] \[%{DATA:uri}\] \[%{DATA:productId}\] \[%{DATA:brandNum}\] \[%{DATA:shopId}\]

It makes heavy use of the built-in patterns; you can also put your own patterns into a separate file and reference it from logstash, as sketched below.
Note:
(?:...) is a non-capturing group.
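
A minimal sketch of referencing such a pattern file from a grok filter, assuming the /opt/logstash/patterns directory shown above:

filter {
  grok {
    # directory containing the custom nginx pattern file above
    patterns_dir => ["/opt/logstash/patterns"]
    # apply the NGINXLOG pattern to the raw log line
    match => { "message" => "%{NGINXLOG}" }
  }
}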

A few more pattern examples:
DATE_CHS %{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}   (the date format commonly used in China)
ZIPCODE_CHS [1-9]\d{5}   (Chinese postal code)
GAME_ACCOUNT [a-zA-Z][a-zA-Z0-9_]{4,15}   (game account: a leading letter followed by 4-15 letters, digits or underscores)

To reiterate: grok is not recommended.

4. Production examples

(1) A production example

Notes:

Downgrading logstash from 6.3.2 to 6.3.1 seems to have made many issues that took a long time to troubleshoot during earlier testing disappear.
The example below has the production-specific identifiers replaced.
For explanations, see the theory sections above.

Prerequisite:

The developers have already converted the logs to json format.

The pitfall discovered when logstash could not consume messages from kafka:

  • Checked the logs: no errors at all
  • Went through every logstash problem encountered before: the issue persisted
  • Turned on debug-level logging in logstash: still no error logs
    • After a long while, spotted an error: The coordinator is not available. Maddeningly, this message is logged at debug level, so it does not even count as an error
    • Suspected it was because kafka had only a single node, but dismissed the idea, since only 3 machines had been provided for the entire ELK stack
    • Combed the official logstash documentation back and forth for a parameter covering single-node kafka; found nothing
    • After ruling out every other possibility, reluctantly used salt to scale the kafka cluster out to 3 nodes
    • logstash immediately began consuming data

filebeat.yml:

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  json.keys_under_root: true
  json.overwrite_keys: true
  paths:
    - /data/logs/*.log
  fields:
    log_topics: hehe
    source_ip: 10.10.10.5
  fields_under_root: true
  close_renamed: true
  ignore_older: 24h
  decorate_events: false
#============================= Filebeat modules ===============================

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#=========================== Filebeat outputs =================================
output.kafka:
  hosts: ['10.10.10.7:9092', '10.10.10.8:9092', '10.10.10.6:9092']
  topic: "%{[log_topics]}"
  required_acks: 1

logstash configuration file:

- pipeline.id: hehe
  path.config: "/etc/conf.d/hehe.config"
  pipeline.workers: 3

hehe.config

input{
  kafka {
    bootstrap_servers => ["10.10.10.8:9092,10.10.10.7:9092,10.10.10.6:9092"]
    client_id => "hehe01"
    group_id => "hehe"
    auto_offset_reset => "earliest"
    consumer_threads => 3
    topics => ["hehe"]
    codec => "json"
  }
}

filter {
  mutate {
    remove_field => [ "offset", "beat", "input", "host", "prospector" ]
  }
}

output {
  elasticsearch {
    hosts => ["10.10.10.3:9200"]
    index => "hehe-%{+YYYY.MM}"
  }
}
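
Once the pipeline is running, the monthly indices it writes can be checked directly against elasticsearch (the host is the one from the output section above):

# list the hehe-* indices and their document counts
curl -s '10.10.10.3:9200/_cat/indices/hehe-*?v'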

(2) Handling timestamps in nginx logs

Here is another very handy plugin; for anyone who knows Ruby it could hardly be more convenient.
Change the time in the nginx log to the $time_iso8601 format, then process it in logstash with a filter:

# add the following below mutate in the filter section of logstash.conf:
filter {
# old versions
  ruby{
    code=>"event['timestamp']=DateTime.parse(event['timestamp']).strftime('%Y-%m-%d %H:%M:%S')"
  }

# new versions:
  ruby {
    code => 'event.set("time_local",DateTime.parse(event.get("time_local")).strftime("%Y-%m-%d %H:%M:%S"))'
  }
}

Of course, handling the time in nginx logs still has plenty of pitfalls:
(1) You can modify the source code to change the nginx log format.
Reference: https://www.cnblogs.com/bigberg/p/7774508.html
(2) Use the date filter plugin to convert the time in the log into the @timestamp field in ES; for logs that arrive with a delay, this makes searching in kibana much better.

filter {
  date {
    match => ["time_local","dd/MMM/yyyy:HH:mm:ss Z"]
  }
}

This parses the content of the time_local field with the dd/MMM/yyyy:HH:mm:ss Z format and then overwrites the @timestamp field with the result.

Note:

The time format in the nginx log is time_local.
The @timestamp field uses the UTC standard time zone.
The time-zone offset needs only the single letter Z in the pattern.

For example:

US -0600 time zone (the time in the nginx log):
"time_local": "10/Jan/2019:01:45:14 -0600"
The time in @timestamp (the time stored in ES):
"@timestamp": "2019-01-10T07:45:14.000Z"
At that moment the time in China is 2019-01-10T15:45:14.

The production nginx log format is as follows:

log_format main_json '{"remote_addr":"$remote_addr",'
                         '"remote_user":"$remote_user",'
                         '"time_local":"$time_local",'
                         '"request_method":"$request_method",'
                         '"uri":"$uri",'
                         '"args":"$args",'
                         '"protocol":"$server_protocol",'
                         '"status":"$status",'
                         '"body_bytes_sent":"$body_bytes_sent",'
                         '"referer":"$http_referer",'
                         '"agent":"$http_user_agent",'
                         '"request_time":"$request_time",'
                         '"upstream_response_time":"$upstream_response_time",'
                         '"host":"$host",'
                         '"true_ip":"$http_true_client_ip"'
                         '}';

Below is the logstash configuration that processes this nginx log format; these logs are collected directly with logstash, since there were not enough machines:

input {
  file {
    path => "/usr/local/services/tengine/logs/access.log"
    sincedb_path => "/usr/local/services/logstash-6.3.1/data/nginx"
    start_position => beginning
    close_older => 1200
    ignore_older => 43200
    add_field => {
      log_topic => nginx
      source_ip => "10.0.0.1"
    }
    codec => "json"
  }
}

filter {
  date {
    match => ["time_local","dd/MMM/yyyy:HH:mm:ss Z"]
  }

  mutate {
    convert => {
      upstream_response_time => "float"
      request_time => "float"
    }
  }
}

output {
  elasticsearch {
    hosts => ["10.0.0.1:9201","10.0.0.2:9201","10.0.0.3:9201"]
    index => "%{log_topic}-%{+YYYY.MM.dd}"
  }
}

(3) Processing the messages and secure logs

a. filebeat -> logstash -> es

filebeat.yml:

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    log_topics: messages
    source_ip: 10.0.0.10
  fields_under_root: true
  close_renamed: true
  ignore_older: 24h
  decorate_events: false

- type: log
  enabled: true
  paths:
    - /var/log/secure
  fields:
    log_topics: secure
    source_ip: 10.0.0.10
  fields_under_root: true
  close_renamed: true
  ignore_older: 24h
  decorate_events: false

#============================= Filebeat modules ===============================

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#=========================== Filebeat outputs =================================
output.logstash:
  hosts: ["10.0.0.6:5044","10.0.0.7:5044"]

logstash configuration file:

input {
  beats {
    port => 5044
  }
}

filter {
  dissect {
    add_field => { "hostname" => "%{[host][name]}" }
    mapping => {
      "message" => "%{time->} %{+time} %{+time} %{} %{event_name}: %{event_msg}"
    }
    remove_field => [ "message","beat","prospector","offset","tags","input","host"]
  }

  date {
    match => ["time","MMM dd HH:mm:ss"]
  }

}

output {
    elasticsearch {
      hosts => ["10.0.0.1:9201","10.0.0.2:9201","10.0.0.3:9201"]
      index => "%{[log_topics]}-%{+YYYY.MM}"
    }
}

b. logstash -> es directly on the nginx host

logstash configuration file:

input {
  file {
    path => "/var/log/messages"
    sincedb_path => "/usr/local/services/logstash-6.3.1/data/messages"
    start_position => beginning
    close_older => 1200
    ignore_older => 43200
    add_field => {
      log_topic => messages
      source_ip => "10.0.0.5"
    }
  }
}

filter {
  dissect {
    add_field => { "hostname" => "esearch-xxx.sl" }
    mapping => {
      "message" => "%{time->} %{+time} %{+time} %{} %{event_name}: %{event_msg}"
    }
    remove_field => [ "message"]
  }


  date {
    match => ["time","MMM dd HH:mm:ss"]
  }

}

output {
  elasticsearch {
    hosts => ["10.0.0.1:9201","10.0.0.2:9201","10.0.0.3:9201"]
    index => "%{log_topic}-%{+YYYY.MM}"
  }
}

5. Kibana manual (Chinese)

Reference: https://www.elastic.co/guide/cn/kibana/current/index.html
