Troubleshooting Logstash "Timeout executing grok"

While running Logstash I noticed this error kept appearing and made Logstash unstable. Since the input side is Kafka, it also caused Kafka to rebalance over and over. My initial line of investigation was off, and the write-ups online each cover only one of the possible causes, which pushed my troubleshooting in the wrong direction.

1. Problem background

I run a group of Logstash instances that consume log data from Kafka, parse it, and push it into ES to provide log search. The configuration looks roughly like this.

1. Basic configuration

  1. Logstash version: 7.5.0

  2. JVM

    1. java -version
      openjdk version "1.8.0_312"
      OpenJDK Runtime Environment (build 1.8.0_312-b07)
      OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)
      
      
    2. -Xms6g
      -Xmx6g
      
  3. config/logstash.yml

    1. pipeline.workers: 6
      pipeline.batch.size: 50
      
      

2. Pipeline configuration

input {
  kafka {
    bootstrap_servers => "xxx:9092"
    topics_pattern => "log_.*"
    auto_offset_reset => "latest"
    group_id => "log-es-find"
    consumer_threads => 1
    client_id => "node123"
    partition_assignment_strategy => "org.apache.kafka.clients.consumer.RoundRobinAssignor"
    max_poll_interval_ms => "600000"
    max_poll_records => "100"
    codec => json {
      charset => "UTF-8"
    }
  }
}

filter {

  mutate {
    remove_field => [ "host", "@timestamp" ]
  }

  if "_LOGV3_" not in [message] {

    grok {
      match => {
        "message" => '^%{LOGLEVEL:log_level} %{TIMESTAMP_ISO8601:time} %{HOST:host_name} %{SERVICCE:service_type} %{STRING:process} %{STRING:thread} - %{SESSION_ID:session_id} %{GREEDYDATA:message_log}$'
      }
      remove_field => ["message"]
    }

  } else {

    grok {
      match => {
        "message" => '^%{LOGLEVEL:log_level} %{TIMESTAMP_ISO8601:time} %{HOST:host_name} %{SERVICCE:service_type} %{STRING:process} %{STRING:thread} _LOGV3_ %{SESSION_ID:session_id} %{EVENT_MARK:event} %{GREEDYDATA:message_log}$'
      }
    }

    json {
      source => "message_log"
      target => "doc.custom"
      remove_field => ["message", "message_log"]
    }

    mutate {
      rename => { "doc.custom" => "doc.%{event}" }
    }

  }

  if "_groktimeout" in [tags] {
    grok {
      match => { "message" => "^%{LOGLEVEL:log_level} %{TIMESTAMP_ISO8601:time} %{HOST:host_name} %{SERVICCE:service_type}" }
      remove_tag => [ "_groktimeout" ]
      add_tag => [ "timeout" ]
    }
  }

  if "_grokparsefailure" in [tags] {
    grok {
      remove_tag => [ "_grokparsefailure" ]
      add_tag => [ "grok_failure" ]
      match => { "message" => '^%{LOGLEVEL:log_level} %{TIMESTAMP_ISO8601:time} %{HOST:host_name} %{SERVICCE:service_type} %{GREEDYDATA:message_log}$' }
    }
  }

  date {
    match => ["time", "ISO8601"]
    target => "@timestamp"
    locale => "cn"
  }

  mutate {
    remove_field => ["time", "@metadata", "host", "input", "agent", "ecs", "log"]
  }

  if "_grokparsefailure" in [tags] {
    mutate {
      add_field => {
        "service_type" => "grok_failure"
      }
    }
  }

  if "_groktimeout" in [tags] {
    mutate {
      add_field => {
        "service_type" => "grok_time_out"
      }
    }
  }

}

output {
  elasticsearch {
    hosts => ["http://es:9200"]
    index => "mai_log_%{[service_type]}_detail_%{+YYYY.MM.dd}"
    user => "elastic"
    password => "xxx"
  }
}
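
One note on the patterns above: LOGLEVEL, TIMESTAMP_ISO8601, HOST and GREEDYDATA are stock grok patterns, but SERVICCE, STRING, SESSION_ID and EVENT_MARK are custom ones that have to be defined somewhere outside this file (my real definitions are not included in this post). A minimal sketch of one way to wire them in, with placeholder regexes only:

# patterns/custom -- placeholder definitions, for illustration only
SERVICCE   [A-Za-z0-9_\-]+
STRING     \S+
SESSION_ID [0-9a-f]+
EVENT_MARK \S+

# and each grok filter points patterns_dir at that directory:
grok {
  patterns_dir => ["./patterns"]
  match        => { "message" => '^%{LOGLEVEL:log_level} %{TIMESTAMP_ISO8601:time} %{HOST:host_name} %{SERVICCE:service_type} ...' }
}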

2. Symptoms

  Since I had set up this stack before, I put it together quickly and didn't tweak Logstash much. Crucially, I did not configure GC logging, so no GC logs were being written, and that hampered my judgment later on.

  After startup Logstash misbehaved. One issue was that some records contain JSON whose set of fields can grow without limit, which broke a few ES indices; changing those fields to the flattened type fixed that part. But another problem remained: after running for 7 or 8 hours, Logstash would become unstable, repeatedly losing contact with Kafka and triggering rebalance after rebalance. The logs contained entries like this:

[2021-11-25T11:22:57,107][WARN ][logstash.filters.grok    ][main] Timeout executing grok '%{LOGLEVEL:log_level} %{TIMESTAMP_ISO8601:time} %{HOST:host_name} %{SERVICCE:service_type} %{STRING:process} %{STRING:thread} - %{SESSION_ID:session_id} %{GREEDYDATA:message_log}$' against field 'message' with value 'Value too large to output (3876 bytes)! First 255 chars are: debug 2021-11-25T11:01:35.303029+08:00 kvm-jobs-job-index-online3 go_recall_job 30343 30349 - 10e11e80e227f6053f1637809295e246 uid=187853021, user={"Uid":187853021,"ExpectPosition":"跨境运营","LikeWords":[{"Term":"运营","Weight":0.62232894},{"Term":"跨境","Weight"'!

The constant rebalancing left Logstash barely able to do any useful work.
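
For reference, the timeout in that WARN line is the grok filter's own watchdog, not a Kafka timeout: a single match attempt is aborted after timeout_millis (30000 ms by default) and the event gets the _groktimeout tag (the tag_on_timeout default), which is exactly the tag my filter section checks for. A sketch of setting it explicitly, with an illustrative value:

grok {
  match          => { "message" => '^%{LOGLEVEL:log_level} %{TIMESTAMP_ISO8601:time} ...' }
  timeout_millis => 10000           # give up on one match attempt after 10 s (default is 30000)
  tag_on_timeout => "_groktimeout"  # default tag; matched by the `if "_groktimeout" in [tags]` branch above
}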

3. First attempt: tuning memory-queue + poll-size

  At this point I wondered whether some field's grok was simply too complex, making Logstash time out while processing. Searching with "Timeout executing grok" as the keyword, most of the hits likewise said the grok pattern was too complex, so, young and naive, I set about optimizing my patterns (which honestly aren't complex). I also read up on Logstash's consumption model: the input stage and the filter+output stage are separate; the input only pulls data and drops it into an in-memory buffer (the memory queue), and the filter workers consume from that queue. The memory queue is bounded, so when filter+output is slow, the input gets blocked. That made me wonder: could the problem be that each poll pulls too much data from Kafka? If a poll returns more than the filters can digest, the input waits too long and Kafka times out. I also assumed the "Timeout executing grok" warnings were caused by a small number of unusual records, which would fit how intermittent the problem was.

So I changed some of the settings shown above:

pipeline.workers: 6
pipeline.batch.size: 50

max_poll_interval_ms => "600000"
max_poll_records => "100"

These are actually the post-change values; before this I was running on the defaults, which are larger than these.
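
For scale: with the in-memory queue, the number of in-flight events is roughly pipeline.workers × pipeline.batch.size = 6 × 50 = 300, and each poll now fetches at most max_poll_records = 100 records. The consumer has to get back to poll() within max_poll_interval_ms = 600000 ms (10 minutes here, versus Kafka's 5-minute default) or it is considered dead and the group rebalances. The idea was to make each chunk of work small enough that filter + output could always finish it well inside that window.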

I restarted Logstash again, thinking that surely this time it would be fine. After about a day the same problem showed up again. Out of tricks, I figured the only thing left was to analyze the GC logs, so I turned GC logging on.

4. Pinpointing the problem with GC logs

Enable GC logging:

jvm.options


## GC logging
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintGCDateStamps
-XX:+PrintClassHistogram
-XX:+PrintTenuringDistribution
-XX:+PrintGCApplicationStoppedTime

# log GC status to a file with time stamps
# ensure the directory exists
-Xloggc:./gc_logs/gc.log
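
Since the instance runs for days at a time, it is probably also worth rotating the GC log; these are the standard JDK 8 flags for that (the counts and sizes here are arbitrary):

-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=4
-XX:GCLogFileSize=64m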

After yet another restart, I watched the GC log once the problem recurred. It showed back-to-back full GCs, to the point that Logstash could no longer get any work done because almost all of its time was spent in full GC. Worse, heap usage did not drop after the full GCs. Clearly a memory leak, presumably some Logstash bug I had managed to trigger:

2021-11-27T17:36:21.048+0800: 78068.514: [CMS-concurrent-mark-start]
2021-11-27T17:36:23.878+0800: 78071.344: [Full GC (Allocation Failure) 2021-11-27T17:36:23.878+0800: 78071.344: [CMS2021-11-27T17:36:25.512+0800: 78072.978: [CMS-concurrent-mark: 4.462/4.464 secs] [Times: user=39.30 sys=0.82, real=4.47 secs]
 (concurrent mode failure): 4194303K->4194303K(4194304K), 35.7906436 secs] 6081790K->4204353K(6081792K), [Metaspace: 103267K->103267K(1155072K)], 35.7908146 secs] [Times: user=44.59 sys=0.67, real=35.79 secs]
2021-11-27T17:36:59.669+0800: 78107.135: Total time for which application threads were stopped: 35.7917723 seconds, Stopping threads took: 0.0000556 seconds
2021-11-27T17:36:59.673+0800: 78107.139: Total time for which application threads were stopped: 0.0027825 seconds, Stopping threads took: 0.0001835 seconds
2021-11-27T17:37:00.674+0800: 78108.140: Total time for which application threads were stopped: 0.0007674 seconds, Stopping threads took: 0.0000516 seconds
2021-11-27T17:37:01.669+0800: 78109.135: [GC (CMS Initial Mark) [1 CMS-initial-mark: 4194303K(4194304K)] 4310321K(6081792K), 0.0121234 secs] [Times: user=0.10 sys=0.00, real=0.02 secs]
2021-11-27T17:37:01.681+0800: 78109.147: Total time for which application threads were stopped: 0.0129999 seconds, Stopping threads took: 0.0000897 seconds
2021-11-27T17:37:01.681+0800: 78109.147: [CMS-concurrent-mark-start]
2021-11-27T17:37:06.359+0800: 78113.825: [CMS-concurrent-mark: 4.678/4.678 secs] [Times: user=43.90 sys=1.14, real=4.67 secs]
2021-11-27T17:37:06.359+0800: 78113.825: [CMS-concurrent-preclean-start]
2021-11-27T17:37:06.569+0800: 78114.035: [Full GC (Allocation Failure) 2021-11-27T17:37:06.569+0800: 78114.035: [CMS2021-11-27T17:37:12.639+0800: 78120.106: [CMS-concurrent-preclean: 6.278/6.280 secs] [Times: user=7.09 sys=0.17, real=6.28 secs]
 (concurrent mode failure): 4194303K->4194303K(4194304K), 40.9531845 secs] 6081787K->4205126K(6081792K), [Metaspace: 103268K->103268K(1155072K)], 40.9533777 secs] [Times: user=39.95 sys=0.84, real=40.96 secs]
2021-11-27T17:37:47.523+0800: 78154.989: Total time for which application threads were stopped: 40.9546434 seconds, Stopping threads took: 0.0000614 seconds

I then used jmap to look at the heap histogram:

jmap -histo:live $pid

num     #instances         #bytes  class name
----------------------------------------------
   1:      17289297      691571880  org.jruby.util.ByteList
   2:      17245027      689801080  org.jruby.RubyString
   3:      17417048      659620560  [C
   4:      17334568      554706176  java.util.concurrent.ConcurrentHashMap$Node
   5:      17214558      550865856  org.logstash.FieldReference
   6:      17296590      514218192  [B
   7:      17416092      417986208  java.lang.String
   8:          8271      135421720  [Ljava.util.concurrent.ConcurrentHashMap$Node;
   9:        139671        4469472  java.util.HashMap$Node
  10:         74368        4338232  [Ljava.lang.Object;
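
(A side note: jmap -histo:live itself forces a full GC, which hardly mattered here since the JVM was already doing back-to-back full GCs anyway. For a closer look one could also dump the whole heap and open it in a tool such as Eclipse MAT; the file name below is arbitrary:)

jmap -dump:live,format=b,file=logstash-heap.hprof $pid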

That didn't look right, so I opened a discuss thread in the community, and sure enough it is a Logstash bug. The frustrating part is that someone had already submitted a commit for it, but the fix hadn't landed yet...
What triggers the bug in my case is that some of the fields in my JSON are not standardized, so across all the logs the total number of distinct JSON keys runs past a hundred million, and that is what leaks the memory. It is a bug in the json filter.
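
Looking back at the pipeline config, the two places that turn data-controlled strings into Logstash field names are repeated below with comments; with messy, non-uniform JSON, every distinct key and every distinct %{event} value becomes yet another entry in Logstash's internal field-name bookkeeping, which is what the huge org.jruby.RubyString / org.logstash.FieldReference rows in the histogram are. (That is my reading of the histogram combined with what came out of the discuss thread, not an official analysis.)

# every distinct key inside message_log becomes a new field under "doc.custom"
json   { source => "message_log"  target => "doc.custom" }

# every distinct value of %{event} produces a brand-new field name "doc.<event>"
mutate { rename => { "doc.custom" => "doc.%{event}" } }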

Looking back on this: at the beginning I kept assuming the problem was in how I was using things, so I never bothered to turn on GC logging; and leaning on past experience, I figured the ELK stack wouldn't have many bugs. I really didn't expect to go around in such a big circle and still trip over one...
