Using the Logstash grok plugin

Example:

Matching application logs with grok

The application's logging is configured in logback.xml, whose key part is the log output pattern:

<pattern>[%d{yyyy-MM-dd HH:mm:ss}] [%thread] [%level] %logger{50} - %msg%n</pattern>

Excerpt from the file:

    <appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Path and name of the log file currently being written -->
        <file>${log.path}/log_info.log</file>
        <!-- Log file output format -->
        <encoder>
            <!--<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>-->
            <pattern>[%d{yyyy-MM-dd HH:mm:ss}] [%thread] [%level] %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <!-- Rolling policy: roll by date and by size -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Archive path and name pattern for daily log files -->
            <fileNamePattern>${log.path}/info/log-info-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <!-- Number of days to keep log files -->
            <maxHistory>15</maxHistory>
        </rollingPolicy>
        <!-- This file records INFO-level events only -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>info</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>
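The <pattern> above lays every event out as `[timestamp] [thread] [level] logger - message`. For comparison, the same layout can be reproduced with Python's logging module (the logger name "demo" and the message are placeholders, not taken from the project):

```python
import io
import logging

# Reproduce the logback layout
# "[%d{yyyy-MM-dd HH:mm:ss}] [%thread] [%level] %logger{50} - %msg%n"
# with Python's logging formatter; "demo" is a placeholder logger name.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter(
    fmt="[%(asctime)s] [%(threadName)s] [%(levelname)s] %(name)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S"))

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Started demo application")

line = buf.getvalue().strip()
print(line)
```

Running this in the main thread prints a line shaped exactly like the sample below, e.g. `[2024-02-21 15:30:55] [MainThread] [INFO] demo - Started demo application`, which is what the grok expression has to take apart again.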

A sample log line produced by this pattern:

[2024-02-21 15:30:55] [main] [INFO] c.c.s.t.i.ths.tax.item.ths.ItemThsApplication - Started ItemThsApplication in 7.129 seconds (JVM running for 9.158)

(?<LogTime>\[%{YEAR}-%{MONTHNUM}-%{MONTHDAY}%{SPACE}%{TIME}\]) \[%{DATA:thread}\]%{SPACE}\[%{DATA:logLevel}\]%{SPACE}%{DATA:class}%{SPACE}-%{SPACE}%{GREEDYDATA:message}

Result:

{
  "LogTime": "[2024-02-21 15:30:55]",
  "thread": "main",
  "message": "Started ItemThsApplication in 7.129 seconds (JVM running for 9.158)",
  "class": "c.c.s.t.i.ths.tax.item.ths.ItemThsApplication",
  "logLevel": "INFO"
}
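The grok expression is ultimately a regular expression with named captures. As a sanity check, here is a minimal Python sketch where the grok aliases (%{YEAR}, %{TIME}, %{DATA}, ...) are expanded by hand — a simplification, not the exact patterns Logstash ships with:

```python
import re

# Hand-expanded equivalent of the grok expression:
#   %{YEAR}-%{MONTHNUM}-%{MONTHDAY} -> \d{4}-\d{2}-\d{2}
#   %{TIME}                         -> \d{2}:\d{2}:\d{2}
#   %{DATA}                         -> .*? (lazy), %{GREEDYDATA} -> .*
pattern = re.compile(
    r"(?P<LogTime>\[\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\])\s+"
    r"\[(?P<thread>.*?)\]\s+"
    r"\[(?P<logLevel>.*?)\]\s+"
    r"(?P<cls>\S+)\s+-\s+"
    r"(?P<message>.*)"
)

line = ("[2024-02-21 15:30:55] [main] [INFO] "
        "c.c.s.t.i.ths.tax.item.ths.ItemThsApplication - "
        "Started ItemThsApplication in 7.129 seconds (JVM running for 9.158)")

fields = pattern.match(line).groupdict()
print(fields)
```

The extracted dictionary matches the result shown above field for field (the `class` capture is named `cls` here only because `class` is a Python keyword).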
Applying the grok filter in a Filebeat + Logstash + Elasticsearch pipeline:

Excerpt from the Filebeat configuration file filebeat.yml:
# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# The log input collects log messages from files (newer Filebeat
# versions recommend the filestream input instead).
- type: log

  # Unique ID among all inputs; an ID is required.
  id: my-log-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /u01/es/log/*.log
    #- c:\programdata\elasticsearch\logs\*



# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["192.168.30.181:9200"]

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
#  preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.30.181:5044"]

Excerpt from logstash.yml:

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.30.181:9200" ]
# Path to the main pipeline configuration; if a directory or wildcard is
# given, config files are read from it in alphabetical order
path.config: /usr/share/logstash/config/conf.d/*.conf
# Directory where Logstash writes its own logs
path.logs: /usr/share/logstash/logs


logstash-simple.conf (placed under /usr/share/logstash/config/conf.d/ so that path.config picks it up):
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "(?<LogTime>\[%{YEAR}-%{MONTHNUM}-%{MONTHDAY}%{SPACE}%{TIME}\]) \[%{DATA:thread}\]%{SPACE}\[%{DATA:logLevel}\]%{SPACE}%{DATA:class}%{SPACE}-%{SPACE}%{GREEDYDATA:message}" }
    # "message" already exists on the event; without overwrite, grok would
    # append the capture and turn the field into an array instead of
    # replacing it with the message tail
    overwrite => [ "message" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["http://192.168.30.181:9200"]
    index => "logloglognew-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
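The index option uses Logstash's sprintf syntax: `%{[@metadata][version]}` expands to the version the Beats input attached under @metadata, and `%{+YYYY.MM.dd}` formats the event's @timestamp, so each day's events land in their own index. A small sketch of that expansion (the version value "8.12.2" is a placeholder, not from the article):

```python
from datetime import datetime, timezone

# Sketch of how Logstash expands
# "logloglognew-%{[@metadata][version]}-%{+YYYY.MM.dd}":
# the version comes from the event's @metadata (placeholder here),
# and %{+YYYY.MM.dd} is the event's @timestamp, not the wall clock.
event_timestamp = datetime(2024, 2, 21, 15, 30, 55, tzinfo=timezone.utc)
beats_version = "8.12.2"  # placeholder value

index = "logloglognew-{}-{}".format(
    beats_version, event_timestamp.strftime("%Y.%m.%d"))
print(index)  # logloglognew-8.12.2-2024.02.21
```

Because the date comes from @timestamp, a log line replayed later still goes to the index for the day it was originally emitted.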
