ELK Logstash KV Filter Plugin

Filter plugins: common configuration fields


  • add_field — if the filter succeeds, add a field to the event
  • add_tag — if the filter succeeds, add any number of tags to the event
  • remove_field — if the filter succeeds, remove arbitrary fields from the event
  • remove_tag — if the filter succeeds, remove arbitrary tags from the event
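As a sketch of how these common options combine in practice (the tag and field names here are illustrative, not from the source), a kv filter might tag the event and drop the raw message once parsing succeeds:

    filter {
      kv {
        add_tag      => ["kv_parsed"]            # applied only if parsing succeeds
        add_field    => { "parsed_by" => "kv" }  # illustrative field name
        remove_field => ["message"]              # drop the raw line after parsing
      }
    }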

 

Description


This filter helps automatically parse messages (or specific event fields) which are of the foo=bar variety.

For example, if you have a log message which contains ip=1.2.3.4 error=REFUSED, you can parse those automatically by configuring:

    filter {
      kv { }
    }

The above will result in a message of ip=1.2.3.4 error=REFUSED having the fields:

  • ip: 1.2.3.4
  • error: REFUSED

This is great for postfix, iptables, and other types of logs that tend towards key=value syntax.

You can configure any arbitrary strings to split your data on, in case your data is not structured using = signs and whitespace. For example, this filter can also be used to parse query parameters like foo=bar&baz=fizz by setting the field_split parameter to &.
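For instance, given a hypothetical log line like foo:bar;baz:fizz (not from the source), both delimiters can be changed: field_split selects the separator between pairs, and the plugin's value_split option (default "=") selects the separator between key and value:

    filter {
      kv {
        field_split => ";"
        value_split => ":"
      }
    }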

 

 

Filter plugin: KV


The kv plugin takes key-value data, parses it according to the specified delimiter, and places the resulting fields at the top level of the Logstash event.

Common options:

• field_split — specifies the delimiter between key-value pairs; the default is a single space

field_split

  • Value type is string
  • Default value is " "

A string of characters to use as single-character field delimiters for parsing out key-value pairs.

These characters form a regex character class and thus you must escape special regex characters like [ or ] using \.
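As a hedged sketch of that escaping rule: to split fields on literal [ and ] characters, both must be escaped, since they would otherwise be interpreted as regex character-class syntax:

    filter {
      kv {
        field_split => "\[\]"
      }
    }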

Example with URL Query Strings

For example, to split out the args from a url query string such as ?pin=12345~0&d=123&e=foo@bar.com&oq=bobo&ss=12345:

    filter {
      kv {
        field_split => "&?"
      }
    }

The above splits on both & and ? characters, giving you the following fields:

  • pin: 12345~0
  • d: 123
  • e: foo@bar.com
  • oq: bobo
  • ss: 12345
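The description above notes that kv can parse specific event fields, not just message. A hedged sketch using the plugin's source and target options (the field names request and params are illustrative):

    filter {
      kv {
        source      => "request"   # parse this field instead of message
        target      => "params"    # nest parsed keys under [params]
        field_split => "&?"
      }
    }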

An example follows:

If your logs are stored as key=value pairs, this plugin makes parsing convenient; just specify the delimiter.

[root@localhost ~]# cat /usr/local/logstash/conf.d/test.conf
input {
  file {
    path => "/var/log/nginx/*.log"
    exclude => "error.log"
    start_position => "beginning"
    tags => ["web", "nginx"]
    type => "access"
    add_field => {
      "project" => "microservice"
      "app" => "product"
    }
  }
}

filter {
  kv {
    field_split => "&?"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.179.102:9200","192.168.179.103:9200","192.168.179.104:9200"]
    index => "test-%{+YYYY.MM.dd}"
  }
}

After configuring Logstash, have it reload the configuration and check the log output for errors.

If the fields are not split out, you can only search inside the raw message field.

That is inflexible: you lose the ability to query across multiple dimensions, and visualizations are built on individual structured fields. Fields matter, which is why key-value parsing is so useful.
