The raw log entries look like this:
Apr 12 01:09:55 swarm1 chronyd[599]: Source 5.79.108.34 online
Apr 12 01:09:55 swarm1 chronyd[599]: Source 13.55.50.68 online
Apr 12 01:09:55 swarm1 nm-dispatcher: req:8 'connectivity-change': start running ordered scripts...
Apr 12 01:09:55 swarm1 systemd: Started LSB: Bring up/down networking.
Apr 12 01:09:55 swarm1 avahi-daemon[553]: Registering new address record for fe80::6d42:77ba:70b6:b396 on ens33.*.
Apr 12 01:10:25 swarm1 systemd: Started Session 15 of user root.
Apr 12 01:10:25 swarm1 systemd: Starting Session 15 of user root.
Apr 12 01:10:41 swarm1 systemd: Started Session 16 of user root.
Apr 12 01:10:41 swarm1 systemd-logind: New session 16 of user root.
Apr 12 01:10:41 swarm1 systemd: Starting Session 16 of user root.
Apr 12 01:10:41 swarm1 gdm-launch-environment]: AccountsService: ActUserManager: user (null) has no username (object path: /org/freedesktop/Accounts/User0, uid: 0)
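Each line starts with a syslog timestamp such as "Apr 12 01:09:55", which is what grok's %{SYSLOGTIMESTAMP} pattern captures in the config below. As a simplified stand-in (not the real grok library), the pattern behaves roughly like this Ruby regex; the sample line and regex here are illustrative assumptions:

```ruby
# Simplified stand-in for grok's %{SYSLOGTIMESTAMP} (MONTH MONTHDAY TIME).
# The real grok pattern is more permissive about month names and day padding.
line = 'Apr 12 01:09:55 swarm1 chronyd[599]: Source 5.79.108.34 online'
ts = line[/\A[A-Z][a-z]{2}\s+\d{1,2} \d{2}:\d{2}:\d{2}/]
puts ts  # → "Apr 12 01:09:55"
```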
Summary from personal experience: you can extract the timestamp from the log message and store it in a new field, then filter out the original timestamp on the Kibana side and keep only the field you added. Contrary to what many online write-ups claim, you cannot simply replace the @timestamp field that Logstash adds automatically.
[root@logstash6 conf.d]# cat logstash.conf
input {
  beats {
    host => "0.0.0.0"
    port => 5044
    # codec => "json"
  }
}

filter {
  grok {
    # Extract the leading syslog timestamp into a new "timestamp" field
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp}" }
  }
  date {
    # Parse "timestamp" into @timestamp (stored in UTC).
    # Note "MMM  d" with two spaces: syslog pads single-digit days with a space.
    match => [ "timestamp", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss", "ISO8601" ]
    target => "@timestamp"
    # timezone => "Asia/Shanghai"
  }
  ruby {
    # Overwrite "timestamp" with @timestamp shifted +8 hours (Asia/Shanghai wall time)
    code => "event.set('timestamp', event.get('@timestamp').time.utc + 8*60*60)"
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["192.168.101.53:9200"]
    index => "logstash-%{+YYYY.MM}"
  }
}
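The ruby filter is the trick described above: it overwrites the grok-extracted "timestamp" field with the UTC @timestamp shifted forward by 8 hours, so the field reads as Asia/Shanghai wall-clock time. A minimal standalone sketch of that arithmetic, using a plain Ruby Time instead of a Logstash event (the example instant is an assumption):

```ruby
# Standalone sketch of the ruby filter's arithmetic: take a UTC instant
# (what @timestamp holds) and add 8*60*60 seconds for Shanghai wall time.
utc_ts  = Time.utc(2023, 4, 12, 1, 9, 55)  # example instant; any UTC time works
shifted = utc_ts + 8 * 60 * 60             # same expression as in the ruby filter
puts shifted.strftime('%b %d %H:%M:%S')    # → "Apr 12 09:09:55"
```

Note the result is still a Time object labeled UTC; only the wall-clock value moves. That is why this works for display in Kibana but is not a true timezone conversion, and why @timestamp itself is left untouched.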