Following on from the previous post, this one covers how Logstash handles data in (near) real time. When log files are updated, the file input by default scans the watched directories for new files every 15 seconds (discover_interval) and checks known files for new content every second (stat_interval); both intervals are configurable. When I fed it many batches of data at once, Logstash appeared to hang. My current guess is that the inputs and filters chew through the many input files quickly, while the elasticsearch output plugin has a limited number of worker threads, so the pipeline deadlocks; more on that below. At the end I'll show a few ways to make Kibana display log timestamps in Beijing time instead of UTC, plus the resulting dashboard.
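If the hang really is the elasticsearch output starving the pipeline, the knobs worth experimenting with are the output's worker count and its bulk-flush size. A minimal sketch, using the same 1.x-era option names as the full config below; note that flush_size and its value are my assumption here, not part of the original setup:

output {
  elasticsearch {
    host => "192.168.172.128"
    workers => 5        # output worker threads; if all of them block on slow bulk requests, the whole pipeline backs up
    flush_size => 500   # assumed: events buffered per bulk request; smaller batches flush sooner under bursty input
  }
}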
First, the Logstash config file:
input {
  file {
    path => ["/home/cuixuange/Public/elk/test_log/*.log", "/home/cuixuange/Public/elk/test_log/logs/*"]
    start_position => "beginning"    # read existing files from the start on first run
    discover_interval => 15          # scan the path patterns for new files every 15 s
    stat_interval => 1               # check known files for new content every 1 s
    sincedb_write_interval => 15     # persist read offsets (sincedb) every 15 s
  }
}

filter {
  grok {
    match => { "message" => "(?m)%{DATA:timestamp} \[%{DATA:ip}\] . \[%{DATA:type}\] %{GREEDYDATA:log_json}" }
  }
  json {
    source => "log_json"
    target => "log_json_content"
    remove_field => ["log_json"]     # drop the raw JSON string once it has been parsed
  }
  # json {
  #   source => "trace"
  #   target => "trace_content"
  #   remove_field => ["trace"]
  # }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss" ]
    locale => "en"
    timezone => "+00:00"             # treat the parsed time as UTC
  }
}

output {
  if [timestamp] =~ /^\d{4}-\d{2}-\d{2}/ {     # only index events whose grok match produced a timestamp
    elasticsearch {
      host => "192.168.172.128"
      index => "logstash-test-%{+yyyy.MM.dd}"  # search as logstash-* in Kibana
      workers => 5
      template_overwrite => true
    }
  }
  # stdout { codec => json_lines }
}
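For reference, here is a hypothetical log line, reconstructed from the grok pattern above rather than taken from the real logs, that this pipeline would accept:

2017-01-12 08:30:01 [192.168.172.128] - [INFO] {"user":"cuixuange","action":"login"}

%{DATA:timestamp} captures 2017-01-12 08:30:01, which the date filter then parses into @timestamp; the two bracketed fields become ip and type; and %{GREEDYDATA:log_json} grabs the trailing JSON string, which the json filter expands into the log_json_content object. Because timestamp starts with a date, the event also passes the output conditional and gets indexed.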
A quick explanation of the settings:
1