input plugins:
File: reads an event stream from the specified files;
    uses FileWatch (a Ruby Gem) to monitor files for changes.
    .sincedb: records the inode, major number, minor number, and pos (current read position) of each monitored file;
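For illustration, a classic sincedb entry is one line per monitored file holding those four fields; the values below are made up (newer logstash versions append extra columns such as a timestamp and the path):

```
262250 0 51713 1708
```

Here 262250 is the inode, 0 and 51713 the major/minor device numbers, and 1708 the byte offset already read.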
# /etc/logstash/conf.d/fromfile.conf
input {
    file {
        path => ["/var/log/messages"]
        type => "system"
        start_position => "beginning"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/fromfile.conf
udp: reads messages from the network over the UDP protocol; its required parameter is port, which specifies the port to listen on, while host specifies the address to listen on;
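A minimal udp input sketch (the port number 5514 here is an arbitrary example, not a convention):

```
input {
    udp {
        host => "0.0.0.0"    # address to listen on (optional; defaults to all addresses)
        port => 5514         # port to listen on (required)
    }
}
```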
collectd: a performance-monitoring daemon;
    CentOS 7:
        install from the EPEL repository:
        # yum -y install epel-release
        # yum install collectd
        configuration file: /etc/collectd.conf
Hostname "node3.magedu.com"
LoadPlugin syslog
LoadPlugin cpu
LoadPlugin df
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
LoadPlugin network
<Plugin network>
    <Server "172.16.100.70" "25826">
    # 172.16.100.70 is the address of the logstash host; 25826 is the UDP port it listens on;
    </Server>
</Plugin>
Include "/etc/collectd.d"
# systemctl start collectd.service
On the logstash side:
# vim /etc/logstash/conf.d/fromcollect.conf
input {
    udp {
        port => 25826
        codec => collectd {}
        type => "collectd"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/fromcollect.conf
redis plugin:
    reads data from redis; supports both redis channels and redis lists;
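A minimal redis input sketch (the key name "logstash-queue" is a made-up example; data_type selects between the list and channel modes mentioned above):

```
input {
    redis {
        host => "127.0.0.1"
        port => 6379
        data_type => "list"        # "list" pops entries with BLPOP; "channel" uses SUBSCRIBE
        key => "logstash-queue"    # the list key or channel name to read from
    }
}
```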
filter plugins:
    apply processing to an event before it is sent out via output.
grok: parses and structures text data; currently the tool of choice in logstash for turning unstructured log data into structured, queryable data.
    works with syslog, apache, nginx, and similar log formats;
    predefined patterns live in: /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns/grok-patterns
syntax:
    %{SYNTAX:SEMANTIC}
    SYNTAX: the name of a predefined pattern;
    SEMANTIC: a custom identifier for the text the pattern matches;
1.1.1.1 GET /index.html 30 0.23
%{IP:clientip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
input {
    stdin {}
}
filter {
    grok {
        match => { "message" => "%{IP:clientip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
{
       "message" => "1.1.1.1 GET /index.html 30 0.23",
      "@version" => "1",
    "@timestamp" => "2015-11-25T02:13:52.558Z",
          "host" => "node4.magedu.com",
      "clientip" => "1.1.1.1",
        "method" => "GET",
       "request" => "/index.html",
         "bytes" => "30",
      "duration" => "0.23"
}
Defining custom grok patterns:
    grok patterns are written as regular expressions; their metacharacters differ little from those of other regex-based tools such as awk/sed/grep/pcre.
    patterns-file entry: PATTERN_NAME the-pattern-here
    inline named capture: (?<field_name>the pattern here)
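As an illustration (the path /etc/logstash/patterns and the POSTFIX_QUEUEID pattern are example choices, not fixed conventions), a custom pattern can be kept in its own file and loaded with the grok filter's patterns_dir option:

```
# /etc/logstash/patterns/extra -- one pattern definition per line
POSTFIX_QUEUEID [0-9A-F]{10,11}

# filter section referencing the custom pattern by name
filter {
    grok {
        patterns_dir => ["/etc/logstash/patterns"]
        match => { "message" => "%{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:syslog_message}" }
    }
}
```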
Matching the apache access log:
input {
    file {
        path => ["/var/log/httpd/access_log"]
        type => "apachelog"
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
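For reference, COMBINEDAPACHELOG is itself built from other patterns in the grok-patterns file; the shipped definitions look roughly like this (exact field names can vary between pattern-file versions):

```
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
```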
Matching nginx logs:
    append the following to the end of the /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns/grok-patterns file:
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}
input {
    file {
        path => ["/var/log/nginx/access.log"]
        type => "nginxlog"
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{NGINXACCESS}" }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
output plugins:
    stdout {}
    elasticsearch {}
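A minimal elasticsearch output sketch (the hosts address and index name below are example values; logstash's default index naming follows the logstash-%{+YYYY.MM.dd} scheme):

```
output {
    elasticsearch {
        hosts => ["localhost:9200"]            # address of the elasticsearch node(s); example value
        index => "logstash-%{+YYYY.MM.dd}"     # one index per day, interpolated from the event timestamp
    }
}
```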