Version
logstash-7.13.1
Official documentation references
https://www.elastic.co/guide/en/logstash/current/config-examples.html
https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html
https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html#conditionals
https://www.elastic.co/guide/en/logstash/current/input-plugins.html
Starting Logstash with a config file
bin/logstash -f logstash-simple.conf
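Two related flags that can be handy during development (verify against `bin/logstash --help` for your version): one validates the config without starting the pipeline, the other reloads it on change.

```
# Check the config file for syntax errors, then exit
bin/logstash -f logstash-simple.conf --config.test_and_exit

# Run the pipeline and reload automatically when the config file changes
bin/logstash -f logstash-simple.conf --config.reload.automatic
```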
Config file template
# This is a comment. You should use comments to describe
# parts of your configuration.
input {
...
}
filter {
...
}
output {
...
}
Example 1:
input {
file {
path => "/var/log/messages"
type => "syslog"
}
file {
path => "/var/log/apache/access.log"
type => "apache"
}
}
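The type values set above can later be used for conditional routing. A minimal sketch (the localhost Elasticsearch address and the index names are illustrative, not from the original notes):

```
output {
  if [type] == "syslog" {
    # route syslog events to a dated syslog index
    elasticsearch { hosts => ["localhost:9200"] index => "syslog-%{+YYYY.MM.dd}" }
  } else if [type] == "apache" {
    # route apache access-log events to their own index
    elasticsearch { hosts => ["localhost:9200"] index => "apache-%{+YYYY.MM.dd}" }
  }
}
```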
Example 2:
input { stdin { } }
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
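Since this pipeline reads from stdin, you can exercise it from the shell by piping in one Apache combined-format log line (the line below is made up; any line matching COMBINEDAPACHELOG will do, and the config filename is assumed):

```
echo '127.0.0.1 - - [11/Dec/2020:00:01:45 +0800] "GET /index.html HTTP/1.1" 200 3891 "-" "curl/7.58.0"' | bin/logstash -f logstash-simple.conf
```

The grok filter should emit parsed fields such as clientip, verb, request, and response, and the date filter sets @timestamp from the timestamp field.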
Example 3:
input {
file {
path => "/tmp/access_log"
start_position => "beginning"
}
}
filter {
if [path] =~ "access" {
mutate { replace => { "type" => "apache_access" } }
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
}
stdout { codec => rubydebug }
}
Example 4:
input {
file {
path => "/tmp/*_log"
}
}
filter {
if [path] =~ "access" {
mutate { replace => { type => "apache_access" } }
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
} else if [path] =~ "error" {
mutate { replace => { type => "apache_error" } }
} else {
mutate { replace => { type => "random_logs" } }
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
Input plugins
Logstash can read data from many different sources through input plugins.
An input plugin enables a specific source of events to be read by Logstash.
Plugin | Description
---|---
azure_event_hubs | Receives events from Azure Event Hubs
beats | Receives events from the Elastic Beats framework
cloudwatch | Pulls events from the Amazon Web Services CloudWatch API
couchdb_changes | Streams events from CouchDB's _changes URI
dead_letter_queue | Reads events from Logstash's dead letter queue
elasticsearch | Reads query results from an Elasticsearch cluster
exec | Captures the output of a shell command as an event
file | Streams events from files
ganglia | Reads Ganglia packets over UDP
gelf | Reads GELF-format messages from Graylog2 as events
generator | Generates random log events for test purposes
github | Reads events from a GitHub webhook
google_cloud_storage | Extracts events from files in a Google Cloud Storage bucket
google_pubsub | Consumes events from a Google Cloud PubSub service
graphite | Reads metrics from the graphite tool
heartbeat | Generates heartbeat events for testing
http | Receives events over HTTP or HTTPS
http_poller | Decodes the output of an HTTP API into events
imap | Reads mail from an IMAP server
irc | Reads events from an IRC server
java_generator | Generates synthetic log events
java_stdin | Reads events from standard input
jdbc | Creates events from JDBC data
jms | Reads events from a Jms Broker
jmx | Retrieves metrics from remote Java applications over JMX
kafka | Reads events from a Kafka topic
kinesis | Receives events through an AWS Kinesis stream
log4j | Reads events over a TCP socket from a Log4j SocketAppender object
lumberjack | Receives events using the Lumberjack protocol
meetup | Captures the output of command line tools as an event
pipe | Streams events from a long-running command pipe
puppet_facter | Receives facts from a Puppet server
rabbitmq | Pulls events from a RabbitMQ exchange
redis | Reads events from a Redis instance
relp | Receives RELP events over a TCP socket
rss | Captures the output of command line tools as an event
s3 | Streams events from files in a S3 bucket
s3-sns-sqs | Reads logs from AWS S3 buckets using sqs
salesforce | Creates events based on a Salesforce SOQL query
snmp | Polls network devices using Simple Network Management Protocol (SNMP)
snmptrap | Creates events based on SNMP trap messages
sqlite | Creates events based on rows in an SQLite database
sqs | Pulls events from an Amazon Web Services Simple Queue Service queue
stdin | Reads events from standard input
stomp | Creates events received with the STOMP protocol
syslog | Reads syslog messages as events
tcp | Reads events from a TCP socket
twitter | Reads events from the Twitter Streaming API
udp | Reads events over UDP
unix | Reads events over a UNIX socket
varnishlog | Reads from the varnish cache shared memory log
websocket | Reads events from a websocket
wmi | Creates events based on the results of a WMI query
xmpp | Receives events over the XMPP/Jabber protocol
Common input options
The following configuration options are supported by all input plugins:
Setting | Input type | Required
---|---|---
add_field | hash | No
codec | codec | No
enable_metric | boolean | No
id | string | No
tags | array | No
type | string | No
Detailed descriptions
add_field
- Value type is hash
- Default value is {}
Add a field to an event
codec
- Value type is codec
- Default value is "json"
The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.
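For example, a codec can be set directly on the input; a minimal sketch that parses each incoming line as JSON (the choice of the json codec here is illustrative):

```
input {
  # each stdin line is decoded as a JSON object before entering the pipeline
  stdin { codec => json }
}
```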
enable_metric
- Value type is boolean
- Default value is true
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id
- Value type is string
- There is no default value for this setting.
Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 redis inputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
input {
redis {
id => "my_plugin_id"
}
}
Variable substitution in the id field only supports environment variables and does not support the use of values from the secret store.
tags
- Value type is array
- There is no default value for this setting.
Add any number of arbitrary tags to your event.
This can help with processing later.
type
- Value type is string
- There is no default value for this setting.
Add a type field to all events handled by this input.
Types are used mainly for filter activation.
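A typical use, sketched below: activate a filter only for events of a given type (the file path and grok pattern are illustrative):

```
input {
  file { path => "/var/log/messages" type => "syslog" }
}
filter {
  # the grok filter only runs for events tagged with type "syslog"
  if [type] == "syslog" {
    grok { match => { "message" => "%{SYSLOGLINE}" } }
  }
}
```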
File input
file: Streams events from files
Stdin input
stdin: Reads events from standard input
Redis input
redis: Reads events from a Redis instance
Settings reference:
Setting | Input type | Required
---|---|---
batch_count | number | No
command_map | hash | No
data_type | string, one of ["list", "channel", "pattern_channel"] | Yes
db | number | No
host | string | No
key | string | Yes
password | password | No
path | string | No
port | number | No
ssl | boolean | No
threads | number | No
timeout | number | No
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html
data_type
- This is a required setting.
- Value can be any of: list, channel, pattern_channel
- There is no default value for this setting.
Specify either list or channel. If data_type is list, then we will BLPOP the key. If data_type is channel, then we will SUBSCRIBE to the key. If data_type is pattern_channel, then we will PSUBSCRIBE to the key.
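With data_type => "list", you can feed the input by hand from redis-cli for a quick smoke test (the key name "logstash" is just an example):

```
# push one JSON event onto the list the redis input is BLPOPing
redis-cli RPUSH logstash '{"message": "hello from redis"}'
```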
db
- Value type is number
- Default value is 0
The Redis database number.
host
- Value type is string
- Default value is "127.0.0.1"
The hostname of your Redis server.
path
- Value type is string
- There is no default value for this setting.
- Path will override Host configuration if both specified.
The unix socket path of your Redis server.
key
- This is a required setting.
- Value type is string
- There is no default value for this setting.
The name of a Redis list or channel.
password
- Value type is password
- There is no default value for this setting.
Password to authenticate with. There is no authentication by default.
port
- Value type is number
- Default value is 6379
The port to connect on.
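Putting these settings together, a minimal redis input might look like the sketch below (all values are illustrative):

```
input {
  redis {
    host => "127.0.0.1"
    port => 6379
    db => 0
    data_type => "list"
    key => "logstash"
    # password => "changeme"   # only needed if Redis requires auth
  }
}
```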
Experiment
Machine 1: Redis + Spring Boot + Logstash (192.168.1.100)
The Logstash config on machine 1 (shipper side) is as follows:
input {
file {
path => "/home/lxp/logs/*.log"
codec => multiline {
pattern => "^(\[%{TIMESTAMP_ISO8601}\])"
negate => true
what => "previous"
}
type => "springboot"
start_position => "beginning"
sincedb_path => "/dev/null" # disables sincedb persistence on Linux; use "NUL" on Windows
}
}
filter {
}
output {
if [type] == "springboot" {
redis {
data_type => "list"
host => "192.168.1.110"
db => "0"
port => "6379"
key => "logstash_service"
}
stdout {
codec => rubydebug
}
}
}
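Before wiring up machine 2, you can confirm that events are actually reaching Redis (commands assume redis-cli is available and the Redis host/key from the config above):

```
# queue depth of the list the shipper writes to
redis-cli -h 192.168.1.110 LLEN logstash_service

# peek at the oldest queued event without consuming it
redis-cli -h 192.168.1.110 LRANGE logstash_service 0 0
```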
Machine 2: Logstash + Kibana + Elasticsearch
input {
redis {
key => "logstash_service"
host => "192.168.1.110"
port => 6379
db => "0"
data_type => "list"
type => "springboot"
}
}
output {
if [type] == "springboot" {
elasticsearch {
hosts => ["192.168.1.101:9200"]
index => "spring-%{+YYYY.MM.dd}"
}
}
stdout {
codec => rubydebug
}
}
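Once machine 2 is running, a quick check that documents are arriving in Elasticsearch (curl against the ES address assumed in the config above):

```
# list the daily spring-* indices and their document counts
curl '192.168.1.101:9200/_cat/indices/spring-*?v'

# fetch one indexed document to inspect its fields
curl '192.168.1.101:9200/spring-*/_search?size=1&pretty'
```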