Preface
Setting up ELK itself is not hard; the hard part is the logstash configuration file. A logstash pipeline consists of three parts: input, filter, and output.
input: the data source. There are many input plugins to choose from (see the official ELK documentation for details); here we use S3 as the input.
filter: logstash can insert filters between input and output to classify, filter, and tag events, turning raw data into a structured format. This is the core of logstash.
output: the destination, usually elasticsearch.
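
Putting the three together, every logstash configuration file follows the same skeleton. The sketch below is a minimal illustration only, using the stdin input and stdout output plugins as placeholders:

input {
  stdin { }                          # where events come from
}
filter {
  # optional: classify, tag, and reshape events here
}
output {
  stdout { codec => rubydebug }      # where events go; usually elasticsearch
}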
Notes:
AWS stores ELB access logs in S3. logstash can fetch them with its s3 input plugin, run them through the filter, and output the result to elasticsearch.
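
For reference, a classic ELB access log entry is a single space-separated line. The example below follows the format documented by AWS (the values are illustrative):

2015-05-13T23:39:43.945958Z my-loadbalancer 192.168.131.39:2817 10.0.0.1:80 0.000073 0.001048 0.000057 200 200 0 29 "GET http://www.example.com:80/ HTTP/1.1" "curl/7.38.0" - -

Split on spaces, index 0 is the timestamp, 1 the ELB name, 2 the client ip:port, 4-6 the three processing times, 7-8 the ELB and backend status codes, and the quoted request breaks apart so that index 11 is the HTTP method (with a leading quote) and index 12 is the URL. The filter in the configuration below relies on exactly these positions.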
Setting up and configuring ELK itself is not covered here; the official documentation handles that. Instead, here is a logstash configuration file for fetching and formatting ELB logs.
input {
  s3 {
    access_key_id     => "access_key"
    secret_access_key => "secret_key"
    bucket            => "elb_bucket"
    region            => "aws_region"
    type              => "s3"
  }
}

filter {
  # Split the raw log line on spaces and map the positional fields to named fields.
  mutate {
    split => { "message" => " " }
    add_field => { "log_time"    => "%{[message][0]}" }
    add_field => { "elb_name"    => "%{[message][1]}" }
    add_field => { "client_ip"   => "%{[message][2]}" }
    add_field => { "t1"          => "%{[message][4]}" }   # request_processing_time
    add_field => { "t2"          => "%{[message][5]}" }   # backend_processing_time
    add_field => { "t3"          => "%{[message][6]}" }   # response_processing_time
    add_field => { "elb_code"    => "%{[message][7]}" }   # elb_status_code
    add_field => { "server_code" => "%{[message][8]}" }   # backend_status_code
    add_field => { "getpost"     => "%{[message][11]}" }  # HTTP method, with leading quote
    add_field => { "url"         => "%{[message][12]}" }
    remove_field => [ "message" ]
  }
  # Coerce the timing and status-code fields to numeric types.
  mutate {
    convert => { "t1" => "float" }
    convert => { "t2" => "float" }
    convert => { "t3" => "float" }
    convert => { "elb_code"    => "integer" }
    convert => { "server_code" => "integer" }
  }
  # Extract the bare client IP, break the URL into its components,
  # and pull the HTTP method out of the quoted request field.
  grok {
    break_on_match => false
    match => { "client_ip" => "%{IPV4:device_ip}" }
    match => { "url" => "%{URIPROTO:url_head}://%{URIHOST:url_destination}:%{POSINT:url_port}%{URIPATH:url_path}(?:%{URIPARAM:url_param})?" }
    match => { "getpost" => "%{WORD:get_post}" }
    remove_field => [ "getpost" ]
  }
  # Split the path on "." to separate the API path from its extension (e.g. html, ashx).
  mutate {
    split => { "url_path" => "." }
    add_field => { "url_api"   => "%{[url_path][0]}" }
    add_field => { "html_ashx" => "%{[url_path][1]}" }
  }
  # Parse the ISO8601 timestamp into a proper date field.
  date {
    match  => [ "log_time", "ISO8601" ]
    target => "log_date"
    add_tag => [ "log_date" ]
    remove_field => [ "log_time" ]
  }
  # Resolve the client IP to a geographic location.
  geoip {
    source => "device_ip"
    add_tag => [ "geoip" ]
    remove_field => [ "client_ip" ]
  }
}

output {
  elasticsearch {
    hosts => ["xxx.xxx.xxx.xxx:9200"]
    index => "logstash-s3-%{+YYYY-MM-dd}"
  }
}
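
To try the pipeline out, save the configuration to a file (the name elb-s3.conf here is arbitrary) and start logstash with it:

bin/logstash -f elb-s3.conf

Once events start flowing, daily logstash-s3-* indices should appear in elasticsearch.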