Logstash Installation and Configuration

1. Installation
Download the .tar.gz package from the official site; version 6.4.3 is used here:
https://www.elastic.co/cn/downloads/logstash

Extract it:

    tar -zxvf logstash-6.4.3.tar.gz
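
If you would rather fetch the archive directly on the server, the command below follows the standard Elastic artifacts URL layout for this version (adjust the version number if you use a different release):

    wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.3.tar.gz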
2. Configuration

You can adjust the JVM memory that Logstash uses at startup by editing jvm.options in the config directory. Leaving it at the defaults is also fine.
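
For reference, the heap is controlled by the -Xms/-Xmx lines in config/jvm.options; Logstash 6.x ships with 1g for both, so the relevant lines look roughly like this (raise both values together if you need a larger heap):

    ## config/jvm.options (excerpt)
    # initial and maximum heap size; keep the two values equal
    -Xms1g
    -Xmx1g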

Create the conf file you need. For example, to collect Nginx logs and send them to Elasticsearch, a configuration file like this is required:

    input {
        file {
            type => "flow"                                # arbitrary type label; referenced again in the output block
            path => "/usr/local/nginx/logs/access.log"    # location of the Nginx access log
            codec => "json"                               # parse incoming log lines as JSON
            start_position => "beginning"
        }
    }
    filter {                                              # log filtering rules
        grok {
            match => { "message" => "%{COMBINEDAPACHELOG} %{QS:http_x_forwarded_for}" }
        }
        mutate { convert => ["upstream_time", "float"] }
    }
    output {
        stdout {
            codec => rubydebug                            # print events to the terminal for observation and debugging
        }
        if [type] == "flow" {                             # the type defined in the input block
            elasticsearch {
                index => "flow-%{+YYYY.MM.dd}"            # name of the index created in Elasticsearch
                hosts => ["172.16.185.31:9200"]           # Elasticsearch address
            }
        } else {
            elasticsearch {
                index => "flowA-%{+YYYY.MM.dd}"
                hosts => ["172.16.185.31:9200"]
            }
        }
    }

This configuration file is a bit more involved; comments have been added inline. Look up anything that is unclear separately.

The input block is your input source, here the Nginx log. The output block is the destination, in this case Elasticsearch; be careful not to get the hosts address wrong. The filter block is the most important and the most complex part: it is where you define the series of log filtering rules, and it is worth studying Logstash's filter configuration on its own.
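
Once Logstash is running with this file, you can confirm from the Elasticsearch side that the daily index is actually being created (using the same example host as in the config above):

    curl 'http://172.16.185.31:9200/_cat/indices?v'
    # an index such as flow-2019.09.16 should appear in the listing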

Here is another conf file that reads the Nginx log and writes it to Kafka:

    input {
        file {
            type => "flow"
            path => "/usr/local/nginx/logs/access.log"
            codec => "json"
            start_position => "beginning"
        }
    }
    output {
        if [type] == "flow" {
            kafka {
                bootstrap_servers => "172.16.185.31:9092"
                topic_id => "nginx-access-kafkaceshi"
                codec => "json"
            }
        }
    }
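
To verify that events are reaching Kafka, you can attach a console consumer to the topic; this uses the standard CLI tool shipped with Kafka, run from the Kafka installation directory (same example broker address and topic as in the config above):

    bin/kafka-console-consumer.sh --bootstrap-server 172.16.185.31:9092 \
        --topic nginx-access-kafkaceshi --from-beginning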

And here is a conf file for the case where Filebeat collects the logs and Logstash reads them and writes them to Kafka:

    input {
        beats {
            port => 5044
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
        kafka {
            bootstrap_servers => "172.16.185.31:9092"
            topic_id => "nginx-kafka"
            codec => "json"
        }
    }
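
For this last pipeline, Filebeat has to ship the Nginx log to port 5044. A minimal filebeat.yml sketch might look like the following, assuming Filebeat 6.x and that Logstash runs on 172.16.185.31 (adjust paths and hosts to your environment):

    filebeat.inputs:
    - type: log
      paths:
        - /usr/local/nginx/logs/access.log
    output.logstash:
      hosts: ["172.16.185.31:5044"]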

3. Startup

Logstash startup command:

    ./logstash -f /usr/local/logstash/logstash-6.4.3/config/flow-es.conf

The log events will then appear in the terminal!
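
If you do not want Logstash tied to the terminal, a common approach is to run it in the background; adding --config.reload.automatic also makes Logstash pick up edits to the conf file without a restart. A sketch, using the same paths as above:

    cd /usr/local/logstash/logstash-6.4.3/bin
    nohup ./logstash -f ../config/flow-es.conf --config.reload.automatic > /tmp/logstash.log 2>&1 &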

4. Errors
For example, startup may fail with an error like the following:

    Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1) after ", :backtrace=>["/usr/local/logstash/logstash-6.4.3/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/local/logstash/logstash-6.4.3/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/local/logstash/logstash-6.4.3/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/local/logstash/logstash-6.4.3/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:149:in `initialize'", "/usr/local/logstash/logstash-6.4.3/logstash-core/lib/logstash/pipeline.rb:22:in `initialize'", "/usr/local/logstash/logstash-6.4.3/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "/usr/local/logstash/logstash-6.4.3/logstash-core/lib/logstash/pipeline_action/create.rb:38:in `execute'", "/usr/local/logstash/logstash-6.4.3/logstash-core/lib/logstash/agent.rb:309:in `block in converge_state'"]}
    2019-09-16T15:40:56,894 [logstash.agent] Successfully started Logstash API endpoint {:port=>9600}

This error means the configuration file is wrong; check its format carefully!
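
Before starting for real, you can also let Logstash check the file for you: with --config.test_and_exit (short form -t) it only parses the configuration, reports the line and column of any problem, and exits (same example path as above):

    ./logstash -f /usr/local/logstash/logstash-6.4.3/config/flow-es.conf --config.test_and_exit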
