Overview
This post records the process of configuring Logstash's output to Kafka. The example here is deliberately simple, with stdin as the input; the main goal is to document the problems I ran into during this configuration, how I resolved them, and the lessons learned.
For setting up the Kafka cluster itself, see: https://www.cnblogs.com/ldsggv/p/11010497.html
1. Logstash conf file configuration
input {
  stdin {}
}
output {
  stdout { codec => rubydebug }
  kafka {
    bootstrap_servers => "192.168.183.195:9092,192.168.183.194:9092,192.168.183.196:9092"  # broker list the producer connects to
    codec => json
    topic_id => "kafkalogstash"  # the Kafka topic to write to
  }
}
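Before starting Logstash it can help to confirm that the target topic exists on the cluster (or create it up front, if the brokers do not auto-create topics). A rough sketch using Kafka's bundled CLI tools; the install path /opt/kafka and the ZooKeeper address 192.168.183.195:2181 are assumptions here, adjust them to your environment:

```shell
# List existing topics (Kafka 2.1-era tooling addresses ZooKeeper for this)
/opt/kafka/bin/kafka-topics.sh --zookeeper 192.168.183.195:2181 --list

# Create the target topic if it does not exist yet
/opt/kafka/bin/kafka-topics.sh --zookeeper 192.168.183.195:2181 \
  --create --topic kafkalogstash --partitions 3 --replication-factor 2
```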
With this configuration in place, as long as the Kafka cluster is healthy, you can start Logstash and test sending messages.
Start it with:
bin/logstash -f logstash-kafka.conf
Then wait for startup to finish. When the log shows:
[INFO ] 2019-06-11 17:52:51.163 [[main]-pipeline-manager] AppInfoParser - Kafka version : 2.1.0
[INFO ] 2019-06-11 17:52:51.164 [[main]-pipeline-manager] AppInfoParser - Kafka commitId : eec43959745f444f
[INFO ] 2019-06-11 17:52:51.342 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xa43495c sleep>"}
The stdin plugin is now waiting for input:
[INFO ] 2019-06-11 17:52:51.444 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-06-11 17:52:51.708 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9601}
Logstash has started successfully.
Now type a message. The normal output looks like the following; the corresponding topic also appears on the Kafka cluster, and the messages can be consumed with kafka-console-consumer.sh:
456
{
    "@timestamp" => 2019-06-11T10:20:09.615Z,
          "host" => "emr-worker-4.cluster-96380",
      "@version" => "1",
       "message" => "456"
}
[INFO ] 2019-06-11 18:20:10.642 [kafka-producer-network-thread | producer-1] Metadata - Cluster ID: S8sBZgHPRJOv-nULn_bVGw
{
    "@timestamp" => 2019-06-11T11:48:11.234Z,
          "host" => "emr-worker-4.cluster-96380",
      "@version" => "1",
       "message" => ""
}
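Because the output block uses `codec => json`, what actually lands in the Kafka topic is the whole event serialized as one JSON object with the same fields shown above. A minimal sketch of how a downstream consumer might decode such a record (plain Python, standard library only; the sample bytes below are constructed to mirror the event above, not captured from the topic):

```python
import json

# One record as it would arrive from the "kafkalogstash" topic:
# logstash's `codec => json` serializes the event as a single JSON object.
raw = b'{"@timestamp":"2019-06-11T10:20:09.615Z","host":"emr-worker-4.cluster-96380","@version":"1","message":"456"}'

event = json.loads(raw)
print(event["message"])     # the original line typed on stdin
print(event["@timestamp"])  # when Logstash created the event
```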
The above is what success looks like. At first, however, it did not work for me; the only output was:
[INFO ] 2019-06-11 17:53:33.558 [kafka-producer-network-thread | producer-1] Metadata - Cluster ID: S8sBZgHPRJOv-nULn_bVGw