A Filebeat + Kafka + ELK Logging Solution (Docker Edition)


1. Architecture Diagram

(Architecture diagram: Filebeat → Kafka → Logstash → Elasticsearch → Kibana)

2. Environment

Kafka and the ELK stack all run in Docker on 192.168.120.129.
Filebeat is deployed on 192.168.120.129 to collect the logs.

3. Deployment

Step 1: Tune system parameters
Elasticsearch requires vm.max_map_count to be at least 262144, so raise it on the host first:

# vim /etc/sysctl.conf
vm.max_map_count=655360
# sysctl -p
vm.max_map_count = 655360

Step 2: Set up ELK with Docker
See the previous post, “Docker部署ELK及简单运行” (Deploying ELK with Docker and a quick walkthrough).

Elasticsearch is exposed on port 9204 and Kibana on port 5601.
Now check the containers:

# docker ps -a
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                                                NAMES
132fb37de886        kibana:4.5.3                    "/docker-entrypoin..."   2 hours ago         Up 2 hours          0.0.0.0:5601->5601/tcp                               kibana
d5c21a9b10f6        logstash                        "/docker-entrypoin..."   2 hours ago         Up 2 hours                                                               hopeful_hawking
687be4425ea5        73f3ae436ada                    "/docker-entrypoin..."   2 hours ago         Up 2 hours          9300/tcp, 0.0.0.0:9204->9200/tcp                     my_elasticsearch
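
For reference, the Elasticsearch and Kibana containers above could have been started along these lines; this is only a sketch (the Elasticsearch image tag and the ELASTICSEARCH_URL wiring are assumptions here, and the previous post has the exact commands). The Logstash container (hopeful_hawking above) publishes no ports because Logstash is run manually inside it in Step 6.

# docker run -d --name my_elasticsearch -p 9204:9200 elasticsearch:2.4
# docker run -d --name kibana -p 5601:5601 -e ELASTICSEARCH_URL=http://192.168.120.129:9204 kibana:4.5.3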

Step 3: Run Kafka and ZooKeeper in Docker
1. Pull the images

# docker pull wurstmeister/kafka
# docker pull wurstmeister/zookeeper

2. Start the containers
Start ZooKeeper first:

# docker run -d --name zookeeper --publish 2181:2181 --volume /etc/localtime:/etc/localtime wurstmeister/zookeeper:latest

Then start Kafka:

# docker run -d --name kafka --publish 9092:9092 --link zookeeper --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 --env KAFKA_ADVERTISED_HOST_NAME=192.168.120.129 --env KAFKA_ADVERTISED_PORT=9092 --volume /etc/localtime:/etc/localtime wurstmeister/kafka:latest

This is what allows clients outside the cluster to connect:
--env KAFKA_ADVERTISED_HOST_NAME must be set to the IP of the host running Kafka. Use a publicly reachable IP if remote clients need access (otherwise remote connections fail and tools such as Kafka Tool cannot be used); an internal IP is enough if only machines inside the cluster talk to each other. If this variable is not set correctly, other machines may be unable to reach Kafka.
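
If you prefer to manage the two containers together, an equivalent docker-compose file would look roughly like this. It is only a sketch that mirrors the docker run flags above; the file name and compose version are assumptions:

# vim docker-compose.yml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - "2181:2181"
    volumes:
      - /etc/localtime:/etc/localtime
  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"      # reach ZooKeeper by service name
      KAFKA_ADVERTISED_HOST_NAME: "192.168.120.129"  # host IP advertised to external clients
      KAFKA_ADVERTISED_PORT: "9092"
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - zookeeper
# docker-compose up -d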

3. Verify that Kafka works

# docker exec -it kafka bash
bash-4.4# cd /opt/kafka_2.13-2.6.0/bin/

Run the Kafka console producer and send a few messages:

bash-4.4# ./kafka-console-producer.sh --broker-list localhost:9092 --topic sun                                    
>cc                                    # typed by hand
>kk
>{"datas":[{"channel":"","metric":"temperature","producer":"ijinus","sn":"IJA0101-00002245","time":"1543207156000","va

Run the Kafka console consumer to read the messages back:

bash-4.4# ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sun --from-beginning                 
kk
cc
kk
{"datas":[{"channel":"","metric":"temperature","producer":"ijinus","sn":"IJA0101-00002245","time":"1543207156000","va

Kafka GUI client (Kafka Tool): https://www.kafkatool.com/download.html
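
To see which topics exist on the broker (sun here, and later kk_consumer once Filebeat starts publishing), the same bin directory has a listing script. A quick check, assuming topic auto-creation is left at the image's default:

bash-4.4# ./kafka-topics.sh --list --bootstrap-server localhost:9092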

Step 4: Write the Logstash config and copy it into the container

# vim logstash.conf
input {
  stdin {}                                          # keep a console input for manual testing
  beats {
    port => 5044                                    # direct Filebeat input (unused here; Filebeat ships to Kafka)
  }
  kafka {
    bootstrap_servers => [ "192.168.120.129:9092" ]
    topics => ["kk_consumer"]                       # topic that Filebeat publishes to
  }
}

output {
  elasticsearch {
    hosts => [ "http://192.168.120.129:9204" ]
    index => "myservice-%{+YYYY.MM.dd}"             # one index per day
    #user => "elastic"
    #password => "changeme"
  }
}
# docker cp logstash.conf d5c:/usr/share/logstash/bin
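
Optionally, the file can be syntax-checked inside the container before starting the pipeline; with this Logstash 5.x image, something along these lines should work (a sketch, not part of the original walkthrough):

# docker exec -it d5c /usr/share/logstash/bin/logstash -f /usr/share/logstash/bin/logstash.conf --config.test_and_exit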

Step 5: Install Filebeat

# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-linux-x86_64.tar.gz
# tar -xf filebeat-6.2.4-linux-x86_64.tar.gz 
# cd filebeat-6.2.4-linux-x86_64
# vim filebeat.yml
- type: log                                         # in the filebeat.prospectors section
  enabled: true
  paths:
    - /root/a.log                                   # test log file
#output.elasticsearch:                              # comment out the default Elasticsearch output
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

output.kafka:                                       # ship events to Kafka instead
  enabled: true
  hosts: ["192.168.120.129:9092"]
  topic: "kk_consumer"                              # topic that Logstash consumes from
  compression: gzip
  max_message_bytes: 100000000
# nohup ./filebeat -e -c filebeat.yml &
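
Filebeat 6.x also ships a test subcommand that can catch problems before you background the process; an optional check (not in the original post):

# ./filebeat test config -c filebeat.yml
# ./filebeat test output -c filebeat.yml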

Step 6: Run Logstash
Enter the Logstash container:

# docker exec -it d5c bash

Run Logstash:

root@d5c21a9b10f6:/# rm -rf /var/lib/logstash/.lock      # clear a stale lock left by a previous Logstash run
root@d5c21a9b10f6:/# cd /usr/share/logstash/bin
root@d5c21a9b10f6:/usr/share/logstash/bin# logstash -f logstash.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
08:57:59.434 [main] INFO  logstash.modules.scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
08:57:59.448 [main] INFO  logstash.modules.scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
08:58:00.612 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.120.129:9204/]}}
08:58:00.613 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.120.129:9204/, :path=>"/"}
08:58:00.838 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>"http://192.168.120.129:9204/"}
08:58:01.180 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
08:58:01.191 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fielddata"=>{"format"=>"disabled"}}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fielddata"=>{"format"=>"disabled"}, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true, "ignore_above"=>256}}}}}, {"float_fields"=>{"match"=>"*", "match_mapping_type"=>"float", "mapping"=>{"type"=>"float", "doc_values"=>true}}}, {"double_fields"=>{"match"=>"*", "match_mapping_type"=>"double", "mapping"=>{"type"=>"double", "doc_values"=>true}}}, {"byte_fields"=>{"match"=>"*", "match_mapping_type"=>"byte", "mapping"=>{"type"=>"byte", "doc_values"=>true}}}, {"short_fields"=>{"match"=>"*", "match_mapping_type"=>"short", "mapping"=>{"type"=>"short", "doc_values"=>true}}}, {"integer_fields"=>{"match"=>"*", "match_mapping_type"=>"integer", "mapping"=>{"type"=>"integer", "doc_values"=>true}}}, {"long_fields"=>{"match"=>"*", "match_mapping_type"=>"long", "mapping"=>{"type"=>"long", "doc_values"=>true}}}, {"date_fields"=>{"match"=>"*", "match_mapping_type"=>"date", "mapping"=>{"type"=>"date", "doc_values"=>true}}}, {"geo_point_fields"=>{"match"=>"*", "match_mapping_type"=>"geo_point", "mapping"=>{"type"=>"geo_point", "doc_values"=>true}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "doc_values"=>true}, "@version"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true}, "geoip"=>{"type"=>"object", "dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip", "doc_values"=>true}, "location"=>{"type"=>"geo_point", "doc_values"=>true}, "latitude"=>{"type"=>"float", "doc_values"=>true}, "longitude"=>{"type"=>"float", "doc_values"=>true}}}}}}}}
08:58:01.207 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.120.129:9204"]}
08:58:01.230 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/share/logstash/logstash-core/lib/org/apache/logging/log4j/log4j-slf4j-impl/2.6.2/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.11/vendor/jar-dependencies/runtime-jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
08:58:02.288 [[main]-pipeline-manager] INFO  logstash.inputs.beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
The stdin plugin is now waiting for input:
08:58:02.421 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
08:58:02.489 [[main]<beats] INFO  org.logstash.beats.Server - Starting server on port: 5044
08:58:02.667 [[main]<kafka] INFO  org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 
	metric.reporters = []
	metadata.max.age.ms = 300000
	partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
	reconnect.backoff.ms = 50
	sasl.kerberos.ticket.renew.window.factor = 0.8
	max.partition.fetch.bytes = 1048576
	bootstrap.servers = [192.168.120.129:9092]
	ssl.keystore.type = JKS
	enable.auto.commit = true
	sasl.mechanism = GSSAPI
	interceptor.classes = null
	exclude.internal.topics = true
	ssl.truststore.password = null
	client.id = logstash-0
	ssl.endpoint.identification.algorithm = null
	max.poll.records = 2147483647
	check.crcs = true
	request.timeout.ms = 40000
	heartbeat.interval.ms = 3000
	auto.commit.interval.ms = 5000
	receive.buffer.bytes = 65536
	ssl.truststore.type = JKS
	ssl.truststore.location = null
	ssl.keystore.password = null
	fetch.min.bytes = 1
	send.buffer.bytes = 131072
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	group.id = logstash
	retry.backoff.ms = 100
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	ssl.trustmanager.algorithm = PKIX
	ssl.key.password = null
	fetch.max.wait.ms = 500
	sasl.kerberos.min.time.before.relogin = 60000
	connections.max.idle.ms = 540000
	session.timeout.ms = 30000
	metrics.num.samples = 2
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	ssl.protocol = TLS
	ssl.provider = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.keystore.location = null
	ssl.cipher.suites = null
	security.protocol = PLAINTEXT
	ssl.keymanager.algorithm = SunX509
	metrics.sample.window.ms = 30000
	auto.offset.reset = latest

08:58:02.666 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9601}
08:58:02.857 [[main]<kafka] INFO  org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 
	(parameter values identical to the ConsumerConfig dump above; omitted)
08:58:02.948 [[main]<kafka] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.10.0.1
08:58:02.948 [[main]<kafka] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : a7a17cdec9eaa6c5
08:58:03.155 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.11/lib/logstash/inputs/kafka.rb:229] INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator 192.168.120.129:9092 (id: 2147482646 rack: null) for group logstash.
08:58:03.170 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.11/lib/logstash/inputs/kafka.rb:229] INFO  org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Revoking previously assigned partitions [] for group logstash
08:58:03.170 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.11/lib/logstash/inputs/kafka.rb:229] INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - (Re-)joining group logstash
08:58:03.222 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.11/lib/logstash/inputs/kafka.rb:229] INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Successfully joined group logstash with generation 11
08:58:03.224 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.11/lib/logstash/inputs/kafka.rb:229] INFO  org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Setting newly assigned partitions [kk_consumer-0] for group logstash
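
The last lines show that Logstash has joined Kafka as consumer group logstash and been assigned partition kk_consumer-0. If you want to watch its offsets from the Kafka side, the consumer-groups script in the Kafka container can describe the group; a quick check (not part of the original steps):

bash-4.4# ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group logstash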

Step 7: Create the test log file a.log

# touch a.log

Step 8: Verify the pipeline
Open terminal 1 and enter the Kafka container:

# docker exec -it c24 bash
bash-4.4# cd /opt/kafka_2.13-2.6.0/bin/
bash-4.4# ./kafka-console-consumer.sh --bootstrap-server 192.168.120.129:9092 --topic kk_consumer --from-beginning

Open terminal 2 and append a couple of lines to a.log:

# echo 11 >> /root/a.log 
# echo jinqule >> /root/a.log

Terminal 1 prints the corresponding messages right away, so the Kafka leg works:

{"@timestamp":"2020-11-26T08:37:07.834Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kk_consumer"},"source":"/root/a.log","offset":3,"message":"11","prospector":{"type":"log"},"beat":{"name":"es2","hostname":"es2","version":"6.2.4"}}
{"@timestamp":"2020-11-26T08:38:32.886Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kk_consumer"},"source":"/root/a.log","offset":11,"message":"jinqule","prospector":{"type":"log"},"beat":{"name":"es2","hostname":"es2","version":"6.2.4"}}

Check that the data has arrived in Elasticsearch: it has.
(screenshot: the myservice-* index in Elasticsearch)
Check that the data shows up in Kibana (after creating an index pattern for myservice-*): it does.
(screenshot: the documents in Kibana's Discover view)
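If you prefer to verify Elasticsearch from the command line rather than the screenshots, a couple of curl calls do the same job (a sketch; adjust the date in the index name to the current day):

# curl 'http://192.168.120.129:9204/_cat/indices?v'
# curl 'http://192.168.120.129:9204/myservice-2020.11.26/_search?q=message:jinqule&pretty'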
That completes the pipeline: Filebeat collects the logs, ships them to Kafka, Logstash consumes and processes them into Elasticsearch, and the results are finally visualized in Kibana.
