Source: Elasticsearch 7.X — docker-compose install, Spring Boot integrated with ELK for log collection
Why use Kafka?
In the data flow shown above, some setups place Redis in the middle as the message queue, but message queuing is not Redis's strong suit. RabbitMQ guarantees that messages are not lost, and to do so it trades away throughput — it is at least 10x slower than Kafka. Since log collection can tolerate occasional message loss, and Kafka is a high-throughput distributed publish-subscribe log service with high availability, high performance, horizontal scalability, and durable storage, Kafka is the best choice here.
Environment setup
Building on the previous article, Elasticsearch7.X — Spring Boot + ELK log collection (docker-compose), only the Kafka environment needs to be added here. Add the following services to docker-compose.yml:
  zk1:
    image: zookeeper:3.5.8
    restart: always
    container_name: zk1
    hostname: zk1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1                 # unique id of this ZooKeeper node
      ZOO_SERVERS: server.1=zk1:2888:3888;2181
    networks:
      - elk
  kafka1:
    image: wurstmeister/kafka:2.12-2.5.0
    restart: always
    container_name: kafka1
    hostname: kafka1
    ports:
      - "9092:9092"
      - "9999:9999"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.244.129:9092
      KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_HOST_NAME: kafka1
      KAFKA_ZOOKEEPER_CONNECT: zk1:2181
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 0           # broker id, must be unique per broker
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      JMX_PORT: 9999               # JMX monitoring port
    links:
      - zk1
    networks:
      - elk
Add the following to logstash.conf (the kafka plugin must sit inside an input block):

input {
  kafka {
    id => "my_plugin_id"
    bootstrap_servers => "192.168.244.129:9092"
    topics => ["logger"]
    auto_offset_reset => "latest"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.244.129:9200"]
    index => "springboot-kafka-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
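Logstash expands the `%{+YYYY.MM.dd}` sprintf pattern in the `index` setting using each event's timestamp, so one index is created per day. A minimal sketch of that expansion in Java (the class and method names are illustrative, not part of Logstash; note that Logstash uses Joda-style `YYYY`, whose `java.time` equivalent here is `yyyy`):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IndexName {
    // Mirror Logstash's "springboot-kafka-%{+YYYY.MM.dd}" expansion
    // for a given event date (Logstash applies it per event, in UTC).
    static String indexFor(LocalDate date) {
        return "springboot-kafka-" + date.format(DateTimeFormatter.ofPattern("yyyy.MM.dd"));
    }

    public static void main(String[] args) {
        System.out.println(indexFor(LocalDate.of(2020, 7, 1)));
        // -> springboot-kafka-2020.07.01
    }
}
```

Events logged on different days therefore land in different indices, which makes retention easy: old daily indices can be dropped wholesale.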
After startup, verify that the zk1 and kafka1 containers are running.
Wiring Spring Boot logs to Kafka
1. Add Maven dependencies
Add the following to the Spring Boot project's pom.xml:
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC1</version>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
2. Point logging at Kafka in logback-spring.xml
Add the following to logback-spring.xml:
<!-- Kafka appender: ships log events to Kafka, where Logstash picks them up -->
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <!-- Kafka topic name; must match the topic configured in logstash.conf -->
    <topic>logger</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
    <producerConfig>bootstrap.servers=192.168.244.129:9092</producerConfig>
    <producerConfig>acks=0</producerConfig>
    <producerConfig>linger.ms=1000</producerConfig>
    <producerConfig>max.block.ms=0</producerConfig>
    <producerConfig>client.id=0</producerConfig>
</appender>
<root level="INFO">
    <appender-ref ref="kafkaAppender" />
    <appender-ref ref="console"/>
</root>
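The PatternLayoutEncoder above decides the exact string each event becomes before it is published to the `logger` topic. A rough simulation of that pattern in plain Java (the class and field values are illustrative examples, not logback internals):

```java
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;

public class PatternPreview {
    // Approximates "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n":
    // timestamp, [thread], level left-padded to 5 chars, logger name, message.
    static String line(LocalTime time, String thread, String level, String logger, String msg) {
        return String.format("%s [%s] %-5s %s - %s%n",
                time.format(DateTimeFormatter.ofPattern("HH:mm:ss.SSS")),
                thread, level, logger, msg);
    }

    public static void main(String[] args) {
        System.out.print(line(LocalTime.of(10, 5, 30, 123_000_000),
                "http-nio-8080-exec-1", "INFO", "com.example.DemoController", "hello kafka"));
        // -> 10:05:30.123 [http-nio-8080-exec-1] INFO  com.example.DemoController - hello kafka
    }
}
```

This flat string is what Logstash receives in the event's `message` field; if you later want structured fields instead, a JSON encoder can replace the pattern encoder.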
3. Start the Spring Boot project
4. Call a test endpoint to generate some log output
5. Check the log data in Elasticsearch
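With the plain pattern encoder and the Logstash defaults above, each indexed document looks roughly like the following (field names are Logstash's defaults; the concrete values are illustrative):

```json
{
  "@timestamp": "2020-07-01T02:05:30.123Z",
  "message": "10:05:30.123 [http-nio-8080-exec-1] INFO  com.example.DemoController - hello kafka"
}
```

Searching the `springboot-kafka-*` index pattern in Kibana should show these documents arriving shortly after each request.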
A newly generated daily index (springboot-kafka-&lt;date&gt;) appears in Elasticsearch.