Shipping logback logs to ELK via Kafka

Study notes based on a reference guide to setting up logging with logback + kafka + ELK.

Log flow: logback -> kafka -> logstash -> elasticsearch -> kibana

Installing and starting Kafka

  • Download from the official site, choosing the Binary downloads package
  • Start ZooKeeper first
    bin/zookeeper-server-start.sh config/zookeeper.properties &
  • Then start Kafka
    bin/kafka-server-start.sh config/server.properties &
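
With both processes running, you can optionally pre-create the applog topic used later in this guide and tail it with the console consumer as a sanity check. A minimal sketch, assuming default ports and a recent Kafka release (older releases take --zookeeper localhost:2181 instead of --bootstrap-server for kafka-topics.sh):

bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic applog
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic applog --from-beginning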

Integrating logback with Kafka

The Kafka-logback integration uses logback-kafka-appender.

  • Add the Maven dependencies
<!-- Kafka dependency -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.1.6.RELEASE</version>
</dependency>
<!-- logback-kafka-appender dependency -->
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>
  • Configure logback-spring.xml:

Spring Boot loads logging configuration in this order of precedence: logback-spring.xml > logback-spring.groovy > logback.xml > logback.groovy

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <contextName>logback</contextName>
    <!-- Location for log files; avoid relative paths in logback configuration -->
    <property name="LOG_HOME" value="/data/logs" />
    <!-- Console output -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <!-- Pattern: %d = date, %thread = thread name, %-5level = level padded to 5 characters, %msg = log message, %n = newline -->
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>applog</topic>
        <!-- we don't care how the log messages will be partitioned  -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />

        <!-- use async delivery. the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch.  -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
    </appender>

    <root level="info">
        <appender-ref ref="console" />
        <appender-ref ref="kafkaAppender" />
    </root>
</configuration>
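
With acks=0, max.block.ms=0, and the asynchronous delivery strategy, logging is fire-and-forget: if the broker is unreachable, events are dropped rather than blocking application threads. logback-kafka-appender also supports declaring a fallback appender inside the KafkaAppender element so undeliverable events still go somewhere; a minimal sketch, reusing the console appender defined above:

<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <!-- ... encoder, topic, strategies, and producerConfig as above ... -->
    <!-- Fallback: events that cannot be delivered to Kafka are handed to this appender -->
    <appender-ref ref="console" />
</appender>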

Configuring and starting Logstash

For installing and using the ELK stack, refer to a separate ELK setup guide.

  • Configure Logstash to consume from Kafka and write to Elasticsearch; the index is test-kafka
input {
    kafka {
        topics => "applog"
        bootstrap_servers => "localhost:9092"
        group_id => "es"
    }
}
output {
    elasticsearch {
        hosts => "localhost:9200"
        index => "test-kafka"
    }
}
  • Start Logstash with this pipeline
    ./bin/logstash -f test-kafka.conf
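
Once events are flowing, you can check on the Kafka side that Logstash's consumer group (the group_id "es" configured above) is reading the applog topic:

bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group es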

Starting Elasticsearch

./bin/elasticsearch
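
Once the application in the verification step below starts emitting logs, the test-kafka index should appear. A quick check, assuming the default HTTP port 9200:

curl 'localhost:9200/_cat/indices?v'
curl 'localhost:9200/test-kafka/_search?pretty&size=1'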

Starting Kibana

./bin/kibana

Verification

  • Start the application and emit logs
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@Slf4j
@SpringBootApplication
public class LogKafkaApplication {

    public static void main(String[] args) throws InterruptedException {
        SpringApplication.run(LogKafkaApplication.class, args);

        // Emit a log line every 5 seconds so it can be traced through the pipeline
        while (true) {
            Thread.sleep(5000);
            log.info("log to kafka...");
        }
    }

}
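
A minimal way to run it, assuming the project uses the standard spring-boot-maven-plugin:

mvn spring-boot:run

Each iteration prints one line to the console appender and publishes the same formatted line to the applog topic via the kafkaAppender.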

Visit http://127.0.0.1:5601; the log entries show up under the test-kafka index.
(screenshot: Kibana view of the log messages in the test-kafka index)
