Integrating Spring Boot with ELK: two approaches, sending logs directly to Logstash or routing them through Kafka


Environment: Spring Boot 2.1.3, Logback, Elasticsearch 6.8.2

Once there are many service nodes, hunting through servers one by one for log files is no longer practical; shipping all logs into ELK makes it much faster to search and pinpoint problems. The log pattern used here also prints distributed-tracing IDs (the X-B3-* MDC keys populated by Spring Cloud Sleuth), so a request can be followed across services. Two ways of wiring Spring Boot into ELK are shown below.

1. Approach 1: Spring Boot to Logstash setup

1.1 Add the Maven dependency

        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>6.3</version>
        </dependency>

1.2 Add a Logstash appender to logback-spring.xml

    <!--LOGSTASH config -->
    <appender name="LOGSTASH"
        class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5000</destination>
        <!--<encoder charset="UTF-8"
            class="net.logstash.logback.encoder.LogstashEncoder"> -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS}
                [service:${springAppName:-}]
                [traceId:%X{X-B3-TraceId:-},spanId:%X{X-B3-SpanId:-},parentSpanId:%X{X-B3-ParentSpanId:-},exportable:%X{X-Span-Export:-}]
                [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset> <!-- set the character set here -->
        </encoder>
    </appender>
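
The pattern above references ${springAppName:-}, and the appender still has to be attached to a logger, otherwise nothing is sent to Logstash. If logback-spring.xml does not already contain these pieces, a minimal sketch looks like this (the property is taken from spring.application.name; the root level is just an example):

    <!-- expose spring.application.name to Logback; declare it before the appender so the pattern can resolve it -->
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>

    <root level="INFO">
        <!-- keep any existing console/file appender-refs and add the Logstash one -->
        <appender-ref ref="LOGSTASH"/>
    </root>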

1.3 Configure Logstash

input {
  tcp {
    port => 5000
  }
}
filter {
  grok {
    match => {
    "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:service} %{GREEDYDATA:thread} %{LOGLEVEL:level} %{GREEDYDATA:loggerClass}-%{GREEDYDATA:logContent}"}
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "springboot-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
}
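
Save the pipeline above to a file (the name is arbitrary; logstash-springboot.conf is assumed here) and start Logstash with it:

bin/logstash -f logstash-springboot.conf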

1.4 Start Elasticsearch, Kibana, Logstash, and the Spring Boot application, then verify
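
Once the application has written a few log lines, the daily index from the output section above should appear in Elasticsearch; a quick check before opening Kibana (drop the -u flag if security is not enabled):

curl -u elastic:changeme "http://localhost:9200/_cat/indices/springboot-*?v"

In Kibana, create an index pattern for springboot-* and the fields parsed by the grok filter can be searched.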

 

 

2. Approach 2: Spring Boot to Kafka to Logstash setup

2.1 Add the Maven dependency

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>

2.2 Update the log configuration

<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [service:${springAppName:-}]
                [traceId:%X{X-B3-TraceId:-},spanId:%X{X-B3-SpanId:-},parentSpanId:%X{X-B3-ParentSpanId:-},exportable:%X{X-Span-Export:-}]
                [%thread] %-5level %logger{50} - %msg%n</pattern>
        </encoder>
        <topic>authLog</topic>
        <!-- we don't care how the log messages will be partitioned  -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
​
        <!-- use async delivery. the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
​
        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch.  -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
    </appender>
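
As in the first approach, the appender has to be referenced by a logger. A minimal sketch of that wiring is shown below; CONSOLE is an assumed, already-defined console appender. logback-kafka-appender also allows an appender-ref to be nested inside the KafkaAppender element as a fallback, so events that cannot be delivered to Kafka are forwarded there instead of being lost.

    <root level="INFO">
        <appender-ref ref="kafkaAppender"/>
        <!-- CONSOLE is hypothetical; reference whatever local appender already exists -->
        <appender-ref ref="CONSOLE"/>
    </root>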

 

2.3 Update the Logstash pipeline

input {
  kafka {
    id => "my_plugin_id"
    bootstrap_servers => "127.0.0.1:9092"
    topics => ["authLog"]
    auto_offset_reset => "latest"
  }
}
filter {
  grok {
    match => {
    "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:service} %{GREEDYDATA:thread} %{LOGLEVEL:level} %{GREEDYDATA:loggerClass}-%{GREEDYDATA:logContent}"}
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "springboot-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
}
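
Unless the broker is set to auto-create topics, the authLog topic has to exist before the application starts logging. With a recent Kafka (2.2+) the command below works; older versions take --zookeeper localhost:2181 instead of --bootstrap-server. Logstash is then restarted with the new pipeline file (logstash-kafka.conf is just an assumed name):

bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic authLog
bin/logstash -f logstash-kafka.conf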

2.4 Start ZooKeeper, Kafka, and the ELK stack
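
Each hop of the pipeline can be checked separately: a console consumer confirms that log events reach the Kafka topic, and the cat API confirms they end up in Elasticsearch:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic authLog --from-beginning
curl -u elastic:changeme "http://localhost:9200/_cat/indices/springboot-*?v"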

 

