Setting up Logstash with Docker and how to use it

Configuring Logstash

Search for and pull the image (pin it to the same version as Elasticsearch):
[root@hao ~]# docker search logstash
NAME                                                           DESCRIPTION                                      STARS     OFFICIAL   AUTOMATED
logstash                                                       Logstash is a tool for managing events and l…   2165      [OK]
opensearchproject/logstash-oss-with-opensearch-output-plugin   The Official Docker Image of Logstash with O…   19
grafana/logstash-output-loki                                   Logstash plugin to send logs to Loki             3
bitnami/logstash                                                                                                6
bitnami/logstash-exporter-archived                             A copy of the container images of the deprec…   0
rancher/logstash-config                                                                                         2
bitnamicharts/logstash                                                                                          0
dtagdevsec/logstash                                            T-Pot Logstash                                   4                    [OK]
malcolmnetsec/logstash-oss                                     Logstash data processing pipeline, as used b…   1
itzg/logstash                                                  Logstash with the ability to groom its own E…   2                    [OK]
uselagoon/logstash-7                                                                                            0
uselagoon/logstash-6                                                                                            0
jhipster/jhipster-logstash                                     Logstash image (based on the official image)    5                    [OK]
itzg/logback-kafka-relay                                       Receives remote logback events, sends them t…   0
sequra/logstash_exporter                                       Prometheus exporter for the metrics availabl…   3
bonniernews/logstash_exporter                                  Prometheus exporter for Logstash 5.0+            3                    [OK]
monsantoco/logstash                                            Logstash Docker image based on Alpine Linux …   9                    [OK]
elastic/logstash                                               The Logstash docker images maintained by Ela…   27
komljen/logstash                                               Logstash kube image                              0                    [OK]
geoint/logstash-elastic-ha                                     Logstash container for ElasticSearch forward…   2                    [OK]
datasense/logstash_indexer                                     Logstash + crond curator                         0
mantika/logstash-dynamodb-streams                              Logstash image which includes dynamodb plugi…   4                    [OK]
digitalwonderland/logstash-forwarder                           Docker Logstash Integration - run once per D…   14                   [OK]
cfcommunity/logstash                                           https://github.com/cloudfoundry-community/lo…   0
vungle/logstash-kafka-es                                       A simple Logstash image to ship json logs fr…   1                    [OK]
[root@hao ~]# docker pull logstash:7.17.7
7.17.7: Pulling from library/logstash
fb0b3276a519: Already exists
4a9a59914a22: Pull complete
5b31ddf2ac4e: Pull complete
162661d00d08: Pull complete
706a1bf2d5e3: Pull complete
741874f127b9: Pull complete
d03492354dd2: Pull complete
a5245bb90f80: Pull complete
05103a3b7940: Pull complete
815ba6161ff7: Pull complete
7777f80b5df4: Pull complete
Digest: sha256:93030161613312c65d84fb2ace25654badbb935604a545df91d2e93e28511bca
Status: Downloaded newer image for logstash:7.17.7
docker.io/library/logstash:7.17.7
Preparation
Create the directories and give the data directory 777 permissions.
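A minimal sketch of the commands, assuming the same paths that are later mounted into the container:

mkdir -p /usr/local/software/elk/logstash/{config,pipeline,data}
chmod 777 /usr/local/software/elk/logstash/data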
[root@hao /usr/local/software/elk/logstash]# ll
总用量 0
drwxrwsr-x. 2 root root 66 12月  6 10:12 config
drwxrwxrwx. 4 root root 69 12月  6 10:18 data
Only the logstash.yml, pipelines.yml, and logstash.conf files need to be created:
[root@hao /usr/local/software/elk/logstash]# tree
.
├── config
│   ├── jvm.options
│   ├── logstash.yml
│   └── pipelines.yml
├── data
│   ├── dead_letter_queue
│   ├── queue
│   └── uuid
└── pipeline
    └── logstash.conf

5 directories, 5 files
Their contents are as follows.

config/logstash.yml:
path.logs: /usr/share/logstash/logs
config.test_and_exit: false
config.reload.automatic: false
http.host: "0.0.0.0" 
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.133.100:9200" ]

config/pipelines.yml:
# This file is where you define your pipelines. You can define multiple.
# # For more information on multiple pipelines, see the documentation:
# #   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
#
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/logstash.conf"

pipeline/logstash.conf:
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5044
    codec => json_lines
  }
}
filter{
}
output {
    elasticsearch {
      hosts => ["192.168.133.100:9200"]       # Elasticsearch address
      index => "elk_logstash"                 # index name
    }
    stdout { codec => rubydebug }
}
Create the container:
docker run -it \
--name logstash \
--privileged \
-p 5044:5044 \
-p 9600:9600 \
--network wn_docker_net \
--ip 172.18.12.72 \
-v /etc/localtime:/etc/localtime \
-v /usr/local/software/elk/logstash/config:/usr/share/logstash/config \
-v /usr/local/software/elk/logstash/pipeline:/usr/share/logstash/pipeline \
-v /usr/local/software/elk/logstash/data:/usr/share/logstash/data \
-d logstash:7.17.7
Check the container logs to confirm Logstash started successfully; if there are no errors, it is working.
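A quick sanity check, assuming the host IP and port mappings from the docker run command above (nc is optional and only needed for the last line):

docker logs -f logstash                                          # watch startup output for errors
curl 'http://192.168.133.100:9600/?pretty'                       # Logstash node API; should return JSON
echo '{"message":"hello logstash"}' | nc 192.168.133.100 5044    # push a test event into the TCP input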

Integrating Spring Boot with Logstash

Add the dependency:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.3</version>
</dependency>
Configure the spring-logback.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Log levels from low to high: TRACE < DEBUG < INFO < WARN < ERROR < FATAL. If the level is set to WARN, anything below WARN is not output. -->
<!-- scan: when true, the configuration file is reloaded when it changes. Default: true. -->
<!-- scanPeriod: interval for checking whether the configuration file has changed; if no time unit is given, milliseconds are assumed.
                 Only effective when scan is true. Default interval: 1 minute. -->
<!-- debug: when true, logback prints its internal status messages so you can inspect its runtime state. Default: false. -->
<configuration scan="true" scanPeriod="10 seconds">

    <!-- 1. Console output -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <!-- This appender is intended for development: it only sets the minimum level, and the console receives log entries at or above this level -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>DEBUG</level>
        </filter>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} -%5level ---[%15.15thread] %-40.40logger{39} : %msg%n</pattern>
            <!-- character set -->
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <!-- 2. File output -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- The original omitted the file path and rolling policy, which RollingFileAppender requires; the values below are example assumptions -->
        <file>logs/app.log</file>
        <append>true</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/app.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <!-- log output format -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} -%5level ---[%15.15thread] %-40.40logger{39} : %msg%n</pattern>
            <charset>UTF-8</charset> <!-- character set -->
        </encoder>
    </appender>

    <!--LOGSTASH config -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.133.100:5044</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
            <!-- custom timestamp format; the default is yyyy-MM-dd'T'HH:mm:ss.SSS -->
            <timestampPattern>yyyy-MM-dd HH:mm:ss</timestampPattern>
            <customFields>{"appname":"App"}</customFields>
        </encoder>
    </appender>


    <root level="DEBUG">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="LOGSTASH"/>
    </root>

</configuration>
The main thing to configure is the IP address and port in the <destination> element of the LOGSTASH appender.
Sending logs to Logstash
All you need is Lombok's @Slf4j annotation; log whatever you want to ship, as in the example below.
package com.wnhz.smart.es.controller;

import com.wnhz.smart.common.http.ResponseResult;
import com.wnhz.smart.es.doc.BookTabDoc;
import com.wnhz.smart.es.service.IBookTabDocService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * @author Hao
 * @date 2023-12-06 10:40
 */
@RestController
@RequestMapping("/api/query")
@Slf4j
public class QueryController {
    @Autowired
    private IBookTabDocService iBookTabDocService;

    @GetMapping("/test")
    public ResponseResult<List<BookTabDoc>> test() {
        List<BookTabDoc> allBooks = iBookTabDocService.getAllBooks();
        // note: subList(0, 1000) assumes at least 1000 records; with fewer it throws IndexOutOfBoundsException
        log.debug("All data queried from ES: {}", allBooks.subList(0, 1000));
        return ResponseResult.ok(allBooks.subList(0, 3));
    }
}
With this in place, the logged data is shipped to Logstash (and on to Elasticsearch) automatically.
Configuring Kibana
Open http://192.168.133.100:5601/app/dev_tools#/console and create the index.
Then open http://192.168.133.100:5601/app/management (on the original page, click Stack Management), click Index Patterns and create an index pattern; enter the name that follows index in the logstash.conf file, here elk_logstash.
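Optionally, before creating the index pattern you can confirm that the elk_logstash index actually exists; a sketch using curl against the Elasticsearch host used throughout this article:

curl 'http://192.168.133.100:9200/_cat/indices?v' | grep elk_logstash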

(Screenshot: logstash_1.png)

To search in Kibana Discover, query the message field, e.g. message: <keyword>.

Fixing the error when querying long log messages
When a log message field is too long, the following error/warning appears:

The length [1417761] of field [message] in doc[20]/index[elk_logstash] exceeds the [index.highlight.max_analyzed_offset] limit [1000000]. To avoid this error, set the query parameter [max_analyzed_offset] to a value less than index setting [1000000] and this will tolerate long field values by truncating them.

Solution

Use any tool that can send an HTTP PUT request with a body, and apply the following setting to the Elasticsearch instance deployed on the target host (a curl example is shown below):

Note: it must be a PUT request. The request URL and body are:
http://localhost:9200/_all/_settings?preserve_existing=true
{
  "index.highlight.max_analyzed_offset" : "999999999"
}
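For example with curl (a sketch; replace localhost with your Elasticsearch host, e.g. 192.168.133.100):

curl -X PUT 'http://localhost:9200/_all/_settings?preserve_existing=true' \
     -H 'Content-Type: application/json' \
     -d '{ "index.highlight.max_analyzed_offset" : "999999999" }'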
A response like the following means it succeeded:
{
  "acknowledged": true
}
