ELK 7.8.1 (latest) installation, deployment, and configuration (Elasticsearch + Kibana + Logstash), with Spring Boot log collection

I recently had some spare time and looked into log collection. Most of the material online covers fairly old versions, so I decided to try the latest release, 7.8.1.

Get the installation packages for your platform; I'm on a Mac here.

Download: pick your version from the elastic official site.
Once downloaded, just extract the archives. Extraction commands for Linux are easy to find online, and installation on macOS and Linux is basically the same.

1. Installing and configuring Elasticsearch

  • 1. Go to the elasticsearch/config directory and edit jvm.options, which sets the JVM memory parameters. If your machine is well provisioned you can leave it alone; I dropped the default heap to 256m and kept the rest as-is.
## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms256m
-Xmx256m

Go to the bin directory and generate the certificates (first create a CA, then use it to sign the node certificate):

./elasticsearch-certutil ca
./elasticsearch-certutil cert --ca elastic-stack-ca.p12

Put the path of the generated certificate (elastic-certificates.p12) into the configuration file below. By default the files land in the Elasticsearch root directory; I created a cert folder under config to keep them in.

  • 2. Edit elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
# Cluster name; optional, the default works too
cluster.name: 594cto-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
# Node name; optional, the default works too
node.name: cto-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
# Data path. Recommended: set it and keep it under the elasticsearch directory for easier management. Create this directory manually.
path.data: /Users/admin/Documents/yw/elk/elasticsearch/data
#
# Path to log files:
# Log path. Recommended: set it and keep it under the elasticsearch directory. This directory already exists.
path.logs: /Users/admin/Documents/yw/elk/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# Bind address. On Linux, bind to 0.0.0.0 so the node is reachable from anywhere.
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
# HTTP port. Default is 9200; I use 7001 here.
http.port: 7001
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
# Start as a single node instead of a cluster. The value is the node.name defined above.
cluster.initial_master_nodes: ["cto-1"]
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# The settings below turn on security, requiring a username and password to log in. Configure them now; we'll set the passwords shortly.
xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate  
# Path to the certificate generated in the previous step
xpack.security.transport.ssl.keystore.path: cert/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: cert/elastic-certificates.p12

# Enable monitoring collection (used by Kibana's stack monitoring / heartbeat view)
xpack.monitoring.collection.enabled: true
# Enable CORS
http.cors.enabled: true
http.cors.allow-origin: "*"


  • 3. That's all of the configuration. Go to elasticsearch/bin and start ES by running:
./elasticsearch

When the startup log settles and shows the node has started, ES is up.

  • 4. Set the access username and passwords.
    Remember: do NOT stop ES. Open a new terminal window, go to the bin directory, and run the password setup command:
➜  bin ./elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]

It asks whether to overwrite the default passwords: type y and press Enter, then enter a password for each reserved user. I suggest using the same one for all of them, it's easier to remember.
Once the prompts finish, the passwords are set and accounts with different privilege levels have been created. From here on, log in with:
username: elastic
password: whatever you just set (I used 123456)
Now restart ES and open localhost:7001 in a browser. You'll be prompted to log in; enter the username and password, and if the node info JSON comes back, the installation succeeded.
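If you'd rather verify from code than from the browser, here is a minimal sketch using Java 11's built-in HttpClient with Basic auth; it assumes the port 7001 and the elastic/123456 credentials configured above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EsHealthCheck {
    public static void main(String[] args) throws Exception {
        // Credentials set via elasticsearch-setup-passwords (assumption: elastic/123456)
        String auth = Base64.getEncoder()
                .encodeToString("elastic:123456".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:7001"))   // http.port from elasticsearch.yml
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 200 plus the cluster/node JSON means ES is up and authentication works
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}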
One more tip: the elasticsearch-head plugin now has an updated Chrome extension, so no extra configuration is needed; just install the extension (instructions are easy to find online).

  • 5. Install the IK Chinese analyzer (the version MUST match your ES version!!)
    Analyzer download: IK analyzer download page
    After downloading, extract it, create an ik folder under elasticsearch/plugins, and copy the extracted files into it. (I haven't studied IK's analyzers in depth yet; the defaults are good enough.)
    Then restart ES. If the startup log prints loaded plugin [analysis-ik], the plugin is installed. (IK usage will be covered in detail in a later post on full-text search with ES.)
# Run in the background
./elasticsearch &
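To quickly confirm IK is active, you can call the _analyze API. A minimal sketch under the same assumptions as before (port 7001, elastic/123456); ik_max_word is one of the two analyzers the plugin registers (ik_smart is the other), and the sample text is arbitrary:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class IkAnalyzeTest {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder()
                .encodeToString("elastic:123456".getBytes(StandardCharsets.UTF_8));
        // Ask ES to tokenize a sample string with IK's ik_max_word analyzer
        String body = "{\"analyzer\":\"ik_max_word\",\"text\":\"中华人民共和国\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:7001/_analyze"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body, StandardCharsets.UTF_8))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Prints the token list produced by IK; the default analyzer would instead
        // split Chinese text into single characters
        System.out.println(response.body());
    }
}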

2. Installing Kibana

Kibana is Elastic's own graphical management UI. Extract the Kibana archive.

  • 1. Edit kibana.yml under kibana/config/ (settings left at their defaults are omitted here)
# Kibana is served by a back end server. This setting specifies the port to use.
# The port Kibana listens on
server.port: 5001

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
# Binding to 0.0.0.0 makes Kibana reachable from anywhere
server.host: "0.0.0.0"

# The URLs of the Elasticsearch instances to use for all your queries.
# Elasticsearch address
elasticsearch.hosts: ["http://localhost:7001"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
# Recommended: use the superuser credentials you set during the Elasticsearch install
elasticsearch.username: "elastic"
elasticsearch.password: "123456"

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
# Switch the UI to Chinese
i18n.locale: "zh-CN"

Go to kibana/bin and start Kibana:

./kibana
# Run in the background
./kibana &

When the log shows the server is up, the install worked. Visit the port you configured (localhost:5001 in my case), enter the username and password to log in, and Kibana is done.

3. Installing Logstash, and wiring Spring Boot so its logs flow into ES and can be viewed in Kibana

Extract Logstash and go to the logstash/config directory.

  • 1. Edit jvm.options. Same as with ES: if your machine is well provisioned, leave it alone; I set the heap to 256m.

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms256m
-Xmx256m

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
  • 2. Edit logstash.yml. Only the changed parts are shown; everything else stays as-is.
# Set the pipeline event ordering. Options are "auto" (the default), "true" or "false".
# "auto" will  automatically enable ordering if the 'pipeline.workers' setting
# is also set to '1'.
# "true" will enforce ordering on the pipeline and prevent logstash from starting
# if there are multiple workers.
# "false" will disable any extra processing necessary for preserving ordering.
#
pipeline.ordered: auto
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
# Reload pipeline configs automatically
config.reload.automatic: true
#
# How often to check if the pipeline configuration has changed (in seconds)
# Note that the unit value (s) is required. Values without a qualifier (e.g. 60) 
# are treated as nanoseconds.
# Setting the interval this way is not recommended and might change in later versions.
# Reload check interval
config.reload.interval: 20s

# By default, the HTTP API is bound to only the host's local loopback interface,
# ensuring that it is not accessible to the rest of the network. Because the API
# includes neither authentication nor authorization and has not been hardened or
# tested for use as a publicly-reachable API, binding to publicly accessible IPs
# should be avoided where possible.
# Make the HTTP API reachable from any address
http.host: 0.0.0.0
#
# The HTTP API web server will listen on an available port from the given range.
# Values can be specified as a single port (e.g., `9600`), or an inclusive range
# of ports (e.g., `9600-9700`).
# Port. Unset by default; I pinned it here for convenience.
http.port: 6001

#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
#xpack.monitoring.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
# Elasticsearch credentials for monitoring
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: 123456
xpack.monitoring.elasticsearch.hosts: ["http://localhost:7001"]
  • 3. Edit pipelines.yml. Just uncomment the block at the top. I point it at a directory; a single file works too, take your pick. The directory must be created by hand.
# List of pipelines to be loaded by Logstash
#
# This document must be a list of dictionaries/hashes, where the keys/values are pipeline settings.
# Default values for omitted settings are read from the `logstash.yml` file.
# When declaring multiple pipelines, each MUST have its own `pipeline.id`.
#
# Example of two pipelines:
#
# - pipeline.id: test
#   pipeline.workers: 1
#   pipeline.batch.size: 1
#   config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
- pipeline.id: localhost_test
  queue.type: persisted
  path.config: "/Users/admin/Documents/yw/elk/logstash/config/conf/*.config"
#
# Available options:
#
#   # name of the pipeline
#   pipeline.id: mylogs

logstash/config/conf/ holds the pipeline configs for the logs you want to collect.

  • 4. Write the log-collection config: create test.config under logstash/config/conf/
input {
  tcp {
    port => 5000
    # json_lines parses newline-delimited JSON; the codec plugin is installed separately below
    codec => json_lines
    # type tags events so that, with multiple configs, logs don't end up in the wrong index
    type => "querz"
  }
}

filter {

}

output {
  if [type] == "querz" {
    elasticsearch {
      hosts => "http://localhost:7001"
      user => "elastic"
      password => "123456"
    }
  }
}
  • 5. Since the config uses the JSON codec, install the plugin. Go to logstash/bin and run:
 ./logstash-plugin install logstash-codec-json_lines

When it reports a successful install, start Logstash:

./logstash
# Run in the background
./logstash &
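Before involving Spring Boot, you can smoke-test the pipeline by hand: the json_lines codec expects one JSON document per line, so pushing a single newline-terminated JSON string at the tcp input is enough. A minimal sketch (port 5000 comes from test.config above; the event fields are made up):

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class LogstashSmokeTest {
    public static void main(String[] args) throws Exception {
        // One JSON document, newline-terminated, as the json_lines codec expects
        String event = "{\"message\":\"hello from smoke test\",\"level\":\"INFO\"}\n";

        try (Socket socket = new Socket("localhost", 5000);   // tcp input port from test.config
             OutputStream out = socket.getOutputStream()) {
            out.write(event.getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
        // If the pipeline is healthy, the event shows up in Elasticsearch
        // (default logstash-* index) a few seconds later.
    }
}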

4. Integrating Spring Boot with Logstash to get logs into Kibana

This part is easy; it takes just two steps.

    1. Add the dependency to your pom:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.1</version>
</dependency>
    2. Create logback.xml under resources:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <contextName>logback</contextName>

    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5000</destination>
        <!-- An encoder is required; several are available -->
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" >
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "message": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <root level="info">
        <appender-ref ref="stash" />
    </root>
</configuration>

The localhost:5000 destination must use the same port as the tcp input in the Logstash config above.
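With the appender wired into the root logger, any ordinary logger call is shipped automatically; nothing Logstash-specific appears in application code. A minimal sketch (class name and message are illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {
    private static final Logger log = LoggerFactory.getLogger(DemoApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
        // Routed through logback -> LogstashTcpSocketAppender -> Logstash tcp input -> ES
        log.info("application started, this line should show up in Kibana");
    }
}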

Once that's all configured, start the Spring Boot project, then log in to Kibana.

  • 1. Click Create index pattern, then Next.
  • 2. Type logstash to match the indices. Logstash can be configured to roll a new index per day (e.g. logstash-2020-09-07); look it up if you need that.
  • 3. Pick the timestamp field and click Create index pattern.
  • 4. Open the Discover page under the Kibana menu.
    On the left you can filter the retrieved fields. After filtering, the output is almost identical to the console log format, which makes searching production logs here far more convenient.

This post was put together while learning; if anything is wrong, corrections are welcome! O(∩_∩)O

Next up: using the 7.8.1 APIs from Spring Boot for full-text search, keyword highlighting, and more. Stay tuned.
