Installing an ELK Logging Stack on CentOS 7 and Integrating It with a Spring Cloud Project (Verified in Production)

I. Installing Elasticsearch

1. Download the package
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-linux-x86_64.tar.gz
2. Extract the package
tar -xzvf elasticsearch-7.1.0-linux-x86_64.tar.gz
3. Create a user and group (Elasticsearch refuses to start as root, so a dedicated user is required)
# Create the group
groupadd elasticsearch
# Create the user in that group (note: useradd -p expects an already-hashed password, so set it interactively with passwd instead)
useradd elasticsearch -g elasticsearch
passwd elasticsearch
# Grant permissions on the extracted directory (777 works, but chown -R elasticsearch:elasticsearch elasticsearch-7.1.0 is tighter)
chmod -R 777 elasticsearch-7.1.0
# Raise the kernel's mmap count limit (an Elasticsearch bootstrap check)
vi /etc/sysctl.conf
vm.max_map_count=262144
sysctl -p
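Elasticsearch 7.x also enforces a file-descriptor bootstrap check on Linux, and the stock CentOS 7 limit (4096) is too low. If startup later fails with a "max file descriptors" error, the usual fix is sketched below (the limits.conf entries assume a standard PAM setup):
# Raise the open-file limit for the elasticsearch user
vi /etc/security/limits.conf
elasticsearch soft nofile 65535
elasticsearch hard nofile 65535
# Log in again as elasticsearch and confirm the new limit
ulimit -n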

4. Edit the configuration
vim ./elasticsearch-7.1.0/config/elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# Bind to all interfaces so the node is reachable from outside
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# Node settings: the two parameters below suit a single-node Elasticsearch deployment
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
cluster.initial_master_nodes: ["node-1"]


# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

5. Start Elasticsearch
# Switch to the elasticsearch user
su elasticsearch
# Run the startup script under the package's bin directory
./bin/elasticsearch
# Or run it as a background daemon
./bin/elasticsearch -d
6. Verify that Elasticsearch is running
# On the server: check that the response looks normal
curl 127.0.0.1:9200
# From a browser: cloud servers must open ports 9200 and 9300 in their firewall rules
In a browser, visit http://<server-ip>:9200; a JSON summary of the node means everything is working (see the screenshot below).

[Screenshot: JSON node summary returned by Elasticsearch in the browser]
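Beyond the root endpoint, the cluster health API gives a clearer pass/fail signal. A minimal check, assuming the same single-host setup as above:
# "yellow" is expected on a single node, since replica shards cannot be assigned
curl 127.0.0.1:9200/_cluster/health?pretty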

II. Installing Logstash

1. Download the Logstash package

wget https://artifacts.elastic.co/downloads/logstash/logstash-7.1.0.tar.gz

2. Extract the package

tar -xzvf logstash-7.1.0.tar.gz

3. Add a pipeline config that stores incoming data in Elasticsearch

# Enter the config directory under logstash-7.1.0
cd logstash-7.1.0/config
# Create the pipeline file
vim logstash.conf
input {
  tcp {
    port => 9600
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]         # use the server's IP if Elasticsearch runs on a different host
    index => "logstash-%{+YYYY.MM.dd}"  # index name; referenced later when creating the Kibana index pattern
  }
  stdout { codec => rubydebug }
}
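Before starting Logstash, you can have it validate the pipeline file; a quick sanity check, run from the config directory:
# Prints "Configuration OK" and exits if logstash.conf parses cleanly
../bin/logstash -f logstash.conf --config.test_and_exit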

4. Start Logstash

# Run from inside the config directory
../bin/logstash -f logstash.conf
# Or run it in the background
nohup ../bin/logstash -f logstash.conf &
# On a cloud server, open port 9600 in the firewall rules
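With Logstash running, a test event pushed through the TCP input should appear in Elasticsearch within seconds. A minimal smoke test (assumes everything is on one host and nc/netcat is installed):
# Send one JSON event to the tcp input on 9600
echo '{"message":"hello elk"}' | nc 127.0.0.1 9600
# Then search the daily index for it
curl '127.0.0.1:9200/logstash-*/_search?q=message:hello&pretty'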

III. Installing Kibana

1. Download the Kibana package

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.1.0-linux-x86_64.tar.gz

2. Extract the package

tar -xzvf kibana-7.1.0-linux-x86_64.tar.gz

3. Edit the Kibana configuration

# Enter Kibana's config directory
cd kibana-7.1.0-linux-x86_64/config
# Edit kibana.yml
vim kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://127.0.0.1:9200"]  # use the server's IP if Elasticsearch runs on a different host

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
i18n.locale: "zh-CN"  # switches the Kibana UI to Chinese

4. Start Kibana
# Change into Kibana's bin directory
cd ../bin
# Start the service in the foreground
./kibana
# Or run it in the background
nohup ./kibana &
# On a cloud server, open port 5601 in the firewall rules
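Once started, Kibana's status API doubles as a readiness probe; a minimal check from the server itself:
# Reports overall state ("green") once Kibana is connected to Elasticsearch
curl -s 127.0.0.1:5601/api/status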

IV. Integrating ELK with Spring Cloud (Logback)

1. Add the dependencies
<!-- Ships log events to Logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
<!-- Distributed tracing -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
    <version>2.2.8.RELEASE</version>
</dependency>
2. Rename the logback-spring.xml file under resources to logback-nacos.xml (this lets the Logstash destination come from external configuration, so a later change of ELK address needs no rebuild), and replace its entire contents with the following:
<?xml version="1.0" encoding="UTF-8"?>
<!--
   Tip: define a shared log path in the root pom so it is managed in one place:
   <properties>
       <log-path>/Users/rcdg</log-path>
   </properties>
   1. To add log output in another module, just copy this file into its resources directory.
   2. Remember to adjust the value used by <property name="log.path" .../> below.
-->
<configuration debug="false" scan="false">
 <property name="log.path" value="logs/${project.artifactId}"/>
 <!-- Colored console log pattern -->
 <property name="CONSOLE_LOG_PATTERN"
           value="${CONSOLE_LOG_PATTERN:-%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>
 <!-- Converter classes required for colored output -->
 <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter"/>
 <conversionRule conversionWord="wex"
                 converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter"/>
 <conversionRule conversionWord="wEx"
                 converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter"/>
 <!-- Console log output -->
 <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
   <encoder>
     <pattern>${CONSOLE_LOG_PATTERN}</pattern>
   </encoder>
 </appender>

 <!-- Log file debug output -->
 <appender name="debug" class="ch.qos.logback.core.rolling.RollingFileAppender">
   <file>${log.path}/debug.log</file>
   <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
     <fileNamePattern>${log.path}/%d{yyyy-MM, aux}/debug.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
     <maxFileSize>50MB</maxFileSize>
     <maxHistory>30</maxHistory>
   </rollingPolicy>
   <encoder>
     <pattern>%date [%thread] ${LOG_LEVEL_PATTERN:-%5p} %-5level [%logger{50}] %file:%line - %msg%n</pattern>
   </encoder>
 </appender>

 <!-- Log file error output -->
 <appender name="error" class="ch.qos.logback.core.rolling.RollingFileAppender">
   <file>${log.path}/error.log</file>
   <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
     <fileNamePattern>${log.path}/%d{yyyy-MM}/error.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
     <maxFileSize>50MB</maxFileSize>
     <maxHistory>30</maxHistory>
   </rollingPolicy>
   <encoder>
     <pattern>%date [%thread] ${LOG_LEVEL_PATTERN:-%5p} %-5level [%logger{50}] %file:%line - %msg%n</pattern>
   </encoder>
   <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
     <level>ERROR</level>
   </filter>
 </appender>

 <logger name="org.activiti.engine.impl.db" level="DEBUG">
   <appender-ref ref="debug"/>
 </logger>

 <!-- Silence Nacos heartbeat INFO logs -->
 <logger name="com.alibaba.nacos" level="OFF">
   <appender-ref ref="error"/>
 </logger>

 <!-- Logstash integration -->
 <!-- Pull the service name from the application config -->
 <springProperty scope="context" name="springApplicationName" source="spring.application.name" />
 <springProperty scope="context" name="LOGBACK_URL" source="logstash.host" />
 <springProperty scope="context" name="LOGBACK_PORT" source="logstash.port" />
 <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
   <destination>${LOGBACK_URL:- }:${LOGBACK_PORT:- }</destination>
   <encoder class="net.logstash.logback.encoder.LogstashEncoder" >
      <!-- appname is set to the service name; with multiple services, this field tells their logs apart -->
     <customFields>{"appname": "${springApplicationName}"}</customFields>
   </encoder>
 </appender>

 <!-- Level: FATAL 0  ERROR 3  WARN 4  INFO 6  DEBUG 7 -->
 <root level="INFO">
   <appender-ref ref="console"/>
   <appender-ref ref="debug"/>
   <appender-ref ref="error"/>
   <appender-ref ref="logstash" />
 </root>
</configuration>
3. Add the following to application.yml:
# Logstash server address
logstash:
  host: ${log.host}
  # Logstash port
  port: ${log.port}
4. Add the following to the shared Nacos configuration file (common.yml) and publish it to take effect (skip if it is already there):
########### Logstash address ##############
log:
  host: 124.222.107.134
  port: 9600
########### Logging config ##############
logging:
  config: classpath:logback-nacos.xml
  #config: /software/service/config/logback-nacos.xml  # use an absolute path when several projects share one config file
5. Build the package, or commit the code and let the CI pipeline deploy it.
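After a service comes up, you can confirm its log events are flowing end to end before opening Kibana. A minimal check against Elasticsearch (replace your-service-name with the service's spring.application.name; the appname field and index name come from the logback and Logstash configs above):
# Count today's documents tagged with this service's appname
curl '127.0.0.1:9200/logstash-*/_count?q=appname:your-service-name&pretty'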

V. Kibana Front-End Configuration

1. Open Kibana at http://<server-ip>:5601

2. Open Stack Monitoring

[Screenshot: Kibana Stack Monitoring overview]

3. Create the index pattern (the pattern must match the index name set in the Logstash elasticsearch output above, i.e. logstash-*)

[Screenshots: creating the logstash-* index pattern in Kibana]

4. The final search view

[Screenshot: searching application logs in Kibana Discover]

5. From here, create more detailed index patterns and filters as your business needs dictate.