Building and Using a Distributed ELK Logging System

Components of ELK

ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch stores and indexes the log data; Logstash collects log files and writes them into the Elasticsearch cluster; Kibana is the client for quickly querying logs from the Elasticsearch cluster.

Downloads

  1. Download Elasticsearch: https://www.elastic.co/cn/downloads/elasticsearch
  2. Download Node.js: https://nodejs.org/en/download/
  3. Download elasticsearch-head: https://github.com/mobz/elasticsearch-head
  4. Download the IK Chinese analyzer: https://github.com/medcl/elasticsearch-analysis-ik/releases
  5. Download Kibana: https://artifacts.elastic.co/downloads/kibana/kibana-7.5.2-linux-x86_64.tar.gz
  6. Download Logstash: https://artifacts.elastic.co/downloads/logstash/logstash-7.5.2.tar.gz
    PS: The versions of Elasticsearch, IK, Kibana, and Logstash must all match. I use 7.5.2 here. Elasticsearch releases come out very quickly; if you want to use the latest version, that is fine as long as all of these components share the same version.

Environment

Prepare three virtual machines:
192.168.200.131
192.168.200.140
192.168.200.142
All running 64-bit Linux:
Linux m200p131 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

To avoid port problems caused by the firewall, simply turn it off:
systemctl stop firewalld      # stop the firewall
systemctl disable firewalld   # keep it from starting at boot
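
To double-check that the firewall is really off (both commands are standard on CentOS 7):
systemctl status firewalld    # should report "inactive (dead)"
firewall-cmd --state          # should print "not running"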

Setting up the Elasticsearch cluster

Install the JDK beforehand; JDK installation is not covered here.

  1. Log in to the 192.168.200.131 VM and create the es user: groupadd es, then useradd es -g es

  2. Upload elasticsearch-7.5.2-linux-x86_64.tar.gz

  3. Extract it: tar -zxvf elasticsearch-7.5.2-linux-x86_64.tar.gz

  4. Move the extracted directory: mv elasticsearch-7.5.2 /usr/local/elasticsearch

  5. Configure the kernel memory settings: vim /etc/sysctl.conf

  6. Append vm.max_map_count=262144 as the last line, then run sysctl -p (or reboot) so it takes effect; verify with sysctl -a | grep vm.max_map_count

  7. vim /etc/security/limits.conf, add the following lines (the leading * is the limits.conf wildcard for all users), then reboot:

    * soft nofile 65535
    * hard nofile 65535
    * soft nproc 4096
    * hard nproc 4096
  8. vim /usr/local/elasticsearch/bin/elasticsearch-env and add JAVA_HOME="/usr/local/elasticsearch/jdk" as the first line. Recent Elasticsearch releases require JDK 11 or newer, and since Oracle's JDK requires a paid license for commercial use after Java 8, companies generally won't upgrade it, so point JAVA_HOME at the JDK bundled with Elasticsearch

  9. Edit vim /usr/local/elasticsearch/config/elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
# Cluster name; nodes that recognize the same name discover each other and form a cluster
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
# Node name, usually set to the hostname; no special restriction as long as each node's name is unique
node.name: m200p140
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
# Where Elasticsearch stores its data; create the data directory in advance
path.data: /usr/local/elasticsearch/data
#
# Path to log files:
# Where Elasticsearch writes its log files
path.logs: /usr/local/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# Set to 0.0.0.0 to allow remote connections
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
# REST API port; the default is fine
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
# Initial cluster members, i.e. the node.name of each node
cluster.initial_master_nodes: ["m200p131", "m200p140","m200p142"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 1
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
action.destructive_requires_name: true
# This node can hold data
node.data: true
# This node can be elected master
node.master: true
# IP addresses of the servers hosting the cluster nodes
discovery.zen.ping.unicast.hosts: ["192.168.200.131", "192.168.200.140","192.168.200.142"]
# Minimum number of master-eligible nodes required to form the cluster
discovery.zen.minimum_master_nodes: 2  # total nodes / 2 + 1
# Internal transport port; the default is fine
transport.tcp.port: 9300
# Allow cross-origin requests
http.cors.enabled: true

http.cors.allow-origin: "*"

http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
# Elasticsearch security settings
xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true

xpack.security.transport.ssl.verification_mode: certificate

xpack.security.transport.ssl.keystore.path: elastic-certificates.p12

xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

  • Go to the installation directory and generate the certificate: bin/elasticsearch-certutil cert -out config/elastic-certificates.p12 -pass "". Note that every node must use the same certificate: generate it once on one machine, then copy it into the config directory of each of the other nodes
  • Create an ik folder under /usr/local/elasticsearch/plugins, upload the IK analyzer package into it, extract it with unzip, and delete the zip file afterwards
  • A single node is now installed. It's best not to start it yet, to avoid the nodes failing to form a cluster later. The other nodes are installed the same way; just remember to change node.name in elasticsearch.yml
  • Grant ownership to the es user: chown -R es.es /usr/local/elasticsearch. Then start every node as the es user: switch with su es, and run /usr/local/elasticsearch/bin/elasticsearch -d to start Elasticsearch in the background
  • Run bin/elasticsearch-setup-passwords interactive to set passwords for the built-in accounts. This only needs to be done once, on one node. There are quite a few accounts, so be patient; passwords made of letters and digits work fine, while special characters may not. Once the passwords are set, the elastic superuser is usually the only account you need
  • Check the cluster nodes at http://192.168.200.131:9200/_cat/nodes?v (see the curl sketch after this list)
  • If the nodes fail to form a cluster, delete the data and logs directories on every node and restart them all
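
With X-Pack security enabled, the _cat endpoints require authentication, so the browser will prompt for the elastic account. A minimal check from the shell (curl -u elastic prompts for the password you set with elasticsearch-setup-passwords; any of the three node IPs works):

# List the cluster nodes; the elected master is marked with *
curl -u elastic 'http://192.168.200.131:9200/_cat/nodes?v'
# Cluster health; status should be green once all three nodes have joined
curl -u elastic 'http://192.168.200.131:9200/_cat/health?v'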

Installing Node.js

  • Upload node-v12.15.0-linux-x64.tar.xz and extract it:
    xz -d node-v12.15.0-linux-x64.tar.xz
    tar -xvf node-v12.15.0-linux-x64.tar

  • Rename the directory to node and move it to /usr/local

  • Configure the Node environment variables: vim /etc/profile (see the sketch after this list)

  • Run source /etc/profile so the variables take effect

  • Verify the installation with node -v and npm -v
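
The original screenshot of /etc/profile is not reproduced here; a minimal sketch of the two lines to append, assuming the extracted directory was renamed and moved to /usr/local/node as described above:

# Append at the end of /etc/profile
export NODE_HOME=/usr/local/node
export PATH=$PATH:$NODE_HOME/bin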

Installing the elasticsearch-head plugin

  • Upload elasticsearch-head-master.zip and extract it: unzip elasticsearch-head-master.zip. Then move it: mv elasticsearch-head-master /usr/local/elasticsearch-head

  • Enter the /usr/local/elasticsearch-head directory and install the dependencies:
    npm install -g grunt --registry=https://registry.npm.taobao.org
    npm install grunt --save
    npm install

  • Edit the Gruntfile.js file under elasticsearch-head: change the hostname in the connect block, around line 94:

connect: {
			server: {
				options: {
					hostname: '192.168.200.131',
					port: 9100,
					base: '.',
					keepalive: true
				}
			}
		}
  • Edit _site/app.js and change the http://localhost:9200 value to this machine's ES IP and port, around line 4374:
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.200.131:9200";
  • Start the head service in the background: cd /usr/local/elasticsearch-head, then run nohup npm run start &
  • elasticsearch-head only needs to be installed on one node; if you want it on every node, the steps are identical
  • Verify the installation at http://192.168.200.131:9100/?auth_user=elastic&auth_password=your-password
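
A quick smoke test from the shell, in case no browser is handy (9100 is the port configured in Gruntfile.js above):

# Should print the beginning of the head UI's HTML
curl -s http://192.168.200.131:9100 | head -n 5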

Installing Kibana

  • Upload kibana-7.5.2-linux-x86_64.tar.gz
  • Extract it: tar -zxvf kibana-7.5.2-linux-x86_64.tar.gz
  • Move the extracted directory to /usr/local/kibana
  • Grant ownership to the es user: chown -R es.es /usr/local/kibana
  • Enter the config directory and edit kibana.yml:
# Kibana is served by a back end server. This setting specifies the port to use.
# Port; the default is fine
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
# Allow remote connections
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: "/usr/local/kibana"

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
# Kibana server name; set it to whatever you like
server.name: "m200p131"

# The URLs of the Elasticsearch instances to use for all your queries.
# Addresses and ports of the Elasticsearch cluster nodes that Kibana connects to
elasticsearch.hosts: ["http://192.168.200.131:9200","http://192.168.200.140:9200","http://192.168.200.142:9200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
# Elasticsearch cluster username
elasticsearch.username: "elastic"
# Elasticsearch cluster password
elasticsearch.password: "your-password"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
# Display the UI in Simplified Chinese
i18n.locale: "zh-CN"

  • As the es user (su es), go to the bin directory (cd /usr/local/kibana/bin/) and start Kibana in the background with nohup ./kibana &
  • Open http://192.168.200.131:5601 in a browser
  • Kibana only needs to be installed on one node; if you want several, the steps are the same
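
If the page does not load, Kibana's status API is a quick way to check from the shell whether the server is up and can reach Elasticsearch:

# Returns JSON describing Kibana's state; look for an overall state of "green"
curl -s -u elastic:your-password http://192.168.200.131:5601/api/status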

Installing Logstash

  • Upload logstash-7.5.2.tar.gz
  • Extract it: tar -zxvf logstash-7.5.2.tar.gz
  • Move the extracted directory to /usr/local/logstash
  • Enter the config directory and copy the sample: cp logstash-sample.conf logstash.conf
  • Edit logstash.conf:
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  file {
    type => "liuke"    # type tag for these logs; a custom name used to tell different log sources apart
    path => "/opt/logs/*/*.log"   # location of the log files to collect
    start_position => "beginning"   # read files from the beginning
  }
}

filter {
  multiline {
    pattern => "^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}"   # merge lines that do not start with a timestamp into the previous event, so a stack trace is not split into separate entries
    negate => true
    what => "previous"
  }
  #grok {
    #match => [ "message", "%{DATA:timestamp} %{NOTSPACE:level} %{GREEDYDATA:message} " ]
  #}
}

output {
  if [type] == "liuke" {
    elasticsearch {
      hosts => ["http://192.168.200.140:9200"]  # address and port of an ES node; usually one of the cluster's slave nodes
      index => "liuke-%{+YYYY.MM.dd}"  # the index Kibana will search
      user => "elastic"   # ES cluster username
      password => "your-password"   # ES cluster password
    }
  }
}

  • Grant ownership to the es user: chown -R es.es /usr/local/logstash/
  • Enter the bin directory and install the multiline plugin: ./logstash-plugin install logstash-filter-multiline
  • As the es user, from the installation directory, start it with nohup ./bin/logstash -f ./config/logstash.conf &
  • If your application is deployed as a cluster, install Logstash on every server the application runs on; that is the only way to collect its distributed log files (a quick end-to-end check follows below)
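
A quick end-to-end check of the pipeline. The file name below is only an example; any file matching the /opt/logs/*/*.log pattern configured above will do:

# Optionally validate the config first
./bin/logstash -f ./config/logstash.conf --config.test_and_exit
# Append a line shaped like an application log entry
echo "2020-02-17 12:00:00 INFO hello from logstash" >> /opt/logs/app/test.log
# A few seconds after starting Logstash, today's index should appear in the cluster
curl -u elastic 'http://192.168.200.140:9200/_cat/indices/liuke-*?v'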

Periodically cleaning up ELK logs

  • Create the script file (touch es-index-clear.sh) and give it 777 permissions
  • Edit the script: vim es-index-clear.sh
#!/bin/bash
# Keep only 7 days of logs
LAST_DATE=`date -d '-7 day' +%Y.%m.%d`
# Delete the ES index from 7 days ago
curl -XDELETE http://elastic:your-password@192.168.200.131:9200/liuke-$LAST_DATE
  • Add a scheduled task with crontab -e:
0 2 * * * /opt/sh/es-index-clear.sh
  • Restart the cron daemon: systemctl restart crond
  • The five cron fields are minute, hour, day of month, month, and day of week; * means "every", */2 means "every second one", and 0 1 * * * means 01:00 every day
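
Before relying on the cron job, it is worth sanity-checking the date arithmetic and doing a dry run that lists the indices instead of deleting them:

# Prints the suffix of the index that will be deleted, e.g. 2020.02.10 if today is 2020.02.17
date -d '-7 day' +%Y.%m.%d
# List all liuke-* indices first, to see what a deletion would hit
curl -u elastic 'http://192.168.200.131:9200/_cat/indices/liuke-*?v'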

Quickly searching logs with ELK

  • Click the Management button in the Kibana UI

  • Click Index Patterns

  • Create an index pattern

  • Enter the index name, which is the index we configured earlier, and click Next

  • Pick the time filter field and click the Create button

  • Click the Discover button and switch to the index you want to watch

  • View the collected logs

  • Kibana's detailed filtering features are not covered here; if you're interested, explore them yourself. No amount of explanation from others beats sitting down and studying it properly. Thanks for reading, and if you spot any mistakes, please point them out in the comments!
