Analyzing MySQL Slow Logs with ELK (work in progress)

Environment

ES node1:192.168.237.25

ES node2:192.168.237.26

ES node3:192.168.237.27

Redis / Logstash / Kibana: 192.168.237.30

MySQL Node:192.168.237.9

 

1. Filebeat

To collect the database slow log, Filebeat must be installed on the server where the log lives.

Log in to 192.168.237.9.
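The steps below assume the slow log is already being written to /data/mysql/slow.log. If it is not enabled yet, a minimal my.cnf sketch would look like this (standard MySQL 5.7 variable names; the 1-second threshold is an example, not part of the original setup):

```ini
[mysqld]
slow_query_log      = ON
slow_query_log_file = /data/mysql/slow.log  # must match the Filebeat paths entry
long_query_time     = 1                     # seconds; example threshold
```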

1.1 Install Filebeat

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.2-x86_64.rpm

sudo rpm -vi filebeat-7.6.2-x86_64.rpm

rpm -qc filebeat  # show the configuration file paths

 

1.2 Configure Filebeat

vim /etc/filebeat/filebeat.yml

Modify enabled, paths, and output.redis:

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/mysql/slow.log  # log source

#------------------------------- Redis output ----------------------------------
output.redis:
  hosts: ["192.168.237.30:6379"]  # Redis address
  key: "mysql-slowlog"  # Redis list key
  password: 123456  # Redis password
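Before starting the service, the configuration and the Redis connection can be validated with Filebeat's built-in test subcommands:

```shell
# Validate the YAML configuration
filebeat test config -c /etc/filebeat/filebeat.yml
# Attempt a real connection to the configured output (192.168.237.30:6379)
filebeat test output -c /etc/filebeat/filebeat.yml
```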

 

2. Redis

2.1 Install Redis

Log in to 192.168.237.30.

wget http://download.redis.io/releases/redis-5.0.8.tar.gz

yum install gcc

tar zxvf redis-5.0.8.tar.gz

cd redis-5.0.8

make

make install PREFIX=/usr/local/redis

mkdir /usr/local/redis/etc

cp redis.conf /usr/local/redis/etc

Edit /usr/local/redis/etc/redis.conf:

bind 0.0.0.0  # allow remote connections
port 6379
timeout 120  # idle connection timeout (seconds)
daemonize yes  # run as a background daemon
dir /data/redis  # directory for dump.rdb and appendonly.aof (create it first: mkdir -p /data/redis)
requirepass 123456  # Redis password
appendonly yes  # enable AOF persistence

ln -s /usr/local/redis/bin/redis-server /usr/local/sbin/

ln -s /usr/local/redis/bin/redis-cli /usr/local/sbin/

Open the default port 6379 in the firewall.

Start Redis (if the configuration file is updated, running the command again reloads it; note that a password change still requires killing the redis process and restarting):
redis-server /usr/local/redis/etc/redis.conf

Log in and authenticate:

# redis-cli

127.0.0.1:6379> AUTH 123456

 

2.2 Verify connectivity between Filebeat and Redis

Back on the MySQL server (192.168.237.9):

Step 1: systemctl start filebeat

If it fails to start, inspect the output of cat /var/log/messages | tail -n 200 and adjust the Filebeat or Redis configuration accordingly.

Step 2: run redis-cli -h 192.168.237.30 and AUTH to confirm you can log in (otherwise recheck the Redis configuration), then execute keys * and verify the output includes "mysql-slowlog".
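Once events flow, the Redis list can be inspected directly; LLEN and LRANGE are standard Redis commands (note that the -a password flag prints a warning on newer redis-cli versions):

```shell
# Queue depth: grows while Filebeat produces, drains once Logstash consumes
redis-cli -h 192.168.237.30 -a 123456 LLEN mysql-slowlog
# Peek at one raw JSON event without removing it
redis-cli -h 192.168.237.30 -a 123456 LRANGE mysql-slowlog 0 0
```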

 

3. Logstash

3.1 Install

Install a JDK; see Section 4.1 for instructions.

Install Logstash.

Download: https://www.elastic.co/cn/downloads/logstash

mkdir /usr/local/logstash

tar zxvf logstash-7.6.2.tar.gz -C /usr/local/logstash --strip-components 1

Test:

[root@ceshi23730 config]# /usr/local/logstash/bin/logstash  -e 'input { stdin { } } output { stdout {} }'
Sending Logstash logs to /usr/local/logstash/logs which is now configured via log4j2.properties
[2020-04-17T14:42:46,293][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-04-17T14:42:46,534][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.2"}
[2020-04-17T14:42:48,362][INFO ][org.reflections.Reflections] Reflections took 34 ms to scan 1 urls, producing 20 keys and 40 values 
[2020-04-17T14:42:49,549][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-04-17T14:42:49,569][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["config string"], :thread=>"#<Thread:0x7531b408 run>"}
[2020-04-17T14:42:50,290][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2020-04-17T14:42:50,437][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-04-17T14:42:50,692][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Enter: test
/usr/local/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
    "@timestamp" => 2020-04-17T06:43:08.196Z,
       "message" => "test",
      "@version" => "1",
          "host" => "ceshi23730"
}

 

3.2 Configure

/usr/local/logstash/config/logstash-simple.conf

input {
  redis {
    host => "192.168.237.30"
    port => 6379
    password => "123456"
    db => 2  # must match the database Filebeat writes to (output.redis defaults to db 0)
    key => "mysql-slowlog"
    data_type => "list"
    batch_count => 1
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.237.25:9200"]
    #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    index => "logstash-%{+YYYY.MM.dd}"  # note: a stray leading "%" here ends up verbatim in the index name, producing the "%logstash-…" indices seen in the search output below
    #user => "elastic"
    #password => "changeme"
  }
}
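Before starting, the pipeline file can be syntax-checked with Logstash's standard --config.test_and_exit flag, and it helps to know what the %{+YYYY.MM.dd} sprintf produces; the date command below only illustrates the index naming:

```shell
# Syntax-check the pipeline without starting it:
#   /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-simple.conf --config.test_and_exit
# %{+YYYY.MM.dd} expands to the event's UTC date, so today's index name
# would look like this:
idx="logstash-$(date -u +%Y.%m.%d)"
echo "$idx"
```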


Start:

nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-simple.conf >/dev/null 2>&1 &

Stop:

kill -TERM {logstash_pid}

 

4. ES

Install the JDK and Elasticsearch on each of the three ES nodes.

4.1 JDK

Run rpm -qa | grep openjdk to find and remove any existing OpenJDK packages.

Using the Alibaba Dragonwell JDK as an example:

wget https://github.com/alibaba/dragonwell8/releases/download/dragonwell-8.3.3-GA/Alibaba_Dragonwell_8.3.3-GA_Linux_x64.tar.gz

mkdir /usr/local/Alibaba_Dragonwell_8.3.3

tar zxvf Alibaba_Dragonwell_8.3.3-GA_Linux_x64.tar.gz -C /usr/local/Alibaba_Dragonwell_8.3.3 --strip-components 1

chown -R root:root /usr/local/Alibaba_Dragonwell_8.3.3/

Set the environment variables:

export JAVA_HOME=/usr/local/Alibaba_Dragonwell_8.3.3

export PATH=${JAVA_HOME}/bin:$PATH

Run java -version to verify; once it looks correct, append the two lines above to /etc/profile and reload:

source /etc/profile

 

4.2 Install Elasticsearch (all 3 nodes)

Download: https://www.elastic.co/cn/downloads/elasticsearch

mkdir /usr/local/elasticsearch

tar zxvf elasticsearch-7.6.2-linux-x86_64.tar.gz -C /usr/local/elasticsearch --strip-components 1

groupadd elasticsearch

useradd elasticsearch -g elasticsearch

passwd elasticsearch

chown -R elasticsearch:elasticsearch /usr/local/elasticsearch/

mkdir /data/elasticsearch-data

mkdir /data/elasticsearch-logs

chown -R elasticsearch:elasticsearch /data/elasticsearch*

 

4.3 Configure

4.3.1 System settings

(1) Edit /etc/security/limits.conf:
* soft nofile 65535
* hard nofile 65535
* soft nproc 32000
* hard nproc 32000

(2) Edit /etc/sysctl.conf:
vm.max_map_count=655300

Reload: sysctl -p

Once these system settings are in place, reboot the server and confirm open files and max user processes with ulimit -a. This generally prevents the following bootstrap errors when starting ES later:

[elasticsearch@ceshi23725 elasticsearch]$ ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /data/elasticsearch-logs/es-cluster.log
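After the reboot, the two bootstrap checks above can be verified directly from a shell as the elasticsearch user (reading /proc avoids depending on the sysctl binary):

```shell
# Open-files limit for the current user; expect 65535 per limits.conf
ulimit -n
# Kernel max_map_count; expect 655300 per sysctl.conf (Linux only)
cat /proc/sys/vm/max_map_count
```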
 

4.3.2 Elasticsearch master node configuration

Edit the configuration file /usr/local/elasticsearch/config/elasticsearch.yml:

cluster.name: es-cluster
node.name: node-1

# master-eligible node
node.master: true
node.data: false

path.data: /data/elasticsearch-data
path.logs: /data/elasticsearch-logs
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300  # inter-node transport port

http.cors.enabled: true  # enable cross-origin requests
http.cors.allow-origin: "*"  # allow all origins

cluster.initial_master_nodes: ["192.168.237.25"]  # master-eligible nodes used for the initial election; multiple entries may be listed, comma-separated

 

4.3.3 Elasticsearch data node configuration

Edit the configuration file /usr/local/elasticsearch/config/elasticsearch.yml:

cluster.name: es-cluster
node.name: node-2

# data node
node.master: false
node.data: true

path.data: /data/elasticsearch-data
path.logs: /data/elasticsearch-logs
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300  # inter-node transport port

http.cors.enabled: true  # enable cross-origin requests
http.cors.allow-origin: "*"  # allow all origins

discovery.seed_hosts: ["192.168.237.25:9300"]  # seed hosts for discovering the master-eligible nodes

 

4.3.4 Start

(1) Start the master node

su elasticsearch

cd /usr/local/elasticsearch

./bin/elasticsearch -d -p /data/elasticsearch-data/elasticsearch.pid

(2) Start the two data nodes the same way.

(3) Verify

Basic cluster information:

[root@ceshi23709 ~]# curl -XGET http://192.168.237.25:9200
{
  "name" : "node-1",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "cU0D3TDWQT--JS2WJ_IEeg",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Cluster health:

[root@ceshi23709 ~]# curl -XGET http://192.168.237.25:9200/_cluster/health?pretty
{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
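Two more _cat endpoints are handy at this point; both are standard Elasticsearch APIs:

```shell
# One line per node, showing roles (m = master-eligible, d = data) and the elected master
curl 'http://192.168.237.25:9200/_cat/nodes?v'
# One line per index with health, document count, and size
curl 'http://192.168.237.25:9200/_cat/indices?v'
```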

(4) Stop

pkill -F /data/elasticsearch-data/elasticsearch.pid

 

4.3.5 Test end-to-end pipeline connectivity

[root@ceshi23730 redis]# curl http://192.168.237.25:9200/_search?pretty
{
  "took" : 9,
  "timed_out" : false,
  "_shards" : {
    "total" : 4,
    "successful" : 4,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 99,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "%logstash-2020.04.16",
        "_type" : "_doc",
        "_id" : "Gf2Mh3EBQtJt23-0Q7E-",
        "_score" : 1.0,
        "_source" : {
          "message" : "./bin/mysqld, Version: 5.7.28-31-31.41-log (Percona XtraDB Cluster binary (GPL) 5.7.28-31.41, Revision ef2fa88, wsrep_31.41). started with:",
          "log" : {
            "offset" : 0,
            "file" : {
              "path" : "/data/mysql/slow.log"
            }
          },
          "@version" : "1",
          "host" : {
            "containerized" : false,
            "id" : "5188e09f2c0d47b2ad736027bcd0f083",
            "name" : "ceshi23709",
            "architecture" : "x86_64",
            "os" : {
              "platform" : "centos",
              "version" : "7 (Core)",
              "kernel" : "3.10.0-862.el7.x86_64",
              "family" : "redhat",
              "codename" : "Core",
              "name" : "CentOS Linux"
            },
            "hostname" : "ceshi23709"
          },
          "input" : {
            "type" : "log"
          },
          "ecs" : {
            "version" : "1.4.0"
          },
          "@timestamp" : "2020-04-16T06:35:56.108Z",
          "agent" : {
            "version" : "7.6.2",
            "ephemeral_id" : "70273e71-6179-409c-b912-9fd46a427367",
            "type" : "filebeat",
            "id" : "4ba9aad7-7b72-49d0-86d3-d8f0106c0a71",
            "hostname" : "ceshi23709"
          }
        }
      },
      {
        "_index" : "%logstash-2020.04.16",
        "_type" : "_doc",
        "_id" : "L_2Mh3EBQtJt23-0Q7E-",
        "_score" : 1.0,
        "_source" : {
          "ecs" : {
            "version" : "1.4.0"
          },
          "log" : {
            "offset" : 192,
            "file" : {
              "path" : "/data/mysql/slow.log"
            }
          },
          "@version" : "1",
          "host" : {
            "containerized" : false,
            "id" : "5188e09f2c0d47b2ad736027bcd0f083",
            "os" : {
              "version" : "7 (Core)",
              "kernel" : "3.10.0-862.el7.x86_64",
              "platform" : "centos",
              "family" : "redhat",
              "name" : "CentOS Linux",
              "codename" : "Core"
            },
            "name" : "ceshi23709",
            "architecture" : "x86_64",
            "hostname" : "ceshi23709"
          },
          "input" : {
            "type" : "log"
          },
          "message" : "Time                 Id Command    Argument",
          "@timestamp" : "2020-04-16T06:35:56.108Z",
          "agent" : {
            "ephemeral_id" : "70273e71-6179-409c-b912-9fd46a427367",
            "version" : "7.6.2",
            "type" : "filebeat",
            "id" : "4ba9aad7-7b72-49d0-86d3-d8f0106c0a71",
            "hostname" : "ceshi23709"
          }
        }
      },
      ...(remaining hits truncated)
 

5. Kibana

5.1 Install

Download: https://www.elastic.co/cn/downloads/kibana

mkdir /usr/local/kibana

tar zxvf kibana-7.6.2-linux-x86_64.tar.gz -C /usr/local/kibana --strip-components 1

5.2 Configure

/usr/local/kibana/config/kibana.yml

server.port: 5601
server.host: "192.168.237.30"
server.name: "kibana"
elasticsearch.hosts: ["http://192.168.237.25:9200"]  # ES master node
kibana.index: ".kibana"
pid.file: /usr/local/kibana/data/kibana.pid
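After startup, Kibana's health can also be checked from the command line; /api/status is a standard Kibana endpoint:

```shell
# Returns a JSON document; an overall state of "green" indicates a healthy instance
curl http://192.168.237.30:5601/api/status
```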

 

5.3 Start

nohup /usr/local/kibana/bin/kibana --allow-root >> /usr/local/kibana/data/kibana.log &

Visit http://192.168.237.30:5601/status

Stop: pkill -F /usr/local/kibana/data/kibana.pid

 

References

If you'd rather not register with Oracle to download the official JDK, Huawei's mirror is an alternative (older JDK versions).

Alibaba Dragonwell8 JDK on GitHub

Elasticsearch APIs

Building a centralized log analysis platform with ELK (Elasticsearch + Logstash + Kibana)

Deep dive: multi-node role configuration in Elasticsearch 5.x clusters
