ELK Series Part 2: Installing ELK 6.5.3 + Filebeat on CentOS 7 with the supervisor daemon, for log parsing and customization

This article covers using ELK to parse logs, from installing and deploying ELK through defining parsing rules and visualizing the results; it also doubles as my own working notes. The contents are as follows:

Contents

1. Parsing goals and environment

1) What gets parsed and indexed

2) Environment

2. Deploy ELK and verify startup

Starting Elasticsearch

Configuring and starting Logstash

Configuring and starting Kibana

3. Deploy Filebeat to collect logs and ship them to Logstash

4. Configure Kibana

5. Run ELK and Filebeat under the supervisor daemon


1. Parsing goals and environment

1) What gets parsed and indexed

Application runtime logs, application performance logs, and other logs (nginx, Kong, etc.); the log format is illustrated by the sample lines in the Logstash section below.

2) Environment

Operating system for ELK and Filebeat: CentOS 7.4

ELK and Filebeat version: 6.5.3

The packages can be downloaded from the official Elastic site, or from Baidu Cloud:

elasticsearch-6.5.3.tar.gz (Baidu Pan share)

kibana-6.5.3-linux-x86_64.tar.gz (Baidu Pan share)

logstash-6.5.3.tar.gz (Baidu Pan share)

filebeat-6.5.3-linux-x86_64.tar.gz (Baidu Pan share)

2. Deploy ELK and verify startup

System settings:

#1. Set system limits
echo '* soft nofile 100001' >> /etc/security/limits.conf
echo '* hard nofile 100002' >> /etc/security/limits.conf
echo '* soft nproc 100001' >> /etc/security/limits.conf
echo '* hard nproc 100002' >> /etc/security/limits.conf
# Parameter reference
# soft nofile: maximum number of open file descriptors (soft limit)
# hard nofile: maximum number of open file descriptors (hard limit)
# soft nproc:  maximum number of processes per user (soft limit)
# hard nproc:  maximum number of processes per user (hard limit)

#2. Virtual memory setting
echo 'vm.max_map_count=655360' >> /etc/sysctl.conf

#3. Reload the sysctl settings
sysctl -p

Note: if these settings are missing, Elasticsearch fails to start with: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
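The limits.conf changes only take effect for new login sessions. After logging in again as the user that will run Elasticsearch, a quick sanity check might look like this (values follow the settings above):

ulimit -Sn                 # soft limit on open files, should report 100001
ulimit -Hn                 # hard limit on open files, should report 100002
ulimit -u                  # max user processes
sysctl vm.max_map_count    # should report 655360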

Upload the packages to the server (here /root/soft/elk) and extract them:

[root@i-uzt2a3oi elk]# tar -zxvf filebeat-6.5.3-linux-x86_64.tar.gz 

[root@i-uzt2a3oi elk]# tar -zxvf elasticsearch-6.5.3.tar.gz

[root@i-uzt2a3oi elk]# tar -zxvf kibana-6.5.3-linux-x86_64.tar.gz

[root@i-uzt2a3oi elk]# tar -zxvf logstash-6.5.3.tar.gz

Create an elasticsearch user and group; Elasticsearch cannot run as root:

[root@i-uzt2a3oi elk]# groupadd elasticsearch

[root@i-uzt2a3oi elk]# useradd elasticsearch -g elasticsearch

[root@i-uzt2a3oi elk]# chown -R elasticsearch.elasticsearch /root/soft/elk/elasticsearch-6.5.3

[root@i-uzt2a3oi elk]# ll
total 480076
drwxr-xr-x  8 elasticsearch elasticsearch      4096 Dec  7 04:17 elasticsearch-6.5.3

Create a /data directory and move the extracted packages there, because the elasticsearch user cannot access files under /root:

[elasticsearch@i-uzt2a3oi elk]$ mkdir /data

[root@i-uzt2a3oi elk]# mv /root/soft/elk/elasticsearch-6.5.3 /data/

[root@i-uzt2a3oi elk]# mv /root/soft/elk/filebeat-6.5.3-linux-x86_64 /data/
[root@i-uzt2a3oi elk]# mv /root/soft/elk/kibana-6.5.3-linux-x86_64 /data/

[root@i-uzt2a3oi elk]# mv /root/soft/elk/logstash-6.5.3 /data/

[root@i-uzt2a3oi elk]# cd /data/
[root@i-uzt2a3oi data]# ls
elasticsearch-6.5.3  filebeat-6.5.3-linux-x86_64  kibana-6.5.3-linux-x86_64  logstash-6.5.3
[root@i-uzt2a3oi data]# ll 
total 16
drwxr-xr-x  8 elasticsearch elasticsearch 4096 Dec  7 04:17 elasticsearch-6.5.3
drwxr-xr-x  5 root          root          4096 Dec 17 09:37 filebeat-6.5.3-linux-x86_64
drwxrwxr-x 11 elasticsearch elasticsearch 4096 Dec  7 04:33 kibana-6.5.3-linux-x86_64
drwxr-xr-x 12 root          root          4096 Dec 17 09:36 logstash-6.5.3

Grant permissions:

[root@i-uzt2a3oi data]# chmod 777 /data/

Starting Elasticsearch

Switch to the elasticsearch user:

[root@elk-server ELK]# su elasticsearch

Start it (create the run-log directory /data/elkrunlog first if it does not exist):

[elasticsearch@i-uzt2a3oi elasticsearch-6.5.3]$ nohup /data/elasticsearch-6.5.3/bin/elasticsearch -d >/data/elkrunlog/elasticsearch.log 2>&1 &

Run curl to check that the service responds: curl 127.0.0.1:9200. The response looks like this:

[elasticsearch@i-uzt2a3oi data]$ curl 127.0.0.1:9200
{
  "name" : "pxzm9uo",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "eG0xrGPPSsmL-WJ-8HR-Hw",
  "version" : {
    "number" : "6.5.3",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "159a78a",
    "build_date" : "2018-12-06T20:11:28.826501Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Configuring and starting Logstash

1. Create a file named default.conf in the logstash-6.5.3 directory; its contents are shown below.

Exit the elasticsearch user first:

[elasticsearch@i-uzt2a3oi logstash-6.5.3]$ exit

[elasticsearch@i-uzt2a3oi data]$ cd /data/logstash-6.5.3/

[root@i-uzt2a3oi logstash-6.5.3]# vim default.conf

This configuration both filters and parses the data: only events that carry the fields agreed here are parsed by Logstash and written to Elasticsearch. The logtype field is the contract between Filebeat and Logstash; the Filebeat side of it is covered in detail later.

Grok pattern syntax is documented on the Elastic website. DATA matches content of any format, which keeps the pattern flexible but makes it relatively slow.

Sample performance log line: 2018/12/18 11:23:44.907[|]535551a4-0274-11e9-a863-dca904930e90[|]session[|]GET[|]/api/common/v1/main/info[|]192.168.1.101[|]33086[|]192.168.1.100[|]1000[|]120[|]

Such lines are parsed by the block under if [logtype] == "otosaas_app_xingneng".

The nginx/Kong logs are parsed by the block under if [logtype] == "otosaas_konglog".

The index naming rule is index => "otosaas_app_xingneng-%{+YYYY.MM.dd}".

# Listen on port 5044 for input from Filebeat
input {
    beats {
        port => "5044"
    }
}
# Filter and parse the data
filter {
  if [logtype] == "otosaas_app_xingneng" {
    grok {
        match => { "message" => "%{DATA:logDate}\[\|\]%{DATA:requestId}\[\|\]%{DATA:appName}\[\|\]%{DATA:requestMethod}\[\|\]%{DATA:apiName}\[\|\]%{DATA:hostIP}\[\|\]%{DATA:hostPort}\[\|\]%{DATA:sourceIP}\[\|\]%{DATA:costTime}\[\|\]%{DATA:bizCode}\[\|\]" }
    }
    geoip {
        source => "clientip"
    }
  }

  if [logtype] == "otosaas_app_yunxing" {
    grok {
        match => { "message" => "%{DATA:logDate}\[\|\]%{DATA:requestId}\[\|\]%{DATA:appName}\[\|\]%{DATA:apiName}\[\|\]%{DATA:hostIP}\[\|\]%{DATA:sourceIP}\[\|\]%{DATA:requestParams}\[\|\]%{DATA:logType}\[\|\]%{DATA:logContent}\[\|\]" }
    }
    geoip {
        source => "clientip"
    }
  }

  if [logtype] == "otosaas_konglog" {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
        source => "clientip"
    }
  }
}
# Output to port 9200 on this host, where Elasticsearch is listening
output {
  if [logtype] == "otosaas_app_xingneng" {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "otosaas_app_xingneng-%{+YYYY.MM.dd}"
    }
  }
  if [logtype] == "otosaas_app_yunxing" {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "otosaas_app_yunxing-%{+YYYY.MM.dd}"
    }
  }
  if [logtype] == "otosaas_konglog" {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "otosaas_konglog-%{+YYYY.MM.dd}"
    }
  }
}

In short: events received on port 5044 are parsed and forwarded to Elasticsearch on port 9200.
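Before wiring Filebeat in, the grok pattern can be sanity-checked against the sample performance log line above with a throwaway stdin pipeline (a minimal sketch; the path assumes the layout used in this article):

/data/logstash-6.5.3/bin/logstash -e '
input { stdin { } }
filter {
  grok {
    match => { "message" => "%{DATA:logDate}\[\|\]%{DATA:requestId}\[\|\]%{DATA:appName}\[\|\]%{DATA:requestMethod}\[\|\]%{DATA:apiName}\[\|\]%{DATA:hostIP}\[\|\]%{DATA:hostPort}\[\|\]%{DATA:sourceIP}\[\|\]%{DATA:costTime}\[\|\]%{DATA:bizCode}\[\|\]" }
  }
}
output { stdout { codec => rubydebug } }'

Paste the sample line into stdin and the parsed fields (logDate, requestId, costTime, and so on) should be printed back.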

2. Start the Logstash service in the background:

[root@i-uzt2a3oi ~]# nohup /data/logstash-6.5.3/bin/logstash -f /data/logstash-6.5.3/default.conf >/data/elkrunlog/logstash.log 2>&1 &
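If Logstash fails to come up, the pipeline file can be syntax-checked with Logstash's own test flag before retrying:

/data/logstash-6.5.3/bin/logstash -f /data/logstash-6.5.3/default.conf --config.test_and_exit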

Configuring and starting Kibana

1. Edit the Kibana configuration file:
[root@i-uzt2a3oi elkrunlog]# vim /data/kibana-6.5.3-linux-x86_64/config/kibana.yml 

Modify the following fields:

server.host: "0.0.0.0"

pid.file: /var/run/kibana.pid
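For reference, the relevant part of kibana.yml after the edit might look like the sketch below; elasticsearch.url (the Kibana 6.x setting name) is left at its default because Elasticsearch runs on the same host:

server.host: "0.0.0.0"
pid.file: /var/run/kibana.pid
#elasticsearch.url: "http://localhost:9200"   # default; change only if Elasticsearch is remote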

2. Start the service:

[root@i-uzt2a3oi elkrunlog]# nohup /data/kibana-6.5.3-linux-x86_64/bin/kibana > /data/elkrunlog/kibana.log 2>&1 &

Open http://192.168.1.78:5601 in a browser; the Kibana landing page should appear.
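If no browser is at hand, Kibana's status endpoint can be probed from the server itself; it should return JSON describing the server state:

curl -s http://127.0.0.1:5601/api/status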

3. Deploy Filebeat to collect logs and ship them to Logstash

Log in to the client machine that produces the logs, upload filebeat-6.5.3-linux-x86_64.tar.gz to /data, and install it:

mkdir /data

cd /data

tar -zxvf filebeat-6.5.3-linux-x86_64.tar.gz

cd /data/filebeat-6.5.3-linux-x86_64

vim filebeat.yml

Edit the configuration as follows. The key setting is the logtype field, which is what Logstash matches against on its side:

  fields:
     logtype: otosaas_app_xingneng

The full filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  enabled: true

  paths:
     -  /var/app/logs/appName/performanceLog/*.log   
  fields_under_root: true 
  fields:
     logtype: otosaas_app_xingneng

- type: log

  enabled: true

  paths:
     - /var/app/logs/appName/bizLog/*.log
  fields_under_root: true
  fields:
     logtype: otosaas_app_yunxing

- type: log

  enabled: true

  paths:
     - /data/kong/logs/access.log
     - /data/kong/logs/error.log   
  fields_under_root: true
  fields:
     logtype: otosaas_konglog

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
 # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.1.78:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

Start Filebeat:

nohup /data/filebeat-6.5.3-linux-x86_64/filebeat -e -c /data/filebeat-6.5.3-linux-x86_64/filebeat.yml  -d "publish" >/dev/null 2>/data/filebeat-6.5.3-linux-x86_64/filebeat.log & 
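As a quick end-to-end smoke test, append the sample performance log line to one of the watched files (test.log is a hypothetical file name matching the *.log pattern configured above), then check on the ELK server that the daily index appears:

echo '2018/12/18 11:23:44.907[|]535551a4-0274-11e9-a863-dca904930e90[|]session[|]GET[|]/api/common/v1/main/info[|]192.168.1.101[|]33086[|]192.168.1.100[|]1000[|]120[|]' >> /var/app/logs/appName/performanceLog/test.log

# On the ELK server:
curl 'http://127.0.0.1:9200/_cat/indices?v' | grep otosaas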

4. Configure Kibana

Open http://192.168.1.78:5601 in a browser.

Create the index patterns.
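One index pattern is needed for each index family produced by the Logstash output; in Kibana, go to Management → Index Patterns and create, for example:

otosaas_app_xingneng-*
otosaas_app_yunxing-*
otosaas_konglog-*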

5. Run ELK and Filebeat under the supervisor daemon

In /etc/supervisord.conf, the minfds and minprocs values shown below must be raised from their defaults; otherwise Elasticsearch fails to start under supervisor with: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

[supervisord]
logfile=/var/log/supervisor/supervisord.log  ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB       ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10          ; (num of main logfile rotation backups;default 10)
loglevel=info               ; (log level;default info; others: debug,warn,trace)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false              ; (start in foreground if true;default false)
minfds=500000                 ; (min. avail startup file descriptors;default 1024)
minprocs=500000               ; (min. avail process descriptors;default 200)
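Once the services are brought up under supervisord (below), whether minfds/minprocs actually took effect can be verified from /proc; a quick check, matching the Elasticsearch process by its main class:

cat /proc/$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)/limits | grep -E 'Max open files|Max processes'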

1. First, install supervisor on the ELK server:

yum install epel-release -y
yum install -y supervisor

Edit the configuration file:

vim /etc/supervisord.conf

# Enable the web management UI
[inet_http_server]         ; inet (TCP) server disabled by default
port=0.0.0.0:9001        ; (ip_address:port specifier, *:port for all iface)
username=user              ; (default is no username (open server))
password=123               ; (default is no password (open server))

Start and enable the supervisord service:

systemctl start supervisord

systemctl enable supervisord

Create the program configuration files:

[root@i-uzt2a3oi supervisord.d]# pwd
/etc/supervisord.d

[root@i-uzt2a3oi supervisord.d]# vim elasticsearch.ini 
Its contents:

[program:elasticsearch]
command=/data/elasticsearch-6.5.3/bin/elasticsearch
directory=/data/elasticsearch-6.5.3/bin
user=elasticsearch
redirect_stderr=true
stdout_logfile=/data/elkrunlog/elasticsearch.log
autostart=true
autorestart=true
;startsecs=10000
;stopwaitsecs=600
killasgroup=true
environment=JAVA_HOME=/usr/local/jdk1.8.0_181

Then create kibana.ini and logstash.ini.

Their contents:

[program:kibana]
command= /data/kibana-6.5.3-linux-x86_64/bin/kibana
directory=/data/kibana-6.5.3-linux-x86_64/bin
redirect_stderr=true
stdout_logfile=/data/elkrunlog/kibana.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
killasgroup=true
environment=JAVA_HOME=/usr/local/jdk1.8.0_181
[program:logstash]
;command= /data/logstash-6.5.3/bin/logstash --debug  -f /data/logstash-6.5.3/default.conf
command= /data/logstash-6.5.3/bin/logstash  -f /data/logstash-6.5.3/default.conf
directory=/data/logstash-6.5.3/bin
redirect_stderr=true
stdout_logfile=/data/elkrunlog/logstash.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
killasgroup=true
environment=JAVA_HOME=/usr/local/jdk1.8.0_181

[root@i-uzt2a3oi supervisord.d]# ls
elasticsearch.ini  kibana.ini  logstash.ini

First stop the processes that were started manually earlier: find their PIDs with ps -ef | grep elasticsearch (likewise for logstash and kibana), then kill them.

Then apply the new program files:

[root@i-uzt2a3oi supervisord.d]# supervisorctl update

Supervisor starts the services automatically:

[root@i-uzt2a3oi supervisord.d]# supervisorctl status
elasticsearch                    RUNNING   pid 814, uptime 2:47:19
kibana                           RUNNING   pid 816, uptime 2:47:19
logstash                         RUNNING   pid 815, uptime 2:47:19
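Individual services can now be managed through supervisorctl as well; a few standard commands for reference:

supervisorctl restart logstash        # restart one program
supervisorctl stop kibana             # stop one program
supervisorctl start kibana            # start it again
supervisorctl tail -f elasticsearch   # follow its stdout log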

Then do the same for Filebeat on the Filebeat host (install supervisor there as well); its program file is as follows:

vim /etc/supervisord.d/filebeat.ini

[program:filebeat]
command= /data/filebeat-6.5.3-linux-x86_64/filebeat -e -c /data/filebeat-6.5.3-linux-x86_64/filebeat.yml  -d "publish"
directory=/data/filebeat-6.5.3-linux-x86_64
redirect_stderr=true
stdout_logfile=/data/filebeat-6.5.3-linux-x86_64/filebeat.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
killasgroup=true
environment=JAVA_HOME=/usr/local/jdk1.8.0_181

supervisorctl update

[root@backup filebeat-6.5.3-linux-x86_64]# supervisorctl status
filebeat                         RUNNING   pid 14555, uptime 4:03:53

Services can also be started and stopped through the supervisor web UI.

Open http://192.168.1.78:9001/ and log in with the username and password configured above.

That's it, all done.
