Building a log center with ELK to monitor Laravel and nginx logs


Background:

Collect server-side logs from multiple environments and projects into a central log store, get real-time insight into the log data, and view live logs for several environments and projects in one place. It is a real boost to development efficiency: no more logging into a server, cd-ing into the project directory and typing tail -f.

Questions that logs alone could not previously answer, such as:

How many PDOException-related errors occurred last week?
How does the number of Log::warning entries compare with last month?
List the Log::error entries between 2019-01-01 and 2019-05-12 in descending order.

are now all easy to answer.

I built this log center back in May and it has been running in production since, but I never wrote it up properly; this post is that write-up.

What it achieves:

Application logs are shipped to the log center at a configurable frequency, which speeds up operations and troubleshooting (especially when production runs behind a load balancer).

Architecture:

ELK (Elasticsearch, Logstash, Kibana), with redis as a buffer in between.

1. filebeat tails the log files and pushes entries into redis.

2. Logstash pops the entries from redis, parses and processes them, and forwards the result to Elasticsearch.

3. Kibana pulls data from Elasticsearch and displays it.
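End to end, an event flows like this (redis acts as a buffer queue between the shippers and logstash):

filebeat (one input per log source, on each app server)
  -> redis list (one key per target index)
    -> logstash pipeline (one conf.d file per key)
      -> elasticsearch index (same name as the redis key)
        -> kibana (index pattern in Discover)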

Client configuration (on the machine running the application):

On the client side you only need to install and configure Filebeat.

1. Download the build for your OS:
https://www.elastic.co/cn/downloads/beats/filebeat
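For example, for the Linux x86_64 tarball (7.1.1 is the version in the file dump at the end of this post; adjust to whatever you downloaded):

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.1.1-linux-x86_64.tar.gz
tar xzvf filebeat-7.1.1-linux-x86_64.tar.gz
cd filebeat-7.1.1-linux-x86_64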

2. Edit the filebeat.yml config file.
The input section:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/www/website/meeting.xxxxxxx.com.cn/storage/logs/*.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  multiline.timeout: 5s
  fields:
        index: 'production_meeting_logs'

- type: log
  enabled: true
  paths:
    - /data/logs/nginx/meeting_xxxxxx_com_cn_error.log
  fields:
        index: 'production_nginx_meeting_logs'

- type: log
  enabled: true
  paths:
    - /data/logs/nginx/xxxxxx-com-cn-error.log
  fields:
        index: 'production_nginx_m_logs'

- type: log
  enabled: true
  paths:
    - /data/logs/nginx/imgcenter_xxxxxx_com_cn_error.log
  fields:
        index: 'production_nginx_imgcenter_logs'
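For reference, a Laravel entry with a stack trace looks roughly like the (made-up) sample below. The trace lines do not start with a date, so with negate: true and match: after they are appended to the preceding dated line and the whole entry is shipped as one event:

[2019-05-12 10:23:45] production.ERROR: SQLSTATE[HY000] ... {"exception":"[object] (PDOException ...)"}
#0 /data/www/.../vendor/laravel/framework/src/Illuminate/Database/Connection.php(664): ...
#1 {main}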

The output section:

output.redis:
  hosts: ["你的redisIP:redis端口"]
  db: 0
  timeout: 5
  key: "%{[fields.index]:otherIndex}"
  password: "你的redis密码"

Only one output can be enabled in the config file. The Elasticsearch output is enabled by default in filebeat.yml; find it and comment it out.
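To confirm events are reaching redis, you can peek at the list from the server with redis-cli (key names are the fields.index values above; once logstash starts draining the key, the list may stay close to empty):

redis-cli -h your-redis-ip -p 6379 -a your-redis-password
LLEN production_meeting_logs        # number of queued events
LRANGE production_meeting_logs 0 0  # inspect one raw event (JSON)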

3. Start filebeat:

sudo ./filebeat -strict.perms=false -e -c filebeat.yml
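Before leaving it running, you can validate the config syntax; the test subcommand ships with filebeat (-e in the start command above already logs to stderr, so connection errors show up immediately):

./filebeat test config -c filebeat.yml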

Server-side configuration:

Logstash needs a Java environment. If you don't have one, install it: apt-get install default-jdk (reference: https://www.cnblogs.com/guxiaobei/p/8556586.html).
I only followed the part about setting the JAVA_HOME environment variable, because Java was already installed here; running update-alternatives --config java showed the existing installation.

Append to /etc/environment:
JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"
and reload it:
source /etc/environment

1. Install and configure logstash on the server.
Download the build for your OS from https://www.elastic.co/cn/downloads/logstash
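For example, for the 7.0.1 tarball that the file dumps below were taken from:

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-7.0.1.tar.gz
tar xzvf logstash-7.0.1.tar.gz -C /data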
Then edit the config/logstash.yml file; the changed settings:

	path.config: /data/logstash-7.0.1/conf.d
	config.debug: true
	http.host: "127.0.0.1"
	log.level: trace
	path.logs: /var/log/logstash

(Note: the redis input and elasticsearch output do not belong in logstash.yml; they are configured per pipeline in the conf.d files created below.)
Gotcha: there must be a space between a setting name and its value. config.debug: true works; config.debug:true makes startup fail.

Next, create a pipeline config per project: make a conf.d directory under the logstash directory, then create, for example, a file named production_meeting_logs.conf with this content:

# Pull entries out of redis
input {
  redis {
    type => "productionmeeting"
    host => "your-redis-host"
    port => "your-redis-port"
    db => "0"
    data_type => "list"
    key => "production_meeting_logs"
    password => "your-redis-password"
  }
}

# Parse the laravel log format
filter {
  grok {
    # [A-Z]+ rather than [A-Z]{4,5}, so WARNING/NOTICE etc. match as well
    match => [ "message", "\[%{TIMESTAMP_ISO8601:logtime}\] %{WORD:env}\.(?<level>[A-Z]+)\: %{GREEDYDATA:msg}" ]
  }
}

# Ship to elasticsearch
output {
  if [type] == "productionmeeting" {
    elasticsearch {
      document_type => "logs"   # deprecated in 7.x; harmless here, can be removed
      hosts => ["127.0.0.1"]
      index => "production_meeting_logs"
    }
  }
}
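To make the grok concrete: fed the (made-up) line below, the filter produces the fields that follow; everything after the level, including any JSON context, lands in msg:

[2019-05-12 10:23:45] production.ERROR: Undefined variable: user

logtime => 2019-05-12 10:23:45
env     => production
level   => ERROR
msg     => Undefined variable: user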

Run logstash:
./bin/logstash
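If it fails to start, the quickest way to catch syntax errors in the conf.d files is logstash's built-in config test:

./bin/logstash --config.test_and_exit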

Installing Elasticsearch and Kibana is not covered here.

After installing Kibana, configure it to display the data:

Management > Create Index Pattern

For the index pattern name, enter the index from the elasticsearch output section of the pipeline config, e.g. production_meeting_logs. Click Create; you can then select the index in Discover and browse the logs.
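At this point the questions from the intro become plain Discover searches. For example, last week's PDOException errors (Lucene query syntax, using the field names from the grok filter above) is just this query plus a "Last 7 days" time range:

level:ERROR AND msg:PDOException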

Reverse-proxy Kibana with nginx and add HTTP basic auth:

        server {
           # authenticate access to kibana via the reverse proxy (basic auth)
           listen 1234;
           server_name localhost;

           location / { 
                auth_basic "YDKC LogCenter";
                auth_basic_user_file /httpauth/nginx/htpasswd;
                proxy_pass http://127.0.0.1:5601;
           }
        }
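The htpasswd file referenced above can be generated with the htpasswd tool from the apache2-utils package (the user name admin here is just an example):

apt-get install apache2-utils
htpasswd -c /httpauth/nginx/htpasswd admin   # prompts for a password
nginx -s reload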

Finally, some of the client- and server-side config files, for reference.

(Four in total: filebeat.yml, logstash.yml, conf.d/production_meeting_logs.conf, and conf.d/production_nginx_meeting_xxx_com_cn_error_logs.conf.)

filebeat.yml :

[root@xxx filebeat-7.1.1-linux-x86_64]# cat filebeat.yml 
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/www/xxxxxxx/meeting.xxxxxxx.com.cn/storage/logs/*.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  multiline.timeout: 5s
  fields:
        index: 'production_meeting_logs'

- type: log
  enabled: true
  paths:
    - /data/logs/nginx/meeting_xxxxxxx_com_cn_error.log
  fields:
        index: 'production_nginx_meeting_logs'

- type: log
  enabled: true
  paths:
    - /data/logs/nginx/xxxxxxx-com-cn-error.log
  fields:
        index: 'production_nginx_m_logs'

- type: log
  enabled: true
  paths:
    - /data/logs/nginx/imgcenter_xxxxxxx_com_cn_error.log
  fields:
        index: 'production_nginx_imgcenter_logs'

output.redis:
  hosts: ["xxx.xxx.xxx.xxx:6379"]
  db: 0
  timeout: 5
  key: "%{[fields.index]:otherIndex}"
  password: "xxxxxxx1312"



#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

logstash.yml :

root@logcenter:/data/logstash-7.0.1/config# cat logstash.yml 
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
path.config: /data/logstash-7.0.1/conf.d
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
#
log.level: trace
# path.logs:
path.logs: /var/log/logstash

conf.d/production_meeting_logs.conf :

root@logcenter:/data/logstash-7.0.1# cat /data/logstash-7.0.1/conf.d/production_meeting_logs.conf 
# Pull entries out of redis
input {
  redis {
    type => "productionmeeting"
    host => "xxx.xxx.xxx.xxx"
    port => "6379"
    db => "0"
    data_type => "list"
    key => "production_meeting_logs"
    password => "xxxxxx"
  }
}
  
# Parse the laravel log format
filter {
   grok {
        match => [ "message","\[%{TIMESTAMP_ISO8601:logtime}\] %{WORD:env}\.(?<level>[A-Z]+)\: %{GREEDYDATA:msg}" ]
        }
}
  
# Ship to elasticsearch
output {
        if [type] == 'productionmeeting' {
                elasticsearch {
                        document_type => "logs"
                        hosts => ["127.0.0.1"]
                        index => "production_meeting_logs"
                }
        }
}

conf.d/production_nginx_meeting_xxx_com_cn_error_logs.conf :

root@logcenter:/data/logstash-7.0.1# cat /data/logstash-7.0.1/conf.d/production_nginx_meeting_xxx_com_cn_error_logs.conf 
# Pull entries out of redis
input {
  redis {
    type => "meeting_xxx_com_cn_error"
    host => "your-redis-host"
    port => "6379"
    db => "0"
    data_type => "list"
    key => "production_nginx_meeting_logs"
    password => "xxxxxx"
  }
}

# nginx error logs are shipped as-is for now; the laravel grok pattern
# does not apply here, so there is no filter block.
  
# Ship to elasticsearch
output {
        if [type] == 'meeting_xxx_com_cn_error' {
                elasticsearch {
                        document_type => "logs"
                        hosts => ["127.0.0.1"]
                        index => "production_nginx_meeting_logs"
                }
        }
}

References:
https://github.com/buonzz/logstash-laravel-logs
https://blog.csdn.net/qq292913477/article/details/88874405
https://www.jianshu.com/p/20b20ec3c35f
nginx log configuration reference: http://www.xiaomlove.com/2017/09/10/use-elk-to-view-and-analyze-log/
