Deploying EFKL with docker-compose to store and filter Laravel log files


Following on from the previous post on deploying EFK, I found that the logs forwarded to ES through fluentd did not give us the result we wanted. First, here is the result we were aiming for:

[Screenshot: the expected result, with the log parsed into separate fields in the dashboard]

Next, here is the log format I need to collect:

[2021-10-29 03:39:12] saveData.INFO: saveData {"params":{"index":"fulfillments_1","id":5941107,"body":{"id":5941107,"shippingMethodId":null,"shippingMethodName":null,"pluginId":null,"shipToName":"tan2","shipToPhone":null,"shipToSuburb":"FRASER RISE","shipToState":"VIC","shipToPostcode":"3336","shipToCountry":"AU","shipToAddress1":"Second St","shipToAddress2":null,"shipToCompanyName":"eiz","shipToEmail":null,"fromAddress1":"tet-1","fromAddress2":null,"fromSuburb":"Moorabbin","fromState":"VIC","fromCountry":"AU","fromPostcode":"3189","fromCompany_name":"eiz","fromName":"jin2","fromPhone":"47658975","fromEmail":null,"carrierName":null,"labelNumber":[],"fulfillmentStatus":1,"consignments":[],"products":[{"id":4,"account_id":1,"product_id":4,"sku":"124","title":"dsadasds","weight":1,"length":11,"width":11,"height":11,"quantity":0,"location":null,"insured_amount":null,"status":0,"custom_label":null,"custom_label2":null,"custom_label3":null,"img_url":null,"barcode":null,"wms_stock":0,"pivot":{"fulfillment_id":5941107,"product_id":4,"qty":1,"note":null,"sku":"124"}}],"consignmentStatus":0,"picklistStatus":0,"createdAt":"2021-10-26 13:33:03","updatedAt":"2021-10-29 14:39:11","package_info":[{"packObj":[],"qty":"2","weight":"13","length":"6","width":"7","height":"8","package_id":null}],"price":null,"note":null,"tags":[{"id":95,"account_id":1,"parent_id":null,"name":"test","description":"{\"name\":\"test\",\"color\":\"#eb2f96\"}"}],"errors":null,"tracking_status":0,"packageNum":2,"productNum":1,"autoQuoteResult":[],"orders":[],"log":[],"shipToRef":"TJ0000212"}}} []

The goal is to have every field in the log parsed out and displayed individually. With the EFK setup from the previous post, however, the whole entry still ended up lumped into the message field, which made the logs hard to read. Hence this second attempt (this post again uses the Laravel framework as the example):

1. Deploy logstash + filebeat with docker-compose. As you can see, this time I am using opensearch (equivalent to Elasticsearch) + opensearch-dashboards (equivalent to Kibana). Here is my docker-compose.yaml:

version: "2.2"
services:
  opensearch:
    build:
      context: dockerfiles
      dockerfile: opensearch-no-security.dockerfile
    restart: always
    container_name: opensearch
    image: wangyi/opensearch:latest
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    volumes:
      - opensearch-data1:/usr/share/opensearch/data

  opensearch-dashboards:
    build:
      context: dockerfiles
      dockerfile: opensearch-dashboards-no-security.dockerfile
    image: wangyi/opensearch-dashboard:latest
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    environment:
      OPENSEARCH_HOSTS: '["http://opensearch:9200"]' # must be a string with no spaces when specified as an environment variable

  filebeat:
    build: ./filebeat
    restart: "always"
    container_name: filebeat
    volumes:
      - ./storage/logs/:/tools/logs/
    user: root


  logstash:
    depends_on:
      - opensearch
    image: "docker.elastic.co/logstash/logstash:7.1.0"
    volumes:
      - ./logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/conf.d/:/usr/share/logstash/conf.d/
    ports:
      - "5044:5044"
    links:
      - opensearch

volumes:
  opensearch-data1:

2. Next, my directory layout:

[Screenshot: project directory layout]
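The original screenshot is not reproduced here, but judging from the paths referenced in docker-compose.yaml and the Dockerfiles below, the layout is roughly the following (a reconstruction; your tree may differ):

```text
.
├── docker-compose.yaml
├── dockerfiles/
│   ├── config.d/
│   │   ├── opensearch.yml
│   │   └── opensearch_dashboards.yml
│   ├── opensearch-no-security.dockerfile
│   └── opensearch-dashboards-no-security.dockerfile
├── filebeat/
│   ├── Dockerfile
│   └── filebeat.yml
├── logstash/
│   ├── logstash.yml
│   └── conf.d/
│       └── logstash.conf
└── storage/
    └── logs/
```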

3. Then edit the Dockerfile in the filebeat folder:

FROM docker.elastic.co/beats/filebeat-oss:7.11.0

# Copy our custom configuration file
COPY ./filebeat.yml /usr/share/filebeat/filebeat.yml

USER root
# Create a directory to map volume with all docker log files
RUN mkdir /usr/share/filebeat/dockerlogs
RUN chown -R root /usr/share/filebeat/
RUN chmod -R go-w /usr/share/filebeat/

4. Then edit the logstash.yml file:

path.config: /usr/share/logstash/conf.d/*.conf
path.logs: /var/log/logstash

5. Now for the key part of this post: the filebeat and logstash configuration files.

filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tools/logs/saveData/*/*/*.log
  fields:
    filetype: savedata ## this filetype value is used in the logstash config below; different log files go to different indexes, so it acts as a tag

- type: log
  enabled: true
  paths:
    - /tools/logs/condition/*/*/*.log
  fields:
    filetype: condition

setup.ilm.enabled: false

setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 0
  index.codec: best_compression

output.logstash:  ## connect to the logstash service here; logs are shipped to logstash, which then does the filtering
  enabled: true
  hosts: ["logstash:5044"]
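To see why the `[fields][filetype]` test in the logstash config works, it helps to picture the shape of the event filebeat emits for each line. A minimal sketch in Python (heavily simplified; real filebeat events carry many more metadata fields, and the sample path is hypothetical):

```python
# Simplified shape of one event shipped by filebeat.
# The custom "fields" block from filebeat.yml is attached to every event,
# which is what logstash's `if [fields][filetype] == "savedata"` checks.
event = {
    "@timestamp": "2021-10-29T03:39:12.000Z",
    "message": '[2021-10-29 03:39:12] saveData.INFO: saveData {"params":{}} []',
    "fields": {"filetype": "savedata"},
    "log": {"file": {"path": "/tools/logs/saveData/2021/10/29.log"}},
}

# Route to an index name the same way the logstash output section does
# (savedatas_%{+YYYY.MM.dd} formats the event date with dots).
if event["fields"]["filetype"] == "savedata":
    index = "savedatas_" + event["@timestamp"][:10].replace("-", ".")
print(index)  # savedatas_2021.10.29
```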

Once filebeat.yml is configured, we need to configure logstash.conf:

input {
  beats {
    port => 5044
  }
}
filter {
    grok {
        match => {
            "message" => "\[%{TIMESTAMP_ISO8601:logtime}\] %{WORD:env}\.(?<level>[A-Z]{4,5})\: %{WORD:params} %{GREEDYDATA:msg} " ## this regex is for reference only; mine is written for parsing Laravel log files
        }
    }
    json {
        source => "msg" ## parse the extracted content as JSON; without this it stays a string and we don't get the desired result
    }
    mutate{
        remove_field => ["message"] ## drop the original message field
    }
}
output {
    if [fields][filetype] == "savedata" { ## check which log file the event came from, using the tag set in the filebeat config
         elasticsearch {
            index => "savedatas_%{+YYYY.MM.dd}"
            hosts => ["opensearch:9200"]
         }
    }

    if [fields][filetype] == "condition" {
         elasticsearch {
            index => "conditions_%{+YYYY.MM.dd}"
            hosts => ["opensearch:9200"]
         }
    }
}
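As a sanity check on the grok pattern, the same extraction can be simulated with a plain regex plus `json.loads`, which is roughly what the grok and json filters do to each line (a sketch; grok's TIMESTAMP_ISO8601 and WORD patterns are only approximated here, and the sample line is a shortened version of the one at the top of the post):

```python
import json
import re

line = ('[2021-10-29 03:39:12] saveData.INFO: saveData '
        '{"params":{"index":"fulfillments_1","id":5941107}} []')

# Rough Python equivalent of the grok pattern:
# \[%{TIMESTAMP_ISO8601:logtime}\] %{WORD:env}\.(?<level>[A-Z]{4,5})\: %{WORD:params} %{GREEDYDATA:msg}
pattern = re.compile(
    r"\[(?P<logtime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\] "
    r"(?P<env>\w+)\.(?P<level>[A-Z]{4,5}): "
    r"(?P<params>\w+) (?P<msg>.*) "
)
m = pattern.match(line)
fields = m.groupdict()

# The json filter then turns the captured msg string into structured fields,
# which is what makes the individual keys show up in the dashboard.
doc = json.loads(fields["msg"])
print(fields["logtime"], fields["level"], doc["params"]["id"])
```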

Here I also recommend an online Grok tester; it is extremely handy: GROK online tester.

!!! Everything is in place. Ignition, launch:

docker-compose up -d <service-name>

A note on the startup order:
1. opensearch (E)
2. opensearch-dashboards (K)
3. logstash (L)
4. filebeat (F)
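Concretely, that order can be enforced by bringing the services up one at a time (service names as defined in docker-compose.yaml above):

```shell
docker-compose up -d opensearch
docker-compose up -d opensearch-dashboards
docker-compose up -d logstash
docker-compose up -d filebeat
```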

Checking the logs confirms that all the services started successfully:

docker logs -f <containerId>

Then, looking at the opensearch-dashboards UI, we can see the result we aimed for at the start of this post: all the forwarded logs have been parsed into structured fields.

Oh, right, I almost forgot to include my opensearch and opensearch-dashboards configuration files.
Directory layout:

[Screenshot: config directory layout]

1. opensearch.yml

cluster.name: docker-cluster

# Bind to all interfaces because we don't know what IP address Docker will assign to us.
network.host: 0.0.0.0
compatibility.override_main_response_version: true

2. opensearch-no-security.dockerfile

FROM opensearchproject/opensearch:1.1.0
RUN /usr/share/opensearch/bin/opensearch-plugin remove opensearch-security
COPY --chown=opensearch:opensearch config.d/opensearch.yml /usr/share/opensearch/config/

3. opensearch-dashboards.yml

# Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

# Description:
# Default configuration for OpenSearch Dashboards

server.host: "0"
opensearch.hosts: ["https://localhost:9200"]
#opensearch.ssl.verificationMode: none
#opensearch.username: "kibanaserver"
#opensearch.password: "kibanaserver"
#opensearch.requestHeadersWhitelist: [ authorization,securitytenant ]

#opensearch_security.multitenancy.enabled: true
##opensearch_security.multitenancy.tenants.preferred: ["Private", "Global"]
#opensearch_security.readonly_mode.roles: ["kibana_read_only"]
# Use this setting if you are running opensearch-dashboards without https
#opensearch_security.cookie.secure: false

4. opensearch-dashboards-no-security.dockerfile

FROM opensearchproject/opensearch-dashboards:1.1.0
RUN /usr/share/opensearch-dashboards/bin/opensearch-dashboards-plugin remove securityDashboards
COPY --chown=opensearch-dashboards:opensearch-dashboards config.d/opensearch_dashboards.yml /usr/share/opensearch-dashboards/config/

In fact these two (opensearch + opensearch-dashboards) can be replaced entirely by Elasticsearch + Kibana, depending on your needs; the Elasticsearch + Kibana configuration files are in the previous post.

That's it for this post. In the next one I'll cover filtering logs without logstash, because it turns out logstash is a real CPU hog; filebeat plus an ES ingest pipeline alone can achieve the same result without the CPU cost. Coming in a few days.
