Building a log collection system with ES, Fluent-bit and Kibana (Docker deployment)

This article describes how to deploy Elasticsearch, Fluent-bit and Kibana with Docker to build a log collection system. It first prepares the Docker environment and a Huawei Cloud CSS ES cluster, then walks through the Fluent-bit configuration (docker-compose.yaml, fluent-bit.conf and parsers.conf) and its startup, followed by the Kibana deployment and configuration. Finally, it sets up an index lifecycle policy and an index template. Follow-up posts on a self-hosted ES cluster and error-log alerting are planned.


I. Prepare the environment

1. Set up a Docker environment

2. Prepare the ES cluster

This article uses a multi-node cluster from Huawei Cloud CSS (Cloud Search Service) as the ES cluster. Security mode is enabled on the cluster, but HTTPS access is not.
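Before wiring up fluent-bit, it is worth confirming that the CSS cluster is reachable over plain HTTP with basic authentication. A minimal check, assuming the same address and credentials that are used in the fluent-bit and Kibana configuration later in this article:

# curl -u admin:admin "http://192.168.101.51:9200/_cluster/health?pretty"

A green or yellow cluster status means the cluster is accepting requests.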

II. Fluent-bit deployment and configuration

1. Write docker-compose.yaml
# vim docker-compose.yaml

version: "3"
services:
  fluent-bit:
    image: cr.fluentbit.io/fluent/fluent-bit:1.9.3
    container_name: fluent-bit
    restart: always
    volumes:
      - ./:/fluent-bit/etc/
      - /apps/:/apps/             # path to the service's log files on the host

    ports:
      - "2020:2020"
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 2G
        reservations:
          cpus: '0.01'
          memory: 2M
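Before starting the container, you can optionally let docker-compose parse and validate the file; if the YAML is well formed it prints the resolved configuration:

# docker-compose config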

2. Write fluent-bit.conf
# vim fluent-bit.conf

[SERVICE]
    Flush         5
    Log_Level     info
    Daemon        off
    Parsers_File  parsers.conf
    HTTP_Server   On
    HTTP_Listen   0.0.0.0
    HTTP_Port     2020
#        parsers_file parsers_multiline.conf
[INPUT]
    Name tail
    Tag  fram-admin
    Parser docker
    Path /apps/farm-admin/logs/fram-admin/*.log    # log files to collect
#        multiline.parser multiline-regex
    Multiline On
    Parser_Firstline  docker      # multi-line merging; references the docker parser


[FILTER]
    Name parser
    Match **
    Parser docker    # parser to apply
    Key_Name log

[OUTPUT]
    Name stdout
    Match *

[OUTPUT]
    Name          es
    Match         fram-admin        # matches the INPUT tag
    Host          192.168.101.51   # CSS cluster IP address
    Port          9200           # CSS cluster port
    HTTP_User   admin            # CSS account
    HTTP_Passwd   admin          # CSS password
    Logstash_Format On           # enable Logstash-style time-based index names
    Logstash_Prefix prod_fram-admin   # index name prefix
    Logstash_DateFormat %Y-%W          # index date suffix, year-week
    Replace_Dots    On
    Trace_Error On
    Retry_Limit     False
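With Logstash_Format On, the ES output writes to time-based indices named <Logstash_Prefix>-<Logstash_DateFormat>, e.g. prod_fram-admin-2022-18 (a hypothetical year-week value). Once data is flowing, the indices can be listed with the standard _cat API, again assuming the CSS address and credentials above:

# curl -u admin:admin "http://192.168.101.51:9200/_cat/indices/prod_fram-admin-*?v"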

3. Write parsers.conf
# vim parsers.conf

[PARSER]
    Name   apache
    Format regex
    Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name   apache2
    Format regex
    Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name   apache_error
    Format regex
    Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$

[PARSER]
    Name   nginx
    Format regex
    Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*))" "(?<agent>[^\"]*)"(?: "(?<target>[^\"]*))"$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name   json
    Format json
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name        docker
    Format      regex
    Regex        (?<logdate>(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})) (?<level>(\w{4,5}))[ ]+(?<thread>(\[[\w -_].*?\])) (?<method>(\[[ \w\.-_].*?\])) (?<message>(.*))
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On

[PARSER]
    Name        syslog
    Format      regex
    Regex       ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
    Time_Key    time
    Time_Format %b %d %H:%M:%S
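The docker parser above is aimed at Java-style application logs. As a purely hypothetical example, a line such as

2022-05-04 10:15:30,123 INFO  [main] [com.example.FarmAdminApplication] Started application in 6.2 seconds

would be split by that regex into the logdate, level, thread, method and message fields.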

4. Start fluent-bit
# docker-compose create
# docker-compose start
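Because the configuration keeps a stdout output and enables the built-in HTTP server on port 2020, two quick checks can confirm that logs are being tailed and shipped (the metrics endpoint below is part of Fluent Bit's monitoring API):

# docker logs -f fluent-bit
# curl -s http://127.0.0.1:2020/api/v1/metrics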

III. Kibana deployment and configuration

Deploy Kibana
# docker pull amazon/opendistro-for-elasticsearch-kibana:1.11.0


# docker run -it -d  --restart=always --name kibana -p  80:5601   amazon/opendistro-for-elasticsearch-kibana:1.11.0

# docker exec -it kibana    /bin/bash

# vi  config/kibana.yml 
#server.name: kibana
#server.host: "0"
#elasticsearch.hosts: https://192.168.101.106:9200
#elasticsearch.ssl.certificateAuthorities: "/usr/share/kibana/config/CloudSearchService.cer"
#elasticsearch.ssl.verificationMode: none
#elasticsearch.username: kibanaserver
#elasticsearch.password: kibanaserver
#elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
elasticsearch.username: "admin" 
elasticsearch.password: "admin"
elasticsearch.ssl.verificationMode: none
server.ssl.enabled: false
server.rewriteBasePath: false
server.port: 5601
server.host: "0"
elasticsearch.hosts: ["http://192.168.101.51:9200"]
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.enable_global: true
opendistro_security.multitenancy.tenants.enable_private: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.multitenancy.enable_filter: false
#opendistro_security.multitenancy.enabled: true
#opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
# Use this setting if you are running kibana without https
#opendistro_security.cookie.secure: false


# exit    # leave the container
# docker  restart  kibana
Open http://<host-ip> in a browser (the docker run above maps host port 80 to Kibana's 5601, so port 80 is used on the host).
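Kibana can also be checked from the command line. A sketch, assuming the security plugin accepts the same admin/admin basic auth and that <kibana-host> is replaced with the server address (host port 80 as mapped above):

# curl -u admin:admin "http://<kibana-host>/api/status"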

IV. Index lifecycle configuration

This policy is configured in the Kibana UI (Index Management); it can also be created through the ISM API, as shown after the policy JSON below.


{
    "policy": {
        "policy_id": "hot-cold-delete",
        "description": "A simple default policy that changes the replica count between hot and cold and  delete states.",
        "last_updated_time": 1625996294336,
        "schema_version": 1,
        "error_notification": null,
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [
                    {
                        "replica_count": {
                            "number_of_replicas": 0
                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "cold",
                        "conditions": {
                            "min_index_age": "25d"
                        }
                    }
                ]
            },
            {
                "name": "cold",
                "actions": [
                    {
                        "replica_count": {
                            "number_of_replicas": 0
                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "delete",
                        "conditions": {
                            "min_index_age": "30d"
                        }
                    }
                ]
            },
            {
                "name": "delete",
                "actions": [
                    {
                        "delete": {}
                    }
                ],
                "transitions": []
            }
        ]
    }
}
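If you prefer the API to the Kibana UI, the same policy can be created through the Open Distro ISM endpoint. A sketch, assuming the JSON above is saved locally as hot-cold-delete.json (a hypothetical file name; read-only fields such as last_updated_time may need to be stripped before creating the policy):

# curl -u admin:admin -H 'Content-Type: application/json' \
    -XPUT "http://192.168.101.51:9200/_opendistro/_ism/policies/hot-cold-delete" \
    -d @hot-cold-delete.json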

V. Create an index template that references the lifecycle policy

Log in to the Cerebro console of the Huawei Cloud CSS cluster (the template can also be created through the API, as shown after the JSON below).


{
  "order": 0,
  "version": 60001,
  "index_patterns": [
    "prod*"
  ],
  "settings": {
    "index": {
      "opendistro": {
        "index_state_management": {
          "policy_id": "hot-cold-delete"
        }
      },
      "refresh_interval": "5s",
      "number_of_shards": "1",
      "number_of_replicas": "0"
    }
  },
  "mappings": {
    "dynamic_templates": [
      {
        "message_field": {
          "path_match": "message",
          "mapping": {
            "norms": false,
            "type": "text"
          },
          "match_mapping_type": "string"
        }
      },
      {
        "string_fields": {
          "mapping": {
            "norms": false,
            "type": "text",
            "fields": {
              "keyword": {
                "ignore_above": 256,
                "type": "keyword"
              }
            }
          },
          "match_mapping_type": "string",
          "match": "*"
        }
      }
    ],
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "geoip": {
        "dynamic": true,
        "properties": {
          "ip": {
            "type": "ip"
          },
          "latitude": {
            "type": "half_float"
          },
          "location": {
            "type": "geo_point"
          },
          "longitude": {
            "type": "half_float"
          }
        }
      },
      "@version": {
        "type": "keyword"
      }
    }
  },
  "aliases": {}
}
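The template can also be created without Cerebro, through the legacy index template API. A sketch, assuming the JSON above is saved as prod-template.json and the template is named prod-logs (both names are hypothetical):

# curl -u admin:admin -H 'Content-Type: application/json' \
    -XPUT "http://192.168.101.51:9200/_template/prod-logs" \
    -d @prod-template.json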


That completes the log collection system. Follow-up posts will cover a self-hosted ES cluster and error-log alerting, so stay tuned…
