EFK + Kafka Log Collection

A simpler alternative: https://blog.csdn.net/weixin_42555971/article/details/127101376

Solution

  1. Deploy FluentBit to collect logs from every node in the cluster
  2. Push the logs to Kafka for peak shaving and retention
  3. Deploy FileBeat to consume the logs from Kafka
  4. FileBeat ships the logs to ElasticSearch for storage
  5. Deploy Kibana to visualize the ElasticSearch data

[Diagram: FluentBit on each node collects logs and pushes them to Kafka; FileBeat consumes from Kafka and sends to Elastic; Kibana displays the data]

 

Deploying FluentBit

Official docs: https://github.com/fluent/fluent-bit-kubernetes-logging

  1. Deploy the service account, role, and role binding

Kubernetes v1.21 and earlier

kubectl create namespace logging
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml

Kubernetes v1.22 and later

kubectl create namespace logging
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-1.22.yaml
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding-1.22.yaml
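
To confirm the RBAC objects were created, a quick check (object names as defined in the manifests below):

kubectl -n logging get serviceaccount fluent-bit
kubectl get clusterrole fluent-bit-read
kubectl get clusterrolebinding fluent-bit-read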

fluent-bit-service-account.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging

fluent-bit-role-1.22.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs: ["get", "list", "watch"]

fluent-bit-role-binding-1.22.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
- kind: ServiceAccount
  name: fluent-bit
  namespace: logging

  2. Deploy the ConfigMap

Deploy command

kubectl apply -f fluent-bit-configmap.yaml

Write fluent-bit-configmap.yaml

Docs: https://docs.fluentbit.io/manual/pipeline/outputs/kafka
Regex testing: http://rubular.com/

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-kafka.conf

  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.njc.dev.permission
        Path              /var/log/containers/*dev_permission-service*.log
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Rotate_Wait       60
        Skip_Long_Lines   On
        Refresh_Interval  10
        Multiline         On
        Parser_Firstline  multiline_pattern
        
  filter-kubernetes.conf: |        
    [FILTER]
        Name                kubernetes
        Match               kube.njc.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        
    [FILTER]
        Name         parser
        Match        kube.njc.dev.permission
        Key_Name     log
        Parser       njc_service_log
        Reserve_Data  On
        Preserve_Key  On

  output-kafka.conf: |
    [OUTPUT]
        Name           kafka
        Match          kube.njc.dev.permission
        Brokers        192.168.140.16:31823,192.168.140.16:30499,192.168.140.16:30906
        Topics         service.log.dev.permission
        rdkafka.message.max.bytes    200000000
        rdkafka.fetch.message.max.bytes    204857600
        
  parsers.conf: |
    [PARSER]
        Name   njc_service_log
        Format regex
        Regex  ^(?<time>[^ ]*) [^ ]* (?<level>[^ ]*) [^ ]* (?<class>[^ ]*): (?<Message>[^ \]].*)
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z
        
    [PARSER]
        Name multiline_pattern
        Format regex
        Regex ^(?<log>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3}.*)
        Time_Key  time
        Time_Format %b %d %H:%M:%S
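
For reference, a hypothetical log line (the real format is specific to the author's services) that matches both the multiline_pattern first-line regex and the njc_service_log parser:

2022-10-10 12:00:00.123 INFO 88 com.njc.PermissionService: user login ok

njc_service_log would capture time=2022-10-10 (only the first whitespace-delimited token), level=INFO, class=com.njc.PermissionService, and Message=user login ok.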

Change the INPUT Path to filter for the container logs you need
Note that the OUTPUT Match, the FILTER Match, and the INPUT Tag must all agree (here, kube.njc.dev.permission)
Change the Kafka Brokers and Topics for your environment
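
If the brokers do not auto-create topics, the topic may need to exist before FluentBit can produce to it. A sketch using the stock Kafka CLI (script path varies by installation; partition and replication counts here are illustrative):

kafka-topics.sh --create \
  --bootstrap-server 192.168.140.16:31823 \
  --topic service.log.dev.permission \
  --partitions 3 \
  --replication-factor 3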

Log storage layout on the cluster nodes (screenshot)

  3. Deploy the DaemonSet

Deploy command

kubectl apply -f fluent-bit-ds.yaml

Write fluent-bit-ds.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      k8s-app: fluent-bit-logging
  template:
    metadata:
      labels:
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "2020"
        prometheus.io/path: /api/v1/metrics/prometheus
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.5
        imagePullPolicy: Always
        ports:
        - containerPort: 2020
        readinessProbe:
          httpGet:
            path: /api/v1/metrics/prometheus
            port: 2020
        livenessProbe:
          httpGet:
            path: /
            port: 2020
        resources:
          requests:
            cpu: 5m
            memory: 10Mi
          limits:
            cpu: 50m
            memory: 60Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
      terminationGracePeriodSeconds: 10
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - operator: "Exists"
        effect: "NoExecute"
      - operator: "Exists"
        effect: "NoSchedule"

Deployment result: deployed as a DaemonSet, so every node runs exactly one pod and logs are collected from all nodes (screenshot)

Deployment logs: FluentBit connects to the Kafka cluster successfully (screenshot)

 

Deploying Kafka

Deployment guide: https://blog.csdn.net/weixin_42555971/article/details/125018511

Deploy command

helm install kafka bitnami/kafka -f values.yaml -n component

values.yaml

global:
  storageClass: "nfs-client"
replicaCount: 3
externalAccess:
  enabled: true
  service:
    type: NodePort
  autoDiscovery:
    enabled: true
serviceAccount:
  create: true
rbac:
  create: true

replicaCount: number of broker replicas
externalAccess: automatically create NodePort services
serviceAccount: automatically create the ServiceAccount
rbac: automatically create the RBAC objects
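
To inspect the brokers and the auto-created NodePort services, a sketch (app.kubernetes.io/name=kafka is the Bitnami chart's standard label; adjust if your chart version labels differently):

kubectl -n component get pods -l app.kubernetes.io/name=kafka
kubectl -n component get svc -l app.kubernetes.io/name=kafka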

Deployment result (screenshot)

Kafka successfully receives the node log messages (screenshot)
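
The same check can be made from the command line with the stock console consumer (script path varies by installation; broker address from the FluentBit output config above):

kafka-console-consumer.sh \
  --bootstrap-server 192.168.140.16:31823 \
  --topic service.log.dev.permission \
  --from-beginning \
  --max-messages 5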

 

Deploying FileBeat

Official docs: https://github.com/elastic/helm-charts/tree/master/filebeat

Deploy command

helm repo add elastic https://helm.elastic.co
helm install filebeat elastic/filebeat -f values.yaml

values.yaml

filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: kafka
      hosts:
        - 192.168.140.16:31823
        - 192.168.140.16:30499
        - 192.168.140.16:30906
      topics: ["service.log.dev.permission"]
      group_id: "xxx"
      max_wait_time: 1000ms

    processors:
      - decode_json_fields:
          fields: ["message"]
          max_depth: 3
          target: ""
          overwrite_keys: false
          process_array: true
      - drop_fields:
          fields: ["@metadata", "agent", "host", "ecs", "message", "kubernetes.annotations", "kubernetes.labels", "log_processed.time", "log.time"]

    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false

    setup.template.settings:
      index.number_of_shards: 1
      index.refresh_interval: 10s
      index.number_of_replicas: 1
      index.codec: best_compression
    setup.template.enabled: true
    setup.template.fields: fields.yml
    setup.template.name: "k8s"
    setup.template.pattern: "k8s-*"
    setup.ilm.enabled: false

    output.elasticsearch:
      username: 'elastic'
      password: '0215szA91NN3q6fk6MQB6UvS'
      protocol: http
      hosts: ["es-es-es:9200"]
      index: "k8s-%{[kafka.topic]}-%{+yyyy.MM.dd}"

Change the Kafka hosts, topics, and group_id
Change the ElasticSearch username, password, hosts, and index
Because FileBeat runs in the same cluster, the elasticsearch hosts use the headless service (es-es-es)
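
To confirm FileBeat is consuming from Kafka and shipping to ElasticSearch, a sketch (app=filebeat-filebeat is an assumption based on the chart's default naming for a release named filebeat):

kubectl get pods -l app=filebeat-filebeat
kubectl logs -l app=filebeat-filebeat --tail=50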

Deployment result (screenshot)

Deployment logs (screenshot)
 

Deploying Elastic

Official docs: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html

Install ECK

Install the CRDs and the operator

kubectl create -f https://download.elastic.co/downloads/eck/2.4.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.4.0/operator.yaml

Check the operator logs

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

(screenshot of the operator logs)
 

Install Elasticsearch

Deploy command

kubectl apply -f deployment.yaml

deployment.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch # requires the ECK operator
metadata:
  name: es
  namespace: els-test
spec:
  version: 7.11.2
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
    - name: es
      count: 3
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: nfs-client
      podTemplate:
        spec:           
          containers:
            - name: elasticsearch
              env:
                - name: ES_JAVA_OPTS
                  value: "-Xms2g -Xmx2g"
              resources:
                requests:
                  memory: 3Gi
                limits:
                  memory: 3Gi

namespace: the namespace to deploy into
tls: disables TLS; see https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-transport-settings.html
storageClassName: the storage class to use; see the nfs-client article
storage: requested storage, 10Gi here; see https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html
resources: minimum and maximum memory; see https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-managing-compute-resources.html
ES_JAVA_OPTS: minimum and maximum JVM heap; see https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-jvm-heap-dumps.html
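
To watch the cluster come up, a quick check (elasticsearch.k8s.elastic.co/cluster-name is the standard label ECK applies):

kubectl -n els-test get elasticsearch es
kubectl -n els-test get pods -l elasticsearch.k8s.elastic.co/cluster-name=es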

Deployment result (screenshot)

Username: elastic
Password:

kubectl get secret es-es-elastic-user -n els-test -o=jsonpath='{.data.elastic}' | base64 --decode

(screenshot)

Configure an Ingress or NodePort; see: Ingress deployment

A response like the following means Elastic deployed successfully; note that cluster_name is es (screenshot)
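
The same check can be reproduced from a workstation with a port-forward (es-es-http follows ECK's <name>-es-http service naming; the password is read the same way as above):

kubectl -n els-test port-forward service/es-es-http 9200 &
PASSWORD=$(kubectl -n els-test get secret es-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
curl -u "elastic:${PASSWORD}" http://localhost:9200/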

Check the Elastic cluster node status: /_cat/nodes?v (screenshot)

Check the Elastic cluster indices: /_cat/indices?v
You can see that the logs FileBeat consumed from the Kafka topic landed in Elastic under the configured index: "k8s-%{[kafka.topic]}-%{+yyyy.MM.dd}" (screenshot)
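
With the port-forward from the previous sketch still running, both checks are a curl away:

curl -u "elastic:${PASSWORD}" 'http://localhost:9200/_cat/nodes?v'
curl -u "elastic:${PASSWORD}" 'http://localhost:9200/_cat/indices/k8s-*?v'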
 

Deploying Kibana

Deploy command

kubectl apply -f kibana.yaml

kibana.yaml

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: els-test
spec:
  version: 7.11.2
  count: 1
  elasticsearchRef:
    name: es
  http:
    tls:
      selfSignedCertificate:
        disabled: true

namespace: same as Elastic
tls: disables TLS
elasticsearchRef: must match the Elastic cluster_name (es)
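
To verify, a quick check (kibana.k8s.elastic.co/name is the standard label ECK applies):

kubectl -n els-test get kibana kibana
kubectl -n els-test get pods -l kibana.k8s.elastic.co/name=kibana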

Deployment result (screenshot)

Configure an Ingress or NodePort; see: Ingress deployment
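
For a quick look without an Ingress, a port-forward sketch (kibana-kb-http follows ECK's <name>-kb-http service naming):

kubectl -n els-test port-forward service/kibana-kb-http 5601

Then open http://localhost:5601 and log in as elastic.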

View the indices (screenshot)

Create an index pattern (screenshots)

Enter a pattern to match, e.g. dev.permission (screenshots)

Logs displayed successfully (screenshot)

Select log.log to view only the Java service's own log output (screenshot)
