EFK + Kafka: Buffering Log Spikes

Overview

When EFK collects logs from a Kubernetes cluster, a high log volume puts heavy write pressure on Elasticsearch, so we introduce the Kafka message queue to absorb the spikes.

Data flow of the log collection pipeline: filebeat → kafka → logstash → elasticsearch → kibana.

Installation

In this experiment, filebeat, logstash, elasticsearch, and kibana are all deployed inside the Kubernetes cluster from YAML manifests.

Kafka is a special case: it depends on ZooKeeper as its registry, and there was no official Docker image at the time, so I installed it the traditional way.
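All of the manifests that follow target a syslogs namespace, which is assumed to already exist; create it once before applying anything:

```shell
# One-time setup: the namespace every manifest below deploys into.
kubectl create namespace syslogs
```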

Installing Elasticsearch

Create a file named es-conf.yaml with the content below, then apply it:

kubectl apply -f es-conf.yaml

The Elasticsearch configuration file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: es-conf
  namespace: syslogs
  labels:
    k8s-app: es-conf
data:
  elasticsearch.yml: |-
    ### Cluster name; all nodes of a cluster must use the same name ###
    cluster.name: es-k8s-cluster

    ### Node name ###
    node.name: es-node-1

    ### Eligible as a master node ###
    node.master: true
    ### Holds data ###
    node.data: true
    ### Not an ingest node ###
    node.ingest: false

    ### Single-node mode: will not join any cluster ###
    discovery.type: single-node

    ### Bind address ###
    network.host: 0.0.0.0
    http.cors.enabled: true
    http.cors.allow-origin: "*"


 

Elasticsearch is a stateful application, so a StatefulSet is the appropriate resource type; a Deployment is not recommended:


apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: syslogs  
  labels:
    app: es  
spec:
  serviceName: "es"
  replicas: 1
  selector:
    matchLabels:
      app: es
  template:
    metadata:


      labels:
        app: es
        version: v1
    spec:
      containers:
      - name: es
        ### Elasticsearch image ###
        image: elasticsearch:7.1.1
        imagePullPolicy: Always
        env:
        ### Set the timezone ###
        - name: TZ
          value: Asia/Shanghai
        ### JVM heap size ###
        - name: ES_JAVA_OPTS
          value: "-Xms4096m -Xmx4096m -Djava.security.egd=file:/dev/./urandom"
        ### CPU and memory available to Elasticsearch ###
        resources:
          limits:
            cpu: "2"
            memory: 5Gi
          requests:
            cpu: "2"
            memory: 5Gi
        ### Elasticsearch ports ###
        ports:
        - containerPort: 9200
        - containerPort: 9300
        ### Mounted volumes ###
        volumeMounts:
        - name: nfs-es
          mountPath: /usr/share/elasticsearch/data
          subPath: data
        ### Elasticsearch settings from the ConfigMap ###
        - name: es-conf
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
      volumes:
      - name: es-conf
        configMap:
          name: es-conf
          defaultMode: 420
      ### NFS shared mount point ###
      - name: nfs-es
        nfs:
          path: /nfs/data/es7
          server: 192.168.0.21

---

apiVersion: v1
kind: Service
metadata:
  name: es
  namespace: syslogs    
  labels:
    app: es
spec:
  ports:
  - port: 9200
    targetPort: 9200
    nodePort: 32001 
    name: http
  - port: 9300
    targetPort: 9300
    nodePort: 32002 
    name: tcp
  selector:
    app: es
  type: NodePort
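Once the pod is running, a quick smoke test of the NodePort should return the cluster info JSON (the node IP here is an assumption; substitute any node of your cluster):

```shell
# 32001 is the NodePort mapped to Elasticsearch's 9200 above.
curl http://192.168.0.21:32001
```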

 

Installing Kibana

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: syslogs
  labels:
    k8s-app: kibana

    
spec:
  replicas: 1
  ### minReadySeconds delays the rollout 60s after a pod becomes ready; with maxUnavailable: 0 it can be omitted ###
  #minReadySeconds: 60
  strategy:
    ### Rolling update, the default strategy ###
    type: RollingUpdate
    ### Strict rollout: one pod at a time, zero pods unavailable ###
    rollingUpdate:
      ### Start 1 extra pod first during the rolling update ###
      maxSurge: 1
      ### Maximum number of unavailable pods during the rolling update ###
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
        version: v1

    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.1.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
        env:
            - name: ELASTICSEARCH_HOSTS
              value: http://es:9200



            ### Set the timezone ###
            - name: TZ
              value: Asia/Shanghai
            ## Use the Chinese locale for the Kibana UI
            - name: I18N_LOCALE
              value: zh-CN


        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: syslogs
  labels:
    k8s-app: kibana

spec:
  ports:
  - port: 5601
    protocol: TCP
    nodePort: 32564
    targetPort: ui
  selector:
    k8s-app: kibana
    
  type: NodePort
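Kibana can then be opened in a browser on any node at port 32564; from the shell, the status API gives a quick health check (the node IP is an assumption):

```shell
# /api/status reports Kibana's own health and its Elasticsearch connection.
curl -s http://192.168.0.21:32564/api/status
```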

 

Installing Logstash

 

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: syslogs
  labels:
    k8s-app: logstash
data:
  logstash.conf: |-
    input {
      kafka {
        ## Kafka broker address
        bootstrap_servers => "192.168.0.21:9092"
        topics => "msg"
        consumer_threads => 10
        decorate_events => true
        codec => "json"
        auto_offset_reset => "latest"
      }
    }

    filter {
      ## Keep only logs from the "default" namespace
      if ([kubernetes][namespace] != "default") {
        drop {}
      }
    }

    output {
      elasticsearch {
        hosts => ["es:9200"]
        index => "filebeat-%{+YYYY.MM.dd}-01"
      }
    }
  

 

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: logstash
  name: logstash
  namespace: syslogs
spec:
  replicas: 1

  selector:
    matchLabels:
      k8s-app: logstash
  template:
    metadata:
      labels:
        k8s-app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.1.1
          env:
            - name: "XPACK_MONITORING_ENABLED"
              value: "false"

        
            
          volumeMounts:
            - mountPath: /usr/share/logstash/pipeline/logstash.conf
              name: config
              readOnly: true
              subPath: logstash.conf              

      volumes:
      - name: config
        configMap:
          name: logstash-config
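If the whole pipeline works, Logstash starts creating daily indices in Elasticsearch; you can list them through the NodePort exposed earlier (the node IP is an assumption):

```shell
# Lists the filebeat-YYYY.MM.dd-01 indices written by the output block above.
curl 'http://192.168.0.21:32001/_cat/indices/filebeat-*?v'
```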

 

Installing Filebeat

 

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: syslogs
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
        - type: docker
          containers.ids:
          - "*"
          
          processors:
            - add_kubernetes_metadata:
                in_cluster: true



    output.kafka:
      enabled: true
      hosts: ["192.168.0.21:9092"]
      topic: msg      
  
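To confirm Filebeat is actually shipping events before Logstash consumes them, you can tail the msg topic on the Kafka host (paths assume a binary Kafka installation):

```shell
# Read a few events from the topic Filebeat writes to, then exit.
bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.21:9092 \
  --topic msg --max-messages 5
```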

 

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: syslogs
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat

      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.1.1

        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/filebeat/filebeat.yml
          readOnly: true
          subPath: filebeat.yml

        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

     
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate


 

Because the "processors.add_kubernetes_metadata" feature has to look up pod and node information from the API server, Filebeat deployed in the cluster also needs an RBAC resource:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: syslogs
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: syslogs
  labels:
    k8s-app: filebeat
---

Installing Kafka

Kafka is installed from the binary release; the process is straightforward.
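For completeness, a minimal sketch of the binary install (the version and paths are assumptions; any recent 2.x release works the same way):

```shell
# Unpack the release and start ZooKeeper, then the broker, as daemons.
tar -xzf kafka_2.12-2.2.0.tgz
cd kafka_2.12-2.2.0
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties

# Create the topic Filebeat writes to; 10 partitions line up with
# consumer_threads => 10 in the Logstash input above.
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --partitions 10 --replication-factor 1 --topic msg
```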
