Kubernetes log collection

There are two common approaches to collecting application logs in Kubernetes:

  1. Package the Filebeat collector container and the application container in the same Pod (the sidecar pattern); the two containers share the log directory through a common volume.
  2. Deploy the Filebeat container as a DaemonSet so one instance runs on each node; a host directory is mounted into the application containers as their log directory and into the Filebeat container as its collection directory.

Approach 1 consumes more compute resources as the number of application Pods grows, but its log collection is more reliable, which makes it a good fit for core business services.
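
A minimal sketch of the sidecar layout in approach 1, for comparison (the names app-with-filebeat and app-logs are illustrative, and the Filebeat configuration itself is omitted):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-filebeat       # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: app-logs            # the application writes its logs here
      mountPath: /var/log/nginx
  - name: filebeat              # sidecar tails the same directory
    image: elastic/filebeat:6.8.13
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}                # shared only within this Pod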

Approach 2 saves compute resources, but log collection is slightly less robust.

A hybrid setup is worth considering: approach 1 for core services and approach 2 for ordinary ones.

This post walks briefly through approach 2.

First, deploy Elasticsearch. Because it is a stateful application it needs a StatefulSet controller, with its data directory mounted on a persistent volume.

Create a dynamic StorageClass for Elasticsearch:

# ServiceAccount used by the provisioner
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system

---
# ClusterRole with the permissions the provisioner needs
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
# Bind the provisioner's ServiceAccount to the role that holds those permissions
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system 
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
# Deploy the provisioner, specifying its ServiceAccount and the NFS backend it
# connects to; StorageClasses can then reference this provisioner by name.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner-01
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-provisioner-01
  template:
    metadata:
      labels:
        app: nfs-provisioner-01
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: jmgao1983/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner-01  # provisioner name referenced by the StorageClass
            - name: NFS_SERVER
              value: 192.168.2.42   # NFS server address
            - name: NFS_PATH
              value: /nfs_dir   # exported NFS directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.2.42
            path: /nfs_dir   # exported NFS directory

---
# Create a StorageClass backed by the provisioner above. It is dynamic: PVs are
# provisioned on demand, tailored to whatever each PVC requests.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-boge
provisioner: nfs-provisioner-01
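
A quick way to verify, assuming the manifests above are saved as nfs-provisioner.yaml (an illustrative file name):

kubectl apply -f nfs-provisioner.yaml
kubectl get storageclass nfs-boge
kubectl -n kube-system get pods -l app=nfs-provisioner-01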

Create a PVC of a suitable size:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-es
spec:
  storageClassName: nfs-boge
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 800Mi

Running kubectl apply -f on these two files gives the following result:

[root@k8s-1 storage]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
pvc-5793ae2c-4ad4-4946-9be3-d786483dc9e3   800Mi      RWX            Delete           Bound    default/pvc-es   nfs-boge                3h22m
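
The claim side can be checked the same way; its STATUS should likewise read Bound:

kubectl get pvc pvc-es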

Create the ES Pod:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: elasticsearch-logging
    version: v6.8.13
  name: elasticsearch-logging
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v6.8.13
  serviceName: elasticsearch-logging
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v6.8.13
    spec:
      containers:
      - env:
        - name: cluster.name
          value: elasticsearch-logging-0
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        image: elastic/elasticsearch:6.8.13
        name: elasticsearch-logging
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: elasticsearch-logging
        - name: tz-config
          mountPath: /etc/localtime
      dnsConfig:
        options:
        - name: single-request-reopen

      initContainers:
      - command:
        - /sbin/sysctl
        - -w
        - vm.max_map_count=262144
        image: alpine:3.12
        imagePullPolicy: IfNotPresent
        name: elasticsearch-logging-init
        resources: {}
        securityContext:
          privileged: true
      - name: fix-permissions
        image: alpine:3.12
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data

      volumes:
      - name: elasticsearch-logging
        persistentVolumeClaim:
            claimName: pvc-es  # use the PVC declared above to persist the data
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: elasticsearch-logging
  name: elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
  type: NodePort
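
Once the Pod is Running, a quick sanity check (assuming the default namespace; port-forwarding avoids having to look up the NodePort):

kubectl port-forward svc/elasticsearch 9200:9200 &
curl -s http://127.0.0.1:9200/_cluster/health?pretty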

Create Kibana:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: elastic/kibana:6.8.13
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
  type: NodePort
  selector:
    app: kibana
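
The Service is of type NodePort, so look up the assigned port and open http://<node-ip>:<node-port> in a browser:

kubectl get svc kibana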

Create the Filebeat DaemonSet:

# ConfigMap holding the Filebeat configuration file
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-logs-filebeat-config
data:
  filebeat.yml: |-
     filebeat.inputs:
     - type: log
       enabled: true
       paths:
         - /nginx/access.log
       fields:
         source: "access"

     - type: log
       enabled: true
       paths:
         - /nginx/error.log
       fields:
         source: "error"
         
     # The input below collects Java-style (Tomcat) log files
     - type: log
       enabled: true
       paths:
         - /nginx/catalina.*.log
       multiline.pattern: '^\['
       multiline.negate: true
       multiline.match: "after"
       fields:
         source: "tomcat"

     # Ship to the Elasticsearch backend; the index name is chosen from the
     # source field (when.contains does substring matching, so "acc" matches "access")
     output.elasticsearch:
       hosts: ["elasticsearch:9200"]
       indices:
         - index: "nginx-acc-%{+yyyy.MM.dd}"
           when.contains:
             fields:
               source: "acc"
         - index: "nginx-err-%{+yyyy.MM.dd}"
           when.contains:
             fields:
               source: "err"
         - index: "tomcat-%{+yyyy.MM.dd}"
           when.contains:
             fields:
               source: "tomcat"
---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: k8s-logs
spec:
  selector:
    matchLabels:
      project: k8s
      app: filebeat
  template:
    metadata:
      labels:
        project: k8s
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: elastic/filebeat:6.8.13
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: nginx-logs
          mountPath: /nginx
        - name: tz-config
          mountPath: /etc/localtime
        - name: filebeat-data
          mountPath: /usr/share/filebeat/data/

      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
          
      # host directory shared with application Pods as the log directory
      - name: nginx-logs
        hostPath:
          path: /data
          type: DirectoryOrCreate
          
      - name: filebeat-data
        hostPath:
          path: /filebeat-data
          type: DirectoryOrCreate
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config
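
After applying this, one Filebeat Pod should be scheduled on every node:

kubectl get pods -l app=filebeat -o wide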

Finally, add an nginx Deployment whose log files exercise the collection pipeline:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:    # mount the nginx container's log directory onto the shared host path
          - name: html-files
            mountPath: "/var/log/nginx/"
          - name: tz-config
            mountPath: /etc/localtime
      nodeSelector:
        kubernetes.io/hostname: k8s-2
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: html-files
        hostPath:
          path: /data
          type: DirectoryOrCreate 
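
To test the pipeline end to end, generate some traffic against nginx and then list the indices Filebeat has created (the nginx Service is ClusterIP, so curl it from a cluster node; the second command reuses the port-forward to ES shown earlier):

# hit nginx to produce access-log entries in the shared host directory
curl -s http://$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}')/ > /dev/null

# the nginx-acc-* / nginx-err-* / tomcat-* indices should appear
curl -s 'http://127.0.0.1:9200/_cat/indices?v'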