Deploying EFK on Kubernetes 1.13.4

Prerequisites

You will need a properly running Kubernetes 1.13.4 cluster; you can refer to my earlier deployment posts to build your own.

Deployment Steps

  1. Write the YAML files required for deployment

    • es-psp-binding.yaml

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: gce:podsecuritypolicy:elasticsearch-logging
        namespace: kube-system
        labels:
          addonmanager.kubernetes.io/mode: Reconcile
          kubernetes.io/cluster-service: "true"
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: gce:podsecuritypolicy:privileged
      subjects:
      - kind: ServiceAccount
        name: elasticsearch-logging
        namespace: kube-system
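
      This binding only matters if the PodSecurityPolicy admission plugin is enabled on the apiserver (via --enable-admission-plugins); otherwise es-psp-binding.yaml can be skipped. A quick, rough check of whether the PSP API is even served:

      # If this prints nothing, the PSP API is not served and the binding is unnecessary.
      $ kubectl api-resources | grep -i podsecuritypolicy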
      
    • es-statefulset.yaml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: elasticsearch-logging
        namespace: kube-system
        labels:
          k8s-app: elasticsearch-logging
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
      ---
      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: elasticsearch-logging
        labels:
          k8s-app: elasticsearch-logging
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
      rules:
      - apiGroups:
        - ""
        resources:
        - "services"
        - "namespaces"
        - "endpoints"
        verbs:
        - "get"
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        namespace: kube-system
        name: elasticsearch-logging
        labels:
          k8s-app: elasticsearch-logging
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
      subjects:
      - kind: ServiceAccount
        name: elasticsearch-logging
        namespace: kube-system
        apiGroup: ""
      roleRef:
        kind: ClusterRole
        name: elasticsearch-logging
        apiGroup: ""
      ---
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: elasticsearch-logging
        namespace: kube-system
        labels:
          k8s-app: elasticsearch-logging
          version: v6.3.0
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
      spec:
        serviceName: elasticsearch-logging
        replicas: 2
        selector:
          matchLabels:
            k8s-app: elasticsearch-logging
            version: v6.3.0
        template:
          metadata:
            labels:
              k8s-app: elasticsearch-logging
              version: v6.3.0
              kubernetes.io/cluster-service: "true"
          spec:
            serviceAccountName: elasticsearch-logging
            containers:
            - image: docker.io/mirrorgooglecontainers/elasticsearch:v6.3.0
              name: elasticsearch-logging
              resources:
                limits:
                  cpu: 1000m
                requests:
                  cpu: 100m
              ports:
              - containerPort: 9200
                name: db
                protocol: TCP
              - containerPort: 9300
                name: transport
                protocol: TCP
              volumeMounts:
              - name: elasticsearch-logging
                mountPath: /data
              env:
              - name: "NAMESPACE"
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
            volumes:
            - name: elasticsearch-logging
              emptyDir: {}
            initContainers:
            - image: alpine:3.6
              command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
              name: elasticsearch-logging-init
              securityContext:
                privileged: true
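
      Two things worth noting: the manifest stores Elasticsearch data in an emptyDir, so indexed logs are lost whenever a pod is deleted or rescheduled (for anything beyond testing, consider swapping it for volumeClaimTemplates), and the init container raises vm.max_map_count on the host because Elasticsearch's bootstrap checks require at least 262144. After applying, you can watch the two pods come up:

      # Expect elasticsearch-logging-0 and elasticsearch-logging-1 to reach Running.
      $ kubectl get pods -n kube-system -l k8s-app=elasticsearch-logging -w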
      
    • fluentd-es-ds.yaml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: fluentd-es
        namespace: kube-system
        labels:
          k8s-app: fluentd-es
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
      ---
      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: fluentd-es
        labels:
          k8s-app: fluentd-es
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
      rules:
      - apiGroups:
        - ""
        resources:
        - "namespaces"
        - "pods"
        verbs:
        - "get"
        - "watch"
        - "list"
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: fluentd-es
        labels:
          k8s-app: fluentd-es
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
      subjects:
      - kind: ServiceAccount
        name: fluentd-es
        namespace: kube-system
        apiGroup: ""
      roleRef:
        kind: ClusterRole
        name: fluentd-es
        apiGroup: ""
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: fluentd-es-v2.2.0
        namespace: kube-system
        labels:
          k8s-app: fluentd-es
          version: v2.2.0
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
      spec:
        selector:
          matchLabels:
            k8s-app: fluentd-es
            version: v2.2.0
        template:
          metadata:
            labels:
              k8s-app: fluentd-es
              kubernetes.io/cluster-service: "true"
              version: v2.2.0
            annotations:
              scheduler.alpha.kubernetes.io/critical-pod: ''
              seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
          spec:
            priorityClassName: system-node-critical
            serviceAccountName: fluentd-es
            containers:
            - name: fluentd-es
              image: docker.io/mirrorgooglecontainers/fluentd-elasticsearch:v2.2.0
              env:
              - name: FLUENTD_ARGS
                value: --no-supervisor -q
              resources:
                limits:
                  memory: 500Mi
                requests:
                  cpu: 100m
                  memory: 200Mi
              volumeMounts:
              - name: varlog
                mountPath: /var/log
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
                readOnly: true
              - name: config-volume
                mountPath: /etc/fluent/config.d
            nodeSelector:
              beta.kubernetes.io/fluentd-ds-ready: "true"
            terminationGracePeriodSeconds: 30
            volumes:
            - name: varlog
              hostPath:
                path: /var/log
            - name: varlibdockercontainers
              hostPath:
                path: /var/lib/docker/containers
            - name: config-volume
              configMap:
                name: fluentd-es-config-v0.1.5
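
      Because of the nodeSelector above, this DaemonSet only schedules onto nodes labeled beta.kubernetes.io/fluentd-ds-ready=true. If kubectl shows 0 desired pods for the DaemonSet, label the nodes first; a sketch that labels every node in the cluster:

      # Label all nodes so the fluentd DaemonSet schedules onto each of them.
      $ for node in $(kubectl get nodes -o name); do kubectl label "$node" beta.kubernetes.io/fluentd-ds-ready=true --overwrite; done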
      
    • kibana-service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: kibana-logging
        namespace: kube-system
        labels:
          k8s-app: kibana-logging
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
          kubernetes.io/name: "Kibana"
      spec:
        ports:
        - port: 5601
          protocol: TCP
          targetPort: ui
        selector:
          k8s-app: kibana-logging
      
    • es-service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: elasticsearch-logging
        namespace: kube-system
        labels:
          k8s-app: elasticsearch-logging
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
          kubernetes.io/name: "Elasticsearch"
      spec:
        ports:
        - port: 9200
          protocol: TCP
          targetPort: db
        selector:
          k8s-app: elasticsearch-logging
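
      Once the StatefulSet and this Service are up, cluster health can be checked from inside the cluster. A sketch using a throwaway pod (the curlimages/curl image is an arbitrary choice; any image with curl works):

      # Query Elasticsearch cluster health through the Service DNS name.
      $ kubectl run es-check --rm -it --restart=Never -n kube-system --image=curlimages/curl -- curl -s 'http://elasticsearch-logging:9200/_cluster/health?pretty'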
      
    • fluentd-es-configmap.yaml

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: fluentd-es-config-v0.1.5
        namespace: kube-system
        labels:
          addonmanager.kubernetes.io/mode: Reconcile
      data:
        system.conf: |-
          <system>
            root_dir /tmp/fluentd-buffers/
          </system>
        containers.input.conf: |-
          <source>
            @id fluentd-containers.log
            @type tail
            path /var/log/containers/*.log
            pos_file /var/log/es-containers.log.pos
            tag raw.kubernetes.*
            read_from_head true
            <parse>
              @type multi_format
              <pattern>
                format json
                time_key time
                time_format %Y-%m-%dT%H:%M:%S.%NZ
              </pattern>
              <pattern>
                format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
                time_format %Y-%m-%dT%H:%M:%S.%N%:z
              </pattern>
            </parse>
          </source>
          <match raw.kubernetes.**>
            @id raw.kubernetes
            @type detect_exceptions
            remove_tag_prefix raw
            message log
            stream stream
            multiline_flush_interval 5
            max_bytes 500000
            max_lines 1000
          </match>
        system.input.conf: |-
          <source>
            @id minion
            @type tail
            format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
            time_format %Y-%m-%d %H:%M:%S
            path /var/log/salt/minion
            pos_file /var/log/salt.pos
            tag salt
          </source>
          <source>
            @id startupscript.log
            @type tail
            format syslog
            path /var/log/startupscript.log
            pos_file /var/log/es-startupscript.log.pos
            tag startupscript
          </source>
          <source>
            @id docker.log
            @type tail
            format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=(?<status_code>\d+))?/
            path /var/log/docker.log
            pos_file /var/log/es-docker.log.pos
            tag docker
          </source>
          <source>
            @id etcd.log
            @type tail
            format none
            path /var/log/etcd.log
            pos_file /var/log/es-etcd.log.pos
            tag etcd
          </source>
          <source>
            @id kubelet.log
            @type tail
            format multiline
            multiline_flush_interval 5s
            format_firstline /^\w\d{4}/
            format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
            time_format %m%d %H:%M:%S.%N
            path /var/log/kubelet.log
            pos_file /var/log/es-kubelet.log.pos
            tag kubelet
          </source>
          <source>
            @id kube-proxy.log
            @type tail
            format multiline
            multiline_flush_interval 5s
            format_firstline /^\w\d{4}/
            format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
            time_format %m%d %H:%M:%S.%N
            path /var/log/kube-proxy.log
            pos_file /var/log/es-kube-proxy.log.pos
            tag kube-proxy
          </source>
          <source>
            @id kube-apiserver.log
            @type tail
            format multiline
            multiline_flush_interval 5s
            format_firstline /^\w\d{4}/
            format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
            time_format %m%d %H:%M:%S.%N
            path /var/log/kube-apiserver.log
            pos_file /var/log/es-kube-apiserver.log.pos
            tag kube-apiserver
          </source>
          <source>
            @id kube-controller-manager.log
            @type tail
            format multiline
            multiline_flush_interval 5s
            format_firstline /^\w\d{4}/
            format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
            time_format %m%d %H:%M:%S.%N
            path /var/log/kube-controller-manager.log
            pos_file /var/log/es-kube-controller-manager.log.pos
            tag kube-controller-manager
          </source>
          <source>
            @id kube-scheduler.log
            @type tail
            format multiline
            multiline_flush_interval 5s
            format_firstline /^\w\d{4}/
            format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
            time_format %m%d %H:%M:%S.%N
            path /var/log/kube-scheduler.log
            pos_file /var/log/es-kube-scheduler.log.pos
            tag kube-scheduler
          </source>
          <source>
            @id glbc.log
            @type tail
            format multiline
            multiline_flush_interval 5s
            format_firstline /^\w\d{4}/
            format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
            time_format %m%d %H:%M:%S.%N
            path /var/log/glbc.log
            pos_file /var/log/es-glbc.log.pos
            tag glbc
          </source>
          <source>
            @id cluster-autoscaler.log
            @type tail
            format multiline
            multiline_flush_interval 5s
            format_firstline /^\w\d{4}/
            format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
            time_format %m%d %H:%M:%S.%N
            path /var/log/cluster-autoscaler.log
            pos_file /var/log/es-cluster-autoscaler.log.pos
            tag cluster-autoscaler
          </source>
          <source>
            @id journald-docker
            @type systemd
            matches [{ "_SYSTEMD_UNIT": "docker.service" }]
            <storage>
              @type local
              persistent true
              path /var/log/journald-docker.pos
            </storage>
            read_from_head true
            tag docker
          </source>
          <source>
            @id journald-container-runtime
            @type systemd
            matches [{ "_SYSTEMD_UNIT": "{{ fluentd_container_runtime_service }}.service" }]
            <storage>
              @type local
              persistent true
              path /var/log/journald-container-runtime.pos
            </storage>
            read_from_head true
            tag container-runtime
          </source>
          <source>
            @id journald-kubelet
            @type systemd
            matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
            <storage>
              @type local
              persistent true
              path /var/log/journald-kubelet.pos
            </storage>
            read_from_head true
            tag kubelet
          </source>
          <source>
            @id journald-node-problem-detector
            @type systemd
            matches [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
            <storage>
              @type local
              persistent true
              path /var/log/journald-node-problem-detector.pos
            </storage>
            read_from_head true
            tag node-problem-detector
          </source>
          <source>
            @id kernel
            @type systemd
            matches [{ "_TRANSPORT": "kernel" }]
            <storage>
              @type local
              persistent true
              path /var/log/kernel.pos
            </storage>
            <entry>
              fields_strip_underscores true
              fields_lowercase true
            </entry>
            read_from_head true
            tag kernel
          </source>
        forward.input.conf: |-
          <source>
            @type forward
          </source>
        monitoring.conf: |-
          <source>
            @type prometheus
          </source>
          <source>
            @type monitor_agent
          </source>
          <source>
            @type prometheus_monitor
            <labels>
              host ${hostname}
            </labels>
          </source>
          <source>
            @type prometheus_output_monitor
            <labels>
              host ${hostname}
            </labels>
          </source>
          <source>
            @type prometheus_tail_monitor
            <labels>
              host ${hostname}
            </labels>
          </source>
        output.conf: |-
          <filter kubernetes.**>
            @type kubernetes_metadata
          </filter>
          <match **>
            @id elasticsearch
            @type elasticsearch
            @log_level info
            type_name fluentd
            include_tag_key true
            host elasticsearch-logging
            port 9200
            logstash_format true
            <buffer>
              @type file
              path /var/log/fluentd-buffers/kubernetes.system.buffer
              flush_mode interval
              retry_type exponential_backoff
              flush_thread_count 2
              flush_interval 5s
              retry_forever
              retry_max_interval 30
              chunk_limit_size 2M
              queue_limit_length 8
              overflow_action block
            </buffer>
          </match>
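
      With logstash_format true, the elasticsearch output writes one index per day named logstash-YYYY.MM.DD, which is why the default logstash-* index pattern in step 3 matches. A rough way to confirm indices are appearing (same throwaway-pod trick as above):

      # List indices; entries like logstash-2019.03.15 should show up shortly.
      $ kubectl run es-indices --rm -it --restart=Never -n kube-system --image=curlimages/curl -- curl -s 'http://elasticsearch-logging:9200/_cat/indices?v'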
      
    • kibana-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: kibana-logging
        namespace: kube-system
        labels:
          k8s-app: kibana-logging
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
      spec:
        replicas: 1
        selector:
          matchLabels:
            k8s-app: kibana-logging
        template:
          metadata:
            labels:
              k8s-app: kibana-logging
            annotations:
              seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
          spec:
            containers:
            - name: kibana-logging
              image: docker.elastic.co/kibana/kibana-oss:6.3.2
              resources:
                limits:
                  cpu: 1000m
                requests:
                  cpu: 100m
              env:
                - name: ELASTICSEARCH_URL
                  value: http://elasticsearch-logging:9200
                - name: SERVER_BASEPATH
                  value: /api/v1/namespaces/kube-system/services/kibana-logging/proxy
              ports:
              - containerPort: 5601
                name: ui
                protocol: TCP
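
      SERVER_BASEPATH is set to the apiserver proxy path, so Kibana must be reached through kubectl proxy exactly as shown in step 3; hitting the pod directly will produce broken redirects. The kibana-oss image also spends a few minutes optimizing its bundles on first start, which can be followed with:

      # Tail Kibana's logs until the server reports it is running.
      $ kubectl -n kube-system logs -f deployment/kibana-logging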
      
      
  2. Run the commands to complete the deployment

    $ ls
    es-psp-binding.yaml  es-service.yaml         es-statefulset.yaml    fluentd-es-configmap.yaml
    fluentd-es-ds.yaml   kibana-deployment.yaml  kibana-service.yaml
    
    $ kubectl create -f .
    
    # In theory, applying everything at once like this works fine. To surface pitfalls one at a time, though, I deployed in this order: fluentd-es-configmap.yaml > fluentd-es-ds.yaml; then es-psp-binding.yaml > es-statefulset.yaml > es-service.yaml; and finally kibana-deployment.yaml > kibana-service.yaml. Offered for reference.
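    
    # A quick sanity check after applying (expect 2 elasticsearch pods, one
    # fluentd pod per labeled node, and 1 kibana pod):
    $ kubectl get pods -n kube-system -o wide | grep -E 'elasticsearch|fluentd|kibana'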
    
    
  3. Access Kibana

    Expose the Kibana service through kubectl proxy:

    $ kubectl proxy --address='10.67.34.130' --port=8086 --accept-hosts='^*$'
    

    Then visit the following URL in a browser:
    http://10.67.34.130:8086/api/v1/namespaces/kube-system/services/kibana-logging/proxy

    This brings up the Kibana web UI.

    First, we need to create an index pattern.

    In the navigation bar, click "Management" -> "Index Patterns"; the default index pattern of logstash-* is fine.

    Click "Next step".

    Leave "Time-field name" at its default, @timestamp,

    and finally click "Create" to finish creating the index pattern.
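
    If you prefer the command line, the same index pattern can be created through Kibana's saved objects API (a sketch for Kibana 6.x, going through the same kubectl proxy URL as above):

    $ curl -s -X POST \
        -H 'Content-Type: application/json' -H 'kbn-xsrf: true' \
        -d '{"attributes":{"title":"logstash-*","timeFieldName":"@timestamp"}}' \
        'http://10.67.34.130:8086/api/v1/namespaces/kube-system/services/kibana-logging/proxy/api/saved_objects/index-pattern/logstash-*'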

    After waiting a short while, open "Discover" in the sidebar and you should see log output flowing in.
