Setting Up an EFK Logging Stack on Kubernetes in Six Steps


 

Step 1: First, create a namespace in which all of the logging-related resources will be installed.

 

# vim kube-logging.yaml

 

kind: Namespace

apiVersion: v1

metadata:

  name: kube-logging

 

# kubectl create -f kube-logging.yaml

 

If the namespace is created successfully, kubectl prints:

namespace/kube-logging created

 

 

To list the namespaces:

# kubectl get namespaces
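The exact set of namespaces varies from cluster to cluster, but kube-logging should now appear in the list, for example (ages are illustrative):

NAME              STATUS    AGE
default           Active    5m
kube-logging      Active    10s
kube-public       Active    5m
kube-system       Active    5m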

 

 

Step 2: Next, deploy the EFK components, starting with a headless Service for Elasticsearch:

 

# vim elasticsearch_svc.yaml

kind: Service

apiVersion: v1

metadata:

  name: elasticsearch

  namespace: kube-logging

  labels:

    app: elasticsearch

spec:

  selector:

    app: elasticsearch

  clusterIP: None

  ports:

    - port: 9200

      name: rest

    - port: 9300

      name: inter-node

Note:

This defines a Service named elasticsearch with the selector app=elasticsearch. When the Elasticsearch StatefulSet is associated with this Service, the Service returns DNS A records for the Elasticsearch Pods that carry the app=elasticsearch label. Setting clusterIP: None makes it a headless Service. Finally, ports 9200 and 9300 are defined for the REST API and for inter-node communication, respectively.

 

Create the Service resource directly with kubectl:

# kubectl create -f elasticsearch_svc.yaml

service/elasticsearch created

# kubectl get services --namespace=kube-logging
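The output should show the headless Service with no cluster IP, roughly like this (the age is illustrative):

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   26s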

 

Step 2 (continued): Create the Elasticsearch StatefulSet

# vim elasticsearch_statefulset.yaml

apiVersion: apps/v1

kind: StatefulSet

metadata:

  name: es-cluster

  namespace: kube-logging

spec:

  serviceName: elasticsearch

  replicas: 3

  selector:

    matchLabels:

      app: elasticsearch

  template:

    metadata:

      labels:

        app: elasticsearch

This defines a StatefulSet named es-cluster and sets serviceName: elasticsearch to associate it with the Service created earlier. This ensures that each Pod in the StatefulSet is reachable at a DNS address of the form es-cluster-[0,1,2].elasticsearch.kube-logging.svc.cluster.local, where [0,1,2] is the Pod's assigned ordinal.

Three replicas are specified, and matchLabels is set to app=elasticsearch, so the Pod template's .spec.template.metadata.labels must also contain the app=elasticsearch label.

Next comes the Pod template section; the complete manifest follows:

 

 

# vim elasticsearch_statefulset.yaml

apiVersion: apps/v1

kind: StatefulSet

metadata:

  name: es-cluster

  namespace: kube-logging

spec:

  serviceName: elasticsearch

  replicas: 3

  selector:

    matchLabels:

      app: elasticsearch

  template:

    metadata:

      labels:

        app: elasticsearch

    spec:

      containers:

      - name: elasticsearch

        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3

        resources:

            limits:

              cpu: 1000m

            requests:

              cpu: 100m

        ports:

        - containerPort: 9200

          name: rest

          protocol: TCP

        - containerPort: 9300

          name: inter-node

          protocol: TCP

        volumeMounts:

        - name: data

          mountPath: /usr/share/elasticsearch/data

        env:

          - name: cluster.name

            value: k8s-logs

          - name: node.name

            valueFrom:

              fieldRef:

                fieldPath: metadata.name

          - name: discovery.zen.ping.unicast.hosts

            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"

          - name: discovery.zen.minimum_master_nodes

            value: "2"

          - name: ES_JAVA_OPTS

            value: "-Xms512m -Xmx512m"

      initContainers:

      - name: fix-permissions

        image: busybox

        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]

        securityContext:

          privileged: true

        volumeMounts:

        - name: data

          mountPath: /usr/share/elasticsearch/data

      - name: increase-vm-max-map

        image: busybox

        command: ["sysctl", "-w", "vm.max_map_count=262144"]

        securityContext:

          privileged: true

      - name: increase-fd-ulimit

        image: busybox

        command: ["sh", "-c", "ulimit -n 65536"]

        securityContext:

          privileged: true

  volumeClaimTemplates:

  - metadata:

      name: data

      labels:

        app: elasticsearch

    spec:

      accessModes: [ "ReadWriteOnce" ]

      storageClassName: do-block-storage

      resources:

        requests:

          storage: 100Gi
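Note: do-block-storage is the DigitalOcean StorageClass used in the referenced tutorial. On other clusters, replace it with a StorageClass that actually exists there (and size the 100Gi request to your needs). The StorageClasses available in your cluster can be listed with:

# kubectl get storageclass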

# kubectl create -f elasticsearch_statefulset.yaml

On success, this outputs:

statefulset.apps/es-cluster created

 

# kubectl rollout status sts/es-cluster --namespace=kube-logging

 

Output

Waiting for 3 pods to be ready...

Waiting for 2 pods to be ready...

Waiting for 1 pods to be ready...

partitioned roll out complete: 3 new pods have been updated...
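At this point each Pod is registered with the headless Service, so you can optionally verify the per-Pod DNS records from a temporary Pod (an extra sanity check, not part of the original tutorial):

# kubectl run -it --rm dns-test --image=busybox --restart=Never -n kube-logging -- nslookup es-cluster-0.elasticsearch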

In a separate terminal, forward local port 9200 to one of the Elasticsearch Pods:

# kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging

 

# curl "http://localhost:9200/_cluster/state?pretty"

Output

{

  "cluster_name" : "k8s-logs",

  "compressed_size_in_bytes" : 348,

  "cluster_uuid" : "QD06dK7CQgids-GQZooNVw",

  "version" : 3,

  "state_uuid" : "mjNIWXAzQVuxNNOQ7xR-qg",

  "master_node" : "IdM5B7cUQWqFgIHXBp0JDg",

  "blocks" : { },

  "nodes" : {

    "u7DoTpMmSCixOoictzHItA" : {

      "name" : "es-cluster-1",

      "ephemeral_id" : "ZlBflnXKRMC4RvEACHIVdg",

      "transport_address" : "10.244.8.2:9300",

      "attributes" : { }

    },

    "IdM5B7cUQWqFgIHXBp0JDg" : {

      "name" : "es-cluster-0",

      "ephemeral_id" : "JTk1FDdFQuWbSFAtBxdxAQ",

      "transport_address" : "10.244.44.3:9300",

      "attributes" : { }

    },

    "R8E7xcSUSbGbgrhAdyAKmQ" : {

      "name" : "es-cluster-2",

      "ephemeral_id" : "9wv6ke71Qqy9vk2LgJTqaA",

      "transport_address" : "10.244.40.4:9300",

      "attributes" : { }

    }

  },

...
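You can also check the overall cluster health with the standard _cluster/health API; with all three nodes joined, the status should be green:

# curl "http://localhost:9200/_cluster/health?pretty"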

 

Step 3: Create the Kibana Deployment and Service

 

# vim kibana.yaml

apiVersion: v1

kind: Service

metadata:

  name: kibana

  namespace: kube-logging

  labels:

    app: kibana

spec:

  ports:

  - port: 5601

  selector:

    app: kibana

---

apiVersion: apps/v1

kind: Deployment

metadata:

  name: kibana

  namespace: kube-logging

  labels:

    app: kibana

spec:

  replicas: 1

  selector:

    matchLabels:

      app: kibana

  template:

    metadata:

      labels:

        app: kibana

    spec:

      containers:

      - name: kibana

        image: docker.elastic.co/kibana/kibana-oss:6.4.3

        resources:

          limits:

            cpu: 1000m

          requests:

            cpu: 100m

        env:

          - name: ELASTICSEARCH_URL

            value: http://elasticsearch:9200

        ports:

        - containerPort: 5601

 

Step 4: Deploy Kibana:

# kubectl create -f kibana.yaml

Check the rollout status of the Kibana Deployment:

# kubectl rollout status deployment/kibana --namespace=kube-logging

# kubectl get pods --namespace=kube-logging

Forward local port 5601 to the Kibana Pod (the Pod name below is an example; use the name shown by the previous command):

# kubectl port-forward kibana-6c9fb4b5b7-plbg2 5601:5601 --namespace=kube-logging

Then open Kibana in a browser:

http://localhost:5601
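If a browser is not convenient (for example, when working on a remote host), a quick sanity check is Kibana's status API, assuming the port-forward above is still running:

# curl http://localhost:5601/api/status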

 

 

Step 5: Deploy Fluentd

Fluentd is an efficient log aggregator that consumes relatively few resources.

It is deployed here as a DaemonSet so that one Fluentd Pod runs on every node:

 

# vim fluentd.yaml

 

apiVersion: v1

kind: ServiceAccount

metadata:

  name: fluentd

  namespace: kube-logging

  labels:

    app: fluentd

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRole

metadata:

  name: fluentd

  labels:

    app: fluentd

rules:

- apiGroups:

  - ""

  resources:

  - pods

  - namespaces

  verbs:

  - get

  - list

  - watch

---

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: fluentd

roleRef:

  kind: ClusterRole

  name: fluentd

  apiGroup: rbac.authorization.k8s.io

subjects:

- kind: ServiceAccount

  name: fluentd

  namespace: kube-logging

---

apiVersion: apps/v1

kind: DaemonSet

metadata:

  name: fluentd

  namespace: kube-logging

  labels:

    app: fluentd

spec:

  selector:

    matchLabels:

      app: fluentd

  template:

    metadata:

      labels:

        app: fluentd

    spec:

      serviceAccount: fluentd

      serviceAccountName: fluentd

      tolerations:

      - key: node-role.kubernetes.io/master

        effect: NoSchedule

      containers:

      - name: fluentd

        image: fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch

        env:

          - name:  FLUENT_ELASTICSEARCH_HOST

            value: "elasticsearch.kube-logging.svc.cluster.local"

          - name:  FLUENT_ELASTICSEARCH_PORT

            value: "9200"

          - name: FLUENT_ELASTICSEARCH_SCHEME

            value: "http"

          - name: FLUENT_UID

            value: "0"

        resources:

          limits:

            memory: 512Mi

          requests:

            cpu: 100m

            memory: 200Mi

        volumeMounts:

        - name: varlog

          mountPath: /var/log

        - name: varlibdockercontainers

          mountPath: /var/lib/docker/containers

          readOnly: true

      terminationGracePeriodSeconds: 30

      volumes:

      - name: varlog

        hostPath:

          path: /var/log

      - name: varlibdockercontainers

        hostPath:

          path: /var/lib/docker/containers

 

# kubectl create -f fluentd.yaml

# kubectl get pods -n kube-logging

# kubectl get ds --namespace=kube-logging
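Optionally, before opening Kibana, you can confirm that Fluentd is shipping logs into Elasticsearch by listing the indices (this assumes the earlier port-forward to es-cluster-0 is still active; with this image's default settings Fluentd normally writes to logstash-YYYY.MM.DD indices):

# curl "http://localhost:9200/_cat/indices?v"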

 

Once Fluentd is up and running, open the Kibana dashboard at http://localhost:5601 (assuming the Kibana port-forward is still running), click Discover in the left sidebar, and you will see the index pattern configuration page.

 

 

Step 6: Test:

Create a counter.yaml file:

# vim counter.yaml

apiVersion: v1

kind: Pod

metadata:

  name: counter

spec:

  containers:

  - name: count

    image: busybox

    args: [/bin/sh, -c,

            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

 

# kubectl create -f counter.yaml

 

This Pod simply prints log messages to stdout, so Fluentd should pick up the log data, and the corresponding log entries should then be visible in Kibana.
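To verify, you can first confirm that the Pod is producing output with kubectl logs, and then search for it in Kibana's Discover page. The field name below is the one usually added by the Fluentd kubernetes metadata filter, but it may differ depending on configuration:

# kubectl logs counter

In the Kibana Discover search bar, a query such as kubernetes.pod_name:counter should return the counter Pod's log lines.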

Reference:

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes
