【EFK】Building an EFK + Logstash + Kafka Log Platform on K8S

【4】Transport options

1. output.elasticsearch
If you want filebeat to ship data directly to elasticsearch, configure output.elasticsearch:
output.elasticsearch:
  hosts: ["192.168.40.180:9200"]
2. output.logstash
If filebeat ships data to logstash, and logstash in turn ships it to elasticsearch, configure output.logstash. When logstash and filebeat work together, logstash tells filebeat to slow down its reads whenever logstash is busy processing data. Once the congestion clears, filebeat returns to its original speed and keeps shipping. This back-pressure reduces the risk of overloading the pipeline.
output.logstash:
  hosts: ["192.168.40.180:5044"]
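On the logstash side, a minimal pipeline for this mode might look like the following sketch (the file name beats-to-es.conf is an assumption; adjust the hosts to your environment):

# beats-to-es.conf (hypothetical): receive events from filebeat and index them into elasticsearch
input {
  beats {
    port => 5044                      # must match output.logstash.hosts in filebeat.yml
  }
}
output {
  elasticsearch {
    hosts => ["192.168.40.180:9200"]
  }
}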

3. output.kafka
If filebeat ships data to kafka, and logstash then acts as a consumer, pulling the logs from kafka and shipping them on to elasticsearch, configure output.kafka:
output.kafka:
  enabled: true
  hosts: ["192.168.40.180:9092"]
  topic: elfk8stest
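On the logstash side, a minimal consumer pipeline might look like the following sketch (the file name kafka-to-es.conf, the group_id, and the index naming are assumptions):

# kafka-to-es.conf (hypothetical): pull logs from kafka and index them into elasticsearch
input {
  kafka {
    bootstrap_servers => "192.168.40.180:9092"
    topics => ["elfk8stest"]        # the topic filebeat writes to
    group_id => "logstash"          # consumer group name (assumption)
    codec => "json"                 # filebeat publishes JSON events
  }
}
output {
  elasticsearch {
    hosts => ["192.168.40.180:9200"]
    index => "elfk8stest-%{+YYYY.MM.dd}"   # one index per day (assumption)
  }
}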

2.3 The Logstash component

2.4 The Fluentd component

2.5 Comparing fluentd, filebeat, and logstash

3. Installing the EFK Components

Before installing the Elasticsearch cluster, we first create a namespace in which to install the log-collection tools elasticsearch, fluentd, and kibana. We create a namespace named kube-logging and install the EFK components into it.

kubectl create ns kube-logging

3.1 Installing Elasticsearch

First, we deploy an Elasticsearch cluster with 3 nodes.
Using 3 Elasticsearch pods avoids the "split-brain" problem that can occur in a highly available multi-node cluster.
For background on split-brain, see:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#split-brain
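The short version: with 3 master-eligible nodes, electing a master requires a quorum of floor(3/2) + 1 = 2 votes, so even if the network partitions, at most one side can reach quorum and the cluster can never end up with two masters at once. With only 2 nodes, no quorum size both survives a partition and prevents a double election. In Elasticsearch 7.x this voting configuration is managed automatically; we only seed it via cluster.initial_master_nodes below.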

【1】Create the headless Service
[root@master 4]# cat elasticsearch_svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node

This defines a Service named elasticsearch in the kube-logging namespace, carrying the label app: elasticsearch. When we later associate the Elasticsearch StatefulSet with this Service, the Service will return DNS A records for the Elasticsearch pods labeled app: elasticsearch. Setting clusterIP: None makes it a headless service. Finally, we define ports 9200 and 9300, used for the REST API and for inter-node communication respectively.

[root@master 4]# kubectl apply -f elasticsearch_svc.yaml
service/elasticsearch created
[root@master 4]# kubectl get svc -n kube-logging
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   20s

Now that the pods have a headless service and a stable domain name, elasticsearch.kube-logging.svc.cluster.local, we can create the actual Elasticsearch pods with a StatefulSet.
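As an optional sanity check, the headless service's DNS can be resolved from a temporary pod; busybox:1.28 is chosen here because its nslookup works reliably against cluster DNS. Note that a headless service only returns A records for ready pods, so run this again once the StatefulSet from 【3】 is up:

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -n kube-logging \
  -- nslookup elasticsearch.kube-logging.svc.cluster.local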

【2】Create a StorageClass for dynamic storage provisioning

1. Install NFS

[root@master 4]# yum install nfs-utils -y
[root@node01 ~]# yum install nfs-utils -y
[root@node02 ~]#yum install nfs-utils -y
 
[root@master 4]# systemctl start nfs
[root@node01 ~]# systemctl start nfs
[root@node02 ~]# systemctl start nfs

[root@master 4]# systemctl enable nfs.service
[root@node01 ~]# systemctl enable nfs.service
[root@node02 ~]# systemctl enable nfs.service

2. Create the shared directory on the master

[root@master 4]# mkdir /data/v1 -p
# Edit the /etc/exports file
[root@master 4]# vim /etc/exports
/data/v1 10.32.1.0/24(rw,no_root_squash)
# Reload the configuration so it takes effect
[root@master 4]# exportfs -arv
exporting 10.32.1.0/24:/data/v1
[root@master 4]# systemctl restart nfs
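Optionally, confirm from a worker node that the export is visible (it should list /data/v1 for 10.32.1.0/24):

[root@node01 ~]# showmount -e 10.32.1.147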

3. Create the NFS storage provisioner

  • Create the ServiceAccount
    kubectl create sa nfs-provisioner
  • Grant RBAC permissions to the ServiceAccount
[root@master 4]# cat rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get","list","watch","create","delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io

[root@master 4]# kubectl apply -f rbac.yaml
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-provisioner created

  • Create the pod
    Upload nfs-client-provisioner.tar.gz to the worker nodes and import it manually:
    ctr -n=k8s.io image import nfs-client-provisioner.tar.gz
    Create the pod that runs nfs-provisioner via a Deployment:
[root@master 4]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-provisioner
        # If this provisioner image gives you trouble, swap in another one; I eventually switched to
        # registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner (the image must be present on the node)
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/xianchao/nfs-client-provisioner:v1
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: example.com/nfs
        - name: NFS_SERVER
          value: 10.32.1.147
          # The IP address of the NFS server; use the IP of the machine where you installed NFS
        - name: NFS_PATH
          value: /data/v1
      volumes:
        # The directory shared by the NFS server
        - name: nfs-client-root
          nfs:
            server: 10.32.1.147
            path: /data/v1

[root@master 4]# kubectl apply -f deployment.yaml
deployment.apps/nfs-provisioner configured
[root@master 4]# kubectl get pods -owide| grep nfs
nfs-provisioner-5fb64dc877-4pzbk   1/1     Running   0             6m49s   10.244.196.143   node01   <none>           <none>

  • Create the StorageClass
[root@master 4]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage
provisioner: example.com/nfs
[root@master 4]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/do-block-storage created
[root@master 4]# kubectl get sc
NAME               PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
do-block-storage   example.com/nfs   Delete          Immediate           false                  70m
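Optionally, smoke-test the provisioner with a throwaway PVC (the file and claim name test-claim are hypothetical); it should reach the Bound state within seconds, and a matching subdirectory should appear under /data/v1 on the NFS server:

[root@master 4]# cat test-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 1Gi

[root@master 4]# kubectl apply -f test-claim.yaml
[root@master 4]# kubectl get pvc test-claim
[root@master 4]# kubectl delete -f test-claim.yaml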

【3】Install the Elasticsearch cluster

Upload elasticsearch_7_2_0.tar.gz and busybox.tar.gz to the worker nodes node01 and node02, then import them manually:

[root@node01 package]# ctr -n=k8s.io image import elasticsearch_7_2_0.tar.gz
[root@node02 package]# ctr -n=k8s.io image import elasticsearch_7_2_0.tar.gz

Update and apply the YAML file:

[root@master 4]# cat elasticsearch-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
          - containerPort: 9200
            name: rest
            protocol: TCP
          - containerPort: 9300
            name: inter-node
            protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max\_map\_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 5Gi

Explanation of the above: this defines a StatefulSet named es-cluster in the kube-logging namespace.
The serviceName field associates it with the headless elasticsearch Service we created earlier. This guarantees that each pod in the StatefulSet is reachable at a DNS address of the form es-cluster-[0,1,2].elasticsearch.kube-logging.svc.cluster.local, where [0,1,2] is the ordinal assigned to the pod.
We specify 3 replicas (3 pod copies) and set the selector matchLabels to app: elasticsearch. The .spec.selector.matchLabels and .spec.template.metadata.labels fields must match.
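Apply the StatefulSet and wait until all three pods are Ready:

[root@master 4]# kubectl apply -f elasticsearch-statefulset.yaml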

[root@master 4]# kubectl get pods -n kube-logging
NAME           READY   STATUS    RESTARTS   AGE
es-cluster-0   1/1     Running   0          10m
es-cluster-1   1/1     Running   0          10m
es-cluster-2   1/1     Running   0          10m
[root@master 4]# kubectl get svc -n kube-logging
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   6h15m


Once the pods are deployed, we can check via the REST API whether the Elasticsearch cluster came up correctly. Use the following command to forward local port 9200 to the corresponding port on an Elasticsearch node (e.g. es-cluster-0):

kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging

Then, in a separate terminal window (open a new terminal on master1), issue the following request:

[root@master 4]# curl http://localhost:9200/_cluster/state?pretty | head -50
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 10  146k   10 16294    0     0   310k      0 --:--:-- --:--:-- --:--:--     0
{
  "cluster_name" : "k8s-logs",
  "cluster_uuid" : "GCzlBOZnT8abADeTCJyrRg",
  "version" : 17,
  "state_uuid" : "yIAM6AEzSdOCgK9FHmuriQ",
  "master_node" : "7UL9lwt2Qa-Rx9S-5hm3tQ",
  "blocks" : { },
  "nodes" : {
    "6FkyqeBnQ9GGJjqxmIK4OA" : {
      "name" : "es-cluster-2",
      "ephemeral_id" : "T8DdDj6tQSm-mERp4rkrNg",
      "transport_address" : "10.244.196.142:9300",
      "attributes" : {
        "ml.machine_memory" : "8201035776",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      }
    },
    "7UL9lwt2Qa-Rx9S-5hm3tQ" : {
      "name" : "es-cluster-0",
      "ephemeral_id" : "JFYS2bHqTD-FaxV5IpACWQ",
      "transport_address" : "10.244.196.135:9300",
      "attributes" : {
        "ml.machine_memory" : "8201035776",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20"
      }
    },
    "QRd7XeJ5TtO-bdaru3wAkg" : {
      "name" : "es-cluster-1",
      "ephemeral_id" : "uG4ZE_N8QGGNDyXnE4puSQ",
      "transport_address" : "10.244.140.105:9300",
      "attributes" : {
        "ml.machine_memory" : "8201248768",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      }
    }
  },
# The output above shows that our Elasticsearch cluster named k8s-logs was created successfully with 3 nodes:
# es-cluster-0, es-cluster-1, and es-cluster-2
# The current master node is es-cluster-0
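Besides the full cluster state, the health endpoint gives a compact summary; with all three nodes up it should report "status" : "green" and "number_of_nodes" : 3 (run this through the same port-forward):

curl http://localhost:9200/_cluster/health?pretty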

3.2 Installing the Kibana visualization UI

[root@master 4]# cat kibana.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.2.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_HOSTS   # Kibana 7.x reads ELASTICSEARCH_HOSTS; ELASTICSEARCH_URL was the 6.x name
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601

[root@master 4]# kubectl apply -f kibana.yaml
[root@master 4]# kubectl get pods -n kube-logging
NAME                     READY   STATUS    RESTARTS   AGE
es-cluster-0             1/1     Running   0          41m
es-cluster-1             1/1     Running   0          41m
es-cluster-2             1/1     Running   0          40m
kibana-69f46c6bd-vm7rh   1/1     Running   0          18m
[root@master 4]# kubectl get svc -n kube-logging
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None             <none>        9200/TCP,9300/TCP   6h43m
kibana          ClusterIP   10.109.118.117   <none>        5601/TCP            18m

# Change the service type to NodePort:
[root@master 4]# kubectl edit svc kibana -n kube-logging
[root@master 4]# kubectl get svc -n kube-logging
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None             <none>        9200/TCP,9300/TCP   6h44m
kibana          NodePort    10.109.118.117   <none>        5601:31598/TCP      20m
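Equivalently, the type can be switched non-interactively; this one-liner is an alternative to the kubectl edit step above:

kubectl patch svc kibana -n kube-logging -p '{"spec":{"type":"NodePort"}}'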

Open http://<any-k8s-node-IP>:31598 in a browser; if you see the Kibana welcome page, Kibana has been successfully deployed to the Kubernetes cluster.

3.3 Installing the Fluentd component

Upload the image to every node (both the master and the worker nodes), then import it:

[root@master 4]# ctr -n=k8s.io images import fluentd-v1-9-1.tar.gz
[root@node01 package]# ctr -n=k8s.io images import fluentd-v1-9-1.tar.gz
[root@node02 package]# ctr -n=k8s.io images import fluentd-v1-9-1.tar.gz

[root@master 4]# cat fluentd.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluentd:v1.9.1-debian-1.0
        imagePullPolicy: IfNotPresent
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.kube-logging.svc.cluster.local"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
          - name: FLUENT_CONTAINER_TAIL_PARSE_TYPE  # Note: these 4 lines (this env var and the next) are required when containerd is the container runtime; with docker they are not needed
            value: "cri"
          - name: FLUENT_CONTAINER_TAIL_PARSE_TIME_FORMAT
            value: "%Y-%m-%dT%H:%M:%S.%L%z"
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

[root@master 4]# kubectl apply -f fluentd.yaml
serviceaccount/fluentd unchanged
clusterrole.rbac.authorization.k8s.io/fluentd unchanged
clusterrolebinding.rbac.authorization.k8s.io/fluentd unchanged
daemonset.apps/fluentd created

[root@master 4]# kubectl get pods -n kube-logging
NAME                     READY   STATUS    RESTARTS   AGE
es-cluster-0             1/1     Running   0          17h
es-cluster-1             1/1     Running   0          17h
es-cluster-2             1/1     Running   0          17h
fluentd-8fzqg            1/1     Running   0          23s
fluentd-fjhgg            1/1     Running   0          23s
fluentd-vlhn6            1/1     Running   0          23s
kibana-69f46c6bd-vm7rh   1/1     Running   0          16h

Once Fluentd has started successfully, we can go to the Kibana dashboard and click Discover on the left to reach the index-pattern configuration page:
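Before creating an index pattern, it is worth confirming that fluentd is actually writing indices. This fluentd image is expected to use logstash_format by default, so the indices should be named logstash-YYYY.MM.DD; an optional check through the port-forward from 3.1:

curl http://localhost:9200/_cat/indices?v

Then, on the Discover page, create an index pattern such as logstash-* to browse the collected logs.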

