kubernetes-EFK

EFK

The mainstream ELK stack (Elasticsearch, Logstash, Kibana) has largely evolved into EFK (Elasticsearch, Filebeat or Fluentd, Kibana), and for logging in container clouds the industry generally recommends Fluentd.

Logstash supports all mainstream log types and has the richest plugin ecosystem, allowing flexible customization, but its performance is comparatively poor and its JVM tends to drive memory usage high.

Fluentd supports all mainstream log types, has broad plugin support, and performs well.
Logtail consumes the least CPU and memory on the host, and offers a good end-to-end experience when paired with Alibaba Cloud Log Service.

Notes:

In production, the logs collected by Fluentd can be shipped to an external Elasticsearch cluster. Deploying Elasticsearch separately is recommended because it consumes a lot of memory; the external ES cluster can then be exposed to in-cluster workloads through a Service (see the sketch below). If ES must run inside the Kubernetes cluster, deploy it on three dedicated nodes.
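
A minimal sketch of that Service approach, assuming a hypothetical external-es name and placeholder node IPs: a selector-less Service paired with manually managed Endpoints lets in-cluster clients reach the external cluster at http://external-es.logging:9200.

apiVersion: v1
kind: Service
metadata:
  name: external-es            # hypothetical name
  namespace: logging
spec:
  ports:                       # no selector: endpoints are managed manually below
  - port: 9200
    protocol: TCP
    targetPort: 9200
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-es            # must match the Service name
  namespace: logging
subsets:
- addresses:                   # placeholder IPs of the external ES nodes
  - ip: 192.168.1.10
  - ip: 192.168.1.11
  - ip: 192.168.1.12
  ports:
  - port: 9200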

1. fluentd-elasticsearch-kibana deployment

Official repository: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

1. Download the relevant YAML files

[root@VM-0-7-centos ~]# git clone https://github.com/kubernetes/kubernetes.git
[root@VM-0-7-centos ~]# cd kubernetes/cluster/addons/fluentd-elasticsearch

2. Deploy Elasticsearch

[root@VM-0-7-centos fluentd-elasticsearch]# kubectl apply -f create-logging-namespace.yaml 
[root@VM-0-7-centos fluentd-elasticsearch]# kubectl apply -f es-statefulset.yaml 
[root@VM-0-7-centos fluentd-elasticsearch]# kubectl apply -f es-service.yaml

2.1 Check the Elasticsearch pod status

[root@VM-0-7-centos ~]# kubectl get po -n logging
NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-logging-0   1/1     Running   0          3m58s
elasticsearch-logging-1   1/1     Running   0          49s

2.2 Check the Elasticsearch cluster health

# Change the Service type to NodePort; clusterIP: None must stay commented out, since a headless Service cannot be of type NodePort
[root@VM-0-7-centos fluentd-elasticsearch]# cat es-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
#  clusterIP: None
  ports:
    - name: db
      port: 9200
      protocol: TCP
      targetPort: 9200
    - name: transport
      port: 9300
      protocol: TCP
      targetPort: 9300
  publishNotReadyAddresses: true
  selector:
    k8s-app: elasticsearch-logging
  sessionAffinity: None
  type: NodePort
[root@VM-0-7-centos fluentd-elasticsearch]# kubectl get svc -n logging
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
elasticsearch-logging   NodePort   10.247.252.38    <none>        9200:30748/TCP,9300:31191/TCP   3m43s
[root@VM-0-7-centos fluentd-elasticsearch]# curl 10.247.252.38:9200/_cat/health?pretty
1629878634 08:03:54 kubernetes-logging yellow 1 1 2 2 0 0 1 0 - 66.7%
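
The yellow status indicates unassigned replica shards, which is expected while only one data node has joined. Appending ?v to _cat endpoints prints column headers, which makes the health line easier to read:

curl 10.247.252.38:9200/_cat/health?v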

3. Deploy Fluentd

[root@VM-0-7-centos fluentd-elasticsearch]# kubectl apply -f fluentd-es-configmap.yaml 
[root@VM-0-7-centos fluentd-elasticsearch]# kubectl apply -f fluentd-es-ds.yaml 

* Applying fluentd-es-ds.yaml directly fails on some clusters; comment out the securityContext block in the DaemonSet spec.
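
A hedged sketch of the block to comment out, assuming your copy of the upstream DaemonSet carries the pod-level seccompProfile securityContext (field names may differ in your revision; API servers before v1.19 reject seccompProfile):

#      securityContext:
#        seccompProfile:
#          type: RuntimeDefault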

3.1 Check the Fluentd status

[root@VM-0-7-centos fluentd-elasticsearch]# kubectl get po -n logging
NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-logging-0   1/1     Running   0          8m54s
fluentd-es-v3.1.1-cjjgg   1/1     Running   0          16s
fluentd-es-v3.1.1-jhrkr   1/1     Running   0          16s
fluentd-es-v3.1.1-xt77z   1/1     Running   0          16s

4. Deploy Kibana

[root@VM-0-7-centos fluentd-elasticsearch]# kubectl apply -f kibana-deployment.yaml
[root@VM-0-7-centos fluentd-elasticsearch]# kubectl apply -f kibana-service.yaml

* Applying kibana-deployment.yaml directly may also fail; comment out its securityContext block, as with the Fluentd DaemonSet above.

4.1 Check the Kibana status

[root@VM-0-7-centos fluentd-elasticsearch]# kubectl get po -n logging
NAME                              READY   STATUS    RESTARTS   AGE
elasticsearch-logging-0           1/1     Running   0          24m
fluentd-es-v3.1.1-cjjgg           1/1     Running   0          15m
fluentd-es-v3.1.1-jhrkr           1/1     Running   0          15m
fluentd-es-v3.1.1-xt77z           1/1     Running   0          15m
kibana-logging-6fc8c7c7b8-b5rqt   1/1     Running   0          13s

4.2 Change the Kibana Service type to NodePort

[root@VM-0-7-centos fluentd-elasticsearch]# cat kibana-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: logging
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
  type: NodePort

4.3 Check the Service and access Kibana

[root@VM-0-7-centos fluentd-elasticsearch]# kubectl get svc -n logging
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
elasticsearch-logging   NodePort   10.247.252.38    <none>        9200:30748/TCP,9300:31191/TCP   8m36s
kibana-logging          NodePort   10.247.254.105   <none>        5601:31125/TCP                  5m3s

4.4 Access the Kibana URL

  • http://<node-ip>:<NodePort>, for example:
    http://1.14.97.124:31125
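
Before opening the browser, Kibana's status API can be queried to confirm it is up (assuming the NodePort shown above):

curl -s http://1.14.97.124:31125/api/status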

ELK

Filebeat + Kafka/Redis + Logstash + ES + Kibana deployment

Repository: https://github.com/dotbalo/k8s/tree/master/efk-7.10.2

1. Download the relevant YAML files

[root@VM-0-7-centos ~]# git clone https://github.com/dotbalo/k8s.git
[root@VM-0-7-centos ~]# cp -rp k8s/efk-7.10.2 ./elk
[root@VM-0-7-centos ~]# rm -fr k8s/

2. Deploy Filebeat

# 1. Write the Filebeat ConfigMap manifest
[root@k8s-master01 filebeat]# vim filebeat-cm.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeatconf
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log   # "input_type" is the pre-6.x name; Filebeat 7.x expects "type"
      paths:
        - /data/log/*/*.log
      tail_files: true
      fields:
        pod_name: '${podName}'
        pod_ip: '${podIp}'
        pod_deploy_name: '${podDeployName}'
        pod_namespace: '${podNamespace}'
      tags: [test-filebeat]
    output.kafka:
      hosts: ["kafka:9092"]
      topic: "test-filebeat"
      codec.json:
        pretty: false
      keep_alive: 30s
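    # The ${podName}/${podIp}/${podDeployName}/${podNamespace} placeholders are
    # expanded by Filebeat from container environment variables, which app.yaml
    # in step 5 injects via the Kubernetes Downward API.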
      
# 2. Write the Logstash ConfigMap manifest
[root@k8s-master01 filebeat]# vim logstash-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    # all input will come from filebeat, no local logs
    input {
      kafka {
              enable_auto_commit => true
              auto_commit_interval_ms => "1000"
              bootstrap_servers => "kafka:9092"
              topics => ["test-filebeat"]
              codec => json
          }
    }
    output {
       stdout{ codec=>rubydebug}
       if [fields][pod_namespace] =~ "logging" {
           elasticsearch {
             hosts => ["elasticsearch-logging:9200"]
             index => "%{[fields][pod_namespace]}-s-%{+YYYY.MM.dd}"
          }
       } else {
          elasticsearch {
             hosts => ["elasticsearch-logging:9200"]
             index => "no-index-%{+YYYY.MM.dd}"
          }
       }
    }
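# Note: "=~" in the conditional above is a regex match, so any namespace containing
# "logging" (e.g. "mylogging") also matches; use == "logging" for an exact comparison.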
# 3. logstash-service.yaml
[root@k8s-master01 filebeat]# vim logstash-service.yaml 
kind: Service
apiVersion: v1
metadata:
  name: logstash-service
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 5044
    targetPort: 5044
  type: ClusterIP
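# Note: with the Kafka input above, nothing connects to port 5044 (the Beats port);
# this Service only matters if Filebeat is later pointed at Logstash directly.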
# 4. logstash-deploy.yaml
[root@k8s-master01 filebeat]# vim logstash-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 1
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash-oss:7.10.2
        ports:
        - containerPort: 5044
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/logstash/config
          - name: logstash-pipeline-volume
            mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.yml
              path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.conf
              path: logstash.conf
# 5. Write the test app manifest; Filebeat runs as a sidecar in the same pod as the app container and shares its log directory
[root@k8s-master01 filebeat]# vim app.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
    env: release
spec:
  selector:
    matchLabels:
      app: app
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  # minReadySeconds: 30
  template:
    metadata:
      labels:
        app: app
    spec:
      nodeSelector:
         kubernetes.io/hostname: k8s-node02
      containers:
        - name: filebeat                        
          image: elastic/filebeat:7.10.2
          resources:
            requests:
              memory: "100Mi"
              cpu: "10m"
            limits:
              cpu: "200m"
              memory: "300Mi"
          imagePullPolicy: IfNotPresent
          env:
            - name: podIp
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: podName
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: podNamespace
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: podDeployName
              value: app
            - name: TZ
              value: "Asia/Shanghai"
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: logpath
              mountPath: /data/log/app/
            - name: filebeatconf
              mountPath: /usr/share/filebeat/filebeat.yml 
              subPath: usr/share/filebeat/filebeat.yml
        - name: app
          image: alpine:3.6 
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: logpath
              mountPath: /home/tomcat/target/
            - name: tz-config
              mountPath: /etc/localtime
            - mountPath: /usr/share/zoneinfo/Asia/Shanghai
              name: tz-config
            - mountPath: /etc/timezone
              name: timezone
          env:
            - name: TZ
              value: "Asia/Shanghai"
            - name: LANG
              value: C.UTF-8
            - name: LC_ALL
              value: C.UTF-8
            - name: ENV
              value: k8srelease
            - name: XMS
              value: "2048m"
            - name: XMX
              value: "2048m"
            - name: MEMORY_LIMIT
              valueFrom:
                resourceFieldRef:
                  resource: requests.memory
                  divisor: 1Mi
          command:
            - sh
            - -c
            - sleep 360000
          ports:
            - containerPort: 8080
              name: tomcat
      volumes:
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - hostPath:
            path: /etc/timezone
            type: ""
          name: timezone
        - name: logpath
          emptyDir: {}
        - name: filebeatconf
          configMap:
            name: filebeatconf
            items:
              - key: filebeat.yml
                path: usr/share/filebeat/filebeat.yml
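# The subPath mount in the filebeat container overrides only filebeat.yml instead of
# masking the whole /usr/share/filebeat directory; the subPath value must match the
# "path" set in the ConfigMap items above.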
# 6. Apply the manifests
[root@k8s-master01 filebeat]# kubectl create -f filebeat-cm.yaml -n logging
[root@k8s-master01 filebeat]# kubectl create -f logstash-cm.yaml -n logging
[root@k8s-master01 filebeat]# kubectl create -f logstash-service.yaml -n logging
[root@k8s-master01 filebeat]# kubectl create -f logstash-deploy.yaml -n logging
[root@k8s-master01 filebeat]# kubectl create -f app.yaml -n logging
# 7. Verify that the pods are running
[root@k8s-master01 kafka]# kubectl get pod -n logging 
NAME                                   READY   STATUS    RESTARTS   AGE
app-65cc5f557b-g7kqs                   2/2     Running   0          21m
elasticsearch-logging-0                1/1     Running   2          22h
elasticsearch-logging-1                1/1     Running   2          22h
kafka-0                                1/1     Running   0          22h
kafka-1                                1/1     Running   0          22h
kafka-2                                1/1     Running   0          22h
kibana-logging-6778fdcd79-7gcvd        1/1     Running   7          21h
logstash-deployment-85bcdd4755-9v9l2   1/1     Running   3          25m
zookeeper-0                            1/1     Running   0          22h
zookeeper-1                            1/1     Running   0          22h
zookeeper-2                            1/1     Running   0          22h
# 8. Simulate log reporting

# Before simulating, tail the Logstash logs so the incoming events are visible
[root@k8s-master01 ~]# kubectl logs -f -n logging logstash-deployment-85bcdd4755-9v9l2

# Exec into the test container and generate some log lines
[root@k8s-master01 filebeat]# kubectl exec -it -n logging app-65cc5f557b-2brc5 -c app -- sh
/ # touch /home/tomcat/target/app.log
/ # for i in `seq 1 2000` ; do echo $i >> /home/tomcat/target/app.log ; done
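
Given the routing in logstash.conf above, these events should land in a daily logging-s-YYYY.MM.dd index. A quick check against the NodePort ES Service from part 1 (the IP is environment-specific):

curl -s 10.247.252.38:9200/_cat/indices?v | grep logging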

Open the Kibana page, create an index pattern, and verify that logs are arriving.
