Collecting K8S Application Logs with ELK

K8S log collection flow

Applications running in Pods write their logs to the container's stdout/stderr. Filebeat can therefore collect the /var/log/containers/*.log files directly on each node and ship the entries to a Kafka message queue; Logstash consumes them from Kafka, formats them, and writes them into Elasticsearch for storage; Kibana then connects to Elasticsearch to display the indexed data.

Data flow: Pod -> /var/log/containers/*.log -> Filebeat -> Kafka cluster -> Logstash -> Elasticsearch -> Kibana
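On a Docker-based node, the files under /var/log/containers are symlinks that ultimately resolve into /var/lib/docker/containers, which is why the DaemonSet below mounts both paths read-only. A quick sanity check on any node (illustrative; actual pod and container IDs will differ):

$ ls -l /var/log/containers/ | head -n 3
# each <pod>_<namespace>_<container>-<id>.log entry is a symlink that
# resolves to /var/lib/docker/containers/<id>/<id>-json.log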

Configuring Filebeat in K8S

The deployment consists of four manifests:

filebeat.daemonset.yml
filebeat.indice-lifecycle.configmap.yml
filebeat.permission.yml
filebeat.settings.configmap.yml
Filebeat RBAC permissions
$ cat filebeat.permission.yml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: filebeat
  labels:
    app: filebeat
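Once these objects are applied, the binding can be sanity-checked with kubectl's impersonation support (a minimal sketch; adjust the namespace if Filebeat is deployed elsewhere):

$ kubectl auth can-i list pods \
    --as=system:serviceaccount:kube-system:filebeat
yes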
Filebeat main configuration
$ cat filebeat.settings.configmap.yml 
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: filebeat-config
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*.log
      multiline: # merge multi-line entries (e.g. Java stack traces) into a single event
        pattern: ^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2} # matches the timestamp at the start of a log line
        negate: true # lines that do NOT match the pattern are treated as continuations
        match: after # continuations are appended to the preceding matching line
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
    
      - add_cloud_metadata:
      - add_docker_metadata:
 
    output:
      kafka:
        enabled: true # ship events to Kafka instead of directly to Elasticsearch
        hosts: ["10.0.0.72:9092"]
        topic: filebeat
        max_message_bytes: 5242880
        partition.round_robin:
          reachable_only: true
        keep_alive: 120
        required_acks: 1
 
    setup.ilm:
      policy_file: /etc/indice-lifecycle.json
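To see what the multiline settings buy us, consider a typical Java error log (an illustrative sample, not taken from this cluster). Only the first line matches the timestamp pattern; the exception and "at ..." lines do not, so Filebeat appends them to the preceding line and the whole stack trace reaches Kafka as a single event:

2021-06-01 10:15:30 ERROR com.example.OrderService - failed to place order
java.lang.NullPointerException: order id is null
	at com.example.OrderService.place(OrderService.java:42)
	at com.example.web.OrderController.create(OrderController.java:17)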
Filebeat index lifecycle policy

An Elasticsearch index lifecycle (ILM) policy is a set of rules applied to an index based on its size or age. For example, an index can be rolled over every day or whenever it exceeds 1 GB, and different phases can be configured with their own rules. Log collection generates a lot of data, easily tens of gigabytes per day, so to keep storage bounded we use the index lifecycle to configure retention (Prometheus offers a similar mechanism). In the file below, the index is rolled over every day or whenever it exceeds 5 GB, and any index older than 30 days is deleted; keeping 30 days of log data is entirely sufficient here.

$ cat filebeat.indice-lifecycle.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: filebeat-indice-lifecycle
  labels:
    app: filebeat
data:
  indice-lifecycle.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_size": "5GB" ,
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "30d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
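Note that Filebeat applies ILM settings only when its output is Elasticsearch; with the Kafka output used here the policy file is mounted but is not pushed automatically. Once the policy does exist in Elasticsearch (the default policy name is filebeat), it can be inspected with a sketch like this, borrowing the host and credentials from the Logstash section below:

$ curl -s -u elastic:kw8LkFpsTmY3 'http://10.226.21.38:9200/_ilm/policy/filebeat'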
Filebeat DaemonSet configuration
$ cat filebeat.daemonset.yml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: filebeat
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.8.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: filebeat-indice-lifecycle
          mountPath: /etc/indice-lifecycle.json
          readOnly: true
          subPath: indice-lifecycle.json
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: filebeat-indice-lifecycle
        configMap:
          defaultMode: 0600
          name: filebeat-indice-lifecycle
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate

Create all resources
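Apply the four manifests and check that one Filebeat pod is scheduled on every node:

$ kubectl apply -f filebeat.permission.yml \
    -f filebeat.settings.configmap.yml \
    -f filebeat.indice-lifecycle.configmap.yml \
    -f filebeat.daemonset.yml
$ kubectl -n kube-system get pods -l app=filebeat -o wide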

Reference Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: htf-damp-data-rsync
  namespace: prod
  labels:
    app: htf-damp-data-rsync
spec:
  selector:
    matchLabels:
      app: htf-damp-data-rsync
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: htf-damp-data-rsync
    spec:
      containers:
        - name: htf-damp-data-rsync
          image: k8s-registry.qhtx.local/haitong/htf-damp-data-rsync-0.0.3-snapshot:13669
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 200m
              memory: 1000Mi
            limits:
              cpu: 2000m
              memory: 4000Mi
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: TZ
              value: "Asia/Shanghai"
            - name: LANG
              value: C.UTF-8
            - name: LC_ALL
              value: C.UTF-8
          volumeMounts:
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
      volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: htf-damp-data-rsync
  name: htf-damp-data-rsync
  namespace: prod
spec:
  ports:
  - name: htf-damp-data-rsync
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: htf-damp-data-rsync
  sessionAffinity: None
  type: ClusterIP
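After applying the manifest, the rollout and Service can be verified with (assuming the prod namespace used above):

$ kubectl -n prod rollout status deploy/htf-damp-data-rsync
$ kubectl -n prod get svc htf-damp-data-rsync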
Application logging to the console

The snippets below come from the application's Logback configuration; logs go to stdout so that the container runtime writes them under /var/log/containers.
<!-- default console log pattern -->
    <property name="DEFAULT_CONSOLE_PATTERN"
              value="%d{yyyy-MM-dd HH:mm:ss.SSS} %highlight(%-5level) %yellow([%thread]) %cyan(%logger{30}) - %wrap(%trace{tid}){'[',']'}%msg%n"/>
JSON log format
 <!-- default JSON log pattern -->
    <property name="DEFAULT_JSON_PATTERN" value='{
    "@ts":"%d{yyyy-MM-dd HH:mm:ss.SSS}",
    "thread":"%thread",
    "level":"%level",
    "logger":"%logger{50}",
    "msg":"%.-${BONE_MAX_MSG_LENGTH}msg",
    "throwable":"%throwable",
    "tid":"%X{traceId}",
    "app":"${BONE_APP_NAME}",
    "env":"${BONE_ENV}",
    "hostname":"${HOSTNAME}"
  }'/>
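With this pattern every log line is a single JSON object, which is what allows the codec => "json" setting in the Logstash input further down. An illustrative event (field values are invented; BONE_APP_NAME, BONE_ENV, BONE_MAX_MSG_LENGTH and HOSTNAME are assumed to be defined elsewhere in the Logback configuration):

{"@ts":"2021-06-01 10:15:30.000","thread":"http-nio-8080-exec-1","level":"INFO","logger":"com.example.OrderService","msg":"order created","throwable":"","tid":"4f9a2b","app":"htf-damp-data-rsync","env":"prod","hostname":"htf-damp-data-rsync-7c9f"}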
Kafka auto-creates the topic
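The topic and incoming events can be confirmed with the stock Kafka CLI tools (a minimal sketch; script paths depend on your Kafka installation):

$ kafka-topics.sh --bootstrap-server 10.0.0.72:9092 --list
$ kafka-console-consumer.sh --bootstrap-server 10.0.0.72:9092 \
    --topic filebeat --from-beginning --max-messages 1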
Configuring Logstash
    input {
      kafka {
        bootstrap_servers => "10.226.21.35:9092"
        topics => ["filebeat"]
        codec => "json"
      }
    }
    output {
      elasticsearch {
        hosts => ["10.226.21.38:9200"]
        index => "logstash-k8s-%{+YYYY.MM.dd}"
        user => "elastic"
        password => "kw8LkFpsTmY3"
      }
    }
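Save this as a pipeline file and point Logstash at it (the path below is an assumption; adapt it to your installation):

$ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka-to-es.conf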

The index is created successfully
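The daily index can be confirmed directly against Elasticsearch before wiring it into Kibana (reusing the credentials from the Logstash output above):

$ curl -s -u elastic:kw8LkFpsTmY3 'http://10.226.21.38:9200/_cat/indices/logstash-k8s-*?v'

Finally, create an index pattern for logstash-k8s-* in Kibana to browse the logs.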


