Preface
In the previous section, 6-4 Setting up the ELK and Kafka log collection environment, we prepared the log collection environment.
The log sources can be collected either by an agent running on each node or by a sidecar container; the main differences are:
Node-level collection: a log collection process is deployed on every node via a DaemonSet to collect json-file logs, i.e. the standard output (/dev/stdout) and standard error (/dev/stderr) that the application containers produce.
Sidecar collection: a sidecar container (multiple containers in one pod) collects the logs of one or more business containers in the same pod, usually by sharing the log files between the business container(s) and the sidecar through an emptyDir volume (see the minimal sketch below).
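To make the sidecar approach concrete, here is a minimal, illustrative pod spec (the names and images are placeholders, not the manifests used later in this series): the business container writes its log files into a directory backed by an emptyDir volume, and the sidecar mounts the same volume so it can read and ship those files.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar        # hypothetical example name
spec:
  containers:
  - name: app                       # business container, writes logs to /var/log/app
    image: myapp:latest             # placeholder image
    volumeMounts:
    - name: applogs
      mountPath: /var/log/app
  - name: log-agent                 # sidecar, reads the same files and ships them (e.g. to Kafka)
    image: logstash:7.12.1
    volumeMounts:
    - name: applogs
      mountPath: /var/log/app
  volumes:
  - name: applogs
    emptyDir: {}                    # shared, pod-lifetime scratch volume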
Collecting logs with a DaemonSet
Build the image
# based on the official Logstash image
FROM logstash:7.12.1
USER root
WORKDIR /usr/share/logstash
# replace the default settings and pipeline with our own files
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf
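The logstash.yml and logstash.conf files copied in by the Dockerfile are not shown in this section. As a rough sketch of what the pipeline config could look like for this scenario, the example below tails the node's json-file container logs and forwards them to Kafka, reading the connection details from the KAFKA_SERVER, TOPIC_ID and CODEC environment variables injected by the DaemonSet further down. The log path is an assumption (Docker's json-file location; adjust it, e.g. to /var/log/pods, on containerd nodes), so treat this as illustrative rather than the exact file used here.

input {
  file {
    # json-file logs written by the container runtime on the node
    # (path is an assumption; adjust for your runtime)
    path => "/var/lib/docker/containers/*/*-json.log"
    start_position => "beginning"
    type => "jsonfile-daemonset-applog"
  }
}

output {
  kafka {
    # values come from the env vars set in the DaemonSet manifest below
    bootstrap_servers => "${KAFKA_SERVER}"
    topic_id => "${TOPIC_ID}"
    codec => "${CODEC}"
  }
}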
# build the image and push it to the local registry
nerdctl build -t easzlab.io.local:5000/myhub/logstash:v7.12.1-json-file-log-v1 .
nerdctl push easzlab.io.local:5000/myhub/logstash:v7.12.1-json-file-log-v1
Deploy the DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: logstash-logging
spec:
  selector:
    matchLabels:
      name: logstash-elasticsearch
  template:
    metadata:
      labels:
        name: logstash-elasticsearch
    spec:
      # collect logs on master nodes as well (tolerate the master taint)
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: logstash-elasticsearch
        image: easzlab.io.local:5000/myhub/logstash:v7.12.1-json-file-log-v1
        env:
        # Kafka cluster connection
        - name: "KAFKA_SERVER"
          value: "192.168.100.175:9092,192.168.100.176:9092,192.168.100.177:9092"
        - name: "TOPIC_ID"
          value: "jsonfile-log-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts: