K8S Log Collection Flow
Applications running in a Pod write their logs to the container's stdout/stderr, so Filebeat can collect the /var/log/containers/*.log files directly on each node and ship them to a Kafka message queue. Logstash consumes the logs from Kafka, parses and formats them, and writes them to Elasticsearch for storage; finally, Kibana connects to Elasticsearch to visualize the indexed data.
Data flow: Pod -> /var/log/containers/*.log -> Filebeat -> Kafka cluster -> Logstash -> Elasticsearch -> Kibana
Configuring Filebeat on K8S
filebeat.daemonset.yml
filebeat.permission.yml
filebeat.indice-lifecycle.configmap.yml
filebeat.settings.configmap.yml
Filebeat RBAC permissions
$ cat filebeat.permission.yml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: filebeat
  labels:
    app: filebeat
Filebeat main configuration file
$ cat filebeat.settings.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: filebeat-config
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      enabled: true
      paths:
      - /var/log/containers/*.log
      multiline:  # merge lines that do not start with a timestamp into the previous event, so Java stack traces are collected as one message
        pattern: ^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}  # matches the timestamp at the start of a Java log line
        negate: true  # lines NOT matching the pattern are treated as continuations
        match: after  # continuation lines are appended after the matching line
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
        - add_cloud_metadata:
        - add_kubernetes_metadata:
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
        - add_docker_metadata:
    output:
      kafka:
        enabled: true  # send the output to Kafka
        hosts: ["10.0.0.72:9092"]
        topic: filebeat
        max_message_bytes: 5242880
        partition.round_robin:
          reachable_only: true
        keep_alive: 120
        required_acks: 1
    setup.ilm:
      policy_file: /etc/indice-lifecycle.json
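The multiline settings above exist to keep a Java stack trace together as a single event. As an illustrative sketch only (this is not Filebeat's actual implementation), the pattern/negate/match combination groups lines like this:

```python
import re

# Same pattern as in filebeat.yml: a line starting with "YYYY-M-D H:M:S"
pattern = re.compile(r"^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}")

def merge_multiline(lines):
    """Group lines the way negate: true + match: after does: a line that
    does NOT match the pattern is appended to the previous event."""
    events = []
    for line in lines:
        if pattern.match(line) or not events:
            events.append(line)        # a new event starts at a timestamp
        else:
            events[-1] += "\n" + line  # continuation, e.g. a stack-trace line
    return events

log = [
    "2024-01-02 10:00:01 ERROR something failed",
    "java.lang.NullPointerException",
    "    at com.example.Foo.bar(Foo.java:42)",
    "2024-01-02 10:00:02 INFO next request",
]
print(len(merge_multiline(log)))  # 2 events: the stack trace is merged into the first
```

The sample log lines are made up; only the regex comes from the configuration above.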
Filebeat index lifecycle policy
An Elasticsearch index lifecycle (ILM) policy is a set of rules applied to an index based on its size or age. For example, an index can be rolled over every day, or whenever it exceeds 1 GB, and different phases can be configured with different rules. Log collection produces a large amount of data, easily tens of gigabytes per day, so to keep storage bounded we use an index lifecycle policy to limit retention, similar to how retention is handled for Prometheus data. In the file below, the index is rolled over every day or whenever it exceeds 5 GB, and all indices older than 30 days are deleted; 30 days of log data is more than enough here.
filebeat.indice-lifecycle.configmap.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: filebeat-indice-lifecycle
  labels:
    app: filebeat
data:
  indice-lifecycle.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_size": "5GB",
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "30d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
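A malformed policy file will make Filebeat fail at startup, so it can be worth validating indice-lifecycle.json before placing it in the ConfigMap. A small sketch that parses the policy above and checks the intended thresholds:

```python
import json

# The same policy document as in indice-lifecycle.json above
policy_text = """
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {"rollover": {"max_size": "5GB", "max_age": "1d"}}
      },
      "delete": {
        "min_age": "30d",
        "actions": {"delete": {}}
      }
    }
  }
}
"""

policy = json.loads(policy_text)  # raises ValueError if the JSON is malformed
rollover = policy["policy"]["phases"]["hot"]["actions"]["rollover"]
print(rollover["max_size"], rollover["max_age"])                # 5GB 1d
print(policy["policy"]["phases"]["delete"]["min_age"])          # 30d
```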
Filebeat DaemonSet manifest
$ cat filebeat.daemonset.yml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: filebeat
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.8.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: filebeat-indice-lifecycle
          mountPath: /etc/indice-lifecycle.json
          readOnly: true
          subPath: indice-lifecycle.json
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: filebeat-indice-lifecycle
        configMap:
          defaultMode: 0600
          name: filebeat-indice-lifecycle
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
Create all of the resources:
$ kubectl apply -f filebeat.permission.yml -f filebeat.settings.configmap.yml -f filebeat.indice-lifecycle.configmap.yml -f filebeat.daemonset.yml
Deployment reference
apiVersion: apps/v1
kind: Deployment
metadata:
  name: htf-damp-data-rsync
  namespace: prod
  labels:
    app: htf-damp-data-rsync
  annotations:
    deployment.kubernetes.io/revision: "1"
spec:
  selector:
    matchLabels:
      app: htf-damp-data-rsync
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: htf-damp-data-rsync
    spec:
      containers:
      - name: htf-damp-data-rsync
        image: k8s-registry.qhtx.local/haitong/htf-damp-data-rsync-0.0.3-snapshot:13669
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 200m
            memory: 1000Mi
          limits:
            cpu: 2000m
            memory: 4000Mi
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        env:
        - name: TZ
          value: "Asia/Shanghai"
        - name: LANG
          value: C.UTF-8
        - name: LC_ALL
          value: C.UTF-8
        volumeMounts:
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
      volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: htf-damp-data-rsync
  name: htf-damp-data-rsync
  namespace: prod
spec:
  ports:
  - name: htf-damp-data-rsync
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: htf-damp-data-rsync
  sessionAffinity: None
  type: ClusterIP
Application logging to the console
<!-- default console pattern -->
<property name="DEFAULT_CONSOLE_PATTERN"
          value="%d{yyyy-MM-dd HH:mm:ss.SSS} %highlight(%-5level) %yellow([%thread]) %cyan(%logger{30}) - %wrap(%trace{tid}){'[',']'}%msg%n"/>
Log format
<!-- default JSON log pattern -->
<property name="DEFAULT_JSON_PATTERN" value='{
    "@ts":"%d{yyyy-MM-dd HH:mm:ss.SSS}",
    "thread":"%thread",
    "level":"%level",
    "logger":"%logger{50}",
    "msg":"%.-${BONE_MAX_MSG_LENGTH}msg",
    "throwable":"%throwable",
    "tid":"%X{traceId}",
    "app":"${BONE_APP_NAME}",
    "env":"${BONE_ENV}",
    "hostname":"${HOSTNAME}"
}'/>
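With this pattern, every log event is emitted as one JSON object per line, which is why the Logstash kafka input later in this document can use codec => "json". A sketch of consuming one such line (all field values here are made-up examples):

```python
import json

# A hypothetical line produced by the DEFAULT_JSON_PATTERN above
line = ('{"@ts":"2024-01-02 10:00:01.123","thread":"main","level":"INFO",'
        '"logger":"com.example.Foo","msg":"started","throwable":"",'
        '"tid":"abc123","app":"demo-app","env":"prod","hostname":"pod-1"}')

event = json.loads(line)          # one event == one JSON document
print(event["level"], event["msg"])  # INFO started
```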
The Kafka topic is auto-created successfully.
Configuring Logstash
input {
  kafka {
    bootstrap_servers => ["10.226.21.35:9092"]
    topics => ["filebeat"]
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["10.226.21.38:9200"]
    index => "logstash-k8s-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "kw8LkFpsTmY3"
  }
}
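The index => "logstash-k8s-%{+YYYY.MM.dd}" setting creates one index per day, named from each event's @timestamp (Logstash formats index dates in UTC). A sketch of the resulting index name:

```python
from datetime import datetime, timezone

def index_name(ts):
    # mirrors Logstash's "logstash-k8s-%{+YYYY.MM.dd}" sprintf date format
    return ts.strftime("logstash-k8s-%Y.%m.%d")

print(index_name(datetime(2022, 9, 28, tzinfo=timezone.utc)))  # logstash-k8s-2022.09.28
```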
The log index is created successfully.