Kubernetes ELK Log Collection: An Approach

Background: the need for a logging system

With only a handful of machines, you can simply log in and inspect log files one by one. But:

  • The business keeps growing and there are more and more servers
  • Access logs, application logs, and error logs keep piling up
  • Developers have to log in to servers to troubleshoot, which is slow and hard to control with permissions
  • Operations needs to watch business traffic in real time
Challenges containers bring to log collection

  • Kubernetes elasticity: pods scale up and down, so the collection targets cannot be determined in advance
  • Container isolation: a container's filesystem is isolated from the host, which blocks a log agent from reading log files directly

Two ways logs surface

Application logs surface in one of two ways:
  • Standard output: written to the console; visible with kubectl logs
  • Log files: written to files on the container's filesystem

Standard output

For standard output: deploy one log agent per node as a DaemonSet and collect every container's log under /var/lib/docker/containers/ on the host.
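The per-node DaemonSet idea can be sketched with a Filebeat configuration like the one below; the `container` input type and the Elasticsearch address are illustrative assumptions, not part of the original walkthrough:

```yaml
# Sketch of a filebeat.yml for a per-node DaemonSet agent.
# The DaemonSet pod must mount /var/lib/docker/containers from the
# host (hostPath, readOnly) for these paths to be visible at all --
# this is how the agent gets around container filesystem isolation.
filebeat.inputs:
- type: container
  paths:
    - /var/lib/docker/containers/*/*-json.log
output.elasticsearch:
  hosts: ["elasticsearch:9200"]   # placeholder address
```

Filebeat's container input understands the json-file format shown later in this walkthrough, so the raw message, stream, and timestamp come out as structured fields.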

Take the official nginx image as an example: its access logs are redirected to standard output.

[root@k8s-master ~]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-zcrss   1/1     Running   0          3m41s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        39d
service/nginx        NodePort    10.99.50.2   <none>        80:31332/TCP   39d

Access nginx and watch the log on standard output (the nginx image is built with the access and error logs redirected to stdout):

[root@k8s-master ~]# curl 10.99.50.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>


[root@k8s-master ~]# kubectl logs -f nginx-6799fc88d8-zcrss 
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
10.244.235.192 - - [24/Dec/2020:06:20:12 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"


# This is the access line for the request we just made; it was redirected to stdout
10.244.235.192 - - [24/Dec/2020:06:22:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"

Step into the container for a closer look (note that the log files record nothing):

[root@k8s-master ~]# kubectl exec -it  nginx-6799fc88d8-zcrss -- bash
root@nginx-6799fc88d8-zcrss:/# ls /var/log/nginx/
access.log  error.log
root@nginx-6799fc88d8-zcrss:/# cat /var/log/nginx/access.log 
^C
root@nginx-6799fc88d8-zcrss:/# ls -l /var/log/nginx/
total 0
lrwxrwxrwx 1 root root 11 Dec 15 20:20 access.log -> /dev/stdout
lrwxrwxrwx 1 root root 11 Dec 15 20:20 error.log -> /dev/stderr

So standard output is captured by Docker and written to a log file on the node, as shown below:

[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-zcrss   1/1     Running   0          29m   10.244.169.146   k8s-node2   <none>           <none>

[root@k8s-node2 ~]# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
aacb357d299e        nginx                                               "/docker-entrypoint.??   36 minutes ago      Up 36 minutes 


[root@k8s-node2 ~]# cd /var/lib/docker/containers/aacb357d299e22d638e63c8ec0e7598af82ba92e84c05914557ce06d88e3e200/


[root@k8s-node2 aacb357d299e22d638e63c8ec0e7598af82ba92e84c05914557ce06d88e3e200]# ls
aacb357d299e22d638e63c8ec0e7598af82ba92e84c05914557ce06d88e3e200-json.log  config.v2.json   mounts
checkpoints                                                                hostconfig.json


[root@k8s-node2 aacb357d299e22d638e63c8ec0e7598af82ba92e84c05914557ce06d88e3e200]# cat aacb357d299e22d638e63c8ec0e7598af82ba92e84c05914557ce06d88e3e200-json.log 
{"log":"/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration\n","stream":"stdout","time":"2020-12-24T06:19:37.278625521Z"}
{"log":"/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/\n","stream":"stdout","time":"2020-12-24T06:19:37.278686293Z"}
{"log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh\n","stream":"stdout","time":"2020-12-24T06:19:37.292982643Z"}
{"log":"10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf\n","stream":"stdout","time":"2020-12-24T06:19:37.315839232Z"}
{"log":"10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf\n","stream":"stdout","time":"2020-12-24T06:19:37.356368291Z"}
{"log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh\n","stream":"stdout","time":"2020-12-24T06:19:37.357543935Z"}
{"log":"/docker-entrypoint.sh: Configuration complete; ready for start up\n","stream":"stdout","time":"2020-12-24T06:19:37.368828605Z"}
{"log":"10.244.235.192 - - [24/Dec/2020:06:20:12 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.29.0\" \"-\"\n","stream":"stdout","time":"2020-12-24T06:20:12.77838001Z"}
{"log":"10.244.235.192 - - [24/Dec/2020:06:22:03 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.29.0\" \"-\"\n","stream":"stdout","time":"2020-12-24T06:22:03.043991599Z"}
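Each line of these `*-json.log` files is one JSON object with `log`, `stream`, and `time` fields, so any collector (or a quick script) can recover the original message. A minimal sketch in Python, using a line from the transcript above:

```python
import json

# One line as written by Docker's json-file logging driver
# (taken from the nginx transcript above).
line = '{"log":"10.244.235.192 - - [24/Dec/2020:06:20:12 +0000] \\"GET / HTTP/1.1\\" 200 612 \\"-\\" \\"curl/7.29.0\\" \\"-\\"\\n","stream":"stdout","time":"2020-12-24T06:20:12.77838001Z"}'

entry = json.loads(line)
print(entry["stream"])        # which stream it came from: stdout or stderr
print(entry["log"], end="")   # the original log message, trailing newline included
```

This is exactly the parsing a DaemonSet agent performs per line before shipping the message on.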

To deploy log collection as a DaemonSet, the agent only needs the files under the directory below; that captures the standard output of every container on the node:

[root@k8s-node2 ~]# ls /var/lib/docker/containers/*/*-json.log
/var/lib/docker/containers/02089d9eb498dd1003f70fab385ee14dd9bc13abfc79eefdc0332687e6c6fd89/02089d9eb498dd1003f70fab385ee14dd9bc13abfc79eefdc0332687e6c6fd89-json.log
/var/lib/docker/containers/0bb6a0a0496bc47b5f45a1eded4cfd38c3447efdaa20b30d7eec58479f297b58/0bb6a0a0496bc47b5f45a1eded4cfd38c3447efdaa20b30d7eec58479f297b58-json.log
/var/lib/docker/containers/0d701930711b67e61980a8a0e00634de8548e914a2275b7c9de71743b855cf23/0d701930711b67e61980a8a0e00634de8548e914a2275b7c9de71743b855cf23-json.log

The same mechanism applies to system components such as kube-apiserver:

[root@master containers]# kubectl logs kube-apiserver-master -n kube-system | tail -n 10
I0927 02:15:55.659265       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0927 02:16:31.274800       1 client.go:360] parsed scheme: "passthrough"
I0927 02:16:31.274858       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.100.5:2379  <nil> 0 <nil>}] <nil> <nil>}
I0927 02:16:31.274868       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0927 02:17:07.471679       1 client.go:360] parsed scheme: "passthrough"
I0927 02:17:07.471735       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.100.5:2379  <nil> 0 <nil>}] <nil> <nil>}
I0927 02:17:07.471745       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0927 02:17:47.679824       1 client.go:360] parsed scheme: "passthrough"
I0927 02:17:47.679908       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.100.5:2379  <nil> 0 <nil>}] <nil> <nil>}
I0927 02:17:47.679919       1 clientconn.go:948] ClientConn switching balancer to "pick_first"


[root@master ~]#  docker ps | grep kube-apiserver-master
06038e658f24   9ba91a90b7d1                                                 "kube-apiserver --ad??   17 minutes ago   Up 17 minutes             k8s_kube-apiserver_kube-apiserver-master_kube-system_936f8541e2409f470bf7c78dc04aa160_2
e49723258988   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2      "/pause"                 17 minutes ago   Up 17 minutes             k8s_POD_kube-apiserver-master_kube-system_936f8541e2409f470bf7c78dc04aa160_2

[root@master ~]# cd /var/lib/docker/containers/06038e658f24ac6ecc26b2935eb44c30872296e444869f507b830c2c573d1fda/
[root@master 06038e658f24ac6ecc26b2935eb44c30872296e444869f507b830c2c573d1fda]# ls
06038e658f24ac6ecc26b2935eb44c30872296e444869f507b830c2c573d1fda-json.log  checkpoints  config.v2.json  hostconfig.json  mounts

[root@master 06038e658f24ac6ecc26b2935eb44c30872296e444869f507b830c2c573d1fda]# tail -n 10 06038e658f24ac6ecc26b2935eb44c30872296e444869f507b830c2c573d1fda-json.log 
{"log":"I0927 02:17:07.471745       1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2021-09-27T02:17:07.471917661Z"}
{"log":"I0927 02:17:47.679824       1 client.go:360] parsed scheme: \"passthrough\"\n","stream":"stderr","time":"2021-09-27T02:17:47.68001592Z"}
{"log":"I0927 02:17:47.679908       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.100.5:2379  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2021-09-27T02:17:47.68007838Z"}
{"log":"I0927 02:17:47.679919       1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2021-09-27T02:17:47.680084677Z"}
{"log":"I0927 02:18:24.023738       1 client.go:360] parsed scheme: \"passthrough\"\n","stream":"stderr","time":"2021-09-27T02:18:24.023923962Z"}
{"log":"I0927 02:18:24.023810       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.100.5:2379  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2021-09-27T02:18:24.023984675Z"}
{"log":"I0927 02:18:24.023819       1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2021-09-27T02:18:24.023991183Z"}
{"log":"I0927 02:19:05.002207       1 client.go:360] parsed scheme: \"passthrough\"\n","stream":"stderr","time":"2021-09-27T02:19:05.002961057Z"}
{"log":"I0927 02:19:05.002265       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://192.168.100.5:2379  \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2021-09-27T02:19:05.003041516Z"}
{"log":"I0927 02:19:05.002277       1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2021-09-27T02:19:05.003048752Z"}

Log files inside the container (logs stored on the container filesystem)

For log files inside the container: add a second container to the Pod to run a log agent, and share the log directory through an emptyDir volume so the agent can read the files.
With file-based logs such as Tomcat's, the Pod needs a log-collection container; sharing the directory through a volume lets that container read the logs too.
[root@k8s-master ~]# cat tomcat-log.yml 
apiVersion: v1
kind: Pod
metadata:
  name: tomcat-web
  namespace: default
spec:
  containers:
  - name: web
    image: tomcat
    volumeMounts:
     - name: logs
       mountPath: /usr/local/tomcat/logs

  volumes:
  - name: logs
    emptyDir: {}


[root@k8s-master ~]# kubectl apply -f tomcat-log.yml 
pod/tomcat-web created



[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
tomcat-web               1/1     Running   0          3m13s   10.244.169.139   k8s-node2   <none>           <none>

The access log is now visible on the host node:

[root@k8s-node2 ~]# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
7a57e7f2a2ee        tomcat                                              "catalina.sh run"        54 seconds ago      Up 53 seconds                           k8s_web_tomcat-web_default_93e898f8-9353-44d7-b6d7-d94c00bf2014_0


[root@k8s-master ~]# curl 10.244.169.139:8080
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line"


[root@k8s-node2 ~]# ls /var/lib/kubelet/pods/93e898f8-9353-44d7-b6d7-d94c00bf2014/volumes/kubernetes.io~empty-dir/logs/
catalina.2020-12-24.log              localhost.2020-12-24.log             manager.2020-12-24.log               
host-manager.2020-12-24.log          localhost_access_log.2020-12-24.txt  
[root@k8s-node2 ~]# cat /var/lib/kubelet/pods/93e898f8-9353-44d7-b6d7-d94c00bf2014/volumes/kubernetes.io~empty-dir/logs/localhost_access_log.2020-12-24.txt 
10.244.235.192 - - [24/Dec/2020:08:10:04 +0000] "GET / HTTP/1.1" 404 682

Go into the container and verify the logs match:

[root@k8s-master ~]# kubectl exec -it tomcat-web -- bash
root@tomcat-web:/usr/local/tomcat# cd logs/
root@tomcat-web:/usr/local/tomcat/logs# ls
catalina.2020-12-24.log  host-manager.2020-12-24.log  localhost.2020-12-24.log	localhost_access_log.2020-12-24.txt  manager.2020-12-24.log
root@tomcat-web:/usr/local/tomcat/logs# cat localhost_access_log.2020-12-24.txt 
10.244.235.192 - - [24/Dec/2020:08:10:04 +0000] "GET / HTTP/1.1" 404 682

So logs inside a container can be reached through an emptyDir volume; on the host they live under /var/lib/kubelet/pods/.

[root@master elk]# kubectl get pod -o wide -n ops
NAME                               READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
tomcat-web                         1/1     Running   0          9m43s   10.233.90.53   node1   <none>           <none>


[root@master elk]# curl 10.233.90.53:8080
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>A


[root@node1 ~]# docker ps | grep tomcat
fbd8d3f84f40   tomcat                                                    "catalina.sh run"        2 minutes ago    Up 2 minutes              k8s_web_tomcat-web_ops_a150e55c-b5c5-495f-a105-5e423015d93e_0

[root@node1 logs]# pwd
/var/lib/kubelet/pods/a150e55c-b5c5-495f-a105-5e423015d93e/volumes/kubernetes.io~empty-dir/logs

[root@node1 logs]# cat localhost_access_log.2021-09-27.txt 
10.233.70.0 - - [27/Sep/2021:02:33:45 +0000] "GET / HTTP/1.1" 404 683


[root@node1 logs]# ls
catalina.2021-09-27.log  localhost_access_log.2021-09-27.txt

Next, swap busybox for a log collector that reads the data inside the container (in practice you would use Filebeat; busybox here just demonstrates the idea):

[root@k8s-master ~]# cat buysbox-gather-tomcatlogs.yml 
apiVersion: v1
kind: Pod
metadata:
  name: tomcat-web
  namespace: default
spec:
  containers:
  - name: web
    image: tomcat
    volumeMounts:
     - name: logs
       mountPath: /usr/local/tomcat/logs
  - name: logs
    image: busybox
    command: [/bin/sh,-c,'tail -f /tmp/localhost_access_log.2020-12-24.txt']
    volumeMounts:
    - name: logs
      mountPath: /tmp

  volumes:
  - name: logs
    emptyDir: {}

[root@k8s-master ~]# kubectl apply -f buysbox-gather-tomcatlogs.yml 
pod/tomcat-web created

Now let's test:

[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
tomcat-web               2/2     Running   0          2m46s   10.244.169.140   k8s-node2   <none>           <none>

[root@k8s-master ~]# curl 10.244.169.140:8080
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/9.0.41</h3></body></html>[root@k8s-master ~]# 


[root@k8s-master ~]# kubectl logs -f tomcat-web -c logs
10.244.235.192 - - [24/Dec/2020:08:20:29 +0000] "GET / HTTP/1.1" 404 682


[root@k8s-master ~]# kubectl exec -it tomcat-web -c logs -- sh
/ # cd /tmp/
/tmp # ls
catalina.2020-12-24.log              localhost.2020-12-24.log             manager.2020-12-24.log
host-manager.2020-12-24.log          localhost_access_log.2020-12-24.txt
/tmp # cat localhost_access_log.2020-12-24.txt
10.244.235.192 - - [24/Dec/2020:08:20:29 +0000] "GET / HTTP/1.1" 404 682

The log is being picked up; replacing busybox with a real log collector completes the setup.
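Replacing busybox with Filebeat could look like the sketch below; the image tag and mount point are illustrative, and a real setup would also mount a filebeat.yml pointing its inputs at the shared directory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tomcat-web
  namespace: default
spec:
  containers:
  - name: web
    image: tomcat
    volumeMounts:
    - name: logs
      mountPath: /usr/local/tomcat/logs
  - name: filebeat                    # replaces the busybox demo container
    image: elastic/filebeat:7.10.1    # version is illustrative
    volumeMounts:
    - name: logs
      mountPath: /tmp                 # filebeat.yml inputs would point here
  volumes:
  - name: logs                        # emptyDir still shares Tomcat's log files
    emptyDir: {}
```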

Summary

  • For logs on standard output (what kubectl logs shows), Docker captures the stream and writes it to a fixed per-container directory. Deploying one log agent per node to collect every container's log under that directory is enough.
  • For logs written to the filesystem, add a log-collection container to the Pod and share the web container's log directory through an emptyDir volume, which the host persists under /var/lib/kubelet/pods/, and collect from there.

1. Standard output is persisted on the host under /var/lib/docker/containers/

2. An emptyDir volume mounts the Pod's log directory, which is persisted under /var/lib/kubelet/pods/

3. Collect with Filebeat

That covers how logs surface and the approaches to collecting them.
