1. View the monitoring data through the Kibana UI
### --- Log in to Kibana via Chrome: http://192.168.1.11:30495/
~~~ ——>Create an index: Settings——>Kibana: Index Patterns——>Create index pattern
~~~ ——>Index pattern: public*——>Next step
~~~ ——>Time Filter field name: @timestamp——>Create index pattern——>END
![](https://i-blog.csdnimg.cn/blog_migrate/5b5290eeebcd17ed1905db999e9a778f.png)
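The UI steps above assume Elasticsearch already holds indices matching `public*`. A minimal sketch to check that from the shell, assuming Elasticsearch is reachable at `ES_URL` on the default HTTP port 9200 (an assumption — adjust for your cluster):

```shell
# Sketch: list Elasticsearch indices matching "public*" before creating
# the Kibana index pattern. ES_URL is an assumption -- point it at your
# cluster's Elasticsearch service.
ES_URL="${ES_URL:-http://192.168.1.11:9200}"
REQ="$ES_URL/_cat/indices/public*?v"
echo "GET $REQ"
# Uncomment to run against a live cluster:
# curl -s "$REQ"
```

If no index matching `public*` appears, the index pattern cannot be created, so this check is worth running first.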
2. Check whether the data is stored in ES: the data was stored successfully
![](https://i-blog.csdnimg.cn/blog_migrate/5750f70b1fe855720b2ca16d2bd8f289.png)
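Besides eyeballing the Discover page, the same check can be done with a document count, a sketch again assuming an `ES_URL` endpoint (hypothetical; adjust for your environment):

```shell
# Sketch: count documents in the "public*" indices to confirm logs are
# actually being written to Elasticsearch. ES_URL is an assumption.
ES_URL="${ES_URL:-http://192.168.1.9200}"
ES_URL="${ES_URL:-http://192.168.1.11:9200}"
COUNT_REQ="$ES_URL/public*/_count"
echo "GET $COUNT_REQ"
# Uncomment to run against a live cluster:
# curl -s "$COUNT_REQ"   # a non-zero "count" field means storage succeeded
```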
3. Verify that logs can be filtered with EDIT FILTER:
### --- Verify that logs can be filtered with EDIT FILTER:
~~~ Look up the pod name: app-d585bb7cf-rlfkw
~~~ ——>Add Filter:——>Field: fields.pod_name——>Operator: is
~~~ ——>Value: app-d585bb7cf-rlfkw——>Save: Kibana will then filter out the log entries that carry this field value
![](https://i-blog.csdnimg.cn/blog_migrate/f74662c25b8cd5dfb8e302d3339d154d.png)
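Under the hood, the filter built in the UI above corresponds to an Elasticsearch term query on `fields.pod_name`. A sketch of the equivalent request (the endpoint URL is an assumption; the pod name comes from the steps above):

```shell
# Sketch: the same filter the Kibana UI builds, expressed as an
# Elasticsearch term query on fields.pod_name.
POD_NAME="app-d585bb7cf-rlfkw"
QUERY="{\"query\":{\"term\":{\"fields.pod_name\":\"$POD_NAME\"}}}"
echo "$QUERY"
# Uncomment to run against a live cluster (ES_URL is an assumption):
# curl -s -H 'Content-Type: application/json' \
#   "http://192.168.1.11:9200/public*/_search" -d "$QUERY"
```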
4. Delete all resources of this lab environment
### --- Change into the directory where the manifests were created
[root@k8s-master01 filebeat]# pwd
/root/README/EFK/filebeat
### --- Delete the corresponding resources
[root@k8s-master01 filebeat]# kubectl delete -f . -n public-service
deployment.apps "app" deleted
configmap "filebeatconf" deleted
configmap "logstash-configmap" deleted
deployment.apps "logstash-deployment" deleted
service "logstash-service" deleted
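After the delete, it is worth confirming the namespace is actually empty. A small sketch (`NS` matches the namespace used above):

```shell
# Sketch: after "kubectl delete -f . -n public-service", confirm that
# nothing is left in the namespace.
NS="public-service"
echo "kubectl get all -n $NS"
# Uncomment to run against a live cluster:
# kubectl get all -n "$NS"   # should report: No resources found
```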