This article describes collecting k8s logs with ELK + Filebeat, with Filebeat deployed as a sidecar. Latest ELFK version at the time of writing: 7.6.2.
K8s log collection schemes
- Three log collection schemes:
1. Deploy a log collection agent on each node
Deploy the log collector as a DaemonSet; it collects the logs under the node's /var/log and /var/lib/docker/containers directories
2. Deploy the log collector as a sidecar
Attach a log collection container to every pod that runs an application, and share the log directory through an emptyDir volume so the sidecar can collect the logs (a minimal sketch follows the comparison table below)
3. Have the application push logs directly
A common example is Graylog: the application code is modified to push logs straight to ES, and they are then displayed in Graylog
- Pros and cons of the three schemes:
Scheme | Pros | Cons |
---|---|---|
1. Log collector deployed on each node | Only one collector per node, low resource usage, no intrusion into the application | Application logs must be written to stdout/stderr; multi-line logs are not supported |
2. Log collection container attached to each pod | Loose coupling | One extra log collection container per pod, which increases resource consumption |
3. Application pushes logs directly | No extra collection tool needed | Intrusive to the application and adds complexity |
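To make scheme 2 concrete, a pod roughly looks like the sketch below: the application container and a Filebeat container mount the same emptyDir volume, so whatever the application writes to its log directory is visible to the sidecar. The names, images, and the filebeat-config ConfigMap here are illustrative assumptions only; the actual manifests used in this article come later.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-filebeat          # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.17              # stands in for any application image
    volumeMounts:
    - name: logs                   # the app writes its log files here
      mountPath: /var/log/nginx
  - name: filebeat                 # sidecar that ships the shared directory
    image: docker.elastic.co/beats/filebeat:7.6.2
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
      readOnly: true
    - name: filebeat-config        # assumes a ConfigMap named filebeat-config holding filebeat.yml
      mountPath: /usr/share/filebeat/filebeat.yml
      subPath: filebeat.yml
  volumes:
  - name: logs
    emptyDir: {}
  - name: filebeat-config
    configMap:
      name: filebeat-config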
The following walks through the second scheme: attaching a log collection container (sidecar) to each pod. Make sure all ELFK components stay on the same version.
Collecting k8s logs with the sidecar approach
- Host overview:
OS | IP | Role | CPU | Memory | Hostname |
---|---|---|---|---|---|
CentOS 7.8 | 192.168.30.128 | master, deploy | >=2 | >=2G | master1 |
CentOS 7.8 | 192.168.30.129 | master | >=2 | >=2G | master2 |
CentOS 7.8 | 192.168.30.130 | node | >=2 | >=2G | node1 |
CentOS 7.8 | 192.168.30.131 | node | >=2 | >=2G | node2 |
CentOS 7.8 | 192.168.30.132 | node | >=2 | >=2G | node3 |
CentOS 7.8 | 192.168.30.133 | test | >=2 | >=2G | test |
- Set up the k8s cluster:
The setup process is omitted here; for details see: Setting up a k8s cluster with kubeadm, or Setting up a k8s cluster from binaries
Once the cluster is up, check it:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 4d16h v1.14.0
master2 Ready master 4d16h v1.14.0
node1 Ready <none> 4d16h v1.14.0
node2 Ready <none> 4d16h v1.14.0
node3 Ready <none> 4d16h v1.14.0
For convenience, the k8s cluster built earlier is reused here directly; just remember to delete the k8s resource objects left over from previous experiments.
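For example, you can review what is still running and remove the leftovers roughly like this (the resource types and namespace are only placeholders for whatever the earlier experiments created):
kubectl get deploy,ds,svc,pod --all-namespaces    # review what is still running
kubectl delete deploy,svc --all -n default        # example cleanup; adjust types/namespaces to your own leftovers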
- Deploy ELK with docker-compose:
For the deployment process, refer to: Deploying ELFK with docker-compose
mkdir /software && cd /software
git clone https://github.com/Tobewont/elfk-docker.git
cd elfk-docker/
echo 'ELK_VERSION=7.6.2' > .env    # pin the version
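If you want to double-check that the version variable is picked up before building (assuming docker-compose is already installed on this host):
cat .env                  # should contain ELK_VERSION=7.6.2
docker-compose config     # renders the compose file with variables substituted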
vim elasticsearch/Dockerfile
ARG ELK_VERSION=7.6.2
# https://github.com/elastic/elasticsearch-docker
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:${ELK_VERSION}
# FROM elasticsearch:${ELK_VERSION}
# Add your elasticsearch plugins setup here
# Example: RUN elasticsearch-plugin install analysis-icu
vim elasticsearch/config/elasticsearch.yml
---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: "0.0.0.0"
## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
#xpack.license.self_generated.type: trial    # 'trial' is the 30-day trial license; it can be changed to 'basic'
#xpack.security.enabled: true
#xpack.monitoring.collection.enabled: true
#http.cors.enabled: true
#http.cors.allow-origin: "*"
#http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
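Note that Elasticsearch running in Docker usually requires the host's vm.max_map_count kernel setting to be at least 262144; if the Elasticsearch container exits during its bootstrap checks, raise it on the Docker host first:
sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf    # persist across reboots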
vim kibana/Dockerfile
ARG ELK_VERSION=7.6.2
# https://github.com/elastic/kibana-docker
FROM docker.elastic.co/kibana/kibana-oss:${ELK_VERSION}
# FROM kibana:${ELK_VERSION}
# Add your kibana plugins setup here
# Example: RUN kibana-plugin install <name|url>
vim kibana/config/kibana.yml
---
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.js
#
server.name: "kibana"
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
# xpack.monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
#
# elasticsearch.username: elastic
# elasticsearch.password: changeme
vim logstash/Dockerfile
ARG ELK_VERSION=7.6.2
# https://github.com/elastic/logstash-docker
FROM docker.elastic.co/logstash/logstash-oss:${ELK_VERSION}
# FROM logstash:${ELK_VERSION}
# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json
RUN logstash-plugin install logstash-filter-multiline
vim logstash/config/logstash.yml
---
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"
#xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
## X-Pack security credentials
#
#xpack.monitoring.enabled: true
#xpack.monitoring.elasticsearch.username: elastic
#xpack.monitoring.elasticsearch.password: changeme
#xpack.monitoring.collection.interval: 10s
vim logstash/pipeline/logstash.conf    # if multi-line merging is not configured in Filebeat, it can be done in Logstash instead
input {
  beats {
    port => 5040
  }
}
filter {
  if [type] == "nginx_access" {
    #multiline {
    #pattern => "^\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}"
    #negate => true
    #what => "previous"
    #}
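    # Illustrative example only: a combined-format access line with X-Forwarded-For appended,
    # which is the shape the grok below is meant to match:
    # 192.168.30.1 - - [12/May/2020:10:10:10 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"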
    grok {
      match => [ "message", "%{IPV4:remote_addr} - (%{USERNAME:user}|-) \[%{HTTPDATE:log_timestamp}\] \"%{WORD:method} %{DATA:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:status} %{NUMBER:bytes} %{QS:referer} %{QS:agent} %{QS:xforward}" ]
    }
  }
if [type] == "nginx_error" {
#multiline {
#pattern => "^\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}"
#negate => true
#what => "previous"
#}
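    # Note: this grok mirrors the access-log pattern above. nginx error logs normally look like
    # "2020/05/12 10:12:11 [error] 6#6: *1 open() ... failed ...", so the pattern below may need
    # to be adapted to the error-log format for these events to parse cleanly.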
    grok {
      match => [ "message", "%{IPV4:remote_addr} - (%{USERNAME:user}|-) \[%{HTTPDATE:log_timestamp}\] \"%{WORD:method} %{DATA:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:status} %{NUMBER:bytes} %{QS:referer} %{QS:agent} %{QS:xforward}" ]
    }
  }
if [type] == "tomcat_catalina" {
#multiline {
#pattern => "^\d{1,2}-\S{3}-\d{4}\s\d{1,2}:\d{1,2}:\d{1,2}"
#negate => true
#what => "previous"
#}
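    # Illustrative example only: a catalina.out line in the form the grok below expects:
    # 12-May-2020 10:10:10.123 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 1234 ms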
    grok {
      match => [ "message", "(?<timestamp>%{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME}) %{LOGLEVEL:severity} \[%{DATA:exception_info}\] %{GREEDYDATA:message}" ]
    }
  }
}
output {
  if [type] == "nginx_access" {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      #user => "elastic"
      #password => "changeme"
      index => "nginx-access"
    }
  }
  if [type] == "nginx_error" {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      #user => "elastic"