Kubernetes-20231119

1. Collecting Java logs with multi-line merging

Configure the logstash yum repository.

List the installable versions:

yum --showduplicates list logstash

Install a specific version, matching the Elasticsearch version:

yum install -y logstash-8.5.3

Add the configuration file:

vim /etc/logstash/conf.d/eslog-es.conf

input {
  file {
    path => "/data/eslogs/magedu-es-cluster.log"
    type => "eslog"
    stat_interval => "1"
    start_position => "beginning"
    codec => multiline {  # multi-line merge plugin
      pattern => "^\[[0-9]{4}\-[0-9]{2}\-[0-9]{2}" # match the timestamp at the start of each Java log entry
      negate => "true"  # when true, lines that do NOT match the pattern are handled per `what`
      what => "previous" # append non-matching lines to the event of the previous matching line
    }
  }
}

output {
  if [type] == "eslog" {
    elasticsearch {
      hosts =>  ["192.168.220.107:9200"]
      index => "magedu-eslog-%{+YYYY.MM}"
      user => "magedu"
      password => "123456"
    }}
}
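The multiline pattern above can be sanity-checked outside logstash with `grep -E`; the sample log lines below are assumptions in the usual Elasticsearch/Java log format:

```shell
# Same regex idea as the multiline codec: a new event starts with "[YYYY-MM-DD".
pattern='^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'

line1='[2023-11-19T10:00:00,123][INFO ][o.e.n.Node] [node1] started'
line2='        at java.base/java.lang.Thread.run(Thread.java:833)'

echo "$line1" | grep -qE "$pattern" && echo "line1 starts a new event"
echo "$line2" | grep -qE "$pattern" || echo "line2 is merged into the previous event"
```

With `negate => true` and `what => previous`, only lines like `line1` open a new event; stack-trace lines like `line2` are folded into it.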

systemctl restart logstash

2. TCP log collection, syslog log collection, and deleting historical indices via the ES API

On CentOS 7, use rsyslog to collect HAProxy logs and forward them to logstash.

Add a logstash test configuration for rsyslog:

vim /etc/logstash/conf.d/rsyslogtest.conf

input{
  syslog {
    type => "rsyslog-haproxy"
    port => "514"  # local port to listen on
}}

output{
  stdout{
    codec => rubydebug
}}

Start logstash, listening on port 514:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/rsyslogtest.conf

Install HAProxy:

yum install -y haproxy

Enable HAProxy logging and configure a reverse proxy to Kibana:

vim /etc/haproxy/haproxy.cfg

global

    log         127.0.0.1 local6

listen kibana
    bind 0.0.0.0:5601
    log global
    server 192.168.220.106 192.168.220.106:5601 check inter 2s fall 3 rise 3

The local0–local7 facilities (facility codes 16–23) are reserved for local use; the syslog severity levels are:

Level    Code  Description
emerg    0     System is unusable
alert    1     Action must be taken immediately
crit     2     Critical conditions
err      3     Error conditions
warning  4     Warning conditions
notice   5     Normal but significant events
info     6     Informational messages
debug    7     Debug-level messages
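As an aside, the numeric PRI value carried in a syslog packet combines facility and severity as facility × 8 + severity. For the local6.info events HAProxy emits here:

```shell
# PRI value for facility local6 (code 22) at severity info (code 6).
facility=22   # local6
severity=6    # info
pri=$(( facility * 8 + severity ))
echo "<$pri>"   # prints <182>, the PRI header of such a message
```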

Edit the rsyslog configuration file:

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

local6.*  @@192.168.220.106:514 # @@ forwards logs over TCP (a single @ would use UDP)

Restart the services:

systemctl restart haproxy

systemctl restart rsyslog

Visit the HAProxy reverse-proxy address in a browser and check whether logstash receives the logs:

http://192.168.220.108:5601/

Add a logstash configuration to write the logs into ES:

vim /etc/logstash/conf.d/rsyslogtoes.conf

input{
  syslog {
    type => "rsyslog-haproxy"
    port => "514"  # local port to listen on
}}

output{
  if [type] == "rsyslog-haproxy" {
    elasticsearch {
      hosts =>  ["192.168.220.107:9200"]
      index => "magedu-rsyslog-haproxy-%{+YYYY.MM}"
      user => "magedu"
      password => "123456"
    }}
}

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/rsyslogtoes.conf

Open Kibana, create a data view, and inspect the logs.

Collecting logs over TCP with logstash

vim /etc/logstash/conf.d/tcplog.conf

input {
  tcp {
    port => 9889
    type => "magedu-tcplog"
    mode => "server"
  }
}


output {
  if [type] == "magedu-tcplog" {
    elasticsearch {
      hosts => ["192.168.220.108:9200"]
      index => "magedu-tcplog-%{+YYYY.MM.dd}"
      user => "magedu"
      password => "123456"
  }}
}

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcplog.conf

Send a log message to logstash:

echo "ERROR tcplog message1" > /dev/tcp/192.168.220.106/9889

Or use the nc command:

apt install -y netcat

root@web3:~# echo "nc test" | nc 172.31.2.107 9889

nc 172.31.2.107 9889 < /etc/passwd
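A note on the `echo` form above: `/dev/tcp/HOST/PORT` is not a real file but a bash redirection feature; bash opens a TCP socket when a redirection target has that form, and a refused connection makes the redirection fail. A small local sketch (assuming nothing is listening on 127.0.0.1:9):

```shell
# Redirecting to /dev/tcp opens a TCP connection from within bash itself.
if ! echo "probe" > /dev/tcp/127.0.0.1/9 2>/dev/null; then
  echo "connection refused - no listener on 127.0.0.1:9"
fi
```

This is why the technique works without any client tool installed, but only under bash (not plain sh).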

Verify the logs in Kibana.

Deleting historical indices via the ES API

Indices can be deleted via the API (or with the elasticsearch-head plugin):

root@es1:~# curl -u magedu:123456 -X DELETE "http://172.31.2.102:9200/<index-name>?pretty" # delete a single index

root@es1:~# cat /data/scripts/es-index-delete.sh # batch deletion via script

#!/bin/bash
DATE=`date -d "2 days ago" +%Y.%m.%d`
index="
logstash-magedu-accesslog
magedu-app1-errorlog
"
for NAME in  ${index};do
  INDEX_NAME="$NAME-$DATE"
  echo $INDEX_NAME
  curl -u magedu:123456 -XDELETE http://172.31.2.101:9200/${INDEX_NAME}
done
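The date arithmetic in the script can be checked with a dry run that only prints the index names it would delete (a fixed reference date is assumed here for reproducibility; requires GNU date):

```shell
# Compute the dated index names without sending any DELETE requests.
DATE=$(date -d "2023-12-12 2 days ago" +%Y.%m.%d)   # relative to a fixed date
for NAME in logstash-magedu-accesslog magedu-app1-errorlog; do
  echo "would delete: ${NAME}-${DATE}"
done
```

Running it confirms the names match the `%{+YYYY.MM.dd}` index pattern before wiring the real `curl -XDELETE` back in.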

3. Buffering logs in Redis with logstash before consuming them into the ES cluster, and the filebeat-logstash-redis-logstash-es architecture

Buffering logs in Redis with logstash, then consuming them into the ES cluster

Logstash Redis-based collection reference: Redis output plugin | Logstash Reference [8.11] | Elastic

yum install redis-stack -y

vim /etc/redis-stack.conf

bind 0.0.0.0

port 6379

daemonize no

requirepass 123456

systemctl restart redis-stack-server

Install nginx and change the nginx access log to JSON format:

vim /etc/nginx/nginx.conf

http {
    log_format access_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"uri":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"tcp_xff":"$proxy_protocol_addr",'
        '"http_user_agent":"$http_user_agent",'
        '"status":"$status"}';
    access_log  /var/log/nginx/access.log  access_json;
}
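Because `size` and `responsetime` are written without quotes in the `log_format` above, they land in ES as numbers rather than strings; a sample line in this format (abridged, values assumed) illustrates the difference:

```shell
# A hypothetical access_json line; numeric fields are deliberately unquoted.
line='{"@timestamp":"2023-11-19T10:00:00+08:00","clientip":"10.0.0.1","size":612,"responsetime":0.002,"status":"200"}'
echo "$line" | grep -qF '"size":612,' && echo "size is a JSON number"
echo "$line" | grep -qF '"status":"200"' && echo "status is a JSON string"
```

Numeric fields allow range queries and aggregations in Kibana without an explicit mapping change.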

Configure logstash to collect the nginx access and error logs and write them to Redis:

vim /etc/logstash/conf.d/nginxtolostash.conf

input {
  file {
    path => "/var/log/nginx/access.log"
    type => "magedu-nginx-accesslog"
    start_position => "beginning"
    stat_interval => "1"
    codec => "json" # parse the JSON-formatted log lines
  }

  file {
    path => "/var/log/nginx/error.log"
    type => "magedu-nginx-errorlog"
    start_position => "beginning"
    stat_interval => "1"
  }
}

filter {
  if [type] == "magedu-nginx-errorlog" {
    grok {
      match => { "message" => ["(?<timestamp>%{YEAR}[./]%{MONTHNUM}[./]%{MONTHDAY} %{TIME}) \[%{LOGLEVEL:loglevel}\] %{POSINT:pid}#%{NUMBER:threadid}\: \*%{NUMBER:connectionid} %{GREEDYDATA:message}, client: %{IPV4:clientip}, server: %{GREEDYDATA:server}, request: \"(?:%{WORD:request-method} %{NOTSPACE:request-uri}(?: HTTP/%{NUMBER:httpversion}))\", host: %{GREEDYDATA:domainname}"]}
      remove_field => "message" # drop the raw message field after parsing
    }
  }
}


output {
  if [type] == "magedu-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "magedu-nginx-accesslog"
      host => "192.168.220.202"
      port => "6379"
      db => "0"
      password => "123456"
    }
  }
  if [type] == "magedu-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "magedu-nginx-errorlog"
      host => "192.168.220.202"
      port => "6379"
      db => "0"
      password => "123456"
    }
  }
}

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginxtolostash.conf

Verify the logs in Redis:

root@redis:~# redis-cli

127.0.0.1:6379> AUTH 123456

127.0.0.1:6379> KEYS *

1) "magedu-nginx-errorlog"

2) "magedu-nginx-accesslog"

Consume the nginx logs from Redis with logstash and write them to ES:

vim /etc/logstash/conf.d/magedu-nginxlog-redis-to-es.conf

input {
    redis {
      host => "192.168.220.202"
      port => "6379"
      db => 0
      password => "123456"
      data_type => "list"
      key => "magedu-nginx-accesslog"
      codec => "json"
    }

    redis {
      host => "192.168.220.202"
      port => "6379"
      db => 0
      password => "123456"
      data_type => "list"
      key => "magedu-nginx-errorlog"
      codec => "json"
    }

}


output {
   if [type] == "magedu-nginx-accesslog" {
    elasticsearch {
      hosts => ["192.168.220.108:9200"]
      index => "logstash-magedu-nginx-accesslog-%{+YYYY.MM.dd}"
      password => "123456"
      user => "magedu"
   }}

  if [type] == "magedu-nginx-errorlog" {
    elasticsearch {
      hosts => ["192.168.220.108:9200"]
      index => "logstash-magedu-nginx-errorlog-%{+YYYY.MM.dd}"
      password => "123456"
      user => "magedu"
   }}
}

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/magedu-nginxlog-redis-to-es.conf

Verify the nginx access-log and error-log indices.

The filebeat-logstash-redis-logstash-es architecture

Deploy and configure filebeat:

yum install -y filebeat-8.5.3

vim /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: filestream
  id: magedu-app1
  enabled: true
  paths:
    - /var/log/nginx/error.log 
  fields:
    project: magedu
    type: magedu-app1-errorlog

- type: filestream
  id: magedu-app1
  enabled: true
  paths:
    - /var/log/nginx/access.log 
  fields:
    project: magedu
    type: magedu-app1-accesslog

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1
  
output.logstash:
  enabled: true
  hosts: ["192.168.220.107:5044"]
  loadbalance: true
  worker: 1
  compression_level: 3

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

systemctl start filebeat.service

Configure logstash to receive the filebeat logs and forward them to Redis:

vim /etc/logstash/conf.d/beats-magedu-to-redis.conf

input {
  beats {
    port => 5044
    codec => "json"
  }
}


output {
  #stdout {
  #  codec => "rubydebug"
  #}
####################################
  if [fields][type] == "magedu-app1-accesslog" {
  redis {
    host => "192.168.220.202"
    password => "123456"
    port => "6379"
    db => "0"
    key => "magedu-app1-accesslog"
    data_type => "list"
   }
  }
  if [fields][type] == "magedu-app1-errorlog" {
  redis {
    host => "192.168.220.202"
    password => "123456"
    port => "6379"
    db => "0"
    key => "magedu-app1-errorlog"
    data_type => "list"
     }
  }
}
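The conditionals above test `[fields][type]` rather than `[type]` because filebeat nests the custom `fields` entries from filebeat.yml under a top-level `fields` object in every event it ships. A sketch with an assumed, abridged event shows the shape:

```shell
# Abridged filebeat event as it arrives at the beats input (hypothetical values).
event='{"@timestamp":"2023-11-19T02:00:00.000Z","message":"GET /myapp/ 200","fields":{"project":"magedu","type":"magedu-app1-accesslog"}}'

# The [fields][type] conditional in logstash corresponds to this nested key:
echo "$event" | grep -qF '"fields":{"project":"magedu","type":"magedu-app1-accesslog"}' \
  && echo "event routed to redis key magedu-app1-accesslog"
```

The same `[fields][type]` path must be used again later when consuming these events out of Redis.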

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats-magedu-to-redis.conf

Verify the data in Redis:

127.0.0.1:6379> KEYS *

1) "magedu-app1-accesslog"
2) "magedu-app1-errorlog"

Configure logstash to consume the logs from Redis and write them to Elasticsearch:

vim /etc/logstash/conf.d/filebeat-nginxlog-redis-to-es.conf

input {
  redis {
    data_type => "list"
    key => "magedu-app1-accesslog"
    host => "192.168.220.202"
    port => "6379"
    db => "0"
    password => "123456"
    codec => "json"  # parse JSON
  }

  redis {
    data_type => "list"
    key => "magedu-app1-errorlog"
    host => "192.168.220.202"
    port => "6379"
    db => "0"
    password => "123456"
  }
}

output {
  if [fields][type] == "magedu-app1-accesslog" {
    elasticsearch {
      hosts => ["192.168.220.108:9200"]
      index => "filebeat-accesslog-%{+YYYY.MM.dd}"
      user => "magedu"
      password => "123456"
    }
  }

  if [fields][type] == "magedu-app1-errorlog" {
    elasticsearch {
      hosts => ["192.168.220.108:9200"]
      index => "filebeat-errorlog-%{+YYYY.MM.dd}"
      user => "magedu"
      password => "123456"
    }
  }
}

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat-nginxlog-redis-to-es.conf

Verify the nginx error and access logs in Kibana.

4. Creating visualizations and dashboards in Kibana

Create a visualization in Kibana: Visualize Library --> Create visualization

Example visualization types:

Vertical bar chart - top 10 client IPs by request count

Table - most-requested pages

Pie chart - status-code percentages

Create a dashboard in Kibana: Dashboard --> Create dashboard --> Add from library --> Save

5. Deploy a Kafka cluster in the K8S environment (Kafka course - deploying a Kafka cluster with the Strimzi Operator), or use an existing Kafka cluster, for log collection in the K8S environment in later sections

6. Kubernetes log collection - collecting container and system logs with a DaemonSet

Prerequisites: a private image registry, Kafka, ES, and Kibana are available.
Edit the logstash configuration file to be baked into the image; the pod logs and system logs under /var/log/ on each node host will be mounted into the logstash container for collection.

vim logstash.conf

input {
  file {
    #path => "/var/lib/docker/containers/*/*-json.log" #docker
    path => "/var/log/pods/*/*/*.log"
    start_position => "beginning"
    type => "jsonfile-daemonset-applog"
  }

  file {
    path => "/var/log/*.log"
    start_position => "beginning"
    type => "jsonfile-daemonset-syslog"
  }
}

output {
  if [type] == "jsonfile-daemonset-applog" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384  # Kafka producer batch size, in bytes, per request
      codec => "${CODEC}"
   } }

  if [type] == "jsonfile-daemonset-syslog" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}" # note: the system logs themselves are not JSON
  }}
}
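`${KAFKA_SERVER}`, `${TOPIC_ID}`, and `${CODEC}` are not logstash fields but environment-variable substitutions, which logstash resolves from the container environment at startup; the DaemonSet sets them via `env` entries. A local sketch of the substitution (values match the DaemonSet below):

```shell
# Logstash replaces ${VAR} in pipeline configs with environment values.
export KAFKA_SERVER="192.168.220.201:9092"
export TOPIC_ID="jsonfile-log-topic"
export CODEC="json"
echo "bootstrap_servers => \"${KAFKA_SERVER}\""
echo "topic_id => \"${TOPIC_ID}\""
```

This keeps one image reusable across clusters: only the DaemonSet env block changes.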

vim logstash.yml

http.host: "0.0.0.0"

vim Dockerfile

FROM logstash:7.12.1
USER root
WORKDIR /usr/share/logstash
#RUN rm -rf config/logstash-sample.conf
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf

vim build-commond.sh

#!/bin/bash

docker build -t harbor.magedu.net/baseimages/logstash:v7.12.1-json-file-log-v4 .

docker push harbor.magedu.net/baseimages/logstash:v7.12.1-json-file-log-v4

#nerdctl build -t harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-json-file-log-v1 .

#nerdctl push harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-json-file-log-v1

Build the logstash image and push it to the registry, for deployment in k8s to collect container logs:

sh build-commond.sh

Deploy the DaemonSet using the logstash image:

vim 2.DaemonSet-logstash.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: logstash-logging
spec:
  selector:
    matchLabels:
      name: logstash-elasticsearch
  template:
    metadata:
      labels:
        name: logstash-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: logstash-elasticsearch
        image: harbor.magedu.net/baseimages/logstash:v7.12.1-json-file-log-v4
        env:
        - name: "KAFKA_SERVER"
          value: "192.168.220.201:9092"
          #value: "172.31.7.111:39092,172.31.7.112:39092,172.31.7.113:39092"
        - name: "TOPIC_ID"
          value: "jsonfile-log-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts:
        - name: varlog # mount for the host's system logs
          mountPath: /var/log # mount point for host system logs
        - name: varlibdockercontainers # mount for container logs; must match the collection path in logstash.conf
          #mountPath: /var/lib/docker/containers # docker mount path
          mountPath: /var/log/pods # containerd mount path; must match the logstash collection path
          readOnly: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log # host system logs
      - name: varlibdockercontainers
        hostPath:
          #path: /var/lib/docker/containers # host log path when using docker
          path: /var/log/pods # host log path when using containerd

Check that the deployment succeeded:

[root@DELL_PC 1.daemonset-logstash]# kubectl get pod -A -o wide

NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

kube-system calico-kube-controllers-6655b6c4b-hzjzw 1/1 Running 7 (141m ago) 3h17m 10.200.252.1 192.168.220.201

kube-system calico-node-pcmxm 1/1 Running 0 3h17m 192.168.220.201 192.168.220.201

kube-system calico-node-z7gbj 1/1 Running 1 (23m ago) 3h17m 192.168.220.101 k8s-master

kube-system logstash-elasticsearch-cmgkl 0/1 ContainerCreating 0 14s 192.168.220.201

Edit the logstash configuration that consumes data from Kafka and sends it to ES:

vim 3.logsatsh-daemonset-jsonfile-kafka-to-es.conf

input {
  kafka {
    #bootstrap_servers => "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
    bootstrap_servers => "192.168.220.201:9092"
    topics => ["jsonfile-log-topic"]
    codec => "json"
  }
}




output {
  #if [fields][type] == "app1-access-log" {
  if [type] == "jsonfile-daemonset-applog" {
    elasticsearch {
      hosts => ["192.168.220.106:9200"]
      index => "jsonfile-daemonset-applog-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }}

  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["192.168.220.106:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }}
}

Start the logstash service:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/3.logsatsh-daemonset-jsonfile-kafka-to-es.conf

Create a data view in Kibana and inspect the logs.

Note: ES was reachable but Kibana returned an error. The cause: with security enabled, the ES cluster had been reduced from multiple hosts to one, so no master node could be discovered and the cluster state was abnormal. The fix was to switch ES to single-node mode and re-initialize the ES usernames and passwords. When ES security is enabled, the Kibana configuration must set the ES username to kibana_system and supply its password (not needed when ES security is disabled).

Single-node ES configuration file:

#cluster.name: magedu-es-cluster

#node.name: node1

#path.data: /data/esdata

path.logs: /data/eslogs

network.host: 0.0.0.0

http.port: 9200

#discovery.seed_hosts: ["192.168.220.106"]

cluster.initial_master_nodes: ["192.168.220.106"]

action.destructive_requires_name: true

xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true

xpack.security.transport.ssl.keystore.path: /apps/elasticsearch/config/certs/es1.example.com.p12

xpack.security.transport.ssl.truststore.path: /apps/elasticsearch/config/certs/es1.example.com.p12

7. Kubernetes log collection - collecting Pod logs with a sidecar container

Prerequisites: a private image registry, Kafka, ES, and Kibana are available.
An emptyDir volume is shared between the application container and the sidecar container so that the sidecar can collect the application's logs.

Edit the logstash configuration file to be baked into the image:

vim logstash.conf

input {
  file {
    path => "/var/log/applog/catalina.out"
    start_position => "beginning"
    type => "app1-sidecar-catalina-log"
  }
  file {
    path => "/var/log/applog/localhost_access_log.*.txt"
    start_position => "beginning"
    type => "app1-sidecar-access-log"
  }
}

output {
  if [type] == "app1-sidecar-catalina-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384  # Kafka producer batch size, in bytes, per request
      codec => "${CODEC}"
   } }

  if [type] == "app1-sidecar-access-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"
  }}
}

vim logstash.yml

http.host: "0.0.0.0"

vim Dockerfile

FROM logstash:7.12.1
USER root
WORKDIR /usr/share/logstash
#RUN rm -rf config/logstash-sample.conf
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf

vim build-commond.sh

#!/bin/bash
docker build -t harbor.magedu.net/baseimages/logstash:v7.12.1-sidecar .
docker push harbor.magedu.net/baseimages/logstash:v7.12.1-sidecar
#nerdctl  build -t harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-sidecar .
#nerdctl push harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-sidecar

Build the logstash image and push it to the registry, for deployment in k8s to collect container logs:

sh build-commond.sh

Deploy the application container and the log-collection container with a Deployment, sharing the application logs with the collector via an emptyDir volume:

vim 2.tomcat-app1.yaml

kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-deployment-label
  name: magedu-tomcat-app1-deployment # name of this deployment
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-container
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: applogs
          mountPath: /apps/tomcat/logs
        startupProbe:
          httpGet:
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5 # delay 5s before the first probe
          failureThreshold: 3  # consecutive failures before the probe is considered failed
          periodSeconds: 3 # probe interval
        readinessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
      - name: sidecar-container
        image: harbor.magedu.net/baseimages/logstash:v7.12.1-sidecar
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        env:
        - name: "KAFKA_SERVER"
          value: "192.168.220.201:9092"
          #value: "172.31.7.111:39092"
        - name: "TOPIC_ID"
          value: "tomcat-app1-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts:
        - name: applogs
          mountPath: /var/log/applog
      volumes:
      - name: applogs # emptyDir volume shared between the app container and the sidecar, so the sidecar can collect the app's logs
        emptyDir: {}

kubectl apply -f 2.tomcat-app1.yaml

Check that the deployment succeeded:

[root@DELL_PC 2.sidecar-logstash]# kubectl get pod -A
NAMESPACE     NAME                                             READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-6655b6c4b-hzjzw          1/1     Running   7 (20h ago)   21h
kube-system   calico-node-pcmxm                                1/1     Running   0             21h
kube-system   calico-node-z7gbj                                1/1     Running   1 (18h ago)   21h
magedu        magedu-tomcat-app1-deployment-6d986c6b9f-ds8jj   2/2     Running   0             53s

Deploy the Service for the application container:

vim 3.tomcat-service.yaml

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app1-service-label
  name: magedu-tomcat-app1-service
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40080
  selector:
    app: magedu-tomcat-app1-selector

kubectl apply -f 3.tomcat-service.yaml

Edit the logstash configuration that consumes data from Kafka and sends it to ES:

vim 4.logsatsh-sidecar-kafka-to-es.conf

input {
  kafka {
    #bootstrap_servers => "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
    bootstrap_servers => "192.168.220.201:9092"
    topics => ["tomcat-app1-topic"]
    codec => "json"
  }
}

output {
  #if [fields][type] == "app1-access-log" {
  if [type] == "app1-sidecar-access-log" {
    elasticsearch {
      hosts => ["192.168.220.106:9200"]
      index => "app1-sidecar-accesslog-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }
  }

  #if [fields][type] == "app1-catalina-log" {
  if [type] == "app1-sidecar-catalina-log" {
    elasticsearch {
      hosts => ["192.168.220.106:9200"]
      index => "app1-sidecar-catalinalog-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }
  }

#  stdout {
#    codec => rubydebug
#  }
}

Start the logstash service:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/4.logsatsh-sidecar-kafka-to-es.conf

Access the page to generate access logs, then append a line to the Tomcat log:

kubectl exec -it magedu-tomcat-app1-deployment-6d986c6b9f-ds8jj -n magedu sh

kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.

Defaulted container "magedu-tomcat-app1-container" out of: magedu-tomcat-app1-container, sidecar-container

sh-4.2# echo 'tomcat start log' >> /apps/tomcat/logs/catalina.out

View the collected logs in Kibana.

8. Kubernetes log collection - collecting Pod logs with an in-container filebeat process

Prerequisites: a private image registry, Kafka, ES, and Kibana are available; a Tomcat image with filebeat installed will be built.

Edit the filebeat configuration file to be baked into the image:

vim filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: filebeat-tomcat-catalina
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/localhost_access_log.*.txt
  fields:
    type: filebeat-tomcat-accesslog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:

output.kafka:
  hosts: ["192.168.220.201:9092"]
  #hosts: ["172.31.7.111:39092","172.31.7.112:39092","172.31.7.113:39092"]
  required_acks: 1
  topic: "filebeat-magedu-app1"
  compression: gzip
  max_message_bytes: 1000000
#output.redis:
#  hosts: ["172.31.2.105:6379"]
#  key: "k8s-magedu-app1"
#  db: 1
#  timeout: 5
#  password: "123456"

Edit the service startup script to be baked into the image:

vim run_tomcat.sh

#!/bin/bash
#echo "nameserver 223.6.6.6" > /etc/resolv.conf
#echo "192.168.7.248 k8s-vip.example.com" >> /etc/hosts
/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat & # run filebeat in the background
su - tomcat -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts # block in the foreground to keep the container's PID 1 alive

Edit the Tomcat configuration file to be baked into the image, changing the webapps base directory:

 <Host name="localhost"  appBase="/data/tomcat/webapps"  unpackWARs="false" autoDeploy="false">

Build and push the image; the scripts added to the image must have execute permission, otherwise the container will fail to run in K8s:

vim Dockerfile

#tomcat web1
FROM harbor.magedu.net/pub-images/tomcat-base:v8.5.43
ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
RUN chmod a+x /apps/tomcat/bin/catalina.sh
RUN chmod a+x /apps/tomcat/bin/run_tomcat.sh
ADD filebeat.yml /etc/filebeat/filebeat.yml
ADD myapp.tar.gz /data/tomcat/webapps/myapp/
RUN chown  -R tomcat.tomcat /data/ /apps/
EXPOSE 8080 8443
CMD ["/apps/tomcat/bin/run_tomcat.sh"]

vim build-command.sh

#!/bin/bash
docker build -t  harbor.magedu.net/magedu/tomcat-app1:filebeat .
docker push  harbor.magedu.net/magedu/tomcat-app1:filebeat
#nerdctl build -t  harbor.linuxarchitect.io/magedu/tomcat-app1:${TAG}  .
#nerdctl push harbor.linuxarchitect.io/magedu/tomcat-app1:${TAG}

sh build-command.sh

Deploy the newly built image (filebeat collecting the Tomcat logs) in K8s:

vim 3.tomcat-app1.yaml

kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-filebeat-deployment-label
  name: magedu-tomcat-app1-filebeat-deployment
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-tomcat-app1-filebeat-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-filebeat-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-filebeat-container
        image: harbor.magedu.net/magedu/tomcat-app1:filebeat
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"

Edit the logstash configuration that consumes data from Kafka and sends it to ES:

vim 5.logstash-filebeat-process-kafka-to-es.conf

input {
  kafka {
    bootstrap_servers => "192.168.220.201:9092"
    #bootstrap_servers => "172.31.7.111:39092,172.31.7.112:39092,172.31.7.113:39092"
    topics => ["filebeat-magedu-app1"]
    codec => "json"
  }
}



output {
  if [fields][type] == "filebeat-tomcat-catalina" {
    elasticsearch {
      hosts => ["192.168.220.106:9200"]
      index => "filebeat-tomcat-catalina-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }}

  if [fields][type] == "filebeat-tomcat-accesslog" {
    elasticsearch {
      hosts => ["192.168.220.106:9200"]
      index => "filebeat-tomcat-accesslog-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }}

}

Access the page to generate access logs, then append a line to the Tomcat log:

[root@DELL_PC 3.container-filebeat-process]# kubectl exec -it magedu-tomcat-app1-filebeat-deployment-5cc59d4d4b-jct7t -n magedu sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
sh-4.2# ls /apps/tomcat/logs/
catalina.2023-12-10.log  catalina.out  host-manager.2023-12-10.log  localhost.2023-12-10.log  localhost_access_log.2023-12-10.txt  manager.2023-12-10.log
sh-4.2# echo 'filebeat collect tomcat' >> /apps/tomcat/logs/catalina.out
sh-4.2# exit

View the collected logs in Kibana.

Further work:
  1. Write the logs into a database.
  2. Display client cities on a Kibana map.
