seventeenth work

Collecting multi-line Java logs with Logstash (multiline merging)

By default, Logstash collects logs one line at a time, so when a single log event spans multiple lines (as Java stack traces do), the lines must be merged before collection.
The multiline codec plugin, configured inside the input section, is the usual way to do this.
multiline plugin docs: https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html

input {
      stdin {
        codec => multiline {
          pattern => "pattern, a regexp" #merge events based on this regular expression
          negate => "true" or "false" #whether the condition holds when the pattern matches ("false") or fails to match ("true")
          what => "previous" or "next" #when the condition holds, merge with the previous or the next line
        }
      }
    }
    
    #Merging with the previous line ("previous") is the usual choice; it is easier to control

# pattern examples
^\[[0-9]{4}\-[0-9]{2}\-[0-9]{2} ==> [2021-08-25T08:58:20,650][WARN ] #/data/esdata/logs/magedu-m44.log
^[0-9]{2}\-[A-Za-z]+\-[0-9]{4} ==> 25-Aug-2021 #/apps/tomcat/logs/catalina.out         
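The two patterns can be checked locally with grep -E before putting them into a pipeline; the log lines below are made-up samples (a sketch, not part of the original configs):

```shell
# Hypothetical first line of an ES log event, a stack-trace
# continuation line, and a Tomcat catalina.out event
es_line='[2021-08-25T08:58:20,650][WARN ] some elasticsearch message'
stack_line='    at java.base/java.lang.Thread.run(Thread.java:829)'
tomcat_line='25-Aug-2021 08:58:20.650 INFO [main] some tomcat message'

# The ES pattern matches lines that start a new event...
echo "$es_line"     | grep -qE '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}' && echo "es: new event"
# ...but not stack-trace continuation lines, which get merged upward
echo "$stack_line"  | grep -qE '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}' || echo "es: continuation"
# The Tomcat pattern matches the catalina.out date prefix
echo "$tomcat_line" | grep -qE '^[0-9]{2}-[A-Za-z]+-[0-9]{4}' && echo "tomcat: new event"
```

With negate => "true" and what => "previous", every line that does not match the pattern is appended to the previous event.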

Collecting Elasticsearch's Java logs with Logstash

#Install Logstash on each ES server
dpkg -i logstash-8.5.1-amd64.deb

#Logstash configuration on each ES server
vim /etc/logstash/conf.d/eslog-to-es.conf
input {
  file {
    path => "/data/eslogs/my-es-application.log"
    type => "eslog"
    stat_interval => "1"
    start_position => "beginning"
    codec => multiline {
      #pattern => "^\["
      pattern => "^\[[0-9]{4}\-[0-9]{2}\-[0-9]{2}"
      negate => "true"
      what => "previous"
    }
  }
}

output {
  if [type] == "eslog" {
    elasticsearch {
      hosts =>  ["10.0.0.180:9200"]
      index => "wang-eslog-%{+YYYY.ww}"
      user => "magedu"
      password => "123456"
    }}
}
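The %{+YYYY.ww} sprintf in the index name rotates the index weekly rather than daily. Logstash evaluates it with Joda-time week-of-week-based-year; a rough shell approximation of the generated suffix (ISO week numbering via `date +%G.%V`, which can differ from Joda's rules in edge weeks, so treat this as an illustration only) is:

```shell
# Approximate the weekly index suffix that %{+YYYY.ww} generates.
# %G = ISO week-based year, %V = ISO week number (01-53).
suffix=$(date +%G.%V)
echo "wang-eslog-${suffix}"   # e.g. wang-eslog-2023.48
```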

#Test whether the Logstash configuration is valid
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/eslog-to-es.conf  -t

#Restart Logstash on each ES server
systemctl restart logstash

Kibana display of the result (screenshot omitted)

TCP log collection, Syslog log collection, and deleting historical indices via the ES API

Syslog log collection

Since no real network devices are available, HAProxy is used as the syslog source for testing.

#Install and configure haproxy
apt install -y haproxy
#Edit the configuration file
vim /etc/haproxy/haproxy.cfg
listen kibana
  bind 0.0.0.0:5601
  log global
  server 10.0.0.180 10.0.0.180:5601 check inter 2s fall 3 rise 3

listen elasticsearch-9200
  bind 0.0.0.0:9200
  log global
  server 10.0.0.181 10.0.0.181:9200 check inter 2s fall 3 rise 3
  server 10.0.0.182 10.0.0.182:9200 check inter 2s fall 3 rise 3

#Restart haproxy
systemctl restart haproxy

HAProxy access test (screenshot omitted)

Configure the Logstash server

vim /etc/logstash/conf.d/rsyslog-haproxy-to-es.conf
input{
  syslog {
    type => "rsyslog-haproxy"
    port => "2514"  #listen on a local port that rsyslog forwards to; Logstash runs as a non-root user and cannot bind ports below 1024
}}

output{
  if [type] == "rsyslog-haproxy" {
    elasticsearch {
      hosts =>  ["10.0.0.180:9200"]
      index => "wang-rsyslog-haproxy-%{+YYYY.ww}"
      user => "magedu"
      password => "123456"
    }}
}

#Test whether the configuration file is valid
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/rsyslog-haproxy-to-es.conf -t

#Restart Logstash
systemctl restart logstash

Configure rsyslog

vim /etc/rsyslog.d/49-haproxy.conf
# Create an additional socket in haproxy's chroot in order to allow logging via
# /dev/log to chroot'ed HAProxy processes
$AddUnixListenSocket /var/lib/haproxy/dev/log

# Send HAProxy messages to a dedicated logfile
:programname, startswith, "haproxy" {
 # /var/log/haproxy.log
 @@10.0.0.183:2514
  stop
}

# @@10.0.0.184:514 means TCP to port 514 on 10.0.0.184 (@@ = TCP, @ = UDP)
# /var/log/haproxy.log would write to a local file instead

#Restart rsyslog
systemctl restart rsyslog.service

Kibana verification (screenshot omitted)

TCP log collection

#Edit the Logstash configuration
vim /etc/logstash/conf.d/tcp-log-to-es.conf
input {
  tcp {
    port => 9889 #have Logstash listen on this port; logs are sent to it
    type => "wang-tcplog"
    mode => "server"  #run as the server side
  }
}


output {
  if [type] == "wang-tcplog" {
    elasticsearch {
      hosts => ["10.0.0.180:9200"]
      index => "wang-tcplog-%{+YYYY.MM.dd}"
      user => "magedu"
      password => "123456"
  }}
}


#Restart logstash
systemctl restart logstash

Send logs to Logstash

#Send a log line to Logstash via bash's /dev/tcp
echo "ERROR tcplog message1"  > /dev/tcp/10.0.0.183/9889

#Or use the nc command
apt install netcat
echo "nc test" | nc 10.0.0.183 9889
nc 10.0.0.183 9889 < /etc/passwd

Kibana view (screenshot omitted)

Deleting historical indices via the ES API

#Delete a single index
curl -u magedu:123456 -X DELETE "http://10.0.0.180:9200/test_index?pretty" 

#Batch-delete via a script
cat /data/scripts/es-index-delete.sh
#!/bin/bash
DATE=`date -d "10 days ago" +%Y.%m.%d`
index="
wang-nginx-errorlog
wang-nginx-accesslog
"
for NAME in  ${index};do
  INDEX_NAME="$NAME-$DATE"
  echo $INDEX_NAME
  curl -u magedu:123456 -X DELETE http://10.0.0.180:9200/${INDEX_NAME}
done
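The date arithmetic in the script can be sanity-checked in isolation; the sketch below is a dry run that only prints the DELETE URLs instead of issuing them (host and index names copied from the script above):

```shell
#!/bin/bash
# Dry run of the cleanup: build the index names for 10 days ago and
# print the DELETE URLs instead of calling curl.
DATE=$(date -d "10 days ago" +%Y.%m.%d)
for NAME in wang-nginx-errorlog wang-nginx-accesslog; do
  INDEX_NAME="$NAME-$DATE"
  echo "DELETE http://10.0.0.180:9200/${INDEX_NAME}"
done
```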

Buffering logs in Redis with Logstash before consuming them into the ES cluster: the filebeat-logstash-redis-logstash-es architecture

Install Redis

Redis server: 10.0.0.185

apt install redis -y
#In /etc/redis/redis.conf set:
bind 0.0.0.0
requirepass 123456

#Restart Redis
systemctl restart redis-server.service

Configure Filebeat to collect Nginx logs

Server: 10.0.0.183

Install Nginx

#Download and unpack the nginx source
wget https://nginx.org/download/nginx-1.24.0.tar.gz
tar xvf nginx-1.24.0.tar.gz
cd nginx-1.24.0
#Install the base packages nginx needs
apt install iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev gcc openssh-server iotop unzip zip make

#Configure the build
./configure --prefix=/apps/nginx \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-pcre \
--with-file-aio \
--with-stream \
--with-stream_ssl_module \
--with-stream_realip_module

#Compile and install
make && make install

#Edit the nginx configuration file
vim /apps/nginx/conf/nginx.conf
 server {
        listen       80;
        server_name  www.wang.com;
        
#Configure nginx to log in JSON format
    log_format access_json '{"@timestamp":"$time_iso8601",'
                           '"host":"$server_addr",'
                           '"clientip":"$remote_addr",'
                           '"size":$body_bytes_sent,'
                           '"responsetime":$request_time,'
                           '"upstreamtime":"$upstream_response_time",'
                           '"upstreamhost":"$upstream_addr",'
                           '"http_host":"$host",'
                           '"uri":"$uri",'
                           '"domain":"$host",'
                           '"xff":"$http_x_forwarded_for",'
                           '"referer":"$http_referer",'
                           '"tcp_xff":"$proxy_protocol_addr",'
                           '"http_user_agent":"$http_user_agent",'
                           '"status":"$status"}';
    access_log /apps/nginx/logs/json-ccess.log access_json;
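With this log_format, every rendered access-log line should be a single JSON object; note that size and responsetime are deliberately left unquoted so they index as numbers. A quick local check on a hypothetical rendered line (all values made up):

```shell
# A made-up line as nginx would render it from the log_format above;
# size and responsetime are unquoted numbers, everything else a string.
sample='{"@timestamp":"2023-11-30T10:00:00+08:00","host":"10.0.0.183","clientip":"10.0.0.1","size":612,"responsetime":0.004,"upstreamtime":"-","upstreamhost":"-","http_host":"www.wang.com","uri":"/index.html","domain":"www.wang.com","xff":"-","referer":"-","tcp_xff":"-","http_user_agent":"curl/7.81.0","status":"200"}'
# json.tool fails loudly on invalid JSON
echo "$sample" | python3 -m json.tool > /dev/null && echo "valid JSON"
```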
    

Collect the Nginx logs with Filebeat

#Install
dpkg -i filebeat-8.5.1-amd64.deb
#Edit the Filebeat configuration file
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: filestream
  id: wang-nginx #arbitrary identifier
  enabled: true #enable this input
  paths:
    - /apps/nginx/logs/json-ccess.log
  fields:
    project: wang
    type: wang-nginx-json-accesslog
- type: filestream
  id: wang-nginx
  enabled: true
  paths:
    - /apps/nginx/logs/error.log
  fields:
    project: wang
    type: wang-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:

output.logstash:
  hosts: ["10.0.0.184:5044"]
  enabled: true
  loadbalance: true
  worker: 1
  compression_level: 3
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~


#Restart Filebeat
systemctl restart filebeat

Install and configure Logstash

#Install
dpkg -i logstash-8.5.1-amd64.deb
#Edit the configuration file
vim /etc/logstash/conf.d/wang-nginx-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
}


output {
  #stdout {
  #  codec => "rubydebug"
  #}
####################################
  if [fields][type] == "wang-nginx-json-accesslog" {
  redis {
    host => "10.0.0.185"
    password => "123456"
    port => "6379"
    db => "0"
    key => "wang-nginx-json-accesslog"
    data_type => "list"
   }
  }
  if [fields][type] == "wang-nginx-errorlog" {
  redis {
    host => "10.0.0.185"
    password => "123456"
    port => "6379"
    db => "0"
    key => "wang-nginx-errorlog"
    data_type => "list"
     }
  } 
}

#Check whether the configuration file is valid
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wang-nginx-to-redis.conf  -t

#Restart Logstash
systemctl restart logstash

Restart the Filebeat service, then access the Nginx page to generate logs.

Configure Logstash to consume data from Redis and write it to ES

This Logstash server: 10.0.0.183

#Edit the Logstash configuration
vim /etc/logstash/conf.d/redis-to-logstash-to-es.conf
input {
  redis {
    data_type => "list"
    key => "wang-nginx-json-accesslog"
    host => "10.0.0.185"
    port => "6379"
    db => "0"
    password => "123456"
    codec => "json"  #parse as JSON
  }

  redis {
    data_type => "list"
    key => "wang-nginx-errorlog"
    host => "10.0.0.185"
    port => "6379"
    db => "0"
    password => "123456"
  }
}

output {
  if [fields][type] == "wang-nginx-json-accesslog" {
    elasticsearch {
      hosts => ["10.0.0.180:9200"]
      index => "wang-nginx-json-accesslog-%{+YYYY.MM.dd}"
      user => "magedu"
      password => "123456"
    }
  }

  if [fields][type] == "wang-nginx-errorlog" {
    elasticsearch {
      hosts => ["10.0.0.180:9200"]
      index => "wang-nginx-errorlog-%{+YYYY.MM.dd}"
      user => "magedu"
      password => "123456"
    }
  }
}

#Test whether the configuration file is valid
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-to-logstash-to-es.conf -t
#Restart Logstash
systemctl restart logstash

ES verification (screenshot omitted)
Kibana verification (screenshot omitted)

Creating visualizations and a Dashboard in Kibana

Create visualizations in Kibana

(screenshots omitted)

Create a Dashboard

(screenshots omitted)

Deploying a Kafka cluster in K8S (from the Kafka course: Kafka via the Strimzi Operator), or using an existing Kafka cluster, for the later K8S log collection

#Create NFS shared storage on server 10.0.0.100
mkdir -p /data/volumes

vim /etc/exports
/data/volumes *(rw,no_root_squash,insecure)

systemctl restart nfs-server.service

root@k8s-deploy:~# showmount -e
Export list for k8s-deploy.canghailyt.com:
/data/volumes                   *

#Create a StorageClass in k8s
cat 1.rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: wang
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: wang
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: wang
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: wang
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: wang
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io



#Apply
kubectl apply -f 1.rbac.yaml

cat 2-storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  #name: managed-nfs-storage
  name: nfs-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" 
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the deployment's env PROVISIONER_NAME
reclaimPolicy: Retain #PV reclaim policy; the default delete removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  #- vers=4.1 #some parameters misbehave with containerd
  #- noresvport #tell the NFS client to use a new TCP source port when re-establishing the network connection
  - noatime #do not update the inode access timestamp on reads; improves performance under high concurrency
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true"  #keep (archive) the data when the volume is deleted; with the default "false" the data is not kept

#Apply
kubectl apply -f 2-storageclass.yaml

cat 3-nfs-provisioner.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: wang
spec:
  replicas: 1
  strategy: #deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 
          image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2 
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.0.0.200
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.200
            path: /data/volumes

#Apply
kubectl apply -f 3-nfs-provisioner.yaml

#Deploy Kafka
kubectl create namespace wang
kubectl apply -f https://strimzi.io/install/latest?namespace=wang
#Verify the pod is running; this pod's job is to manage the Kafka cluster
# kubectl get pod -n wang
NAME                                       READY   STATUS    RESTARTS   AGE
strimzi-cluster-operator-95d88f6b5-cbtb8   1/1     Running   0          47s

#Download the Kafka deployment manifest
wget https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml

#Review and modify
vim kafka-persistent-single.yaml 
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: wang-kafka-cluster
  namespace: wang
spec:
  kafka:
    version: 3.6.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external # extra listener for external client access; omit if not needed
        port: 9094 #listener port
        type: nodeport #use the nodeport type
        tls: false
        configuration:
          bootstrap:
            nodePort: 30092 # nodeport on the host
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 3
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.6"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        class: nfs-csi
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      class: nfs-csi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

#Deploy
kubectl apply -f kafka-persistent-single.yaml
 
#Check the pods
root@k8s-master1:/opt/20231130/kafka# kubectl get pod -n wang 
NAME                                       READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5f778458fc-wbrcs    1/1     Running   0          6m40s
strimzi-cluster-operator-95d88f6b5-cbtb8   1/1     Running   0          34m
wang-kafka-cluster-kafka-0                 1/1     Running   0          37s
wang-kafka-cluster-kafka-1                 1/1     Running   0          37s
wang-kafka-cluster-kafka-2                 1/1     Running   0          37s
wang-kafka-cluster-zookeeper-0             1/1     Running   0          111s
wang-kafka-cluster-zookeeper-1             1/1     Running   0          111s
wang-kafka-cluster-zookeeper-2             1/1     Running   0          111s
  

Kubernetes log collection: container logs via a DaemonSet

Build the Logstash image

cat Dockerfile 
FROM logstash:7.12.1


USER root
WORKDIR /usr/share/logstash 
#RUN rm -rf config/logstash-sample.conf
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf 


cat build-commond.sh 
#!/bin/bash

#docker build -t harbor.magedu.local/baseimages/logstash:v7.12.1-json-file-log-v4 .

#docker push harbor.magedu.local/baseimages/logstash:v7.12.1-json-file-log-v4

nerdctl build -t harbor.canghailyt.com/base/logstash:v7.12.1-json-file-log-v1 .

nerdctl push harbor.canghailyt.com/base/logstash:v7.12.1-json-file-log-v1


cat logstash.yml 
http.host: "0.0.0.0"
#xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]


cat logstash.conf 
input {
  file {
    #path => "/var/lib/docker/containers/*/*-json.log" #docker
    path => "/var/log/pods/*/*/*.log"
    start_position => "beginning"
    type => "jsonfile-daemonset-applog"
  }

  file {
    path => "/var/log/*.log"
    start_position => "beginning"
    type => "jsonfile-daemonset-syslog"
  }
}

output {
  if [type] == "jsonfile-daemonset-applog" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384  #bytes per batch sent by Logstash to Kafka
      codec => "${CODEC}" 
   } }

  if [type] == "jsonfile-daemonset-syslog" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}" #system logs are not JSON
  }}
}
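The path glob in the first file input assumes containerd's log layout under /var/log/pods: `<namespace>_<pod>_<uid>/<container>/N.log`, i.e. exactly three levels between the root and the file. A local sketch with made-up names confirms the glob depth:

```shell
# Mock containerd's pod-log directory layout and check the glob depth.
root=$(mktemp -d)
mkdir -p "$root/kube-system_coredns-abc123_uid1/coredns"
echo '{"log":"hello"}' > "$root/kube-system_coredns-abc123_uid1/coredns/0.log"

# Three wildcard levels, exactly as in path => "/var/log/pods/*/*/*.log"
ls "$root"/*/*/*.log
rm -rf "$root"
```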


#Build the image
bash build-commond.sh

Create the DaemonSet

#Edit the DaemonSet YAML
vim 2.DaemonSet-logstash.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: logstash-logging
spec:
  selector:
    matchLabels:
      name: logstash-elasticsearch
  template:
    metadata:
      labels:
        name: logstash-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: logstash-elasticsearch
        image: harbor.canghailyt.com/base/logstash:v7.12.1-json-file-log-v1 
        env:
        - name: "KAFKA_SERVER"
          #value: "172.31.2.107:9092,172.31.2.108:9092,172.31.2.109:9092"
          value: "10.0.0.210:30092,10.0.0.211:30092,10.0.0.209:30092"
        - name: "TOPIC_ID"
          value: "jsonfile-log-topic"
        - name: "CODEC"
          value: "json"
#        resources:
#          limits:
#            cpu: 1000m
#            memory: 1024Mi
#          requests:
#            cpu: 500m
#            memory: 1024Mi
        volumeMounts:
        - name: varlog #mount for the host's system logs
          mountPath: /var/log #host system-log mount point
        - name: varlibdockercontainers #container-log mount; must stay consistent with the collection path in the logstash config
          #mountPath: /var/lib/docker/containers #docker mount path
          mountPath: /var/log/pods #containerd mount path; must match logstash's log-collection path
          readOnly: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log #host system logs
      - name: varlibdockercontainers
        hostPath:
          #path: /var/lib/docker/containers #host log path for docker
          path: /var/log/pods #host log path for containerd

#Verify the pods
root@k8s-master1:/opt/20231130/1.daemonset-logstash# kubectl get pod -n kube-system | grep logs
logstash-elasticsearch-79cnx              1/1     Running   0                2m12s
logstash-elasticsearch-7zpwr              1/1     Running   0                2m12s
logstash-elasticsearch-h2jdm              1/1     Running   0                2m12s
logstash-elasticsearch-jbsmn              1/1     Running   0                2m12s
logstash-elasticsearch-twxw5              1/1     Running   0                2m12s
logstash-elasticsearch-w5tq7              1/1     Running   0                2m12s

Configure the Logstash server to consume the Kafka messages and write them to ES

Server: 10.0.0.183

#Edit the Logstash configuration
vim /etc/logstash/conf.d/logsatsh-daemonset-jsonfile-kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "10.0.0.209:30092,10.0.0.210:30092,10.0.0.211:30092"
    topics => ["jsonfile-log-topic"]
    codec => "json"
  }
}




output {
  #if [fields][type] == "app1-access-log" {
  if [type] == "jsonfile-daemonset-applog" {
    elasticsearch {
      hosts => ["10.0.0.180:9200"]
      index => "jsonfile-daemonset-applog-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }}

  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["10.0.0.180:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }}
}

#Test whether the configuration is valid
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logsatsh-daemonset-jsonfile-kafka-to-es.conf  -t

#Restart Logstash
systemctl restart logstash

ES verification (screenshot omitted)
Kibana verification (screenshot omitted)

Kubernetes log collection: pod logs via a sidecar container

Build the sidecar image

cat Dockerfile 
FROM logstash:7.12.1


USER root
WORKDIR /usr/share/logstash 
#RUN rm -rf config/logstash-sample.conf
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf 


cat logstash.conf 
input {
  file {
    path => "/var/log/applog/catalina.out"
    start_position => "beginning"
    type => "app1-sidecar-catalina-log"
  }
  file {
    path => "/var/log/applog/localhost_access_log.*.txt"
    start_position => "beginning"
    type => "app1-sidecar-access-log"
  }
}

output {
  if [type] == "app1-sidecar-catalina-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384  #bytes per batch sent by Logstash to Kafka
      codec => "${CODEC}" 
   } }

  if [type] == "app1-sidecar-access-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"
  }}
}


cat logstash.yml 
http.host: "0.0.0.0"
#xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]


cat build-commond.sh 
#!/bin/bash

#docker build -t harbor.magedu.local/baseimages/logstash:v7.12.1-sidecar .

#docker push harbor.magedu.local/baseimages/logstash:v7.12.1-sidecar
nerdctl  build -t harbor.canghailyt.com/base/logstash:v7.12.1-sidecar .
nerdctl push harbor.canghailyt.com/base/logstash:v7.12.1-sidecar


#Run the build
bash build-commond.sh

Deploy the web service

vim 2.tomcat-app1.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-deployment-label
  name: magedu-tomcat-app1-deployment #name of this deployment version
  namespace: wang
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-container
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: applogs
          mountPath: /apps/tomcat/logs
        startupProbe:
          httpGet:
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5 #delay 5s before the first probe
          failureThreshold: 3  #failures needed before the probe is considered failed
          periodSeconds: 3 #probe interval
        readinessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
      - name: sidecar-container
        image: harbor.canghailyt.com/base/logstash:v7.12.1-sidecar
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        env:
        - name: "KAFKA_SERVER"
          #value: "172.31.2.107:9092,172.31.2.108:9092,172.31.2.109:9092"
          value: "10.0.0.209:30092,10.0.0.210:30092,10.0.0.211:30092"
        - name: "TOPIC_ID"
          value: "tomcat-app1-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts:
        - name: applogs
          mountPath: /var/log/applog
      volumes:
      - name: applogs #emptyDir shares the log directory between the app container and the sidecar, so the sidecar can collect the app's logs
        emptyDir: {}
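The shared emptyDir can be pictured as one directory mounted into both containers: whatever the app writes at its mount point, the sidecar reads at its own. A local sketch (file names mimic Tomcat's, everything else is made up):

```shell
# Simulate the app container writing into the shared volume while the
# sidecar reads it: both "containers" see the same directory.
shared=$(mktemp -d)               # stands in for the emptyDir volume

# "app container": appends a log line at its mount point
echo "GET /myapp/index.html 200" >> "$shared/localhost_access_log.2023-11-30.txt"

# "sidecar container": reads the same file at its own mount point
cat "$shared"/localhost_access_log.*.txt
rm -rf "$shared"
```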

vim 3.tomcat-service.yaml 
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app1-service-label
  name: magedu-tomcat-app1-service
  namespace: wang
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30180
  selector:
    app: magedu-tomcat-app1-selector

#Deploy
kubectl apply -f 2.tomcat-app1.yaml -f 3.tomcat-service.yaml

#Check the pod
kubectl get pod -n wang 
NAME                                                  READY   STATUS    RESTARTS      AGE
magedu-tomcat-app1-deployment-866bc89b47-478vw        2/2     Running   0             7m56s

Web UI verification (screenshot omitted)

Consume the Kafka messages and write them to ES

#Logstash configuration
vim /etc/logstash/conf.d/logsatsh-sidecar-kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "10.0.0.209:30092,10.0.0.210:30092,10.0.0.211:30092"
    topics => ["tomcat-app1-topic"]
    codec => "json"
  }
}



output {
  #if [fields][type] == "app1-access-log" {
  if [type] == "app1-sidecar-access-log" {
    elasticsearch {
      hosts => ["10.0.0.180:9200"]
      index => "app1-sidecar-accesslog-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }
  }

  #if [fields][type] == "app1-catalina-log" {
  if [type] == "app1-sidecar-catalina-log" {
    elasticsearch {
      hosts => ["10.0.0.180:9200"]
      index => "app1-sidecar-catalinalog-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }
  }

#  stdout {
#    codec => rubydebug
#  }
}

#Check the configuration file
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logsatsh-sidecar-kafka-to-es.conf -t

#Restart Logstash
systemctl restart logstash

ES verification (screenshot omitted)
Kibana verification (screenshot omitted)

Kubernetes log collection: pod logs via an in-container Filebeat process

Build the image

# ll
total 31908
drwxr-xr-x 3 root root     4096 Nov 27 15:15 ./
drwxr-xr-x 3 root root     4096 Nov 27 15:14 ../
-rw-r--r-- 1 root root      544 Nov 27 15:15 Dockerfile
-rw-r--r-- 1 root root      300 Nov 27 15:15 build-command.sh
-rwxr-xr-x 1 root root    23611 Nov 27 15:15 catalina.sh*
-rw-r--r-- 1 root root 32600353 Nov 27 15:15 filebeat-7.12.1-x86_64.rpm
-rw-r--r-- 1 root root      805 Nov 27 15:14 filebeat.yml
-rw-r--r-- 1 root root       63 Nov 27 15:14 index.html
drwxr-xr-x 2 root root     4096 Nov 27 15:15 myapp/
-rw-r--r-- 1 root root      149 Nov 27 15:14 myapp.tar.gz
-rwxr-xr-x 1 root root      372 Nov 27 15:14 run_tomcat.sh*
-rw-r--r-- 1 root root     6462 Nov 27 15:14 server.xml


cat Dockerfile 
#tomcat web1
FROM harbor.canghailyt.com/base/tomcat-base:v8.5.43 

ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
#ADD myapp/* /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD filebeat.yml /etc/filebeat/filebeat.yml 

ADD myapp.tar.gz /data/tomcat/webapps/myapp/
RUN chown  -R tomcat.tomcat /data/ /apps/
#ADD filebeat-7.5.1-x86_64.rpm /tmp/
#RUN cd /tmp && yum localinstall -y filebeat-7.5.1-amd64.deb

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"]


cat filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: filebeat-tomcat-catalina
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/localhost_access_log.*.txt 
  fields:
    type: filebeat-tomcat-accesslog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:

output.kafka:
  #hosts: ["172.31.2.107:9092","172.31.2.108:9092","172.31.2.109:9092"]
  hosts: ["10.0.0.209:30092","10.0.0.210:30092","10.0.0.211:30092"]
  required_acks: 1
  topic: "filebeat-magedu-app1"
  compression: gzip
  max_message_bytes: 1000000
#output.redis:
#  hosts: ["172.31.2.105:6379"]
#  key: "k8s-magedu-app1"
#  db: 1
#  timeout: 5
#  password: "123456"


cat run_tomcat.sh 
#!/bin/bash
#echo "nameserver 223.6.6.6" > /etc/resolv.conf
#echo "192.168.7.248 k8s-vip.example.com" >> /etc/hosts

/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - tomcat -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts


cat build-command.sh 
#!/bin/bash
TAG=$1
#docker build -t  harbor.linuxarchitect.io/magedu/tomcat-app1:${TAG} .
#sleep 3
#docker push  harbor.linuxarchitect.io/magedu/tomcat-app1:${TAG}
nerdctl build -t  harbor.canghailyt.com/base/tomcat-app1:${TAG}  .
nerdctl push harbor.canghailyt.com/base/tomcat-app1:${TAG}


#Build
bash build-command.sh v1

Deploy the web service

vim 3.tomcat-app1.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-filebeat-deployment-label
  name: magedu-tomcat-app1-filebeat-deployment
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-tomcat-app1-filebeat-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-filebeat-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-filebeat-container
        image: harbor.canghailyt.com/base/tomcat-app1:v1 
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"


vim 4.tomcat-service.yaml 
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app1-filebeat-service-label
  name: magedu-tomcat-app1-filebeat-service
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30292
  selector:
    app: magedu-tomcat-app1-filebeat-selector


#Deploy
kubectl apply -f 3.tomcat-app1.yaml -f 4.tomcat-service.yaml 

Web UI access (screenshot omitted)
Logstash consumes the Kafka messages and writes them to ES

vim /etc/logstash/conf.d/logstash-filebeat-process-kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "10.0.0.209:30092,10.0.0.210:30092,10.0.0.211:30092"
    topics => ["filebeat-magedu-app1"]
    codec => "json"
  }
}



output {
  if [fields][type] == "filebeat-tomcat-catalina" {
    elasticsearch {
      hosts => ["10.0.0.180:9200","10.0.0.181:9200"]
      index => "filebeat-tomcat-catalina-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }}

  if [fields][type] == "filebeat-tomcat-accesslog" {
    elasticsearch {
      hosts => ["10.0.0.180:9200","10.0.0.181:9200"]
      index => "filebeat-tomcat-accesslog-%{+YYYY.MM.dd}"
      user => magedu
      password => "123456"
    }}

}

#Verify the configuration is valid
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-filebeat-process-kafka-to-es.conf -t

#Restart Logstash
systemctl restart logstash

ES verification (screenshot omitted)
Kibana verification (screenshot omitted)
