ELK on Kubernetes: Architecture Changes and Hands-On Deployment


Logging

  • Collection: gather log data from multiple sources (streaming log collector)
  • Transport: reliably ship log data to a central system (message queue)
  • Storage: store logs as structured data (search engine)
  • Analysis: support convenient analysis and search, ideally with a GUI (frontend)
  • Alerting: provide error reporting and monitoring (monitoring tool)

ELK stack

  • Elasticsearch: instead of a forward index like MySQL's B-tree, it builds an inverted index; an analyzer tokenizes text into terms, and each term points to every document that contains it (see the sketch after this list)
  • Logstash: streaming log collector
  • Kibana: frontend GUI
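A quick way to see the analyzer behind that inverted index is Elasticsearch's _analyze API. A minimal sketch, assuming the ES node at 10.4.7.12:9200 that is set up later in this article; the sample text is arbitrary:

    [root@hdss7-12 ~]# curl -s -H 'Content-Type: application/json' -XPOST 'http://10.4.7.12:9200/_analyze?pretty' -d '{
      "analyzer": "standard",
      "text": "dubbo-demo-web started on port 8080"
    }'
    # Each token returned becomes a term in the inverted index, mapping back to the documents that contain it.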


Drawbacks

  • Logstash is written in JRuby and is resource-hungry; deploying it at scale is very expensive
  • The business application and Logstash are too loosely coupled, which makes workload migration awkward
  • The log collector is coupled too tightly to ES, so ES is easily overwhelmed and data gets dropped
  • In a container-cloud environment the traditional ELK model struggles to do the job

ELK architecture in a container environment


Containers in a sidecar-pattern pod share the UTS, IPC, NET, and USER namespaces.

  • The sidecar pattern fixes the loose coupling between Logstash and the business application (pod)
  • Adding Kafka lets Filebeat write to Kafka topics, and Logstash consumes them asynchronously, which adds a slight display delay in Kibana
  • Environments are separated by different indices in ES
  • Projects are separated by different topics (verification commands for both are sketched after this list)
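Once the stack described below is running, the environment/index and project/topic split can be checked directly. A minimal sketch, assuming the Kafka broker on 10.4.7.11 and the ES node on 10.4.7.12 that are installed later:

    # Projects appear as separate Kafka topics (k8s-fb-<env>-log[mu]-<project>)
    [root@hdss7-11 ~]# /opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --list
    # Environments appear as separate ES indices (k8s-test-*, k8s-prod-*)
    [root@hdss7-12 ~]# curl -s 'http://10.4.7.12:9200/_cat/indices?v'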

Improving dubbo-demo-web

  1. Run dubbo-demo-web on Tomcat

    [root@hdss7-200 ~]# cd /opt/src/
    [root@hdss7-200 src]# wget https://mirror.bit.edu.cn/apache/tomcat/tomcat-8/v8.5.50/bin/apache-tomcat-8.5.50.tar.gz
    [root@hdss7-200 src]# mkdir -p  /data/dockerfile/tomcat
    [root@hdss7-200 src]# tar -xvf apache-tomcat-8.5.50.tar.gz -C /data/dockerfile/tomcat/
    [root@hdss7-200 src]# cd !$
    
  2. Edit the Tomcat configuration and disable the AJP connector (a legacy port for fast communication with Apache httpd, not needed here)

    [root@hdss7-200 tomcat]# vim apache-tomcat-8.5.50/conf/server.xml
        <!--
        <Connector protocol="AJP/1.3"
                  address="::1"
                  port="8009"
                  redirectPort="8443" />
        -->
    
  3. Remove the manager and host-manager handlers and raise the log level to INFO

    [root@hdss7-200 tomcat]# vim apache-tomcat-8.5.50/conf/logging.properties
    handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler
    ...
    1catalina.org.apache.juli.AsyncFileHandler.level = INFO
    1catalina.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
    1catalina.org.apache.juli.AsyncFileHandler.prefix = catalina.
    1catalina.org.apache.juli.AsyncFileHandler.encoding = UTF-8
    
    2localhost.org.apache.juli.AsyncFileHandler.level = INFO
    2localhost.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
    2localhost.org.apache.juli.AsyncFileHandler.prefix = localhost.
    2localhost.org.apache.juli.AsyncFileHandler.encoding = UTF-8
    
    #3manager.org.apache.juli.AsyncFileHandler.level = FINE
    #3manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
    #3manager.org.apache.juli.AsyncFileHandler.prefix = manager.
    #3manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
    #
    #4host-manager.org.apache.juli.AsyncFileHandler.level = FINE
    #4host-manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
    #4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.
    #4host-manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
    
    java.util.logging.ConsoleHandler.level = INFO
    java.util.logging.ConsoleHandler.formatter = org.apache.juli.OneLineFormatter
    java.util.logging.ConsoleHandler.encoding = UTF-8
    
  4. Write the Dockerfile (a local verification sketch follows this list)

    [root@hdss7-200 tomcat]# vi Dockerfile
    FROM harbor.od.com/public/jre:8u112
    RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
        echo 'Asia/Shanghai' >/etc/timezone
    ENV CATALINA_HOME /opt/tomcat
    ENV LANG zh_CN.UTF-8
    ADD apache-tomcat-8.5.50/ /opt/tomcat
    # config.yml: file-based discovery config, used by Prometheus running outside the k8s cluster
    ADD config.yml /opt/prom/config.yml
    # jmx_javaagent: exposes JVM metrics for Prometheus
    ADD jmx_javaagent-0.3.1.jar /opt/prom/jmx_javaagent-0.3.1.jar
    WORKDIR /opt/tomcat
    ADD entrypoint.sh /entrypoint.sh
    CMD ["/entrypoint.sh"]
    [root@hdss7-200 tomcat]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar
    [root@hdss7-200 tomcat]# vim entrypoint.sh
    #!/bin/bash
    M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
    C_OPTS=${C_OPTS}
    MIN_HEAP=${MIN_HEAP:-"128m"}
    MAX_HEAP=${MAX_HEAP:-"128m"}
    JAVA_OPTS=${JAVA_OPTS:-"-Xmn384m -Xss256k -Duser.timezone=GMT+08  -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSClassUnloadingEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+PrintClassHistogram  -Dfile.encoding=UTF8 -Dsun.jnu.encoding=UTF8"}
    CATALINA_OPTS="${CATALINA_OPTS}"
    JAVA_OPTS="${M_OPTS} ${C_OPTS} -Xms${MIN_HEAP} -Xmx${MAX_HEAP} ${JAVA_OPTS}"
    sed -i -e "1a\JAVA_OPTS=\"$JAVA_OPTS\"" -e "1a\CATALINA_OPTS=\"$CATALINA_OPTS\"" /opt/tomcat/bin/catalina.sh
    
    cd /opt/tomcat && /opt/tomcat/bin/catalina.sh run >> /opt/tomcat/logs/stdout.log 2>&1
    [root@hdss7-200 tomcat]# chmod u+x entrypoint.sh
    [root@hdss7-200 tomcat]# vim config.yml
        ---
        rules:
          - pattern: '.*'
    [root@hdss7-200 tomcat]# docker build . -t harbor.od.com/base/tomcat:v8.5.50
    [root@hdss7-200 tomcat]# docker push  harbor.od.com/base/tomcat:v8.5.50
    
  5. The WAR pipeline needs one additional string parameter; create a new pipeline

    String parameter
      Name        Default Value   Description
      root_url    ROOT            Tomcat context path, defaults to ROOT
    • Pipeline script
    pipeline {
      agent any
        stages {
        stage('pull') { //get project code from repo
          steps {
            sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
            }
        }
        stage('build') { //exec mvn cmd
          steps {
            sh "cd ${params.app_name}/${env.BUILD_NUMBER}  && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
          }
        }
        stage('unzip') { //unzip  target/*.war -c target/project_dir
          steps {
            sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && unzip *.war -d ./project_dir"
          }
        }
        stage('image') { //build image and push to registry
          steps {
            writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.od.com/${params.base_image}
    ADD ${params.target_dir}/project_dir /opt/tomcat/webapps/${params.root_url}"""
            sh "cd  ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
          }
        }
      }
    }
    
  6. The resource manifests need no special changes
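Before wiring the image into the pipeline, the base image from step 4 can be smoke-tested locally. A minimal sketch, assuming the entrypoint defaults (JMX exporter on port 12346); the published host ports and container name are illustrative:

    [root@hdss7-200 tomcat]# docker run -d --name tomcat-smoke -p 18080:8080 -p 12346:12346 harbor.od.com/base/tomcat:v8.5.50
    [root@hdss7-200 tomcat]# curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:18080/     # Tomcat answers (expect 200 from the stock ROOT app)
    [root@hdss7-200 tomcat]# curl -s http://127.0.0.1:12346/metrics | head -5                     # JVM metrics exposed by jmx_javaagent
    [root@hdss7-200 tomcat]# docker rm -f tomcat-smoke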

elasticsearch

  • 10.4.7.12
  1. Download the artifacts

    [root@hdss7-12 ~]# cd /opt/src/
    [root@hdss7-12 src]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.6.tar.gz
    [root@hdss7-12 src]# tar -xf elasticsearch-6.8.6.tar.gz -C /opt/
    [root@hdss7-12 src]# ln -s /opt/elasticsearch-6.8.6/ /opt/elasticsearch
    
  2. Configure Elasticsearch

    [root@hdss7-12 src]# cd /opt/elasticsearch
    [root@hdss7-12 elasticsearch]# mkdir -p /data/elasticsearch/{data,logs}
    [root@hdss7-12 elasticsearch]# vi config/elasticsearch.yml
    cluster.name: es.od.com
    node.name: hdss7-12.host.com
    path.data: /data/elasticsearch/data
    path.logs: /data/elasticsearch/logs
    bootstrap.memory_lock: true               # lock memory so the heap is never swapped
    network.host: 10.4.7.12
    http.port: 9200
    
  3. Adjust the JVM heap; in production a heap of up to 32 GB is recommended

    [root@hdss7-12 elasticsearch]# cat config/jvm.options
    -Xms512m
    -Xmx512m
    
  4. Create the user that runs ES

    [root@hdss7-12 elasticsearch]# useradd -s /bin/bash -M es
    [root@hdss7-12 elasticsearch]# chown -R es.es /opt/elasticsearch-6.8.6/
    [root@hdss7-12 elasticsearch]# chown -R es.es /data/elasticsearch/
    
  5. Raise the file-descriptor and memory-lock limits, as officially recommended

    [root@hdss7-12 elasticsearch]# touch /etc/security/limits.d/es.conf
    [root@hdss7-12 elasticsearch]# vim /etc/security/limits.d/es.conf
    es hard nofile 65536
    es soft fsize unlimited
    es hard memlock unlimited
    es soft memlock unlimited
    
  6. Adjust kernel parameters

    [root@hdss7-12 elasticsearch]# sysctl -w vm.max_map_count=262144
    [root@hdss7-12 elasticsearch]# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
    [root@hdss7-12 elasticsearch]# sysctl -p
    
  7. Start ES; su -c runs the quoted command as the es user

    [root@hdss7-12 elasticsearch]# su -c "/opt/elasticsearch/bin/elasticsearch -d" es
    [root@hdss7-12 elasticsearch]# sudo -u es whoami          # optional: confirm commands can be run as es
    [root@hdss7-12 elasticsearch]# netstat -luntp|grep 9200
    
  8. Newer ES versions can only adjust index templates through the REST API (curl); set 5 shards and 0 replicas here, and use 3 replicas in production (a verification sketch follows this list)

    [root@hdss7-12 elasticsearch]# curl -H "Content-Type:application/json" -XPUT http://10.4.7.12:9200/_template/k8s -d '{
      "template" : "k8s*",
      "index_patterns": ["k8s*"],  
      "settings": {
        "number_of_shards": 5,
        "number_of_replicas": 0
      }
    }'
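Once the node is up and the template is applied, both can be checked over the same REST API. A minimal sketch using standard Elasticsearch 6.x endpoints:

    [root@hdss7-12 elasticsearch]# curl -s 'http://10.4.7.12:9200/_cluster/health?pretty'    # a single-node cluster should report green or yellow
    [root@hdss7-12 elasticsearch]# curl -s 'http://10.4.7.12:9200/_template/k8s?pretty'      # confirm the k8s* template: 5 shards, 0 replicas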
    

kafka

  • kafka-manager only supports Kafka up to 2.2, so use Kafka 2.2 or lower
  • 10.4.7.11
  1. Install Kafka (a smoke-test sketch follows this list)

    [root@hdss7-11 ~]# vim /var/named/od.com.zone
    ...
    km    A    10.4.7.10
    zk1   A    10.4.7.11
    [root@hdss7-11 ~]# systemctl restart named
    [root@hdss7-11 ~]# cd /opt/src/
    [root@hdss7-11 src]# wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.12-2.2.0.tgz
    [root@hdss7-11 src]# tar -xf kafka_2.12-2.2.0.tgz -C /opt/
    [root@hdss7-11 src]# ln -s /opt/kafka_2.12-2.2.0/ /opt/kafka
    [root@hdss7-11 src]# mkdir -p /data/kafka/logs
    [root@hdss7-11 src]# vim /opt/kafka/config/server.properties
    ...
    log.dirs=/data/kafka/logs
    ...
    zookeeper.connect=localhost:2181
    log.flush.interval.messages=10000
    log.flush.interval.ms=1000
    ...
    delete.topic.enable=true
    host.name=hdss7-11.host.com
    ...
    [root@hdss7-11 src]# cd /opt/kafka/
    [root@hdss7-11 kafka]# bin/kafka-server-start.sh -daemon config/server.properties
    
  2. Install kafka-manager (optional)

    [root@hdss7-200 ~]# docker pull sheepkiller/kafka-manager:latest
    [root@hdss7-200 ~]# mkdir -p /data/dockerfile/kafka-manager
    [root@hdss7-200 ~]# cd !$ ; vi Dockerfile
    FROM hseeberger/scala-sbt
    
    ENV ZK_HOSTS=10.4.7.11:2181 \
        KM_VERSION=2.0.0.2
    
    RUN mkdir -p /tmp && \
        cd /tmp && \
        wget https://github.com/yahoo/kafka-manager/archive/${KM_VERSION}.tar.gz && \
        tar xf ${KM_VERSION}.tar.gz && \
        cd /tmp/kafka-manager-${KM_VERSION} && \
        sbt clean dist && \
        unzip  -d / ./target/universal/kafka-manager-${KM_VERSION}.zip && \
        rm -fr /tmp/${KM_VERSION} /tmp/kafka-manager-${KM_VERSION}
    
    WORKDIR /kafka-manager-${KM_VERSION}
    
    EXPOSE 9000
    ENTRYPOINT ["./bin/kafka-manager","-Dconfig.file=conf/application.conf"]
    
    # Alternative to the slow sbt build above: pull a pre-built image and re-tag it
    [root@hdss7-200 kafka-manager]# docker pull stanleyws/kafka-manager
    [root@hdss7-200 kafka-manager]# docker build . -t harbor.od.com/infra/kafka-manager:v2.0.0.2
    [root@hdss7-200 kafka-manager]# docker tag 29badab5ea08 harbor.od.com/infra/kafka-manager:v2.0.0.2   # if using the pulled image, re-tag it by image ID instead
    [root@hdss7-200 kafka-manager]# docker push harbor.od.com/infra/kafka-manager:v2.0.0.2
    [root@hdss7-200 kafka-manager]# cd /data/k8s-yaml/
    [root@hdss7-200 k8s-yaml]# mkdir kafkamanager
    [root@hdss7-200 k8s-yaml]# cd kafkamanager/
    [root@hdss7-200 kafkamanager]# vim dp.yaml
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: kafka-manager
      namespace: infra
      labels:
        name: kafka-manager
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: kafka-manager
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 7
      progressDeadlineSeconds: 600
      template:
        metadata:
          labels:
            app: kafka-manager
        spec:
          containers:
          - name: kafka-manager
            image: harbor.od.com/infra/kafka-manager:v2.0.0.2
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 9000
              protocol: TCP
            env:
            - name: ZK_HOSTS
              value: zk1.od.com:2181
            - name: APPLICATION_SECRET
              value: letmein
          imagePullSecrets:
          - name: harbor
          terminationGracePeriodSeconds: 30
          securityContext:
            runAsUser: 0
    [root@hdss7-200 kafkamanager]# vi svc.yaml
    kind: Service
    apiVersion: v1
    metadata:
      name: kafka-manager
      namespace: infra
    spec:
      ports:
      - protocol: TCP
        port: 9000
        targetPort: 9000
      selector:
        app: kafka-manager
    [root@hdss7-200 kafkamanager]# vi ingress.yaml
    kind: Ingress
    apiVersion: extensions/v1beta1
    metadata:
      name: kafka-manager
      namespace: infra
    spec:
      rules:
      - host: km.od.com
        http:
          paths:
          - path: /
            backend:
              serviceName: kafka-manager
              servicePort: 9000
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/kafkamanager/dp.yaml
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/kafkamanager/svc.yaml
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/kafkamanager/ingress.yaml
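With the broker (and optionally kafka-manager) running, a quick smoke test confirms both are reachable. A minimal sketch using the stock Kafka 2.2 CLI; the test topic name is illustrative:

    [root@hdss7-11 ~]# /opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic smoke-test --partitions 1 --replication-factor 1
    [root@hdss7-11 ~]# /opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --list
    [root@hdss7-21 ~]# kubectl -n infra get pods -l app=kafka-manager
    [root@hdss7-21 ~]# curl -s -o /dev/null -w '%{http_code}\n' http://km.od.com/               # kafka-manager UI through the km.od.com ingress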
    

filebeat

A lightweight log shipper that collects and forwards logs.

  1. A Dockerfile is all that is needed; pay attention to the env values (a local test is sketched after this list)

    [root@hdss7-200 src]# cd /data/dockerfile/
    [root@hdss7-200 dockerfile]# mkdir filebeat ; cd filebeat
    [root@hdss7-200 filebeat]# vi Dockerfile
    FROM debian:jessie
    
    ENV FILEBEAT_VERSION=7.5.1 \
        FILEBEAT_SHA1=daf1a5e905c415daf68a8192a069f913a1d48e2c79e270da118385ba12a93aaa91bda4953c3402a6f0abf1c177f7bcc916a70bcac41977f69a6566565a8fae9c
    
    RUN set -x && \
      apt-get update && \
      apt-get install -y wget && \
      wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz -O /opt/filebeat.tar.gz && \
      cd /opt && \
      echo "${FILEBEAT_SHA1}  filebeat.tar.gz" | sha512sum -c - && \
      tar xzvf filebeat.tar.gz && \
      cd filebeat-* && \
      cp filebeat /bin && \
      cd /opt && \
      rm -rf filebeat* && \
      apt-get purge -y wget && \
      apt-get autoremove -y && \
      apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
    
    COPY docker-entrypoint.sh /
    ENTRYPOINT ["/docker-entrypoint.sh"]
    [root@hdss7-200 filebeat]# vim docker-entrypoint.sh
    #!/bin/bash
    
    ENV=${ENV:-"test"}                          # environment name
    PROJ_NAME=${PROJ_NAME:-"no-define"}         # project name
    MULTILINE=${MULTILINE:-"^\d{2}"}            # lines starting with two digits begin a new entry; anything else is folded into the previous line (multiline matching, suited to Java stack traces)
    
    cat > /etc/filebeat.yaml << EOF
    filebeat.inputs:
    - type: log
      fields_under_root: true
      fields:
        topic: logm-${PROJ_NAME}
      paths:                                   # multiline logs: match all *.log files up to 5 directory levels under /logm
        - /logm/*.log
        - /logm/*/*.log
        - /logm/*/*/*.log
        - /logm/*/*/*/*.log
        - /logm/*/*/*/*/*.log
      scan_frequency: 120s
      max_bytes: 10485760
      multiline.pattern: '$MULTILINE'         # multiline matching rule
      multiline.negate: true
      multiline.match: after
      multiline.max_lines: 100
    - type: log
      fields_under_root: true
      fields:
        topic: logu-${PROJ_NAME}
      paths:                                  # for logs that do not need multiline matching
        - /logu/*.log
        - /logu/*/*.log
        - /logu/*/*/*.log
        - /logu/*/*/*/*.log
        - /logu/*/*/*/*/*.log
        - /logu/*/*/*/*/*/*.log
    output.kafka:                             # Kafka brokers
      hosts: ["10.4.7.11:9092"]
      topic: k8s-fb-$ENV-%{[topic]}           # topic name, assembled from the variables above
      version: 2.0.0                          # for Kafka newer than 2.0 this must still be written as 2.0
      required_acks: 0
      max_message_bytes: 10485760
    EOF
    
    set -xe
    
    # If the user doesn't provide a command,
    # run filebeat
    if [[ "$1" == "" ]]; then
        exec filebeat  -c /etc/filebeat.yaml
    else
        # Otherwise allow the user to run arbitrary commands, e.g. bash
        exec "$@"
    fi
    [root@hdss7-200 filebeat]# chmod u+x docker-entrypoint.sh
    [root@hdss7-200 filebeat]# docker build . -t harbor.od.com/infra/filebeat:v7.5.1
    [root@hdss7-200 filebeat]# docker push harbor.od.com/infra/filebeat:v7.5.1
    
  2. Attach filebeat to dubbo-demo-web as a sidecar (a verification sketch follows this list)

    [root@hdss7-200 filebeat]# cd /data/k8s-yaml/test/dubbo-demo-consumer/
    [root@hdss7-200 dubbo-demo-consumer]# echo > dp.yaml
    [root@hdss7-200 dubbo-demo-consumer]# vim dp.yaml
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: dubbo-demo-consumer
      namespace: test
      labels:
        name: dubbo-demo-consumer
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: dubbo-demo-consumer
      template:
        metadata:
          labels:
            app: dubbo-demo-consumer
            name: dubbo-demo-consumer
        spec:
          containers:
          - name: dubbo-demo-consumer
            image: harbor.od.com/app/dubbo-demo-web:tomcat_1111_2040
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 8080
              protocol: TCP
            env:
            - name: C_OPTS
              value: -Denv=fat -Dapollo.meta=http://config.od.com
            volumeMounts:
            - mountPath: /opt/tomcat/logs
              name: logm
          - name: filebeat
            image: harbor.od.com/infra/filebeat:v7.5.1
            imagePullPolicy: IfNotPresent
            env:
            - name: ENV
              value: test
            - name: PROJ_NAME
              value: dubbo-demo-web
            volumeMounts:
            - mountPath: /logm
              name: logm
          volumes:
          - emptyDir: {}
            name: logm
          imagePullSecrets:
          - name: harbor
          restartPolicy: Always
          terminationGracePeriodSeconds: 30
          securityContext:
            runAsUser: 0
          schedulerName: default-scheduler
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 7
      progressDeadlineSeconds: 600
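The entrypoint renders /etc/filebeat.yaml before exec'ing, so the generated config can be inspected without starting Filebeat; after the Deployment is applied, the project topic should appear on the broker. A minimal sketch, assuming k8s-yaml.od.com serves /data/k8s-yaml as in the other steps:

    # Any argument skips the filebeat branch in docker-entrypoint.sh, so this just prints the rendered config
    [root@hdss7-200 filebeat]# docker run --rm -e ENV=test -e PROJ_NAME=dubbo-demo-web harbor.od.com/infra/filebeat:v7.5.1 cat /etc/filebeat.yaml
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/test/dubbo-demo-consumer/dp.yaml
    [root@hdss7-21 ~]# kubectl -n test get pods -l name=dubbo-demo-consumer
    [root@hdss7-11 ~]# /opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --list | grep dubbo-demo-web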
    

logstash

  • Moves logs from Kafka into ES
  • Deployed with Docker; choose the same version as ES
  • Logstash official site
  1. Prepare the image

    [root@hdss7-200 ~]# docker pull logstash:6.8.6
    [root@hdss7-200 ~]# docker tag d0a2dac51fcb harbor.od.com/infra/logstash:v6.8.6
    [root@hdss7-200 ~]# docker push harbor.od.com/infra/logstash:v6.8.6
    
  2. Create the configuration (the prod pipeline and a query check are sketched after this list)

    [root@hdss7-200 ~]# mkdir -p /etc/logstash/
    [root@hdss7-200 ~]# vim /etc/logstash/logstash-test.conf
    input {
      kafka {
        bootstrap_servers => "10.4.7.11:9092"         # Kafka brokers
        client_id => "10.4.7.200"
        consumer_threads => 4                         # 4 consumer threads
        group_id => "k8s_test"                        # marks the test environment
        topics_pattern => "k8s-fb-test-.*"            # topics to subscribe to
      }
    }
    
    filter {
      json {
        source => "message"
      }
    }
    
    output {
      elasticsearch {
        hosts => ["10.4.7.12:9200"]                  # ES nodes; for a cluster, list the IPs separated by commas
        index => "k8s-test-%{+YYYY.MM.DD}"           # index name pattern; note DD is Joda day-of-year (hence indices like k8s-test-2020.11.318 below), dd is day-of-month
      }
    }
    
    [root@hdss7-200 ~]# cp /etc/logstash/logstash-test.conf  /etc/logstash/logstash-prod.conf
    [root@hdss7-200 ~]# vim !$
    input {
      kafka {
        bootstrap_servers => "10.4.7.11:9092"
        client_id => "10.4.7.200"
        consumer_threads => 4
        group_id => "k8s_prod"
        topics_pattern => "k8s-fb-prod-.*"
      }
    }
    
    filter {
      json {
        source => "message"
      }
    }
    
    output {
      elasticsearch {
        hosts => ["10.4.7.12:9200"]
        index => "k8s-prod-%{+YYYY.MM.DD}"
      }
    }
    [root@hdss7-200 ~]# docker run -d --name logstash-test -v /etc/logstash:/etc/logstash harbor.od.com/infra/logstash:v6.8.6 -f /etc/logstash/logstash-test.conf
    [root@hdss7-200 ~]# curl  http://10.4.7.12:9200/_cat/indices?v
    health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    green  open   k8s-test-2020.11.318 491wyBlZSVaqpw1ForyJDw   5   0         28            0      130kb          130kb
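The prod pipeline is started the same way, and a document can be pulled back from the new index to confirm the whole Kafka -> Logstash -> ES path. A minimal sketch; the test index name comes from the _cat/indices output above:

    [root@hdss7-200 ~]# docker run -d --name logstash-prod -v /etc/logstash:/etc/logstash harbor.od.com/infra/logstash:v6.8.6 -f /etc/logstash/logstash-prod.conf
    [root@hdss7-200 ~]# curl -s 'http://10.4.7.12:9200/k8s-test-2020.11.318/_search?size=1&pretty'   # fetch one stored log document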
    

kibana

  1. Prepare the Docker image

    [root@hdss7-200 ~]#  docker pull kibana:6.8.3
    [root@hdss7-200 ~]# docker tag 54db200915ee harbor.od.com/infra/kibana:v6.8.3
    [root@hdss7-200 ~]# docker push harbor.od.com/infra/kibana:v6.8.3
    
  2. Deploy to Kubernetes (a reachability check is sketched at the end of this list)

    [root@hdss7-200 ~]# cd /data/k8s-yaml/
    [root@hdss7-200 k8s-yaml]# mkdir kibana
    [root@hdss7-200 k8s-yaml]# cd kibana/
    [root@hdss7-200 kibana]# vim dp.yaml
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: kibana
      namespace: infra
      labels:
        name: kibana
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: kibana
      template:
        metadata:
          labels:
            app: kibana
            name: kibana
        spec:
          containers:
          - name: kibana
            image: harbor.od.com/infra/kibana:v6.8.3
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 5601
              protocol: TCP
            env:
            - name: ELASTICSEARCH_URL
              value: http://10.4.7.12:9200
          imagePullSecrets:
          - name: harbor
          securityContext:
            runAsUser: 0
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 7
      progressDeadlineSeconds: 600
    [root@hdss7-200 kibana]# vi svc.yaml
    kind: Service
    apiVersion: v1
    metadata:
      name: kibana
      namespace: infra
    spec:
      ports:
      - protocol: TCP
        port: 5601
        targetPort: 5601
      selector:
        app: kibana
    [root@hdss7-200 kibana]# vi ingress.yaml
    kind: Ingress
    apiVersion: extensions/v1beta1
    metadata:
      name: kibana
      namespace: infra
    spec:
      rules:
      - host: kibana.od.com
        http:
          paths:
          - path: /
            backend:
              serviceName: kibana
              servicePort: 5601
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/kibana/dp.yaml
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/kibana/svc.yaml
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/kibana/ingress.yaml
    [root@hdss7-11 ~]# vi /var/named/od.com.zone
    ...
    kibana             A    10.4.7.10
    [root@hdss7-11 ~]# systemctl restart named
    
  3. Use Kibana in the browser


    • Separate environments: go to Management -> Index Patterns -> Create index pattern, enter k8s-test* -> Next step ->
      choose @timestamp as the Time Filter field name -> Create index pattern
  4. Using Kibana

    • Kibana has four selectors:
      • Time selector: quick, relative (relative time), absolute (absolute time), and recent (recently used)
      • Environment selector: switches between environments (index patterns)
      • Project selector: switches between the projects within an environment
      • Keyword selector: >_ Search ... (e.g. status:200 AND extension:PHP)
      • log.file.path corresponds to the log file name, and hostname corresponds to the k8s container name
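Before working in the UI, it helps to confirm that the Kibana deployment from step 2 is actually reachable. A minimal sketch; the /app/kibana path is the standard Kibana 6.x entry point:

    [root@hdss7-21 ~]# kubectl -n infra get pods -l app=kibana
    [root@hdss7-11 ~]# dig +short kibana.od.com @10.4.7.11                          # should resolve to the ingress address 10.4.7.10
    [root@hdss7-21 ~]# curl -s -o /dev/null -w '%{http_code}\n' http://kibana.od.com/app/kibana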