A complete Kubernetes logging solution: fluentd + elasticsearch + kibana. The guides online are full of pitfalls; after three full days of work, I hope this one article solves every problem.


1. Environment preparation

Kubernetes environment:
master: 10.XX.XX.XX, nodes: 10.XX.XX.XX, 10.XX.XX.XX, 10.XX.XX.XX
External (published) address: 10.41.10.60
Ceph environment:
manage: 10.41.10.81, nodes: 10.41.10.XX, XX, XX
Docker image registry:
10.41.10.81
All of the above should already be up and running before you follow this article.
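A quick sanity check before starting can save time later; something along these lines (kubectl on the k8s master, ceph -s on the ceph manage node; the registry is assumed to listen on port 5000, as the image tags in the next section do):

kubectl get nodes                              # all nodes should be Ready
ceph -s                                        # cluster should be HEALTH_OK (run on the ceph manage node)
curl http://10.41.10.81:5000/v2/_catalog       # registry answers with its repository list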

2. Pull the Docker images

This step is where most online guides fall apart. They usually point at gcr.io images, and very old ones at that, such as gcr.io/google_containers/elasticsearch:v2.4.1 or gcr.io/google_containers/fluentd-elasticsearch:1.22. Because gcr.io is blocked, some articles suggest pulling from docker.io/bigwhite/ instead, but in my testing none of those images worked. Perhaps I simply never found the right way; either way, let that be the past.
Here is what actually worked for me:

# pull the images on the registry server
docker pull openfirmware/fluentd-elasticsearch
docker tag openfirmware/fluentd-elasticsearch 10.41.10.81:5000/openfirmware/fluentd-elasticsearch
docker pull docker.io/elasticsearch:6.8.8
docker pull docker.io/kibana:6.8.8
docker tag docker.io/kibana:6.8.8 10.41.10.81:5000/kibana
docker tag docker.io/elasticsearch:6.8.8 10.41.10.81:5000/elasticsearch
docker push 10.41.10.81:5000/elasticsearch
docker push 10.41.10.81:5000/kibana
docker push 10.41.10.81:5000/openfirmware/fluentd-elasticsearch
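One thing that is easy to forget: if the private registry at 10.41.10.81:5000 is served over plain HTTP, every docker daemon that pulls from it (i.e. all k8s nodes) must list it as an insecure registry. This may already be part of the registry environment from section 1; if not, a sketch (merge this into any existing /etc/docker/daemon.json rather than overwriting it):

cat <<'eof' > /etc/docker/daemon.json
{
  "insecure-registries": ["10.41.10.81:5000"]
}
eof
systemctl restart docker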

3. Deploy fluentd

Log in to the k8s master and run the following commands step by step.

# First we create a configuration file for the fluentd instance that will run on every node.
# We use Kubernetes' ConfigMap feature for this.
cat <<eof >fluentd-configmap.yaml
apiVersion: v1
data:
  td-agent.conf: |
    #<source>
    #  type tail
    #  format json
    #  path /var/log/*.log
    #  pos_file /var/log/log.pos
    #  tag var.log
    #</source>
    <source>
      type tail
      format none
      path /var/log/containers/*.log
      pos_file /var/log/containers.pos
      tag containers_log
    </source>
    <match **>
      type elasticsearch
      log_level info
      include_tag_key true
      hosts 10.105.206.227:9200
      logstash_format true
      index_name k8s_
      buffer_chunk_limit 2M
      buffer_queue_limit 32
      flush_interval 5s
      max_retry_wait 30
      disable_retry_limit
    </match>
kind: ConfigMap
metadata:
  name: fluentd-config
eof
kubectl apply -f fluentd-configmap.yaml

The hosts 10.105.206.227:9200 line above is the elasticsearch backend address; we will come back and adjust it later.
Next, we create a DaemonSet so that every node automatically runs a fluentd pod.
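Note that the DaemonSet below runs under a ServiceAccount named fluentd-admin. If that account does not already exist in your cluster, create it first; since this fluentd only tails files on the host and does not call the Kubernetes API, a bare ServiceAccount should be enough:

kubectl create serviceaccount fluentd-admin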

cat <<eof >fluentd-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-ds
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd-ds
        image: 10.41.10.81:5000/openfirmware/fluentd-elasticsearch
        env:
        - name: restartenv
          value: "4"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
        - name: td-agent-config
          mountPath: /etc/td-agent
        - name: docker-volume
          mountPath: /var/docker/lib/containers
      serviceAccountName: fluentd-admin
      serviceAccount: fluentd-admin
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
      - name: td-agent-config
        configMap:
          name: fluentd-config
      - name: docker-volume
        hostPath:
          path: /var/docker/lib/containers # This mount is essential: without it fluentd can list the logs under /var/log/containers but cannot read them, because they are symlinks into /var/docker/lib/containers. This one cost me a long time!
eof
kubectl apply -f fluentd-ds.yaml  # apply it

At this point a fluentd pod should be running on each of the three nodes.
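A quick way to confirm, using the app=fluentd label from the DaemonSet:

kubectl get pods -l app=fluentd -o wide   # one pod per node, all Running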

4. Deploy elasticsearch

First we need a PersistentVolume and a PersistentVolumeClaim so that the elasticsearch data is kept on persistent storage.
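Note that the PV below mounts CephFS as user admin through a Secret named ceph-secret. If that secret is not already present from your existing Ceph integration, it can be created from the admin keyring, roughly like this:

# on the ceph manage node, print the admin key:
ceph auth get-key client.admin
# on the k8s master, create the secret (replace AQB... with the key printed above):
kubectl create secret generic ceph-secret --from-literal=key=AQB...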

cat<<eof > fluentd-elasticsearch-pv-pvc.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fluentd-elasticsearch-pv
spec:
  capacity:
    storage: 80Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 10.41.10.81:6789
      - 10.41.10.82:6789
      - 10.41.10.83:6789
    path: /fluentd_elasticsearch_data
    user: admin
    readOnly: false
    secretRef:
      name: ceph-secret
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fluentd-elasticsearch-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeName: fluentd-elasticsearch-pv
  resources:
    requests:
      storage: 80Gi
eof
kubectl apply -f fluentd-elasticsearch-pv-pvc.yaml

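To check that the volume and the claim bound to each other:

kubectl get pv fluentd-elasticsearch-pv
kubectl get pvc fluentd-elasticsearch-pvc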
Both should report a Bound status. Next, we add the elasticsearch workload through a ReplicationController.

cat<<eof > elasticsearch-rc.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-rc
spec:
  replicas: 1
  selector:
    node: elasticsearch-rc
  template:
    metadata:
      labels:
        node: elasticsearch-rc
    spec:
      containers:
      - image: 10.41.10.81:5000/elasticsearch 
        name: elasticsearch-rc
        env:
        - name: restart
          value: "2"
        securityContext:
           privileged: true
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: es-persistent-storage
        persistentVolumeClaim:
          claimName: fluentd-elasticsearch-pvc
eof
kubectl apply -f elasticsearch-rc.yaml

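At this point the elasticsearch pod should come up and start writing its data to the CephFS-backed volume; a quick check using the label defined above:

kubectl get pods -l node=elasticsearch-rc -o wide
kubectl logs -l node=elasticsearch-rc --tail=20   # watch for "started" in the elasticsearch log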
Once the pod is Running, the next step is to add a Service that publishes ports 9200 and 9300 inside the cluster, so that fluentd and kibana can connect to elasticsearch.

cat<<eof > elasticsearch-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-svc
spec:
  ports:
  - port: 9200
    name: db
    targetPort: 9200
  - port: 9300
    name: transport
    targetPort: 9300
  selector:
    node: elasticsearch-rc
eof
kubectl apply -f elasticsearch-svc.yaml

Check the result.
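The ClusterIP that Kubernetes assigned to elasticsearch-svc can be read back and tested from the master, for example (substitute your own CLUSTER-IP; 10.105.206.227 is simply what my cluster happened to assign):

kubectl get svc elasticsearch-svc
curl http://10.105.206.227:9200     # elasticsearch should answer with its version banner JSON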
At this point we need to go back and adjust the fluentd ConfigMap we created earlier, because fluentd has to reach elasticsearch through this service.
Simply edit the hosts 10.105.206.227:9200 line in fluentd-configmap.yaml and replace the IP with the ClusterIP of elasticsearch-svc shown here.
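Fluentd does not reload its configuration on its own, so after editing the file, re-apply the ConfigMap and recreate the fluentd pods; the DaemonSet brings them back with the new settings:

kubectl apply -f fluentd-configmap.yaml
kubectl delete pod -l app=fluentd    # the DaemonSet recreates the pods with the updated config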

5. Deploy kibana

As before, we start with a ConfigMap so that the kibana configuration file is managed in one place.

cat <<'eof' > kibana.yml
# Kibana is served by a back end server. This controls which port to use.
server.port: 5601

# The host to bind the server to.
server.host: "0.0.0.0"

# If you are running kibana behind a proxy, and want to mount it at a path,
# specify that path here. The basePath can't end in a slash.
# server.basePath: ""

# The maximum payload size in bytes on incoming server requests.
# server.maxPayloadBytes: 1048576

# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://10.105.206.227:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
# kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# elasticsearch.username: "user"
# elasticsearch.password: "pass"

# SSL for outgoing requests from the Kibana Server to the browser (PEM formatted)
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key

# Optional setting to validate that your Elasticsearch backend uses the same key files (PEM formatted)
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# elasticsearch.ssl.ca: /path/to/your/CA.pem

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
# elasticsearch.ssl.verify: true

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
# elasticsearch.requestTimeout: 30000

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers.
# elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
# elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# elasticsearch.startupTimeout: 5000

# Set the path to where you would like the process id file to be created.
# pid.file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# logging.dest: stdout

# Set this to true to suppress all logging output.
# logging.silent: false

# Set this to true to suppress all logging output except for error messages.
# logging.quiet: false

# Set this to true to log all events, including system usage information and all requests.
# logging.verbose: false
eof

The file above is essentially the stock kibana.yml; I only changed the port, the elasticsearch address (elasticsearch.url should point at the ClusterIP of elasticsearch-svc), and the listen address (server.host).
Now turn it into a ConfigMap:

kubectl create configmap kibana-config --from-file=kibana.yml
kubectl get configmap

Next, we define the ReplicationController for kibana.

cat<<eof > kibana-rc.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: kibana-rc
spec:
  replicas: 1
  selector:
    node: kibana
  template:
    metadata:
      labels:
        node: kibana
    spec:
      containers:
      - name: kibana-rc
        image: 10.41.10.81:5000/kibana
        env:
        - name: re
          value: "1"
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config
        ports:
        - containerPort: 5601
      volumes:
      - name: config
        configMap:
          name: kibana-config
eof
kubectl apply -f kibana-rc.yaml 

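Before exposing it, it is worth checking that the kibana pod started and picked up the mounted configuration (the node=kibana label comes from the RC above):

kubectl get pods -l node=kibana
kubectl logs -l node=kibana --tail=20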
Continue by adding a Service so that kibana can be reached from outside the cluster.

cat<<eof > kibana-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kibana-svc
spec:
  ports:
  - port: 5601
    targetPort: 5601
  selector:
    node: kibana
  type: NodePort
  externalIPs:
  - 10.41.10.60
eof
kubectl apply -f kibana-svc.yaml

That completes the setup. Open http://10.41.10.60:5601 in a browser (5601 is the port that kibana-svc publishes on the external IP).
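Because the fluentd output uses logstash_format true, the log indices in elasticsearch are named logstash-YYYY.MM.DD; in Kibana, create an index pattern such as logstash-* (Management → Index Patterns) to browse the container logs. To confirm the indices are actually being created (substitute your elasticsearch-svc ClusterIP):

curl 'http://10.105.206.227:9200/_cat/indices?v'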
Appendix:
One problem I ran into was kibana hanging at "server is not ready yet" on startup. I solved it like this:

curl -XDELETE http://10.105.206.227:9200/.kibana*
kubectl delete rc kibana-rc       # restart kibana: delete the RC...
kubectl apply -f kibana-rc.yaml   # ...and recreate it