
The previous post covered deploying an Elasticsearch cluster in Kubernetes: we used a StatefulSet resource to run the ES cluster, with NFS as the backing data store.

Building Full-Stack K8S Monitoring with the Elastic Stack (1)

PVCs and PVs
[root@jumpserver-app-1 elk-test]# kubectl -n elastic get pvc
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
es-data-elasticsearch-0   Bound    pvc-c91f2629-d23d-4363-9de6-38be405c21fc   10Gi       RWO            managed-nfs-storage   20h
es-data-elasticsearch-1   Bound    pvc-7175588b-0201-4a42-8ede-f790d72fb3c6   10Gi       RWO            managed-nfs-storage   20h
es-data-elasticsearch-2   Bound    pvc-d24d435b-c4ca-4c7d-86e1-8abd5b86a54c   10Gi       RWO            managed-nfs-storage   20h
es-logs-elasticsearch-0   Bound    pvc-d0fa2e75-4578-49a2-9a5d-d0688ddcee73   5Gi        RWO            managed-nfs-storage   20h
es-logs-elasticsearch-1   Bound    pvc-9f597246-e940-4ed4-820a-881fa10a0d2d   5Gi        RWO            managed-nfs-storage   20h
es-logs-elasticsearch-2   Bound    pvc-c8f87dc5-9382-4ed2-813f-1279117868d7   5Gi        RWO            managed-nfs-storage   20h
[root@jumpserver-app-1 elk-test]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS          REASON   AGE
pvc-6eda7250-6957-4cb1-b9c2-91b96b760455   10Gi       RWX            Delete           Bound    default/mysql-pv-claim            managed-nfs-storage            6d
pvc-7175588b-0201-4a42-8ede-f790d72fb3c6   10Gi       RWO            Delete           Bound    elastic/es-data-elasticsearch-1   managed-nfs-storage            20h
pvc-9f597246-e940-4ed4-820a-881fa10a0d2d   5Gi        RWO            Delete           Bound    elastic/es-logs-elasticsearch-1   managed-nfs-storage            20h
pvc-c8f87dc5-9382-4ed2-813f-1279117868d7   5Gi        RWO            Delete           Bound    elastic/es-logs-elasticsearch-2   managed-nfs-storage            20h
pvc-c91f2629-d23d-4363-9de6-38be405c21fc   10Gi       RWO            Delete           Bound    elastic/es-data-elasticsearch-0   managed-nfs-storage            20h
pvc-d0fa2e75-4578-49a2-9a5d-d0688ddcee73   5Gi        RWO            Delete           Bound    elastic/es-logs-elasticsearch-0   managed-nfs-storage            20h
pvc-d24d435b-c4ca-4c7d-86e1-8abd5b86a54c   10Gi       RWO            Delete           Bound    elastic/es-data-elasticsearch-2   managed-nfs-storage            20h
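One thing worth noting in the listing above: the PVs were provisioned with a Delete reclaim policy, so removing a PVC also deletes the NFS-backed data behind it. If you want the Elasticsearch data to survive an accidental PVC deletion, a bound PV can be switched to Retain (a sketch, using one of the PV names from the listing above):

```shell
# Change the reclaim policy of a bound PV from Delete to Retain so the
# underlying NFS data is kept even if the PVC is later deleted.
patch='{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl patch pv pvc-c91f2629-d23d-4363-9de6-38be405c21fc -p "$patch"
```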

Deploying Kibana

With the ES cluster up, the next step is to deploy Kibana, the data visualization tool for Elasticsearch.

Configuration file (kibana.configmap.yml)
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.host: 0.0.0.0
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
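The `${ELASTICSEARCH_HOSTS}`, `${ELASTICSEARCH_USER}` and `${ELASTICSEARCH_PASSWORD}` placeholders are resolved by Kibana itself at startup from the container's environment variables (set in the Deployment below); the file mounted into the pod still contains the literal placeholders. A quick way to double-check what was mounted, assuming the Deployment has already been created:

```shell
# Print the kibana.yml that the ConfigMap mounted into the pod. The ${...}
# placeholders appear verbatim here; Kibana expands them at startup from
# its environment, Kubernetes does not substitute them.
kubectl -n elastic exec deploy/kibana -- cat /usr/share/kibana/config/kibana.yml
```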
Deployment manifest (kibana.deploy.yml)

Kibana only provides the web UI for displaying and querying data and needs no persistent storage, so a Deployment resource is used to create it.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.8.0
        resources:
          limits:
            memory: "2048Mi"
            cpu: "500m"
        ports:
        - containerPort: 5601
          name: kibanaweb
        env:
          - name: ELASTICSEARCH_HOSTS
            value: "http://es-svc:9200"
          - name: ELASTICSEARCH_USER
            value: "elastic"
          - name: ELASTICSEARCH_PASSWORD
            valueFrom:
              secretKeyRef:
                key: password
                name: es-pw
        volumeMounts:
          - name: config
            mountPath: /usr/share/kibana/config/kibana.yml
            readOnly: true
            subPath: kibana.yml
      volumes:
        - name: config
          configMap:
            name: kibana-config
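Note that the password reaches the container through a `secretKeyRef` rather than a literal value. Once the Deployment has been applied, you can confirm how its environment is wired up:

```shell
# List the env vars defined on the kibana Deployment; the password shows
# up as a reference to the es-pw Secret, not as plaintext.
kubectl -n elastic set env deployment/kibana --list
```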
Create the Secret

Create a Secret resource from the elastic user's password set up in the previous post; it is injected into the Kibana container as an environment variable (via secretKeyRef) and used to connect to the ES cluster.

kubectl create secret generic es-pw -n elastic --from-literal password=XixD3tzLt8k5xTLoOux3
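To verify the Secret holds what you expect, remember that kubectl returns the value base64-encoded, so decode it before comparing (this prints the password to the terminal):

```shell
# Read the password back out of the Secret and decode it.
kubectl -n elastic get secret es-pw -o jsonpath='{.data.password}' | base64 -d
```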
Service (kibana-service.yml)
---
kind: Service
apiVersion: v1
metadata:
  namespace: elastic
  name: kibana-svc
  labels:
    app: kibana 
spec:
  selector:
    app: kibana
  ports:
  - name: web
    port: 5601
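A Service only routes traffic if its selector matches running pods; checking the endpoints object is a quick sanity check that the `app: kibana` selector actually picked up the pod:

```shell
# If the ENDPOINTS column is empty, the selector matches no ready pod.
kubectl -n elastic get endpoints kibana-svc
```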
Exposing Kibana externally (kibana-ingress.yml)

Configure an Ingress to provide an externally accessible domain (kibana.liheng.com).

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: elastic
  name: kibana-ingress
spec:
  rules:
  - host: kibana.liheng.com
    http: 
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: kibana-svc
            port: 
              number: 5601
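Before DNS for kibana.liheng.com exists, the Ingress route can be tested by sending the Host header directly to the ingress controller's address (10.0.0.243 in this environment, per the resource listing further down); Kibana's status endpoint is a convenient target:

```shell
# Hit Kibana through the ingress controller with an explicit Host header,
# so no DNS entry for kibana.liheng.com is needed.
curl -s -H "Host: kibana.liheng.com" http://10.0.0.243/api/status
```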
Deploy

Simply apply the four resource manifests above:

# kubectl apply -f kibana.configmap.yml
# kubectl apply -f kibana.deploy.yml
# kubectl apply -f kibana-service.yml
# kubectl apply -f kibana-ingress.yml
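After applying, it can help to wait for the rollout to finish before checking anything else (a sketch; the timeout value is arbitrary):

```shell
# Block until the kibana Deployment's pods are up to date and available,
# or fail after two minutes.
kubectl -n elastic rollout status deployment/kibana --timeout=120s
```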
Check resource status
# kubectl -n elastic get pod,svc,ingress,cm,secret
NAME                          READY   STATUS    RESTARTS   AGE
pod/elasticsearch-0           1/1     Running   0          20h
pod/elasticsearch-1           1/1     Running   0          20h
pod/elasticsearch-2           1/1     Running   0          20h
pod/kibana-76d755b7d6-zs6zn   1/1     Running   0          6h55m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
service/es-svc       ClusterIP   None         <none>        9200/TCP,9300/TCP   6h28m
service/kibana-svc   ClusterIP   10.0.0.244   <none>        5601/TCP            7h

NAME                                       CLASS    HOSTS                ADDRESS      PORTS   AGE
ingress.networking.k8s.io/es-ingress       <none>   elastic.liheng.com   10.0.0.243   80      20h
ingress.networking.k8s.io/kibana-ingress   <none>   kibana.liheng.com    10.0.0.243   80      7h

NAME                         DATA   AGE
configmap/es-config          1      20h
configmap/kibana-config      1      7h
configmap/kube-root-ca.crt   1      4d9h

NAME                         TYPE                                  DATA   AGE
secret/default-token-hbd88   kubernetes.io/service-account-token   3      4d9h
secret/es-pw                 Opaque                                1      7h2m
Access from a browser

Open http://kibana.liheng.com in a browser to reach the Kibana UI.
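If you cannot point DNS or an hosts-file entry at the cluster yet, a local port-forward reaches the same UI without going through the Ingress (an alternative, not part of the original setup):

```shell
# Forward local port 5601 to the kibana Service, then browse
# http://localhost:5601 (runs in the foreground until interrupted).
kubectl -n elastic port-forward svc/kibana-svc 5601:5601
```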


This completes the deployment and wiring-up of the ES cluster and Kibana. Next we will install and configure Metricbeat to collect monitoring metrics from the Kubernetes cluster.

If this article helped you and you would like more hands-on Kubernetes experience, follow the "IT运维图谱" official account on WeChat by scanning its QR code or searching for the name in WeChat.