Persistent Storage for Prometheus and Grafana

The data from the earlier deployment lives in a temporary directory inside the pod, so when the pod restarts or is deleted, the data is gone. For Prometheus monitoring you typically keep one week or one month of data depending on requirements, but either way it must sit on persistent storage.

1. Configuring persistent storage for Prometheus

Check the NFS dynamic StorageClass created earlier:
root@guoguo-M5-Pro:~# kubectl get storageclasses.storage.k8s.io
NAME         PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-guoguo   nfs-provisioner-01   Retain          Immediate           false                  28d
Check the pods that need storage:
root@guoguo-M5-Pro:~# kubectl get pods -n monitoring | grep prometheus-k8s
prometheus-k8s-0                       2/2     Running   1          20h
prometheus-k8s-1                       2/2     Running   1          20h
Prometheus runs as a StatefulSet, i.e. a stateful workload. You do not need to create a PVC by hand: specifying a storageClassName and the requested resources in the spec triggers dynamic PVC creation.
root@guoguo-M5-Pro:~# kubectl get pods -n monitoring prometheus-k8s-1 -o yaml | grep kind
kind: Pod
    kind: StatefulSet
    # confirms the pod is managed by a StatefulSet
root@guoguo-M5-Pro:/apps/k8s# kubectl get statefulsets.apps -n monitoring
NAME                READY   AGE
alertmanager-main   3/3     44h
prometheus-k8s      2/2     7m32s  # here it is, among the StatefulSets
Because the StatefulSet is managed by the prometheus-operator, we state our storage requirements on the Prometheus custom resource rather than editing the StatefulSet directly:
root@guoguo-M5-Pro:/apps/k8s# kubectl edit prometheus -n monitoring k8s
......
......
  name: k8s
  namespace: monitoring
  resourceVersion: "10762913"
  uid: 167328a1-df5a-4249-a107-fc7189f9f20c
spec:
  storage:   # add this block under spec
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce  # mountable read-write by a single node
        storageClassName: "nfs-guoguo"  # name of the NFS StorageClass
        resources:
          requests:
            storage: 1Gi  # requested size; NFS does not enforce it
Save and exit. The operator then creates two PVCs, and the StorageClass provisions two matching PVs, giving us persistent storage:
root@guoguo-M5-Pro:/apps/k8s# kubectl get pv -n monitoring  | grep 1Gi
pvc-221c5314-7bce-41de-82c8-6d9243179657   1Gi        RWO            Retain           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-1   nfs-guoguo              33s
pvc-69ecee9f-d335-4059-9247-4b0ce1be7e9c   1Gi        RWO            Retain           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-0   nfs-guoguo              33s
Both Prometheus pods were restarted in the process:
root@guoguo-M5-Pro:/apps/k8s# kubectl get pods -n monitoring | grep prometheus-k8s
prometheus-k8s-0                       2/2     Running   1          2m9s
prometheus-k8s-1                       2/2     Running   1          2m9s
With that, persistent storage for Prometheus is in place.
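Since the point of persisting this data is to keep roughly a week or a month of history, it is worth pairing the storage with an explicit retention setting on the same Prometheus custom resource. A minimal sketch using the `retention` and `retentionSize` fields of the prometheus-operator API — the values shown here are illustrative, not from the original setup:

```yaml
# kubectl edit prometheus -n monitoring k8s
spec:
  retention: 7d          # keep about one week of samples
  retentionSize: 900MB   # stay below the 1Gi PVC; oldest blocks are dropped first
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: "nfs-guoguo"
        resources:
          requests:
            storage: 1Gi
```

With `retentionSize` set slightly below the PVC size, Prometheus prunes old TSDB blocks before the volume fills up, which matters here because NFS does not enforce the 1Gi request.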

Let's take a look:

root@guoguo-M5-Pro:~# kubectl get pods -n monitoring prometheus-k8s-1 -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP               NODE          
prometheus-k8s-1   2/2     Running   1          36m   10.244.252.248   172.17.20.112 # note the node's IP
# check on that node
[root@k8s-node2 ~]# ip a | grep  172.17.20.112
    inet 172.17.20.112/16 brd 172.17.255.255 scope global noprefixroute eth0
[root@k8s-node2 ~]# df -hT | grep "prometheus"
172.17.20.109:/apps/nfs_dir/monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-221c5314-7bce-41de-82c8-6d9243179657               nfs4      100G   14G   87G  14% /data/kubelet/pods/0fd25f79-1afe-425a-8fcd-659e33230532/volumes/kubernetes.io~nfs/pvc-221c5314-7bce-41de-82c8-6d9243179657
172.17.20.109:/apps/nfs_dir/monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-221c5314-7bce-41de-82c8-6d9243179657/prometheus-db nfs4      100G   14G   87G  14% /data/kubelet/pods/0fd25f79-1afe-425a-8fcd-659e33230532/volume-subpaths/pvc-221c5314-7bce-41de-82c8-6d9243179657/prometheus/2
# the NFS mounts are visible

2. Configuring persistent storage for Grafana

Check the NFS dynamic StorageClass created earlier:

root@guoguo-M5-Pro:/apps/k8s# kubectl get storageclasses.storage.k8s.io
NAME         PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-guoguo   nfs-provisioner-01   Retain          Immediate           false                  28d
Grafana is a stateless service, so we can use the NFS dynamic StorageClass directly: create a single PVC, and a suitable PV is provisioned for us automatically.
root@guoguo-M5-Pro:/apps/k8s# kubectl get deployments.apps -n monitoring
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
blackbox-exporter     1/1     1            1           44h
grafana               1/1     1            1           44h # Grafana is a stateless Deployment
kube-state-metrics    1/1     1            1           44h
prometheus-adapter    2/2     2            2           44h
prometheus-operator   1/1     1            1           45h
Check the storage Grafana currently uses:
root@guoguo-M5-Pro:~# kubectl get -n monitoring pods grafana-d8b7bc7f-qjk5k -oyaml | grep -A5 volumes:
  volumes:
  - emptyDir: {}   # currently emptyDir: if the pod restarts or is deleted, the data is gone
    name: grafana-storage
  - name: grafana-datasources
    secret:
      defaultMode: 420
Write a PVC:
root@guoguo-M5-Pro:/apps/k8s/prometheus/grafana# cat pvc-grafana.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc-nfs
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteMany  # mountable read-write by multiple nodes at once
  resources:
    requests:
      storage: "2Gi"  # requested size; NFS does not enforce it
  storageClassName: nfs-guoguo   # name of the StorageClass to use
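Apply the PVC and confirm it binds before touching the Deployment; since the StorageClass uses `VOLUMEBINDINGMODE: Immediate`, the PV should be provisioned as soon as the PVC is created (your exact output will differ):

```
root@guoguo-M5-Pro:/apps/k8s/prometheus/grafana# kubectl apply -f pvc-grafana.yaml
root@guoguo-M5-Pro:/apps/k8s/prometheus/grafana# kubectl get pvc -n monitoring grafana-pvc-nfs
# STATUS should show Bound before you point the Deployment at it
```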
############ old config
kind: Deployment
spec:
  template:
    spec:
.......
......
      volumes:
      - emptyDir: {}     # delete this line
        name: grafana-storage  # delete this line



############ new config
kind: Deployment
spec:
  template:
    spec:
   ......
   ......
      volumes:
      - name: grafana-storage     # add this line
        persistentVolumeClaim:    # add this line
          claimName: grafana-pvc-nfs  # add this line
          readOnly: false    # add this line

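If you prefer not to hand-edit the Deployment, the same change can be made with a strategic-merge patch: `name` acts as the merge key to match the existing volume, and setting `emptyDir: null` removes the old volume source. A sketch — the patch file name is my own choice:

```yaml
# grafana-volume-patch.yaml (hypothetical file name)
# apply with: kubectl -n monitoring patch deployment grafana --patch-file grafana-volume-patch.yaml
spec:
  template:
    spec:
      volumes:
      - name: grafana-storage        # merge key: matches the existing volume
        emptyDir: null               # drop the old emptyDir source
        persistentVolumeClaim:
          claimName: grafana-pvc-nfs
```

Either way, changing the pod template triggers a rolling restart of the Grafana pod.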

Check the mount info:

root@guoguo-M5-Pro:/apps/k8s/prometheus/grafana# kubectl get pods -n monitoring grafana-757cd77d66-dv28z -owide
NAME                       READY   STATUS    RESTARTS   AGE     IP               NODE            
grafana-757cd77d66-dv28z   1/1     Running   0          2m15s   10.244.252.215   172.17.20.112 # the pod's node IP
# check on that node

[root@k8s-node2 ~]# ip a | grep 172.17.20.112
    inet 172.17.20.112/16 brd 172.17.255.255 scope global noprefixroute eth0
[root@k8s-node2 ~]# df -hT | grep grafana
tmpfs                                                                                                                            tmpfs     3.0G  4.0K  3.0G    1% /data/kubelet/pods/b7db2b8e-5a42-42bf-b47f-d7144ed438c6/volumes/kubernetes.io~secret/grafana-datasources
172.17.20.109:/apps/nfs_dir/monitoring-grafana-pvc-nfs-pvc-6fac2e44-dc6e-4910-b097-80cea15715dc                                  nfs4      100G   15G   86G   15% /data/kubelet/pods/b7db2b8e-5a42-42bf-b47f-d7144ed438c6/volumes/kubernetes.io~nfs/pvc-6fac2e44-dc6e-4910-b097-80cea15715dc