Deploying Prometheus Monitoring on Kubernetes (k8s)

Table of Contents

1. Introduction to Prometheus
  1.1 Prometheus Architecture
    1.1.1 Component Functions
2. Deploying Prometheus on k8s
  2.1 Downloading the Resources Needed to Deploy Prometheus
  2.2 Deployment
  2.3 Logging in to Grafana
  2.4 Importing Dashboards
  2.5 Accessing the Prometheus Server
3. Monitoring Usage Example
  3.1 Creating a Monitored Project
  3.2 Adjusting the Monitoring


1. Introduction to Prometheus

Prometheus is an open-source service monitoring system and time-series database.
It provides a generic data model together with convenient interfaces for collecting, storing, and querying data.
Its core component, the Prometheus server, periodically pulls data from statically configured monitoring targets or from targets configured automatically via service discovery.
When newly pulled data exceeds the configured in-memory buffer, the data is persisted to the storage device.

1.1 Prometheus Architecture

1.1.1 Component Functions:

  • Monitoring agents such as node_exporter: collect host metrics across many dimensions, including load average, CPU, memory, disk, and network.

  • kubelet (cAdvisor): collects container metrics and is the core metrics source for K8S; per-container metrics include CPU usage and limits, filesystem read/write and limits, memory usage and limits, network packet send/receive/drop rates, and so on.

  • API Server: collects API Server performance metrics, including control queue performance, request rate, and request latency.

  • etcd: collects metrics from the etcd storage cluster.

  • kube-state-metrics: derives a number of Kubernetes-related metrics, mainly counters and metadata for resource types, including the total number of objects of a given type, resource quotas, container status, and Pod resource label series.

  • Each monitored host can expose an interface that outputs monitoring data through a dedicated exporter program, waiting for the Prometheus server to scrape it periodically.

  • If alerting rules are defined, the scraped data is evaluated against them; when an alert condition is met, an alert is generated and sent to Alertmanager, which aggregates and routes the alerts.

  • When a monitored target needs to push data actively, the Pushgateway component can receive and temporarily store the data until the Prometheus server scrapes it.

  • Any monitored target must first be registered in the monitoring system before its time-series data can be collected, stored, alerted on, and displayed.

  • Monitoring targets can be specified statically in the configuration, or managed dynamically by Prometheus through service discovery (see the minimal configuration sketch after this list).
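
To make the pull, alerting, and service-discovery flow above concrete, here is a minimal, hand-written prometheus.yml sketch; the target addresses are hypothetical, and in the deployment that follows the Prometheus Operator generates this configuration automatically:

global:
  scrape_interval: 15s            # how often targets are scraped
  evaluation_interval: 15s        # how often alerting rules are evaluated

scrape_configs:
  - job_name: "node"              # statically configured targets, e.g. node_exporter
    static_configs:
      - targets: ["192.168.0.10:9100"]      # hypothetical host:port
  - job_name: "kubernetes-pods"   # dynamically discovered targets
    kubernetes_sd_configs:
      - role: pod

alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager:9093"]    # where triggered alerts are sent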

2. Deploying Prometheus on k8s

2.1 Downloading the Resources Needed to Deploy Prometheus

If network connectivity is good, you can fetch everything directly with helm:
# Add the Prometheus repository to helm
[root@k8s-master helm]# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories

# Download the Prometheus chart
[root@k8s-master helm]# helm pull prometheus-community/kube-prometheus-stack
[root@k8s-master helm]# ls
kube-prometheus-stack-62.6.0.tgz

# Unpack the chart archive
[root@k8s-master helm]# tar zxf kube-prometheus-stack-62.6.0.tgz
[root@k8s-master helm]# ls
kube-prometheus-stack  kube-prometheus-stack-62.6.0.tgz
[root@k8s-master helm]# cd kube-prometheus-stack/
[root@k8s-master kube-prometheus-stack]# ls
Chart.lock  charts  Chart.yaml  CONTRIBUTING.md  README.md  templates  values.yaml

# Download the container images referenced in each chart's values.yaml and push them to the harbor registry

[root@k8s-master ~]# mkdir Prometheus
[root@k8s-master ~]# cd Prometheus/
[root@k8s-master Prometheus]# ls
grafana-11.2.0.tar                kube-state-metrics-2.13.0.tar  nginx-exporter-1.3.0-debian-12-r2.tar  prometheus-62.6.0.tar
kube-prometheus-stack-62.6.0.tgz  nginx-18.1.11.tgz              node-exporter-1.8.2.tar
[root@k8s-master Prometheus]# 

[root@k8s-master kube-prometheus-stack]# vim values.yaml 
imageRegistry: "reg.timingding.org"

Check each values.yaml for the image references and pull them one by one; if the tag is not written there, look it up in Chart.yaml:
docker pull quay.io/prometheus-operator/admission-webhook:(version)
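
A hedged shortcut, instead of reading each values.yaml by hand: render the chart locally and grep the image references out of the generated manifests (the output is omitted here and will vary with the chart version):

[root@k8s-master kube-prometheus-stack]# helm template . 2>/dev/null | grep 'image:' | sort -u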

Start loading the images and pushing them to the registry:
[root@k8s-master Prometheus]# docker load -i prometheus-62.6.0.tar

[root@k8s-master Prometheus]# docker tag quay.io/prometheus/prometheus:v2.54.1 reg.timingding.org/prometheus/prometheus:v2.54.1
[root@k8s-master Prometheus]# docker tag quay.io/thanos/thanos:v0.36.1 reg.timingding.org/thanos/thanos:v0.36.1
[root@k8s-master Prometheus]# docker tag quay.io/prometheus/alertmanager:v0.27.0 reg.timingding.org/prometheus/alertmanager:v0.27.0
[root@k8s-master Prometheus]# docker tag quay.io/prometheus-operator/admission-webhook:v0.76.1 reg.timingding.org/prometheus-operator/admission-webhook:v0.76.1
[root@k8s-master Prometheus]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6 reg.timingding.org/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6
[root@k8s-master Prometheus]# docker tag quay.io/prometheus-operator/prometheus-operator:v0.76.1 reg.timingding.org/prometheus-operator/prometheus-operator:v0.76.1
[root@k8s-master Prometheus]# docker tag quay.io/prometheus-operator/prometheus-config-reloader:v0.76.1 reg.timingding.org/prometheus-operator/prometheus-config-reloader:v0.76.1
[root@k8s-master Prometheus]#

Push the images:
[root@k8s-master Prometheus]# docker push reg.timingding.org/prometheus/prometheus:v2.54.1
[root@k8s-master Prometheus]# docker push reg.timingding.org/thanos/thanos:v0.36.1
[root@k8s-master Prometheus]# docker push reg.timingding.org/prometheus/alertmanager:v0.27.0
[root@k8s-master Prometheus]# docker push reg.timingding.org/prometheus-operator/admission-webhook:v0.76.1
[root@k8s-master Prometheus]# docker push reg.timingding.org/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6
[root@k8s-master Prometheus]# docker push reg.timingding.org/prometheus-operator/prometheus-operator:v0.76.1
[root@k8s-master Prometheus]# docker push reg.timingding.org/prometheus-operator/prometheus-config-reloader:v0.76.1
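
The tag-and-push steps above can also be written as a small loop; this is only a convenience sketch, and the image list must match what your values.yaml files actually reference:

[root@k8s-master Prometheus]# for img in \
>   quay.io/prometheus/prometheus:v2.54.1 \
>   quay.io/thanos/thanos:v0.36.1 \
>   quay.io/prometheus/alertmanager:v0.27.0 \
>   quay.io/prometheus-operator/admission-webhook:v0.76.1 \
>   quay.io/prometheus-operator/prometheus-operator:v0.76.1 \
>   quay.io/prometheus-operator/prometheus-config-reloader:v0.76.1; do
>   new="reg.timingding.org/${img#*/}"   # strip the source registry prefix
>   docker tag "$img" "$new" && docker push "$new"
> done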

Change the registry domain in the grafana chart:
[root@k8s-master charts]# cd grafana/
[root@k8s-master grafana]# ls
Chart.yaml  ci  dashboards  README.md  templates  values.yaml
[root@k8s-master grafana]# vim values.yaml
[root@k8s-master grafana]# 
imageRegistry: "reg.timingding.org"

[root@k8s-master Prometheus]# docker load -i grafana-11.2.0.tar
[root@k8s-master Prometheus]# docker tag grafana/grafana:11.2.0 reg.timingding.org/grafana/grafana:11.2.0
[root@k8s-master Prometheus]# docker push reg.timingding.org/grafana/grafana:11.2.0
[root@k8s-master Prometheus]# docker tag grafana/grafana-image-renderer:latest reg.timingding.org/grafana/grafana-image-renderer:latest
[root@k8s-master Prometheus]# docker push reg.timingding.org/grafana/grafana-image-renderer:latest
[root@k8s-master Prometheus]# docker tag bats/bats:v1.4.1 reg.timingding.org/bats/bats:v1.4.1
[root@k8s-master Prometheus]# docker push reg.timingding.org/bats/bats:v1.4.1
[root@k8s-master Prometheus]# docker tag quay.io/kiwigrid/k8s-sidecar:1.27.4 reg.timingding.org/kiwigrid/k8s-sidecar:1.27.4
[root@k8s-master Prometheus]# docker push reg.timingding.org/kiwigrid/k8s-sidecar:1.27.4

Change the busybox image so it points to the one already pushed to the local registry:
[root@k8s-master Prometheus]# cd kube-prometheus-stack/
[root@k8s-master kube-prometheus-stack]# cd charts/
[root@k8s-master charts]# cd grafana/
[root@k8s-master grafana]# vim values.yaml 
 
 image:
    # -- The Docker registry
    registry: docker.io
    repository: library/busybox
    tag: "latest"
    sha: ""
    pullPolicy: IfNotPresent
    
[root@k8s-master charts]# docker tag docker.io/library/busybox:latest reg.timingding.org/library/busybox:latest
[root@k8s-master charts]# docker push reg.timingding.org/library/busybox:latest

Change the registry path for kube-state-metrics:
[root@k8s-master charts]# cd kube-state-metrics/
[root@k8s-master kube-state-metrics]# vim values.yaml 
image:
  registry: reg.timingding.org
  repository: kube-state-metrics/kube-state-metrics
  # If unset use v + .Charts.appVersion
  tag: ""
  sha: ""
  pullPolicy: IfNotPresent

[root@k8s-master Prometheus]# docker load -i kube-state-metrics-2.13.0.tar
[root@k8s-master Prometheus]# docker tag registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0 reg.timingding.org/kube-state-metrics/kube-state-metrics:v2.13.0
[root@k8s-master Prometheus]# docker push reg.timingding.org/kube-state-metrics/kube-state-metrics:v2.13.0
[root@k8s-master Prometheus]# docker tag quay.io/brancz/kube-rbac-proxy:v0.18.0 reg.timingding.org/brancz/kube-rbac-proxy:v0.18.0
[root@k8s-master Prometheus]# docker push reg.timingding.org/brancz/kube-rbac-proxy:v0.18.0

Change the image path for node-exporter:
[root@k8s-master Prometheus]# cd kube-prometheus-stack/
[root@k8s-master kube-prometheus-stack]# cd charts/
[root@k8s-master charts]# cd prometheus-node-exporter/
[root@k8s-master prometheus-node-exporter]# ls
Chart.yaml  ci  README.md  templates  values.yaml
[root@k8s-master prometheus-node-exporter]# vim values.yaml 
image:
  registry: reg.timingding.org
  repository: prometheus/node-exporter
  # Overrides the image tag whose default is {{ printf "v%s" .Chart.AppVersion }}
  tag: ""
  pullPolicy: IfNotPresent
  digest: ""


[root@k8s-master Prometheus]# docker load -i node-exporter-1.8.2.tar
[root@k8s-master Prometheus]# docker tag quay.io/prometheus/node-exporter:v1.8.2 reg.timingding.org/prometheus/node-exporter:v1.8.2
[root@k8s-master Prometheus]# docker push reg.timingding.org/prometheus/node-exporter:v1.8.2

2.2 Deployment

Create a namespace for Prometheus:
[root@k8s-master kube-prometheus-stack]# kubectl create namespace kube-prometheus-stack
namespace/kube-prometheus-stack created
[root@k8s-master kube-prometheus-stack]# kubectl get namespaces 
NAME                    STATUS   AGE
default                 Active   2d12h
ingress-nginx           Active   2d3h
kube-node-lease         Active   2d12h
kube-prometheus-stack   Active   11s
kube-public             Active   2d12h
kube-system             Active   2d12h
metallb-system          Active   6h42m
[root@k8s-master kube-prometheus-stack]#

Note: do not press Ctrl+C while the installation is in progress.
[root@k8s-master kube-prometheus-stack]# helm  -n kube-prometheus-stack install kube-prometheus-stack .
NAME: kube-prometheus-stack
LAST DEPLOYED: Thu Sep 12 18:03:19 2024
NAMESPACE: kube-prometheus-stack
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace kube-prometheus-stack get pods -l "release=kube-prometheus-stack"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
[root@k8s-master kube-prometheus-stack]# kubectl --namespace kube-prometheus-stack get pods
NAME                                                        READY   STATUS    RESTART
alertmanager-kube-prometheus-stack-alertmanager-0           2/2     Running   0      
kube-prometheus-stack-grafana-6684f7c544-5q299              3/3     Running   0      
kube-prometheus-stack-kube-state-metrics-778b44b8f9-hq7xc   1/1     Running   0      
kube-prometheus-stack-operator-8589d668cd-z94w6             1/1     Running   0      
kube-prometheus-stack-prometheus-node-exporter-fbgtq        1/1     Running   0      
kube-prometheus-stack-prometheus-node-exporter-fzw4d        1/1     Running   0      
kube-prometheus-stack-prometheus-node-exporter-wrgfr        1/1     Running   0      
prometheus-kube-prometheus-stack-prometheus-0               2/2     Running   0     

Check the services:
[root@k8s-master kube-prometheus-stack]# kubectl -n kube-prometheus-stack get svc
NAME                                             TYPE        CLUSTER-IP      EXTERNAL
alertmanager-operated                            ClusterIP   None            <none>  
kube-prometheus-stack-alertmanager               ClusterIP   10.98.33.162    <none>  
kube-prometheus-stack-grafana                    ClusterIP   10.111.54.200   <none>  
kube-prometheus-stack-kube-state-metrics         ClusterIP   10.98.171.13    <none>  
kube-prometheus-stack-operator                   ClusterIP   10.96.121.82    <none>  
kube-prometheus-stack-prometheus                 ClusterIP   10.96.175.104   <none>  
kube-prometheus-stack-prometheus-node-exporter   ClusterIP   10.107.34.179   <none>  
prometheus-operated                              ClusterIP   None            <none>

# Change how the grafana service is exposed
[root@k8s-master kube-prometheus-stack]# kubectl -n kube-prometheus-stack edit svc kube-prometheus-stack-grafana
  type: LoadBalancer
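
The same change can also be made non-interactively; a sketch with kubectl patch that has the same effect as the edit above:

[root@k8s-master kube-prometheus-stack]# kubectl -n kube-prometheus-stack patch svc kube-prometheus-stack-grafana -p '{"spec":{"type":"LoadBalancer"}}'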
Purpose of each service:

alertmanager-operated: alert management

kube-prometheus-stack-grafana: displays the metrics collected by Prometheus

kube-prometheus-stack-prometheus-node-exporter: collects node-level metrics

kube-prometheus-stack-prometheus: the Prometheus server itself

2.3 Logging in to Grafana

# Check the grafana password:
[root@k8s-master helm]# kubectl -n kube-prometheus-stack get secrets kube-prometheus-stack-grafana -o yaml
apiVersion: v1
data:
  admin-password: cHJvbS1vcGVyYXRvcg==
  admin-user: YWRtaW4=
  ldap-toml: ""
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: kube-prometheus-stack
    meta.helm.sh/release-namespace: kube-prometheus-stack
  creationTimestamp: "2024-09-12T10:03:30Z"
  labels:
    app.kubernetes.io/instance: kube-prometheus-stack
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 11.2.0
    helm.sh/chart: grafana-8.5.1
  name: kube-prometheus-stack-grafana
  namespace: kube-prometheus-stack
  resourceVersion: "11478"
  uid: b92ae9c6-eb8a-4c29-9b71-7a4062e52faa
type: Opaque

[root@k8s-master helm]# echo -n "cHJvbS1vcGVyYXRvcg==" | base64 -d
prom-operator		# password
[root@k8s-master helm]# echo -n "YWRtaW4=" | base64 -d
admin				# user
[root@k8s-master helm]# 
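
As a hedged one-liner alternative, the password can also be extracted and decoded directly with a jsonpath query:

[root@k8s-master helm]# kubectl -n kube-prometheus-stack get secret kube-prometheus-stack-grafana -o jsonpath='{.data.admin-password}' | base64 -d; echo
prom-operator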

[root@k8s-master helm]# kubectl -n kube-prometheus-stack get svc
NAME                                             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
alertmanager-operated                            ClusterIP      None            <none>          9093/TCP,9094/TCP,9094/UDP   10m
kube-prometheus-stack-alertmanager               ClusterIP      10.98.33.162    <none>          9093/TCP,8080/TCP            10m
kube-prometheus-stack-grafana                    LoadBalancer   10.111.54.200   172.25.254.50   80:31003/TCP                 10m
kube-prometheus-stack-kube-state-metrics         ClusterIP      10.98.171.13    <none>          8080/TCP                     10m
kube-prometheus-stack-operator                   ClusterIP      10.96.121.82    <none>          443/TCP                      10m
kube-prometheus-stack-prometheus                 ClusterIP      10.96.175.104   <none>          9090/TCP,8080/TCP            10m
kube-prometheus-stack-prometheus-node-exporter   ClusterIP      10.107.34.179   <none>          9100/TCP                     10m
prometheus-operated       

In a browser, open the assigned IP 172.25.254.50 and log in with the user and password retrieved above.

2.4 Importing Dashboards

Official dashboard templates: Grafana dashboards | Grafana Labs

Import steps: in Grafana, go to Dashboards -> New -> Import, enter a dashboard ID from Grafana Labs (for example 1860, the Node Exporter Full dashboard) or paste the dashboard JSON, select the Prometheus data source, and click Import.

After importing, the dashboard displays the node and cluster metrics collected by Prometheus.

2.5 Accessing the Prometheus Server

[root@k8s-master helm]# kubectl -n kube-prometheus-stack edit svc kube-prometheus-stack-prometheus
type: LoadBalancer

[root@k8s-master helm]# kubectl -n kube-prometheus-stack get svc kube-prometheus-stack-prometheus
NAME                               TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                         AGE
kube-prometheus-stack-prometheus   LoadBalancer   10.96.175.104   172.25.254.51   9090:30163/TCP,8080:30206/TCP   79m
[root@k8s-master helm]# 

Open 172.25.254.51:9090 in a browser to reach the Prometheus web UI.
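
A few example queries to try in the Prometheus web UI; this is only a sketch, and the metric names come from node_exporter and kube-state-metrics, so availability may vary with your setup:

node_load1                                        # 1-minute load average per node
rate(node_cpu_seconds_total{mode!="idle"}[5m])    # non-idle CPU usage rate
kube_pod_status_phase{phase="Running"}            # Pods reported as Running by kube-state-metrics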

 

 

3. Monitoring Usage Example

3.1 Creating a Monitored Project

[root@k8s-master ~]# mkdir test
[root@k8s-master ~]# cd test/
[root@k8s-master test]# ls
nginx-18.1.11.tgz  nginx-exporter-1.3.0-debian-12-r2.tar
[root@k8s-master test]# tar zxf nginx-18.1.11.tgz 
[root@k8s-master test]# cd nginx/

Edit the chart values to enable monitoring:
[root@k8s-master nginx]# vim values.yaml 

metrics:
  ## @param metrics.enabled Start a Prometheus exporter sidecar container
  ##
  enabled: true
  serviceMonitor:
    ## @param metrics.serviceMonitor.enabled Creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`)
    ##
    enabled: true
    ## @param metrics.serviceMonitor.namespace Namespace in which Prometheus is running
    ##
    namespace: "kube-prometheus-stack"

Add the following under serviceMonitor yourself:
    labels:
      release: kube-prometheus-stack   # the label the Prometheus Operator selects ServiceMonitors by
	
Check the labels:
[root@k8s-master ~]# kubectl -n kube-prometheus-stack get servicemonitors.monitoring.coreos.com --show-labels
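
For reference, a sketch of roughly what the ServiceMonitor object rendered by the chart looks like; the field names follow the monitoring.coreos.com/v1 CRD, while the concrete names and selector labels below are illustrative:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: timingding-nginx
  namespace: kube-prometheus-stack
  labels:
    release: kube-prometheus-stack    # must match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: timingding
  endpoints:
    - port: metrics                   # the exporter port exposed by the nginx Service
      interval: 30s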

[root@k8s-master test]# docker load -i nginx-1.27.1-debian-12-r2.tar 
30f5b1069b7f: Loading layer  190.1MB/190.1MB
Loaded image: bitnami/nginx:1.27.1-debian-12-r2
[root@k8s-master test]# docker tag bitnami/nginx:1.27.1-debian-12-r2 reg.timingding.org/bitnami/nginx:1.27.1-debian-12-r2
[root@k8s-master test]# docker push reg.timingding.org/bitnami/nginx:1.27.1-debian-12-r2
[root@k8s-master test]# docker tag bitnami/nginx-exporter:1.3.0-debian-12-r2 reg.timingding.org/bitnami/nginx-exporter:1.3.0-debian-12-r2
[root@k8s-master test]# docker push reg.timingding.org/bitnami/nginx-exporter:1.3.0-debian-12-r2


Install the chart (make sure the images have been pushed to the registry before installing):
[root@k8s-master nginx]# helm install timingding .
NAME: timingding
LAST DEPLOYED: Thu Sep 12 19:38:54 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 18.1.11
APP VERSION: 1.27.1

[root@k8s-master nginx]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
timingding-nginx-5cb5bcfd75-h6mqd   2/2     Running   0          41s
[root@k8s-master nginx]#
[root@k8s-master nginx]# kubectl get svc
NAME               TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                     AGE
kubernetes         ClusterIP      10.96.0.1        <none>          443/TCP                                     11h
timingding-nginx   LoadBalancer   10.101.225.200   172.25.254.52   80:32519/TCP,443:31360/TCP,9113:30331/TCP   58s
[root@k8s-master nginx]#

[root@k8s-master nginx]# curl 172.25.254.52
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-master nginx]#

Load test with ab:
[root@k8s-master nginx]# ab -c 5 -n 100 http://172.25.254.52/index.html
This is ApacheBench, Version 2.3 <$Revision: 1903618 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 172.25.254.52 (be patient).....done


Server Software:        nginx
Server Hostname:        172.25.254.52
Server Port:            80

Document Path:          /index.html
Document Length:        615 bytes

Concurrency Level:      5
Time taken for tests:   0.020 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      87000 bytes
HTML transferred:       61500 bytes
Requests per second:    4990.02 [#/sec] (mean)
Time per request:       1.002 [ms] (mean)
Time per request:       0.200 [ms] (mean, across all concurrent requests)
Transfer rate:          4239.57 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:     0    1   0.3      1       2
Waiting:        0    1   0.3      0       2
Total:          0    1   0.3      1       2
ERROR: The median and mean for the waiting time are more than twice the standard
       deviation apart. These results are NOT reliable.

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      2
  98%      2
  99%      2
 100%      2 (longest request)
[root@k8s-master nginx]#
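
As a hedged follow-up, the effect of the load test can be observed in the Prometheus UI with queries against the metrics commonly exposed by nginx-exporter:

rate(nginx_http_requests_total[1m])    # request rate over the last minute
nginx_connections_active               # currently active client connections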

3.2 Adjusting the Monitoring
