Learning K8s: Prometheus + Grafana + Alertmanager for a Complete Monitoring and Alerting System

Building a complete monitoring and alerting system with Prometheus + Grafana + Alertmanager

Deploy Prometheus, Grafana and Alertmanager; configure both static scrape targets and dynamic service discovery in Prometheus so that containers, physical nodes, Services, Pods and other resources are monitored; display the Prometheus metrics in the Grafana web UI; and finally define custom alerting rules and have Alertmanager send alerts to QQ mail, DingTalk and WeChat.

Prometheus features

1. A multi-dimensional data model:
   a time series is identified by a metric name plus key/value label pairs;
   data can be aggregated and sliced along those dimensions;
   any metric can carry an arbitrary set of labels.
2. A flexible query language (PromQL) that supports arithmetic, aggregation, joins and other operations on the collected metrics.
3. Runs standalone on a single node; it does not depend on external distributed storage.
4. Collects time-series data over HTTP using a pull model.
5. Time-series data can also be pushed to the Prometheus server through an intermediate gateway (pushgateway).
6. Scrape targets are discovered via service discovery or static configuration.
7. Multiple visualization front ends are available, such as Grafana.
8. Efficient storage: each sample takes roughly 3.5 bytes; 3 million time series at a 30 s scrape interval retained for 60 days consume roughly 200 GB of disk.

Prometheus components

1. Prometheus Server: collects and stores the time-series data.

2. Client Library: instruments application code; when Prometheus scrapes an instance's HTTP endpoint, the client library returns the current state of all tracked metrics to the Prometheus server.

3. Exporters: Prometheus supports many exporters, which collect metrics from a component and expose them for the Prometheus server to scrape.

4. Alertmanager: receives alerts from the Prometheus server, deduplicates and groups them, routes them to the configured receivers, and sends the notifications. Common receivers include e-mail, WeChat, DingTalk and Slack.

5. Grafana: the monitoring dashboard.

6. pushgateway: target hosts can push their data to the pushgateway, and the Prometheus server then pulls everything from the pushgateway in one place.
Prometheus workflow:

1. The Prometheus server periodically pulls metrics from active (up) targets. Targets can be defined as static jobs or discovered via service discovery; pull is the default collection mode. Data can also be pushed to the pushgateway and scraped from there, and many components ship their own exporters that expose component-specific metrics.

2. The Prometheus server persists the collected metrics to local disk or to a remote database.

3. The metrics are stored as time series. Alerting rules are evaluated against them, and any alerts that fire are sent to Alertmanager.

4. Alertmanager routes the alerts to the configured receivers and sends notifications by e-mail, WeChat, DingTalk, etc.

5. Prometheus's built-in web UI exposes the PromQL query language for querying the monitoring data.

6. Grafana can use Prometheus as a data source and visualise the monitoring data in dashboards.
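
As a quick illustration of this pull-and-query model, metrics can be read back through Prometheus's HTTP API once the server is running (a sketch; the address uses the NodePort created later in this article and will differ in your environment):

[root@master ~]# curl 'http://192.168.1.12:30035/api/v1/query?query=up'
# returns the current value of the "up" metric for every scraped target;
# the same expression can be entered in the web UI on the Graph page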

Install the node-exporter component

[root@master ~]# vim node-export.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor-sa
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
     name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.16.0
        ports:
        - containerPort: 9100
        resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '"^/(sys|proc|dev|host|etc)($|/)"'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: dev
          hostPath:
            path: /dev
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /
[root@master ~]# kubectl create namespace monitor-sa
namespace/monitor-sa created
[root@master ~]# kubectl apply -f node-export.yaml
daemonset.apps/node-exporter created
[root@master ~]# kubectl get pod -n monitor-sa -owide
NAME    			  READY   STATUS    RESTARTS   AGE     IP     NODE        
node-exporter-jd8fh   1/1     Running   0   2m29s   192.168.1.12   node1 
node-exporter-kq6dr   1/1     Running   0   2m29s   192.168.1.11  master 
[root@master ~]# curl 192.168.1.12:9100/metrics
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
.....
# HELP    explains the meaning of the metric
# TYPE    gives the metric's data type
# HELP node_cpu_guest_seconds_total Seconds the cpus spent in guests (VMs) for each mode.
# TYPE node_cpu_guest_seconds_total counter
node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
node_cpu_guest_seconds_total{cpu="1",mode="nice"} 0
node_cpu_guest_seconds_total{cpu="1",mode="user"} 0
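
Once Prometheus scrapes these node-exporter metrics, node-level resource usage can be derived with PromQL. For example, per-node CPU utilisation can be computed with an expression like the one below (a sketch; the same expression is used in the alert rules later in this article):

100 - avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance) * 100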

1. Create a ServiceAccount

[root@master ~]# kubectl create serviceaccount monitor -n monitor-sa
serviceaccount/monitor created
[root@master ~]# kubectl get sa -n monitor-sa
NAME      SECRETS   AGE
monitor   1         13s

Bind the ServiceAccount monitor to the cluster-admin ClusterRole with a ClusterRoleBinding:

[root@master ~]# kubectl create clusterrolebinding monitor-clusterrolebinding -n monitor-sa --clusterrole=cluster-admin --serviceaccount=monitor-sa:monitor
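
Optionally, verify that the binding grants the monitor ServiceAccount the expected permissions (a quick sanity check using kubectl's impersonation support):

[root@master ~]# kubectl auth can-i list pods --as=system:serviceaccount:monitor-sa:monitor
# should print "yes", because the ServiceAccount is bound to cluster-admin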

2. Create the data directory. Run the following on node1:

[root@node1 ~]# mkdir /data
[root@node1 ~]# chmod 777 /data/

3. Install Prometheus. All of the following steps are performed on the k8s cluster's master1 node.
1) Create a ConfigMap to hold the Prometheus configuration

[root@master ~]# vim prometheus-cfg.yaml
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor-sa
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: 'kubernetes-node'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-node-cadvisor'
      kubernetes_sd_configs:
      - role:  node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-apiserver'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name 
Note: after copy-pasting, prometheus-cfg.yaml may lose the ${1}, $1 and $2 placeholders, which must be restored by hand:
line 22: replacement: ':9100'  should be  replacement: '${1}:9100'
line 42: replacement: /api/v1/nodes//proxy/metrics/cadvisor  should be
         replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
line 73: replacement:   should be  replacement: $1:$2
[root@master ~]# kubectl apply  -f  prometheus-cfg.yaml
configmap/prometheus-config created
[root@master ~]# kubectl get configmaps -n monitor-sa
NAME                DATA   AGE
prometheus-config   1      18s
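
If you want to validate the scrape configuration before Prometheus loads it, the promtool utility shipped with the Prometheus release can check it locally (a sketch; assumes promtool is available on the host):

[root@master ~]# kubectl get configmap prometheus-config -n monitor-sa -o jsonpath='{.data.prometheus\.yml}' > /tmp/prometheus.yml
[root@master ~]# promtool check config /tmp/prometheus.yml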

2) Deploy Prometheus with a Deployment

cat  >prometheus-deploy.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
    #matchExpressions:
    #- {key: app, operator: In, values: [prometheus]}
    #- {key: component, operator: In, values: [server]}
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      nodeName: node1
      serviceAccountName: monitor
      containers:
      - name: prometheus
        image: prom/prometheus:v2.11.0
        imagePullPolicy: IfNotPresent
        command:
          - prometheus
          - --config.file=/etc/prometheus/prometheus.yml
          - --storage.tsdb.path=/prometheus
          - --storage.tsdb.retention=720h
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus/prometheus.yml
          name: prometheus-config
          subPath: prometheus.yml
        - mountPath: /prometheus/
          name: prometheus-storage-volume
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-config
            items:
              - key: prometheus.yml
                path: prometheus.yml
                mode: 0644
        - name: prometheus-storage-volume
          hostPath:
           path: /data
           type: Directory
EOF

Note: the prometheus-deploy.yaml file above contains a nodeName field, which pins the Prometheus pod to a specific node. We set nodeName: node1, i.e. the pod is scheduled onto node1, because that is the node where we created the /data directory. Remember: whichever node of the k8s cluster you create /data on is the node the pod must be scheduled to.

[root@master ~]# kubectl apply -f prometheus-deploy.yaml
deployment.apps/prometheus-server created
[root@master ~]# kubectl get pod -n monitor-sa
NAME                                 READY   STATUS    RESTARTS   AGE
node-exporter-jd8fh                  1/1     Running   0          5m
node-exporter-kq6dr                  1/1     Running   0          5m
prometheus-server-86c6ff4ffb-ztxnt   1/1     Running   0          25s

3) Create a Service for the Prometheus pod

cat  > prometheus-svc.yaml << EOF
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      protocol: TCP
  selector:
    app: prometheus
    component: server
EOF
[root@master ~]# kubectl  apply -f prometheus-svc.yaml
service/prometheus created
[root@master ~]# kubectl get svc -n monitor-sa
NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
prometheus   NodePort   10.105.149.225   <none>        9090:30035/TCP   17s

Access the Prometheus web UI in a browser via the NodePort (e.g. http://192.168.1.12:30035).
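
The target status shown on the Status -> Targets page can also be fetched from the command line through the HTTP API (a sketch; adjust the node IP and NodePort to your cluster):

[root@master ~]# curl -s 'http://192.168.1.12:30035/api/v1/targets' | grep -o '"health":"[a-z]*"'
# every discovered target should report "health":"up"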

Prometheus hot reload
To reload the configuration without restarting Prometheus (for example after changing prometheus-cfg.yaml), hit the reload endpoint as shown below. Note that this endpoint only works when Prometheus is started with the --web.enable-lifecycle flag; the Deployment regenerated later in this article adds it:

[root@master ~]# kubectl get pods -n monitor-sa -o wide | grep prometheus
prometheus-server-86  1/1     Running   0          7m25s   10.244.1.8
[root@master ~]# curl -X POST http://10.244.1.8:9090/-/reload

Grafana installation and configuration

cat  >grafana.yaml <<  EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
EOF
[root@node1 ~]# docker load -i heapster-grafana-amd64_v5_0_4.tar.gz
#### the grafana image tarball is loaded on the node beforehand
[root@master ~]# kubectl  apply -f grafana.yaml
[root@master ~]# kubectl get pods -n kube-system  | grep  grafana
monitoring-grafana-7d7f6cf5c6-hvb2m   1/1     Running   0          9s
[root@master ~]# kubectl get svc -n kube-system  | grep  grafana
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)            AGE
monitoring-grafana   NodePort  10.108.0.93   <none>   80:31619/TCP       17s

Access 192.168.1.12:31619 in a browser, add Prometheus as a data source, and import a dashboard.
Install and configure the kube-state-metrics component

What is kube-state-metrics?
kube-state-metrics watches the API server and generates state metrics about resource objects such as Deployments, Nodes and Pods. Note that it only exposes these metrics over an HTTP endpoint and does not store them itself, so we let Prometheus scrape and store them. It focuses on object-level metadata of the workloads, such as Deployment and Pod replica state: how many replicas were scheduled, how many are currently available, how many Pods are running / stopped / terminated, how many times a Pod has restarted, how many Jobs are running, and so on.
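
Once kube-state-metrics is installed and scraped by Prometheus, these object-level metrics can be queried directly with PromQL; for example (illustrative expressions using standard kube-state-metrics metric names):

# available replicas per Deployment
kube_deployment_status_replicas_available
# container restarts summed per pod (also used by the Pod_restarts alert rule later on)
sum(kube_pod_container_status_restarts_total) by (namespace, pod)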

Install the kube-state-metrics component
1) Create a ServiceAccount and grant it permissions

[root@master ~]# kubectl create clusterrolebinding kubestate-clusterrolebinding -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubestate
clusterrolebinding.rbac.authorization.k8s.io/kubestate-clusterrolebinding created
[root@master ~]# kubectl create sa kubestate -n kube-system
serviceaccount/kubestate created

2) Install the kube-state-metrics component
Generate a kube-state-metrics-deploy.yaml file on the k8s master1 node:

cat > kube-state-metrics-deploy.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: kubestate
      containers:
      - name: kube-state-metrics
        image: quay.io/coreos/kube-state-metrics:v1.9.0
        ports:
        - containerPort: 8080
EOF
[root@master ~]# kubectl apply -f kube-state-metrics-deploy.yaml
[root@master ~]# kubectl get pod -n kube-system | grep kube-state-metrics
kube-state-metrics-557f66c7d-bx5xp    1/1     Running   0          44s

3) Create a Service
Generate a kube-state-metrics-svc.yaml file on the k8s master1 node:

cat >kube-state-metrics-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  name: kube-state-metrics
  namespace: kube-system
  labels:
    app: kube-state-metrics
spec:
  ports:
  - name: kube-state-metrics
    port: 8080
    protocol: TCP
  selector:
    app: kube-state-metrics
EOF
[root@master ~]# kubectl apply -f kube-state-metrics-svc.yaml
service/kube-state-metrics created
[root@master ~]# kubectl get svc -n kube-system | grep kube-state-metrics
kube-state-metrics   ClusterIP   10.101.125.39   <none>        8080/TCP   6s
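
Because the Service carries the prometheus.io/scrape: 'true' annotation, it is picked up automatically by the kubernetes-service-endpoints job configured earlier. You can also check the endpoint by hand with a port-forward (a quick sanity check):

[root@master ~]# kubectl port-forward -n kube-system svc/kube-state-metrics 8080:8080 &
[root@master ~]# curl -s http://127.0.0.1:8080/metrics | head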

Import Kubernetes Cluster (Prometheus)-1577674936972.json in the Grafana web UI.
Install and configure Alertmanager - send alerts to a QQ mailbox
Create an alertmanager-cm.yaml file on the k8s master1 node:

cat >alertmanager-cm.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
 name: alertmanager
 namespace: monitor-sa
data:
 alertmanager.yml: |-
   global:
     resolve_timeout: 1m
     smtp_smarthost: 'smtp.163.com:25'
     smtp_from: '15011572657@163.com'
     smtp_auth_username: '15011572657'
     smtp_auth_password: 'BDBPRMLNZGKWRFJP'
     smtp_require_tls: false
   route:
     group_by: [alertname]
     group_wait: 10s
     group_interval: 10s
     repeat_interval: 10m
     receiver: default-receiver
   receivers:
   - name: 'default-receiver'
     email_configs:
     - to: '1980570647@qq.com'
       send_resolved: true
EOF

Explanation of the Alertmanager configuration:
smtp_smarthost: 'smtp.163.com:25'        # SMTP server address and port of the sending mailbox
smtp_from: '15011572657@163.com'         # the mailbox the alert mails are sent from
smtp_auth_username: '15011572657'        # the SMTP authentication user of the sending mailbox (not the full mailbox address)
smtp_auth_password: 'BDBPRMLNZGKWRFJP'   # the authorization code of the sending mailbox, not its login password
email_configs: - to: '1980570647@qq.com' # "to" specifies the mailbox that receives the alerts
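
Before applying the ConfigMap, the file can optionally be validated with amtool, the command-line tool shipped with the Alertmanager release (a sketch; assumes amtool is available on the host and the alertmanager.yml section has been saved to a local file):

[root@master ~]# amtool check-config alertmanager.yml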

[root@master ~]# kubectl apply -f alertmanager-cm.yaml
[root@master ~]# kubectl get configmaps -n monitor-sa
NAME                DATA   AGE
alertmanager        1      78s

Regenerate the prometheus-cfg.yaml file on the k8s master1 node, now with alerting and rule_files sections:

cat prometheus-cfg.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor-sa
data:
  prometheus.yml: |
    rule_files:
    - /etc/prometheus/rules.yml
    alerting:
      alertmanagers:
      - static_configs:
        - targets: ["localhost:9093"]
    global:
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: 'kubernetes-node'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-node-cadvisor'
      kubernetes_sd_configs:
      - role:  node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-apiserver'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name 
    - job_name: kubernetes-pods
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_scrape
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_pod_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: kubernetes_pod_name
    - job_name: 'kubernetes-schedule'
      scrape_interval: 5s
      static_configs:
      - targets: ['192.168.1.11:10251']
    - job_name: 'kubernetes-controller-manager'
      scrape_interval: 5s
      static_configs:
      - targets: ['192.168.1.11:10252']
    - job_name: 'kubernetes-kube-proxy'
      scrape_interval: 5s
      static_configs:
      - targets: ['192.168.1.11:10249','192.168.1.12:10249']
    - job_name: 'kubernetes-etcd'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/ca.crt
        cert_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/server.crt
        key_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/server.key
      scrape_interval: 5s
      static_configs:
      - targets: ['192.168.1.11:2379']
  rules.yml: |
    groups:
    - name: example
      rules:
      - alert: kube-proxy的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-kube-proxy"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%"
      - alert:  kube-proxy的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-kube-proxy"}[1m]) * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%"
      - alert: scheduler的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-schedule"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%"
      - alert:  scheduler的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-schedule"}[1m]) * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%"
      - alert: controller-manager的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-controller-manager"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%"
      - alert:  controller-manager的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-controller-manager"}[1m]) * 100 > 0
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%"
      - alert: apiserver的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-apiserver"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%"
      - alert:  apiserver的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-apiserver"}[1m]) * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%"
      - alert: etcd的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-etcd"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%"
      - alert:  etcd的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-etcd"}[1m]) * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%"
      - alert: kube-state-metrics的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{k8s_app=~"kube-state-metrics"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过80%"
          value: "{{ $value }}%"
          threshold: "80%"      
      - alert: kube-state-metrics的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{k8s_app=~"kube-state-metrics"}[1m]) * 100 > 0
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过90%"
          value: "{{ $value }}%"
          threshold: "90%"      
      - alert: coredns的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{k8s_app=~"kube-dns"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过80%"
          value: "{{ $value }}%"
          threshold: "80%"      
      - alert: coredns的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{k8s_app=~"kube-dns"}[1m]) * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过90%"
          value: "{{ $value }}%"
          threshold: "90%"      
      - alert: kube-proxy打开句柄数>600
        expr: process_open_fds{job=~"kubernetes-kube-proxy"}  > 600
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>600"
          value: "{{ $value }}"
      - alert: kube-proxy打开句柄数>1000
        expr: process_open_fds{job=~"kubernetes-kube-proxy"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>1000"
          value: "{{ $value }}"
      - alert: kubernetes-schedule打开句柄数>600
        expr: process_open_fds{job=~"kubernetes-schedule"}  > 600
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>600"
          value: "{{ $value }}"
      - alert: kubernetes-schedule打开句柄数>1000
        expr: process_open_fds{job=~"kubernetes-schedule"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>1000"
          value: "{{ $value }}"
      - alert: kubernetes-controller-manager打开句柄数>600
        expr: process_open_fds{job=~"kubernetes-controller-manager"}  > 600
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>600"
          value: "{{ $value }}"
      - alert: kubernetes-controller-manager打开句柄数>1000
        expr: process_open_fds{job=~"kubernetes-controller-manager"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>1000"
          value: "{{ $value }}"
      - alert: kubernetes-apiserver打开句柄数>600
        expr: process_open_fds{job=~"kubernetes-apiserver"}  > 600
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>600"
          value: "{{ $value }}"
      - alert: kubernetes-apiserver打开句柄数>1000
        expr: process_open_fds{job=~"kubernetes-apiserver"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>1000"
          value: "{{ $value }}"
      - alert: kubernetes-etcd打开句柄数>600
        expr: process_open_fds{job=~"kubernetes-etcd"}  > 600
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>600"
          value: "{{ $value }}"
      - alert: kubernetes-etcd打开句柄数>1000
        expr: process_open_fds{job=~"kubernetes-etcd"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>1000"
          value: "{{ $value }}"
      - alert: coredns
        expr: process_open_fds{k8s_app=~"kube-dns"}  > 600
        for: 2s
        labels:
          severity: warnning 
        annotations:
          description: "插件{{$labels.k8s_app}}({{$labels.instance}}): 打开句柄数超过600"
          value: "{{ $value }}"
      - alert: coredns
        expr: process_open_fds{k8s_app=~"kube-dns"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "插件{{$labels.k8s_app}}({{$labels.instance}}): 打开句柄数超过1000"
          value: "{{ $value }}"
      - alert: kube-proxy
        expr: process_virtual_memory_bytes{job=~"kubernetes-kube-proxy"}  > 2000000000
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: scheduler
        expr: process_virtual_memory_bytes{job=~"kubernetes-schedule"}  > 2000000000
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: kubernetes-controller-manager
        expr: process_virtual_memory_bytes{job=~"kubernetes-controller-manager"}  > 2000000000
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: kubernetes-apiserver
        expr: process_virtual_memory_bytes{job=~"kubernetes-apiserver"}  > 2000000000
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: kubernetes-etcd
        expr: process_virtual_memory_bytes{job=~"kubernetes-etcd"}  > 2000000000
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: kube-dns
        expr: process_virtual_memory_bytes{k8s_app=~"kube-dns"}  > 2000000000
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "插件{{$labels.k8s_app}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: HttpRequestsAvg
        expr: sum(rate(rest_client_requests_total{job=~"kubernetes-kube-proxy|kubernetes-kubelet|kubernetes-schedule|kubernetes-controller-manager|kubernetes-apiserver"}[1m]))  > 1000
        for: 2s
        labels:
          team: admin
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): TPS超过1000"
          value: "{{ $value }}"
          threshold: "1000"   
      - alert: Pod_restarts
        expr: kube_pod_container_status_restarts_total{namespace=~"kube-system|default|monitor-sa"} > 0
        for: 2s
        labels:
          severity: warnning
        annotations:
          description: "在{{$labels.namespace}}名称空间下发现{{$labels.pod}}这个pod下的容器{{$labels.container}}被重启,这个监控指标是由{{$labels.instance}}采集的"
          value: "{{ $value }}"
          threshold: "0"
      - alert: Pod_waiting
        expr: kube_pod_container_status_waiting_reason{namespace=~"kube-system|default"} == 1
        for: 2s
        labels:
          team: admin
        annotations:
          description: "空间{{$labels.namespace}}({{$labels.instance}}): 发现{{$labels.pod}}下的{{$labels.container}}启动异常等待中"
          value: "{{ $value }}"
          threshold: "1"   
      - alert: Pod_terminated
        expr: kube_pod_container_status_terminated_reason{namespace=~"kube-system|default|monitor-sa"} == 1
        for: 2s
        labels:
          team: admin
        annotations:
          description: "空间{{$labels.namespace}}({{$labels.instance}}): 发现{{$labels.pod}}下的{{$labels.container}}被删除"
          value: "{{ $value }}"
          threshold: "1"
      - alert: Etcd_leader
        expr: etcd_server_has_leader{job="kubernetes-etcd"} == 0
        for: 2s
        labels:
          team: admin
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 当前没有leader"
          value: "{{ $value }}"
          threshold: "0"
      - alert: Etcd_leader_changes
        expr: rate(etcd_server_leader_changes_seen_total{job="kubernetes-etcd"}[1m]) > 0
        for: 2s
        labels:
          team: admin
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 当前leader已发生改变"
          value: "{{ $value }}"
          threshold: "0"
      - alert: Etcd_failed
        expr: rate(etcd_server_proposals_failed_total{job="kubernetes-etcd"}[1m]) > 0
        for: 2s
        labels:
          team: admin
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 服务失败"
          value: "{{ $value }}"
          threshold: "0"
      - alert: Etcd_db_total_size
        expr: etcd_debugging_mvcc_db_total_size_in_bytes{job="kubernetes-etcd"} > 10000000000
        for: 2s
        labels:
          team: admin
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}):db空间超过10G"
          value: "{{ $value }}"
          threshold: "10G"
      - alert: Endpoint_ready
        expr: kube_endpoint_address_not_ready{namespace=~"kube-system|default"} == 1
        for: 2s
        labels:
          team: admin
        annotations:
          description: "空间{{$labels.namespace}}({{$labels.instance}}): 发现{{$labels.endpoint}}不可用"
          value: "{{ $value }}"
          threshold: "1"
    - name: 物理节点状态-监控告警
      rules:
      - alert: 物理节点cpu使用率
        expr: 100-avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) by(instance)*100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }}cpu使用率过高"
          description: "{{ $labels.instance }}的cpu使用率超过90%,当前使用率[{{ $value }}],需要排查处理" 
      - alert: 物理节点内存使用率
        expr: (node_memory_MemTotal_bytes - (node_memory_MemFree_bytes + node_memory_Buffers_bytes + node_memory_Cached_bytes)) / node_memory_MemTotal_bytes * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }}内存使用率过高"
          description: "{{ $labels.instance }}的内存使用率超过90%,当前使用率[{{ $value }}],需要排查处理"
      - alert: InstanceDown
        expr: up == 0
        for: 2s
        labels:
          severity: critical
        annotations:   
          summary: "{{ $labels.instance }}: 服务器宕机"
          description: "{{ $labels.instance }}: 服务器延时超过2分钟"
      - alert: 物理节点磁盘的IO性能
        expr: 100-(avg(irate(node_disk_io_time_seconds_total[1m])) by(instance)* 100) < 60
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{$labels.mountpoint}} 流入磁盘IO使用率过高!"
          description: "{{$labels.mountpoint }} 流入磁盘IO大于60%(目前使用:{{$value}})"
      - alert: 入网流量带宽
        expr: ((sum(rate (node_network_receive_bytes_total{device!~'tap.*|veth.*|br.*|docker.*|virbr*|lo*'}[5m])) by (instance)) / 100) > 102400
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{$labels.mountpoint}} 流入网络带宽过高!"
          description: "{{$labels.mountpoint }}流入网络带宽持续5分钟高于100M. RX带宽使用率{{$value}}"
      - alert: 出网流量带宽
        expr: ((sum(rate (node_network_transmit_bytes_total{device!~'tap.*|veth.*|br.*|docker.*|virbr*|lo*'}[5m])) by (instance)) / 100) > 102400
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{$labels.mountpoint}} 流出网络带宽过高!"
          description: "{{$labels.mountpoint }}流出网络带宽持续5分钟高于100M. RX带宽使用率{{$value}}"
      - alert: TCP会话
        expr: node_netstat_Tcp_CurrEstab > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{$labels.mountpoint}} TCP_ESTABLISHED过高!"
          description: "{{$labels.mountpoint }} TCP_ESTABLISHED大于1000%(目前使用:{{$value}}%)"
      - alert: 磁盘容量
        expr: 100-(node_filesystem_free_bytes{fstype=~"ext4|xfs"}/node_filesystem_size_bytes {fstype=~"ext4|xfs"}*100) > 80
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{$labels.mountpoint}} 磁盘分区使用率过高!"
          description: "{{$labels.mountpoint }} 磁盘分区使用大于80%(目前使用:{{$value}}%)"

Change the following targets to your own cluster IPs:

[root@master ~]# cat -n prometheus-cfg.yaml | grep 192.168
   120        - targets: ['192.168.1.11:10251']
   124        - targets: ['192.168.1.11:10252']
   128        - targets: ['192.168.1.11:10249','192.168.1.12:10249']
   137        - targets: ['192.168.1.11:2379']

To make it easier to see alerts fire, some of the thresholds were deliberately changed to > 0:

[root@master ~]# cat -n prometheus-cfg.yaml | grep "> 0"
   178          expr: rate(process_cpu_seconds_total{job=~"kubernetes-controller-manager"}[1m]) * 100 > 0
   222          expr: rate(process_cpu_seconds_total{k8s_app=~"kube-state-metrics"}[1m]) * 100 > 0
   402          expr: kube_pod_container_status_restarts_total{namespace=~"kube-system|default|monitor-sa"} > 0
   438          expr: rate(etcd_server_leader_changes_seen_total{job="kubernetes-etcd"}[1m]) > 0
   447          expr: rate(etcd_server_proposals_failed_total{job="kubernetes-etcd"}[1m]) > 0
[root@master ~]# kubectl apply -f prometheus-cfg.yaml
[root@master ~]# kubectl get configmaps -n monitor-sa
NAME                DATA   AGE
alertmanager        1      18m
prometheus-config   2      3h29m
[root@master ~]# kubectl describe configmaps prometheus-config -n monitor-sa
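
The alert rules can likewise be validated locally before reloading Prometheus (a sketch; assumes the promtool binary is available and the rules.yml section of the ConfigMap is extracted to a local file):

[root@master ~]# kubectl get configmap prometheus-config -n monitor-sa -o jsonpath='{.data.rules\.yml}' > /tmp/rules.yml
[root@master ~]# promtool check rules /tmp/rules.yml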

Regenerate the prometheus-deploy.yaml file on the k8s master1 node; Alertmanager now runs as a second container in the same pod:

cat >prometheus-deploy.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
    #matchExpressions:
    #- {key: app, operator: In, values: [prometheus]}
    #- {key: component, operator: In, values: [server]}
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      nodeName: node1
      serviceAccountName: monitor
      containers:
      - name: prometheus
        image: prom/prometheus:v2.11.0
        imagePullPolicy: IfNotPresent
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        - "--web.enable-lifecycle"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus
          name: prometheus-config
        - mountPath: /prometheus/
          name: prometheus-storage-volume
        - name: k8s-certs
          mountPath: /var/run/secrets/kubernetes.io/k8s-certs/etcd/
      - name: alertmanager
        image: prom/alertmanager:v0.14.0
        imagePullPolicy: IfNotPresent
        args:
        - "--config.file=/etc/alertmanager/alertmanager.yml"
        - "--log.level=debug"
        ports:
        - containerPort: 9093
          protocol: TCP
          name: alertmanager
        volumeMounts:
        - name: alertmanager-config
          mountPath: /etc/alertmanager
        - name: alertmanager-storage
          mountPath: /alertmanager
        - name: localtime
          mountPath: /etc/localtime
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-config
        - name: prometheus-storage-volume
          hostPath:
           path: /data
           type: Directory
        - name: k8s-certs
          secret:
           secretName: etcd-certs
        - name: alertmanager-config
          configMap:
            name: alertmanager
        - name: alertmanager-storage
          hostPath:
           path: /data/alertmanager
           type: DirectoryOrCreate
        - name: localtime
          hostPath:
           path: /usr/share/zoneinfo/Asia/Shanghai
EOF

Generate the etcd-certs secret, which the Prometheus deployment above needs:

[root@master ~]# kubectl -n monitor-sa create secret generic etcd-certs --from-file=/etc/kubernetes/pki/etcd/server.key --from-file=/etc/kubernetes/pki/etcd/server.crt --from-file=/etc/kubernetes/pki/etcd/ca.crt
secret/etcd-certs created
[root@master ~]# kubectl apply -f prometheus-deploy.yaml
deployment.apps/prometheus-server configured
[root@master ~]# kubectl get pods -n monitor-sa | grep prometheus
prometheus-server-77c6dddd55-s4bzg   2/2     Running   0          13s

Generate an alertmanager-svc.yaml file on the k8s master1 node:

cat >alertmanager-svc.yaml <<EOF
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: prometheus
    kubernetes.io/cluster-service: 'true'
  name: alertmanager
  namespace: monitor-sa
spec:
  ports:
  - name: alertmanager
    nodePort: 30066
    port: 9093
    protocol: TCP
    targetPort: 9093
  selector:
    app: prometheus
  sessionAffinity: None
  type: NodePort
EOF
[root@master ~]# kubectl apply -f alertmanager-svc.yaml
service/alertmanager created
[root@master ~]# kubectl get svc -n monitor-sa
NAME           TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
alertmanager   NodePort   10.103.60.117    <none>        9093:30066/TCP   5s
prometheus     NodePort   10.109.238.217   <none>        9090:31467/TCP
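
You can also confirm from the command line that alerts are reaching Alertmanager (a sketch; this queries Alertmanager's v1 API through the NodePort created above):

[root@master ~]# curl -s http://192.168.1.11:30066/api/v1/alerts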

At this point the mailbox should already have received alert mails; if not, wait a little while and check again.

Access the Alertmanager web UI at 192.168.1.11:30066 (NodePort 30066 maps to Alertmanager's port 9093).
Configure Alertmanager alerting - send alerts to DingTalk
1. In a DingTalk group chat, create a custom robot and copy its webhook address; the access_token from that address is used below.

2. Install the DingTalk webhook plugin (run on the k8s master1 node).

Unpack and start the DingTalk alerting plugin:

[root@master ~]# tar zxvf prometheus-webhook-dingtalk-0.3.0.linux-amd64.tar.gz
[root@master ~]# cd prometheus-webhook-dingtalk-0.3.0.linux-amd64
[root@master prometheus-webhook-dingtalk-0.3.0.linux-amd64]# nohup ./prometheus-webhook-dingtalk --web.listen-address="0.0.0.0:8060" --ding.profile="cluster1=https://oapi.dingtalk.com/robot/send?access_token=67276f5bfaa874341d544bd" &
## the URL is the webhook address of the DingTalk robot created above
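
To check that the plugin and the DingTalk robot work before wiring them into Alertmanager, you can POST a minimal Alertmanager-style webhook payload to the plugin by hand (a sketch; the alert name and description are placeholders, and the text must satisfy any keyword security setting configured on the robot):

[root@master ~]# curl -H 'Content-Type: application/json' -d '{"status":"firing","alerts":[{"status":"firing","labels":{"alertname":"webhook-test"},"annotations":{"description":"dingtalk webhook test"}}]}' http://192.168.1.11:8060/dingtalk/cluster1/send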

Regenerate a new alertmanager-cm.yaml file:

cat >alertmanager-cm.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.163.com:25'
      smtp_from: '15011572657@163.com'
      smtp_auth_username: '15011572657'
      smtp_auth_password: 'BDBPRMLNZGKWRFJP'
      smtp_require_tls: false
    route:
      group_by: [alertname]
      group_wait: 10s
      group_interval: 10s
      repeat_interval: 10m
      receiver: cluster1
    receivers:
    - name: cluster1
      webhook_configs:
      - url: 'http://192.168.1.11:8060/dingtalk/cluster1/send'
      # the IP here is the IP of the host where the webhook plugin is running
        send_resolved: true
EOF

Make the configuration take effect with kubectl apply; if it does not take effect, delete the resources and apply them again, or force the Prometheus pod to be recreated as shown after the commands below.

[root@master ~]# kubectl apply -f alertmanager-cm.yaml
[root@master ~]# kubectl apply -f prometheus-cfg.yaml
[root@master ~]# kubectl apply -f prometheus-deploy.yaml
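
Because the ConfigMaps are mounted into the pod as files, a running pod does not immediately pick up the new contents. One simple way to force the new configuration to load is to delete the Prometheus pod and let the Deployment recreate it (this is the "delete and re-apply" mentioned above):

[root@master ~]# kubectl delete pod -n monitor-sa -l app=prometheus
# the Deployment recreates the pod, which mounts the updated ConfigMaps at startup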
