Deploying and Using the Prometheus Operator

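The stack below is deployed from the kube-prometheus project's manifests (inferred from the resources created; the repository URL and commands are a hedged sketch, not the original author's exact steps):

# Hedged sketch: a typical kube-prometheus install of this era;
# the layout and exact steps depend on the release checked out.
git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus
kubectl create -f manifests/setup   # CRDs and the Prometheus Operator itself
kubectl create -f manifests/        # Prometheus, Alertmanager, Grafana, exporters

Applying the manifests ends with output like the following: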

service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-pods created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
role.rbac.authorization.k8s.io/kube-state-metrics created
rolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
service/kube-controller-manager created
service/kube-scheduler created


Check the Kubernetes resources. Note that prometheus-k8s and alertmanager-main are managed by StatefulSet controllers:



kubectl get all -nmonitoring
NAME READY STATUS RESTARTS AGE
pod/alertmanager-main-0 2/2 Running 4 12h
pod/alertmanager-main-1 2/2 Running 0 12h
pod/alertmanager-main-2 2/2 Running 6 12h
pod/grafana-58dc7468d7-d8bmt 1/1 Running 0 12h
pod/kube-state-metrics-78b46c84d8-wsvrb 3/3 Running 0 12h
pod/node-exporter-6m4kd 2/2 Running 0 12h
pod/node-exporter-bhxw2 2/2 Running 6 12h
pod/node-exporter-tkvq5 2/2 Running 0 12h
pod/prometheus-adapter-5cd5798d96-5ffb5 1/1 Running 0 12h
pod/prometheus-k8s-0 3/3 Running 10 12h
pod/prometheus-k8s-1 3/3 Running 1 12h
pod/prometheus-operator-99dccdc56-89l7n 1/1 Running 0 12h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/alertmanager-main NodePort 10.1.96.0 <none> 9093:39093/TCP 12h
service/alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 12h
service/grafana NodePort 10.1.165.84 <none> 3000:33000/TCP 12h
service/kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 12h
service/node-exporter ClusterIP None <none> 9100/TCP 12h
service/prometheus-adapter ClusterIP 10.1.114.161 <none> 443/TCP 12h
service/prometheus-k8s NodePort 10.1.162.187 <none> 9090:39090/TCP 12h
service/prometheus-operated ClusterIP None <none> 9090/TCP 12h
service/prometheus-operator ClusterIP None <none> 8080/TCP 12h

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/node-exporter 3 3 3 3 3 kubernetes.io/os=linux 12h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/grafana 1/1 1 1 12h
deployment.apps/kube-state-metrics 1/1 1 1 12h
deployment.apps/prometheus-adapter 1/1 1 1 12h
deployment.apps/prometheus-operator 1/1 1 1 12h

NAME DESIRED CURRENT READY AGE
replicaset.apps/grafana-58dc7468d7 1 1 1 12h
replicaset.apps/kube-state-metrics-78b46c84d8 1 1 1 12h
replicaset.apps/prometheus-adapter-5cd5798d96 1 1 1 12h
replicaset.apps/prometheus-operator-99dccdc56 1 1 1 12h

NAME READY AGE
statefulset.apps/alertmanager-main 3/3 12h
statefulset.apps/prometheus-k8s 2/2 12h


* #### Accessing Prometheus


URL: <http://192.168.1.55:39090/>.
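If the NodePort is not reachable from your machine, port-forwarding the Service is a hedged alternative (the Service name prometheus-k8s matches the listing above):

# Forward the prometheus-k8s Service to localhost, then browse http://localhost:9090/
kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090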


![Prometheus web UI](https://img-blog.csdnimg.cn/20200402085737124.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)


On the Prometheus Targets page, all targets are up and being scraped normally.


![Prometheus Targets page](https://img-blog.csdnimg.cn/20200402085815121.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)

On the Prometheus Alerts page, a set of alerting rules comes predefined.


![Prometheus Alerts page](https://img-blog.csdnimg.cn/20200402085847922.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)


* #### Accessing Grafana


URL: <http://192.168.1.55:33000/>.


![Grafana home page](https://img-blog.csdnimg.cn/20200402085952865.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)


Grafana ships with a number of dashboards built in by default.


![Grafana built-in dashboards](https://img-blog.csdnimg.cn/20200402090034638.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)


Below is a Node-related dashboard. The prometheus data source has been added automatically, and metrics can be inspected per Node.


![Node dashboard](https://img-blog.csdnimg.cn/20200402090115229.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)
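The automatically added data source is provisioned from the grafana-datasources Secret created earlier; a hedged way to inspect it (the data key is assumed to be datasources.yaml, which can vary across kube-prometheus releases):

# Hedged: dump the provisioned Grafana data source definition
kubectl -n monitoring get secret grafana-datasources \
  -o jsonpath='{.data.datasources\.yaml}' | base64 -d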


Below is a Pod-related dashboard, where metrics can be viewed per namespace and per Pod.


![Pod dashboard](https://img-blog.csdnimg.cn/2020040209014270.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)


* #### Accessing Alertmanager


URL: <http://192.168.1.55:39093/>.


![Alertmanager web UI](https://img-blog.csdnimg.cn/20200402090230998.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)


* #### Custom Application Monitoring: PodMonitor


We will use Nginx Ingress as the example and monitor it with the Prometheus Operator. First deploy it via Helm, choosing the nginx/nginx-ingress chart.



helm repo add nginx https://helm.nginx.com/stable
helm search repo nginx
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/nginx 5.1.1 1.16.1 Chart for the nginx server
bitnami/nginx-ingress-controller 5.2.2 0.26.2 Chart for the nginx Ingress controller
nginx/nginx-ingress 0.4.3 1.6.3 NGINX Ingress Controller
stable/nginx-ingress 1.27.0 0.26.1 An nginx Ingress controller that uses ConfigMap…
stable/nginx-ldapauth-proxy 0.1.3 1.13.5 nginx proxy with ldapauth
stable/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
stable/gcloud-endpoints 0.1.2 1 DEPRECATED Develop, deploy, protect and monitor…


Let's render the chart templates first and analyze them. For access from outside the cluster, `type: LoadBalancer` should be changed to `type: NodePort`, with nodePort pinned to fixed ports. The Pod exposes a prometheus metrics port (`containerPort: 9113`); we will scrape it with a PodMonitor first.



helm template gateway nginx/nginx-ingress

---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway-nginx-ingress
  namespace: default
  labels:
    app.kubernetes.io/name: gateway-nginx-ingress
    helm.sh/chart: nginx-ingress-0.4.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: gateway
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: gateway-nginx-ingress
---
# Source: nginx-ingress/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway-nginx-ingress
  namespace: default
  labels:
    app.kubernetes.io/name: gateway-nginx-ingress
    helm.sh/chart: nginx-ingress-0.4.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway-nginx-ingress
  template:
    metadata:
      labels:
        app: gateway-nginx-ingress
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
    spec:
      serviceAccountName: gateway-nginx-ingress
      hostNetwork: false
      containers:
      - image: "nginx/nginx-ingress:1.6.3"
        name: gateway-nginx-ingress
        imagePullPolicy: "IfNotPresent"
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: prometheus
          containerPort: 9113


Based on the analysis above, install nginx-ingress with the following overrides.



helm install gateway nginx/nginx-ingress \
  --set controller.service.type=NodePort \
  --set controller.service.httpPort.nodePort=30080 \
  --set controller.service.httpsPort.nodePort=30443


Check the Kubernetes resources.



kubectl get all
NAME READY STATUS RESTARTS AGE
pod/gateway-nginx-ingress-55886df446-bwbts 1/1 Running 0 12m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gateway-nginx-ingress NodePort 10.1.10.126 <none> 80:30080/TCP,443:30443/TCP 12m
service/kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 108d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/gateway-nginx-ingress 1/1 1 1 12m

NAME DESIRED CURRENT READY AGE
replicaset.apps/gateway-nginx-ingress-55886df446 1 1 1 12m

kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
gateway-nginx-ingress-55886df446-bwbts 1/1 Running 0 13m 10.244.2.46 k8s-node2 <none> <none>


The metrics can be fetched directly from the Pod's address.



curl http://10.244.2.46:9113/metrics
# HELP nginx_ingress_controller_ingress_resources_total Number of handled ingress resources
# TYPE nginx_ingress_controller_ingress_resources_total gauge
nginx_ingress_controller_ingress_resources_total{type="master"} 0
nginx_ingress_controller_ingress_resources_total{type="minion"} 0
nginx_ingress_controller_ingress_resources_total{type="regular"} 0
# HELP nginx_ingress_controller_nginx_last_reload_milliseconds Duration in milliseconds of the last NGINX reload
# TYPE nginx_ingress_controller_nginx_last_reload_milliseconds gauge
nginx_ingress_controller_nginx_last_reload_milliseconds 195
# HELP nginx_ingress_controller_nginx_last_reload_status Status of the last NGINX reload
# TYPE nginx_ingress_controller_nginx_last_reload_status gauge
nginx_ingress_controller_nginx_last_reload_status 1
# HELP nginx_ingress_controller_nginx_reload_errors_total Number of unsuccessful NGINX reloads
# TYPE nginx_ingress_controller_nginx_reload_errors_total counter
nginx_ingress_controller_nginx_reload_errors_total 0
# HELP nginx_ingress_controller_nginx_reloads_total Number of successful NGINX reloads
# TYPE nginx_ingress_controller_nginx_reloads_total counter
nginx_ingress_controller_nginx_reloads_total 2
# HELP nginx_ingress_controller_virtualserver_resources_total Number of handled VirtualServer resources
# TYPE nginx_ingress_controller_virtualserver_resources_total gauge
nginx_ingress_controller_virtualserver_resources_total 0
# HELP nginx_ingress_controller_virtualserverroute_resources_total Number of handled VirtualServerRoute resources
# TYPE nginx_ingress_controller_virtualserverroute_resources_total gauge
nginx_ingress_controller_virtualserverroute_resources_total 0
# HELP nginx_ingress_nginx_connections_accepted Accepted client connections
# TYPE nginx_ingress_nginx_connections_accepted counter
nginx_ingress_nginx_connections_accepted 6


Now we create the PodMonitor. A few key points to note:


* The PodMonitor's name ends up in the generated Prometheus configuration as the job_name.
* The PodMonitor must live in the same namespace as Prometheus itself, here monitoring.
* podMetricsEndpoints.interval is the scrape interval.
* podMetricsEndpoints.port must match the ports.name of the metrics port in the Pod/Deployment, here prometheus.
* namespaceSelector.matchNames must name the namespace the monitored Pods run in, here default.
* selector.matchLabels must match labels that uniquely identify the monitored Pods.


Create the PodMonitor for the Pod.



vi prometheus-podMonitorNginxIngress.yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    app: nginx-ingress
  name: nginx-ingress
  namespace: monitoring
spec:
  podMetricsEndpoints:
  - interval: 15s
    path: /metrics
    port: prometheus
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      app: gateway-nginx-ingress

kubectl apply -f prometheus-podMonitorNginxIngress.yaml
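A quick sanity check that the object exists (hedged; podmonitors is the CRD's plural form):

# Confirm the PodMonitor was created in the monitoring namespace
kubectl -n monitoring get podmonitors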


A PodMonitor is really just configuration: the Prometheus Operator renders it into Prometheus scrape configuration, and the Pod is monitored automatically. Check the targets in Prometheus; note the label `pod="gateway-nginx-ingress-55886df446-bwbts"`, which identifies the monitored Pod.


![Prometheus target created from the PodMonitor](https://img-blog.csdnimg.cn/20200402090340576.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)


Prometheus now contains the corresponding scrape configuration `job_name: monitoring/nginx-ingress/0`.


![Generated scrape configuration](https://img-blog.csdnimg.cn/2020040209042153.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)
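To inspect the generated configuration without the UI, the operator-managed Secret can be dumped; a hedged sketch, since the Secret layout (name prometheus-k8s, key prometheus.yaml.gz, gzip compression) varies across Prometheus Operator versions:

# Hedged: the Operator renders the scrape config into a Secret
kubectl -n monitoring get secret prometheus-k8s \
  -o jsonpath='{.data.prometheus\.yaml\.gz}' | base64 -d | gunzip \
  | grep -A 5 'job_name: monitoring/nginx-ingress'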


Query a metric in Prometheus, for example `irate(nginx_ingress_nginx_http_requests_total[1m])`.


![Graph of the irate query](https://img-blog.csdnimg.cn/20200402090454799.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3R3aW5nYW8=,size_16,color_FFFFFF,t_70)
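The same query can be issued against Prometheus's HTTP API, using the NodePort from above:

# Instant query via the Prometheus HTTP API
curl -G http://192.168.1.55:39090/api/v1/query \
  --data-urlencode 'query=irate(nginx_ingress_nginx_http_requests_total[1m])'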


* #### Custom Application Monitoring: ServiceMonitor


We again take Nginx Ingress as the example, deployed via Helm.


Re-render the templates and analyze them. The Pod exposes the prometheus port (`containerPort: 9113`), but the Service does not expose it. Since a ServiceMonitor pulls metrics through the Service's prometheus port, we add that port to the Service via controller.service.customPorts.



helm template gateway nginx/nginx-ingress \
  --set controller.service.type=NodePort \
  --set controller.service.httpPort.nodePort=30080 \
  --set controller.service.httpsPort.nodePort=30443

---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway-nginx-ingress
  namespace: default
  labels:
    app.kubernetes.io/name: gateway-nginx-ingress
    helm.sh/chart: nginx-ingress-0.4.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: gateway
spec:
  externalTrafficPolicy: Local
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    nodePort: 30080
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
    nodePort: 30443
  selector:
    app: gateway-nginx-ingress
---
# Source: nginx-ingress/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway-nginx-ingress
  namespace: default
  labels:
    app.kubernetes.io/name: gateway-nginx-ingress
    helm.sh/chart: nginx-ingress-0.4.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway-nginx-ingress
  template:
    metadata:
      labels:
        app: gateway-nginx-ingress
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
    spec:
      serviceAccountName: gateway-nginx-ingress
      hostNetwork: false
      containers:
      - image: "nginx/nginx-ingress:1.6.3"
        name: gateway-nginx-ingress
        imagePullPolicy: "IfNotPresent"
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: prometheus
          containerPort: 9113


Based on the analysis above, install nginx-ingress with the following overrides.



helm install gateway nginx/nginx-ingress \
  --set controller.service.type=NodePort \
  --set controller.service.httpPort.nodePort=30080 \
  --set controller.service.httpsPort.nodePort=30443 \
  --set controller.service.customPorts[0].port=9113 \
  --set controller.service.customPorts[0].targetPort=9113 \
  --set controller.service.customPorts[0].protocol=TCP \
  --set controller.service.customPorts[0].name=prometheus \
  --set controller.service.customPorts[0].nodePort=39113
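The same overrides can live in a values file, which is easier to maintain than repeated --set flags; a hedged equivalent (the file name values-gateway.yaml is arbitrary):

# values-gateway.yaml: hedged equivalent of the --set flags above
controller:
  service:
    type: NodePort
    httpPort:
      nodePort: 30080
    httpsPort:
      nodePort: 30443
    customPorts:
    - name: prometheus
      port: 9113
      targetPort: 9113
      protocol: TCP
      nodePort: 39113

Then install with helm install gateway nginx/nginx-ingress -f values-gateway.yaml.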


Check the Kubernetes resources; service/gateway-nginx-ingress now exposes port 9113.



kubectl get all
NAME READY STATUS RESTARTS AGE
pod/gateway-nginx-ingress-55886df446-mwjs8 1/1 Running 0 10s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gateway-nginx-ingress NodePort 10.1.98.109 <none> 9113:39113/TCP,80:30080/TCP,443:30443/TCP 10s
service/kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 107d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/gateway-nginx-ingress 1/1 1 1 10s

NAME DESIRED CURRENT READY AGE
replicaset.apps/gateway-nginx-ingress-55886df446 1 1 1 10s


The metrics are now reachable through the Service's NodePort.



curl http://192.168.1.55:39113/metrics
# HELP nginx_ingress_controller_ingress_resources_total Number of handled ingress resources
# TYPE nginx_ingress_controller_ingress_resources_total gauge
nginx_ingress_controller_ingress_resources_total{type="master"} 0
nginx_ingress_controller_ingress_resources_total{type="minion"} 0
nginx_ingress_controller_ingress_resources_total{type="regular"} 0
# HELP nginx_ingress_controller_nginx_last_reload_milliseconds Duration in milliseconds of the last NGINX reload
# TYPE nginx_ingress_controller_nginx_last_reload_milliseconds gauge
nginx_ingress_controller_nginx_last_reload_milliseconds 152
# HELP nginx_ingress_controller_nginx_last_reload_status Status of the last NGINX reload
# TYPE nginx_ingress_controller_nginx_last_reload_status gauge
nginx_ingress_controller_nginx_last_reload_status 1
# HELP nginx_ingress_controller_nginx_reload_errors_total Number of unsuccessful NGINX reloads
# TYPE nginx_ingress_controller_nginx_reload_errors_total counter
nginx_ingress_controller_nginx_reload_errors_total 0
# HELP nginx_ingress_controller_nginx_reloads_total Number of successful NGINX reloads
# TYPE nginx_ingress_controller_nginx_reloads_total counter
nginx_ingress_controller_nginx_reloads_total 2
# HELP nginx_ingress_controller_virtualserver_resources_total Number of handled VirtualServer resources
# TYPE nginx_ingress_controller_virtualserver_resources_total gauge
nginx_ingress_controller_virtualserver_resources_total 0
# HELP nginx_ingress_controller_virtualserverroute_resources_total Number of handled VirtualServerRoute resources
# TYPE nginx_ingress_controller_virtualserverroute_resources_total gauge
nginx_ingress_controller_virtualserverroute_resources_total 0
# HELP nginx_ingress_nginx_connections_accepted Accepted client connections
# TYPE nginx_ingress_nginx_connections_accepted counter
nginx_ingress_nginx_connections_accepted 5


Inspect the Prometheus custom resource. Prometheus picks up ServiceMonitors according to serviceMonitorSelector; the empty selector ({}) used here matches all ServiceMonitors.



kubectl get prometheuses.monitoring.coreos.com/k8s -nmonitoring -oyaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"k8s"},"name":"k8s","namespace":"monitoring"},"spec":{"alerting":{"alertmanagers":[{"name":"alertmanager-main","namespace":"monitoring","port":"web"}]},"baseImage":"quay.io/prometheus/prometheus","nodeSelector":{"kubernetes.io/os":"linux"},"podMonitorSelector":{},"replicas":2,"resources":{"requests":{"memory":"400Mi"}},"ruleSelector":{"matchLabels":{"prometheus":"k8s","role":"alert-rules"}},"securityContext":{"fsGroup":2000,"runAsNonRoot":true,"runAsUser":1000},"serviceAccountName":"prometheus-k8s","serviceMonitorNamespaceSelector":{},"serviceMonitorSelector":{},"version":"v2.11.0"}}
  creationTimestamp: "2020-03-26T13:09:35Z"
  generation: 1
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
  resourceVersion: "15195"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/k8s
  uid: 802c8f09-95e8-43e8-a2ea-131877fc6b4e
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  baseImage: quay.io/prometheus/prometheus
  nodeSelector:
    kubernetes.io/os: linux
  podMonitorSelector: {}
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.11.0
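The original section breaks off before showing the ServiceMonitor itself. Following the PodMonitor conventions above, a minimal sketch might look like this (an assumption, not the original author's manifest; note that selector.matchLabels must match the Service's labels, not the Pod's):

# Hedged sketch: a ServiceMonitor for the gateway-nginx-ingress Service
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: nginx-ingress
  name: nginx-ingress
  namespace: monitoring              # same namespace as Prometheus
spec:
  endpoints:
  - interval: 15s
    path: /metrics
    port: prometheus                 # the Service port name added via customPorts
  namespaceSelector:
    matchNames:
    - default                        # namespace of the Service
  selector:
    matchLabels:
      app.kubernetes.io/name: gateway-nginx-ingress   # a label carried by the Service

Because the Prometheus resource sets serviceMonitorSelector: {} and serviceMonitorNamespaceSelector: {}, this ServiceMonitor would be picked up without any extra labels.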
