k8s prometheus-operator + grafana + alertmanager + DingTalk alerting

1 Description

1.1 Introduction

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open-source project, maintained independently of any company. To emphasize this and to clarify the project's governance structure, Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project, after Kubernetes.

Prometheus collects and stores its metrics as time series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.

For a more elaborate overview of Prometheus, see the resources linked from the media section of the official site.

1.2 Features

Compared with other traditional monitoring tools, Prometheus has the following characteristics:

  • a multi-dimensional data model, with time series data identified by metric name and key/value pairs
  • a flexible query language, PromQL (see the sketch after this list)
  • no reliance on distributed storage; it depends only on local disk
  • time series are pulled from targets over HTTP
  • pushing time series is also supported, for adding time series data
  • targets are discovered via service discovery or static configuration
  • multiple modes of graphing and dashboard support
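As a quick taste of the query language: once a Prometheus server is reachable, any PromQL expression can be evaluated through its HTTP API. A minimal sketch, assuming a server at 192.168.3.50:30136 (the NodePort address used later in this article) and that jq is installed:

# Evaluate an instant PromQL query via the HTTP API:
# per-node CPU usage ratio over the last 5 minutes.
curl -s 'http://192.168.3.50:30136/api/v1/query' \
  --data-urlencode 'query=1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))' \
  | jq '.data.result[] | {instance: .metric.instance, cpu_usage: .value[1]}'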

1.3 Components

Prometheus consists of multiple components, many of which are optional:

  • Prometheus Server: scrapes metrics and stores time series data
  • exporter: exposes metrics for Prometheus to scrape
  • pushgateway: jobs push their metric data to this gateway
  • alertmanager: the component that handles alerts
  • adhoc tooling: for ad-hoc data queries

Most Prometheus components are written in Go, which makes them easy to build and deploy as static binaries.

1.4 Architecture

The diagram below, provided by the Prometheus project, shows the architecture and some related ecosystem components:

The overall flow is fairly simple: Prometheus scrapes metric data directly from targets, or pulls it indirectly via the intermediary Pushgateway. It stores all scraped metrics locally and applies rules to this data, either to aggregate new time series or to generate alerts. Grafana or other tools are then used to visualize the data.

--------------------------------------------- the above comes from the official documentation ------------------------------------------------------

2 Monitoring and alerting design

2.1 Monitoring plan

Host metrics:
  prometheus-node-exporter collects host performance metrics and exposes them on a /metrics endpoint for Prometheus to scrape;
  it is deployed as a DaemonSet so that it runs on every node.

Cluster component metrics:
  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet and kube-proxy each expose their own /metrics endpoint, providing the cluster-related metrics on every node.

Container metrics:
  cAdvisor collects container and Pod performance metrics and exposes them on a /metrics endpoint for Prometheus to scrape;
  applications can also expose their own /metrics through annotations (see the sketch after this list);
  cAdvisor is built into the kubelet; the data path is /api/v1/nodes//proxy/metrics;
  using the node service-discovery role, every node runs a kubelet, so cAdvisor metrics are available from each node.

Network metrics:
  blackbox-exporter probes applications' network endpoints (HTTP, TCP, ICMP, etc.) and exposes the results on a /metrics endpoint for Prometheus to scrape;
  an application can add the agreed-upon annotations to its Service so that Prometheus probes its network service.

Resource object metrics:
  kube-state-metrics collects state metrics about Kubernetes resource objects and exposes them on a /metrics endpoint for Prometheus to scrape.
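As an illustration of the annotation convention mentioned above, here is a hypothetical application Service annotated for scraping. Note that the prometheus.io/* keys are a community convention consumed by a kubernetes_sd-based scrape configuration; the Operator used below discovers targets through ServiceMonitor objects instead, so treat this only as a sketch of the general mechanism:

apiVersion: v1
kind: Service
metadata:
  name: myapp                      # hypothetical application Service
  namespace: default
  annotations:
    prometheus.io/scrape: "true"   # opt in to scraping
    prometheus.io/port: "8080"     # port that serves the metrics
    prometheus.io/path: "/metrics" # metrics path (this is the default)
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 8080
    targetPort: 8080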
 

2.2 Visualization plan

Grafana connects to Prometheus as a data source and displays the data through dashboards. Dashboards can be custom-built or fetched from the online dashboard library.

2.3 Alerting plan

Prometheus pushes alerts to Alertmanager, which then sends notifications via email, SMS, webhook, and other channels. This article covers pushing alerts to DingTalk through a webhook; a webhook can also target WeChat, which is not covered here.

3 Installation

Since the focus of this article is DingTalk alerting, we skip the manual installation and use the Operator for a quick install.

3.1 Prerequisites

Clone the repository:

[root@k8s-master ~]# git clone https://github.com/coreos/kube-prometheus.git
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 15481, done.
remote: Counting objects: 100% (167/167), done.
remote: Compressing objects: 100% (108/108), done.
remote: Total 15481 (delta 98), reused 80 (delta 48), pack-reused 15314
Receiving objects: 100% (15481/15481), 7.82 MiB | 2.31 MiB/s, done.
Resolving deltas: 100% (9860/9860), done.
[root@k8s-master ~]#
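One note before proceeding: the kube-prometheus main branch tracks recent Kubernetes releases, and the repository publishes release-x.y branches with a compatibility matrix in its README. A hedged sketch; release-0.10 here is only an example, pick whichever branch matches your cluster:

# Example only: check the compatibility matrix in the README first.
git -C kube-prometheus checkout release-0.10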

As a prerequisite for installing prometheus-operator, deploy the CRDs first:

[root@k8s-master ~]# cd kube-prometheus/manifests/
[root@k8s-master manifests]# ls
... ...
[root@k8s-master manifests]# kubectl apply -f setup/
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
namespace/monitoring created

If a YAML file fails during apply with a "Too long" error, create it with kubectl create instead. The problem looks like this:

[root@k8s-master manifests]# kubectl apply -f setup/0prometheusCustomResourceDefinition.yaml
The CustomResourceDefinition "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
[root@k8s-master manifests]# kubectl create -f setup/0prometheusCustomResourceDefinition.yaml
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
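The root cause: client-side kubectl apply stores the entire object in the kubectl.kubernetes.io/last-applied-configuration annotation, and this CRD is larger than the 262144-byte annotation limit, whereas kubectl create writes no such annotation. On reasonably recent kubectl versions, server-side apply is another way around it; a sketch:

# Server-side apply skips the last-applied-configuration annotation,
# so even the oversized CRD goes through.
kubectl apply --server-side -f setup/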

Because we want convenient external access in this lab environment, we also change the exposure type of a few Services to NodePort:

[root@k8s-master manifests]# vim grafana-service.yaml
spec:
  ports:
  - name: http
    port: 3000
    targetPort: http
  type: NodePort


[root@k8s-master manifests]# vim prometheus-service.yaml

spec:
  ports:
  - name: web
    port: 9090
    targetPort: web
  - name: reloader-web
    port: 8080
    targetPort: reloader-web
  type: NodePort
  selector:

[root@k8s-master manifests]# vim alertmanager-service.yaml

spec:
  ports:
  - name: web
    port: 9093
    targetPort: web
  - name: reloader-web
    port: 8080
    targetPort: reloader-web
  type: NodePort
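Alternatively, instead of editing the manifests, the Service type can be flipped with kubectl patch once the stack from section 3.2 is deployed; a minimal sketch with the same effect:

# Patch the three UI Services to NodePort without touching the YAML files.
for s in grafana prometheus-k8s alertmanager-main; do
  kubectl -n monitoring patch svc "$s" -p '{"spec":{"type":"NodePort"}}'
done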

3.2 Install Prometheus

[root@k8s-master manifests]# kubectl apply -f .
alertmanager.monitoring.coreos.com/main created
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-alertmanager-overview created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-grafana-overview created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
prometheusrule.monitoring.coreos.com/grafana-rules created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
poddisruptionbudget.policy/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created
[root@k8s-master manifests]#

A few Pods failed to start because their images cannot be pulled; we need to point them at alternative image repositories.

[root@k8s-master manifests]# kubectl get pod -n monitoring
NAME                                   READY   STATUS             RESTARTS   AGE
alertmanager-main-0                    2/2     Running            0          5m2s
alertmanager-main-1                    2/2     Running            0          5m2s
alertmanager-main-2                    2/2     Running            0          5m2s
blackbox-exporter-69894767d5-t9bcv     3/3     Running            0          5m34s
grafana-7b77d779f6-x5khg               1/1     Running            0          5m33s
kube-state-metrics-c655879df-m5k9z     2/3     ImagePullBackOff   0          5m33s
node-exporter-2pfbx                    2/2     Running            0          5m33s
node-exporter-m4htq                    2/2     Running            0          5m33s
node-exporter-m5lvn                    2/2     Running            0          5m33s
node-exporter-v2cn4                    2/2     Running            0          5m33s
prometheus-adapter-6cf5d8bfcf-bgn4n    0/1     ImagePullBackOff   0          5m32s
prometheus-adapter-6cf5d8bfcf-qv8pn    0/1     ImagePullBackOff   0          5m32s
prometheus-k8s-0                       2/2     Running            0          39s
prometheus-k8s-1                       2/2     Running            0          39s
prometheus-operator-545bcb5949-k2dl5   2/2     Running            0          5m32s

vim kubeStateMetrics-deployment.yaml

        change the image to bitnami/kube-state-metrics:2.3.0

vim prometheusAdapter-deployment.yaml

        change the image to selina5288/prometheus-adapter:v0.9.1
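If you prefer not to edit the manifests, kubectl set image does the same thing in place. A sketch, assuming the container names match the Deployment names (which they do in the stock kube-prometheus manifests):

# Swap the unreachable images for mirrors without editing YAML.
kubectl -n monitoring set image deployment/kube-state-metrics \
  kube-state-metrics=bitnami/kube-state-metrics:2.3.0
kubectl -n monitoring set image deployment/prometheus-adapter \
  prometheus-adapter=selina5288/prometheus-adapter:v0.9.1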

 

[root@k8s-master manifests]# kubectl apply -f prometheusAdapter-deployment.yaml -f kubeStateMetrics-deployment.yaml
deployment.apps/prometheus-adapter configured
deployment.apps/kube-state-metrics configured
[root@k8s-master manifests]# kubectl get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          18m
alertmanager-main-1                    2/2     Running   0          18m
alertmanager-main-2                    2/2     Running   0          18m
blackbox-exporter-69894767d5-t9bcv     3/3     Running   0          18m
grafana-7b77d779f6-x5khg               1/1     Running   0          18m
kube-state-metrics-9f5b9468f-qzztn     3/3     Running   0          45s
node-exporter-2pfbx                    2/2     Running   0          18m
node-exporter-m4htq                    2/2     Running   0          18m
node-exporter-m5lvn                    2/2     Running   0          18m
node-exporter-v2cn4                    2/2     Running   0          18m
prometheus-adapter-7fc8b8c559-t5qvw    1/1     Running   0          6m54s
prometheus-adapter-7fc8b8c559-vhp56    1/1     Running   0          6m54s
prometheus-k8s-0                       2/2     Running   0          13m
prometheus-k8s-1                       2/2     Running   0          13m
prometheus-operator-545bcb5949-k2dl5   2/2     Running   0          18m

All Pods have now been created successfully.

3.3 Access the web UIs

Now let's open the web UIs. First look up the NodePorts:

[root@k8s-master manifests]# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
alertmanager-main       NodePort    10.109.63.172    <none>        9093:31169/TCP,8080:30473/TCP   20m
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP      19m
blackbox-exporter       ClusterIP   10.110.214.160   <none>        9115/TCP,19115/TCP              20m
grafana                 NodePort    10.101.37.121    <none>        3000:32697/TCP                  20m
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP               20m
node-exporter           ClusterIP   None             <none>        9100/TCP                        20m
prometheus-adapter      ClusterIP   10.109.181.16    <none>        443/TCP                         20m
prometheus-k8s          NodePort    10.110.143.173   <none>        9090:30136/TCP,8080:32610/TCP   20m
prometheus-operated     ClusterIP   None             <none>        9090/TCP                        15m
prometheus-operator     ClusterIP   None             <none>        8443/TCP                        20m
[root@k8s-master manifests]#

From the NodePorts shown above we can derive the URLs (192.168.3.50 is my node IP):

prometheus: 192.168.3.50:30136/

grafana: 192.168.3.50:32697/

alertmanager: 192.168.3.50:31169/

Access Prometheus

The page loads fine, and every entry on the Targets page is up: node-exporter, kube-state-metrics, blackbox-exporter, the cluster components, and so on.
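The same check can be scripted against the Prometheus HTTP API instead of eyeballing the Targets page; a small sketch, assuming the NodePort above and that jq is available:

# List any scrape targets that are not currently "up" (empty output is good).
curl -s http://192.168.3.50:30136/api/v1/targets \
  | jq '.data.activeTargets[] | select(.health != "up")
        | {job: .labels.job, instance: .labels.instance, error: .lastError}'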

 

Access Grafana

The default username and password are both admin; Grafana will prompt you to change the password at first login.

Create a data source; in fact, a Prometheus data source is already configured by default.

Import dashboards; you can search for ready-made dashboards on the official site.

Official site: Dashboards | Grafana Labs (https://grafana.com/grafana/dashboards/)

If the cluster has Internet access, simply enter a dashboard ID to import it; well-regarded dashboards include 8685, 8919, 10000, and so on.

Access Alertmanager

4 DingTalk alerting

4.1 Create a DingTalk robot

Create a DingTalk group chat, then go to Group Settings --> Group Assistant --> Add Robot --> Custom (webhook).

 

A security setting is required here. I chose the IP option; the IP is the public IP from which the DingTalk robot will be called. If you don't know your own public IP, search for "IP" in a search engine and the first result will usually show it.

Save the generated webhook URL; we will need it later.
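Before wiring anything into the cluster, it is worth confirming the robot works at all. The DingTalk custom-robot API accepts a simple JSON message; a sketch, with the access token elided exactly as it is later in this article:

# Send a hand-written test message to the robot
# (the request must come from the allowlisted public IP).
curl -s -H 'Content-Type: application/json' \
  -d '{"msgtype": "text", "text": {"content": "monitoring: test message"}}' \
  'https://oapi.dingtalk.com/robot/send?access_token=xxxxxxx'

A response of {"errcode":0,"errmsg":"ok"} means the robot is reachable and the security settings pass.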

4.2 Deploy the DingTalk webhook relay

[root@k8s-master manifests]# cat dingtalk-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dingtalk-config
  namespace: monitoring
data:
  config.yml: |-
    targets:
      webhook:
        url: https://oapi.dingtalk.com/robot/send?access_token=xxxxxxx # set this to the webhook URL from the previous step
        mention:
          all: true

[root@k8s-master manifests]# cat dingtalk-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: dingtalk
  namespace: monitoring
  labels:
    app: dingtalk
  annotations:
    prometheus.io/scrape: 'false'
spec:
  selector:
    app: dingtalk
  ports:
  - name: dingtalk
    port: 8060
    protocol: TCP
    targetPort: 8060

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dingtalk
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dingtalk
  template:
    metadata:
      name: dingtalk
      labels:
        app: dingtalk
    spec:
      containers:
      - name: dingtalk
        image: timonwong/prometheus-webhook-dingtalk:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8060
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus-webhook-dingtalk
      volumes:
      - name: config
        configMap:
          name: dingtalk-config
[root@k8s-master manifests]#
[root@k8s-master manifests]# kubectl apply -f dingtalk-config.yaml -f dingtalk-deployment.yaml
configmap/dingtalk-config created
service/dingtalk created
deployment.apps/dingtalk created
[root@k8s-master manifests]# kubectl get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          73m
alertmanager-main-1                    2/2     Running   0          73m
alertmanager-main-2                    2/2     Running   0          73m
blackbox-exporter-69894767d5-t9bcv     3/3     Running   0          73m
dingtalk-77d475cc8-gdqf4               1/1     Running   0          16s
grafana-7b77d779f6-x5khg               1/1     Running   0          73m
kube-state-metrics-9f5b9468f-qzztn     3/3     Running   0          56m
node-exporter-2pfbx                    2/2     Running   0          73m
node-exporter-m4htq                    2/2     Running   0          73m
node-exporter-m5lvn                    2/2     Running   0          73m
node-exporter-v2cn4                    2/2     Running   0          73m
prometheus-adapter-7fc8b8c559-t5qvw    1/1     Running   0          62m
prometheus-adapter-7fc8b8c559-vhp56    1/1     Running   0          62m
prometheus-k8s-0                       2/2     Running   0          69m
prometheus-k8s-1                       2/2     Running   0          69m
prometheus-operator-545bcb5949-k2dl5   2/2     Running   0          73m
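The relay itself can now be smoke-tested by POSTing an Alertmanager-style payload at it, before touching Alertmanager. A hedged sketch; the JSON below is a hand-built approximation of the Alertmanager v4 webhook format that prometheus-webhook-dingtalk consumes:

# Forward the Service locally, then simulate an Alertmanager webhook call.
kubectl -n monitoring port-forward svc/dingtalk 8060:8060 &
sleep 2   # give the port-forward a moment to establish
curl -s -H 'Content-Type: application/json' -d '{
  "version": "4",
  "status": "firing",
  "receiver": "Webhook",
  "groupLabels": {},
  "commonLabels": {"alertname": "TestAlert", "severity": "warning"},
  "commonAnnotations": {"summary": "manual smoke test"},
  "externalURL": "http://alertmanager-main:9093",
  "alerts": [{
    "status": "firing",
    "labels": {"alertname": "TestAlert", "severity": "warning"},
    "annotations": {"summary": "manual smoke test"},
    "startsAt": "2022-01-01T00:00:00Z",
    "endsAt": "0001-01-01T00:00:00Z"
  }]
}' http://127.0.0.1:8060/dingtalk/webhook/send

A message appearing in the DingTalk group confirms that both the relay and the robot work; the "webhook" segment of the URL is the target name defined in dingtalk-config.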

4.3 Switch the Alertmanager receiver to the webhook

Alertmanager reads its configuration from alertmanager.yaml, which must be mounted into the Pod. That is usually done with a ConfigMap; prometheus-operator ships it as a Secret instead, which has the same effect. Below we modify that Secret.

Replace the alertmanager.yaml content under stringData in alertmanager-secret.yaml with the following (the original content can be deleted):

[root@k8s-master manifests]# cat alertmanager-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.23.0
  name: alertmanager-main
  namespace: monitoring
stringData:
  alertmanager.yaml: |-
    "global":
      "resolve_timeout": "5m"
    "receivers":
    - "name": "Webhook"
      "webhook_configs":
      - "url": "http://dingtalk.monitoring.svc.cluster.local:8060/dingtalk/webhook/send"
    "route":
      "group_by":
      - "namespace"
      "group_wait": "30s"
      "receiver": "Webhook"
      "repeat_interval": "12h"
      "routes":
      - "matchers":
        - "alertname = Webhook"
        "receiver": "Webhook"
type: Opaque

[root@k8s-master manifests]# kubectl apply -f alertmanager-secret.yaml
secret/alertmanager-main configured
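To double-check what Alertmanager will actually load, decode the Secret straight from the cluster:

# Print the alertmanager.yaml currently stored in the Secret.
kubectl -n monitoring get secret alertmanager-main \
  -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d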

You can confirm the change at the bottom of Alertmanager's Status page. If it still has not taken effect after a while, try restarting the alertmanager-main Pods.

 

4.4 Test the alerts

Break something on purpose and wait for the notification to arrive in the DingTalk group; a reproducible way to do this is sketched below.
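A less destructive and fully reproducible option is a PrometheusRule that always fires. The labels below match the rule labels kube-prometheus ships with (an assumption worth checking against the ruleSelector of your prometheus object); vector(1) is permanently true, so the alert should land in the DingTalk group once group_wait (30s above) has passed:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: dingtalk-test-rule
  namespace: monitoring
  labels:
    prometheus: k8s    # assumption: matches the default kube-prometheus rule labels
    role: alert-rules
spec:
  groups:
  - name: dingtalk-test
    rules:
    - alert: DingTalkPipelineTest
      expr: vector(1)  # always true, so the alert fires continuously
      labels:
        severity: warning
      annotations:
        summary: Permanent test alert to verify the DingTalk pipeline

Apply it with kubectl apply -f, wait for Prometheus to pick up the rule, and delete it once the message arrives.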

 
