Cloud-Native Component Notes -- Kubernetes

1. Basic Architecture

1.1 Concepts and Features

  • Overview
  1. A containerized cluster management system open-sourced by Google in 2014;
  2. Deploying containerized applications with k8s makes them easier to scale;
  3. The goal of k8s is to make deploying containerized applications simpler and more efficient.
  • Features
  1. Automatic bin packing: automatically deploys application containers based on the resource requirements of the application's runtime environment
  2. Self-healing: when a container fails, it is restarted; when the Node it runs on has problems, its containers are redeployed and rescheduled; when a container fails its health check, it is shut down and does not serve traffic again until it is running normally
  3. Horizontal scaling: scale application containers up or down with a simple command, through the UI, or automatically based on CPU and other resource usage
  4. Service discovery: services are discovered and load balanced using Kubernetes' built-in capabilities, without any additional service discovery mechanism
  5. Rolling updates: update the applications running in containers all at once or in batches as the application changes
  6. Version rollback: instantly roll back to a historical version of the application based on the deployment history
  7. Secret and configuration management: deploy and update secrets and application configuration without rebuilding images, similar to hot deployment.
  8. Storage orchestration: automatically mounts storage systems for applications, which is especially important for persisting data in stateful applications; storage can come from local directories, network storage (NFS, Gluster, Ceph, etc.), or public cloud storage services
  9. Batch processing: provides one-off and scheduled tasks for batch data processing and analysis scenarios

1.2 Architecture Components


  • Master components
  1. API Server: the unified entry point to the cluster; exposes a RESTful API and persists state to etcd;
  2. Scheduler: node scheduling; selects the Node on which an application is deployed;
  3. Controller manager: handles routine background tasks in the cluster; each resource type has its own controller;
  4. etcd: the storage system that holds cluster data;
  • Node components
  1. kubelet: the master's agent on each node; manages the containers on that machine;
  2. kube-proxy: provides network proxying, load balancing, and related functions.

1.3 Core Concepts

Access goes through a Service as the unified entry point, and a Controller then creates the Pods that are deployed.

  • Pod
  1. The smallest deployable unit
  2. Can contain multiple containers (a group of containers)
  3. Containers in one Pod share the network namespace
  4. Short-lived by design
  • Controller
  1. Deploys stateless / stateful applications
  2. Runs one-off and scheduled tasks
  3. Ensures the expected number of Pod replicas
  4. Ensures every node runs a copy of the same Pod
  • Service
  1. Defines the access rules for a group of Pods (a minimal sketch follows)
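A minimal sketch of how these objects relate (names and image are illustrative, not taken from the notes): a Deployment keeps one nginx Pod running, and a Service selects that Pod by label.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo                 # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    app: demo                # routes traffic to Pods carrying this label
  ports:
  - port: 80
    targetPort: 80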

2. Deployment and Usage

2.1 Deployment

  • Prerequisites (before cloning the VMs)
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# In production it is better not to disable the firewall; instead open the required ports, e.g.:
## Master node
### firewall-cmd --zone=public --permanent --add-rich-rule='rule protocol value="vrrp" accept'
### firewall-cmd --permanent --add-port=6443/tcp --add-port=16443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp
### firewall-cmd --reload
## Worker nodes
### firewall-cmd --permanent --add-port=10251-10252/tcp --add-port=30000-32767/tcp
### firewall-cmd --reload

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
  • Cluster plan (after cloning, on all nodes)

Node name    Role    NAT          Host-Only
cloud-mn01   master  10.0.2.101   192.168.1.101
cloud-dn01   node    10.0.2.201   192.168.1.201
cloud-dn02   node    10.0.2.202   192.168.1.202

# Add hosts entries
cat >> /etc/hosts << EOF
192.168.1.101 cloud-mn01
192.168.1.201 cloud-dn01
192.168.1.202 cloud-dn02
EOF

# Set the hostname (use the matching name on each node)
hostnamectl set-hostname cloud-mn01
  • Install Docker (all nodes)
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

systemctl restart docker.service
  • Install kubeadm, kubelet, and kubectl (all nodes)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet
  • Initialize the master (cloud-mn01)
kubeadm init \
  --apiserver-advertise-address=192.168.1.101 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.101:6443 --token 76yhkd.edbqca3hk81hyjgs \
    --discovery-token-ca-cert-hash sha256:5717e5ded7a5984a6bb2731e1e4235c646b0f25597d9f8ff62c6b291709e6faf 

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Join the worker nodes (cloud-dn01, cloud-dn02)
kubeadm join 192.168.1.101:6443 --token 76yhkd.edbqca3hk81hyjgs \
    --discovery-token-ca-cert-hash sha256:5717e5ded7a5984a6bb2731e1e4235c646b0f25597d9f8ff62c6b291709e6faf 

# The default token is valid for 24 hours. Once it expires, create a new one with:
# kubeadm token create --print-join-command
  • Deploy the CNI network plugin
# Deploy the network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Check that the system pods are running
kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-grwjc             1/1     Running   1          28m
coredns-7ff77c879f-zpjdj             1/1     Running   1          28m
etcd-cloud-mn01                      1/1     Running   1          28m
kube-apiserver-cloud-mn01            1/1     Running   1          28m
kube-controller-manager-cloud-mn01   1/1     Running   1          28m
kube-flannel-ds-2zwl6                1/1     Running   2          24m
kube-flannel-ds-59gzc                1/1     Running   1          24m
kube-flannel-ds-6f9mk                1/1     Running   1          24m
kube-proxy-2gz4j                     1/1     Running   1          27m
kube-proxy-pscqj                     1/1     Running   1          28m
kube-proxy-scbqz                     1/1     Running   1          28m
kube-scheduler-cloud-mn01            1/1     Running   1          28m

# Check node status
kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
cloud-dn01   Ready    <none>   17m   v1.18.0
cloud-dn02   Ready    <none>   17m   v1.18.0
cloud-mn01   Ready    master   18m   v1.18.0
  • Test
# Create a pod
kubectl create deployment nginx --image=nginx

# Expose the pod's port
kubectl expose deployment nginx --port=80 --type=NodePort

# Check the pod and service; note the NodePort (30531 here)
kubectl get pod,svc
[root@cloud-mn01 ~]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-g2qd2   1/1     Running   0          14m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        20m
service/nginx        NodePort    10.109.97.101   <none>        80:30531/TCP   13m
[root@cloud-mn01 ~]# 

In a browser, open a node IP plus the NodePort, i.e. 192.168.1.201:30531 or 192.168.1.202:30531.


2.2 kubectl

  • Syntax

kubectl [command] [TYPE] [NAME] [flags]

  1. command: the operation to perform on the resource, e.g. create, get, describe, or delete;
  2. TYPE: the resource type. Resource types are case-sensitive and can be given in singular, plural, or abbreviated form, e.g. kubectl get pod [pod-name], kubectl get pods [pod-name], or kubectl get po [pod-name];
  3. NAME: the resource name, which is also case-sensitive. If omitted, all resources of that type are listed;
  4. flags: optional flags. For example, -s or --server specifies the address and port of the Kubernetes API server (examples follow).
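For example (the pod name below is just the one from the earlier test deployment; substitute your own):

kubectl get pods                                  # command=get, TYPE=pods
kubectl get pod nginx-f89759699-g2qd2 -o wide     # command=get, TYPE=pod, NAME=..., flag -o wide
kubectl describe deployment nginx                 # command=describe, TYPE=deployment, NAME=nginx
kubectl delete pod nginx-f89759699-g2qd2          # command=delete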

2.3 YAML

A data-centric, highly readable markup language for expressing serialized data.

  • Basic syntax
  1. Hierarchy is expressed by indentation (two spaces);
  2. Leave one space after symbols such as colons and commas;
  3. Use '---' to mark the start of a new YAML document;
  4. Use '#' for comments (a tiny illustration follows)
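A tiny illustration of these rules (the content itself is arbitrary):

---                        # start of a new YAML document
apiVersion: v1             # key: value, with one space after the colon
kind: Pod
metadata:
  name: demo               # two-space indentation expresses nesting
  labels:
    app: demo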
  • Obtaining a YAML template
  1. Via dry run
# [root@cloud-mn01 ~]# kubectl create deployment webapp --image=nginx \
# > -o yaml --dry-run=client

## -o yaml : output the resource manifest as YAML
## --dry-run=client : do not actually perform the deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: webapp
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webapp
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
  2. Via get deploy on an existing deployment
[root@cloud-mn01 ~]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           2d3h
[root@cloud-mn01 ~]# kubectl get deploy nginx -o=yaml
  • Common fields

Field        Description
apiVersion   API version
kind         Resource type
metadata     Resource metadata
spec         Resource specification
replicas     Number of replicas
selector     Label selector
template     Pod template
metadata     Pod metadata
spec         Pod specification
containers   Container configuration

3. Core Concepts

3.1 Pod

  • Why Pods exist
  1. In Docker, each container corresponds to one process running one application;
  2. A Pod is a multi-process design: each Pod holds multiple containers, and each container runs one application;
  3. Pods exist for tightly coupled applications: two applications that need to interact with or call each other frequently.


  • Implementation mechanisms
  1. Shared networking: a Pause (infrastructure) container is created first and the business containers join it, so all containers in the Pod share the same network namespace;
  2. Shared storage: Volumes are introduced and used for persistent, shared storage (a sketch follows).
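A minimal sketch of both mechanisms (names and paths are illustrative): two containers in one Pod reach each other over localhost and share an emptyDir volume.

apiVersion: v1
kind: Pod
metadata:
  name: shared-demo              # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-data          # nginx serves what the sidecar writes
      mountPath: /usr/share/nginx/html
  - name: writer
    image: busybox
    command: ["/bin/sh", "-c", "echo hello from sidecar > /pod-data/index.html; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
  volumes:
  - name: shared-data
    emptyDir: {}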
  • Image pull policy
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx:1.14
      imagePullPolicy: IfNotPresent
  1. IfNotPresent: the default in most cases (for the :latest tag the default is Always); the image is pulled only when it is not already present on the host;
  2. Always: the image is pulled every time the Pod is created;
  3. Never: the image is never pulled automatically.
  • Resource requests and limits
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  • Restart policy
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 3600
  restartPolicy: Never
  1. Always: the default; the container is always restarted after it terminates;
  2. OnFailure: the container is restarted only when it exits abnormally (non-zero exit code);
  3. Never: the container is never restarted after it terminates.
  • Health checks

livenessProbe (liveness check): if the check fails, the container is killed and handled according to the Pod's restartPolicy;

readinessProbe (readiness check): if the check fails, Kubernetes removes the Pod from the Service's endpoints.

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

Probes support three check methods (hedged sketches of the other two follow):

  1. httpGet: send an HTTP request; a status code of 200-399 counts as success;
  2. exec: run a shell command; an exit code of 0 counts as success;
  3. tcpSocket: attempt a TCP connection; success means the connection is established.
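For comparison with the exec probe above, sketches of the other two probe types (paths and ports are illustrative); they go under a container spec in the same way:

    readinessProbe:
      httpGet:
        path: /healthz           # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20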
  • Scheduling flow


  • Properties that affect Pod scheduling
  1. Resource requests: the node must be able to satisfy the resources requested by the Pod
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  2. Node selector
spec:
  nodeSelector:
    env_role: dev
  containers:
  - name: nginx
    image: nginx:1.15
[root@cloud-mn01 ~]# kubectl label node cloud-dn01 env_role=dev
node/cloud-dn01 labeled
[root@cloud-mn01 ~]# kubectl get node cloud-dn01 --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
cloud-dn01   Ready    <none>   8d    v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env_role=dev,kubernetes.io/arch=amd64,kubernetes.io/hostname=cloud-dn01,kubernetes.io/os=linux
[root@cloud-mn01 ~]# 
  • Node affinity

Similar to nodeSelector above: node labels constrain which nodes a Pod can be scheduled onto.

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # hard requirement: must be satisfied
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: env_role
            # common operators: In NotIn Exists Gt Lt DoesNotExist
            operator: In
            values:
            - dev
            - test
      # soft preference, not mandatory
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: group
            operator: In
            values:
            - highCompute
  • Taints

nodeSelector and nodeAffinity are properties of the Pod; a Taint is a property of the node.

  1. Viewing taints
# <none> : no taint set
# NoSchedule : Pods will never be scheduled here
# PreferNoSchedule : Pods are scheduled here only as a last resort
# NoExecute : Pods are not scheduled here, and already-running Pods are evicted to other nodes
[root@cloud-mn01 ~]# kubectl describe node cloud-mn01 | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@cloud-mn01 ~]# 
  2. Add a taint and observe the effect
[root@cloud-mn01 ~]# kubectl taint node cloud-dn01 node-health=red:NoSchedule
node/cloud-dn01 tainted
[root@cloud-mn01 ~]# kubectl describe node cloud-dn01 | grep Taint
Taints:             node-health=red:NoSchedule
[root@cloud-mn01 ~]# kubectl create deployment webapp --image nginx
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl scale deployment webapp --replicas=3
deployment.apps/webapp scaled

# All Pods were scheduled onto cloud-dn02

[root@cloud-mn01 ~]# kubectl get pods -o wide
NAME                      READY   STATUS              RESTARTS   AGE   IP       NODE         NOMINATED NODE   READINESS GATES
webapp-59d9889648-8pd5k   0/1     ContainerCreating   0          29s   <none>   cloud-dn02   <none>           <none>
webapp-59d9889648-bjtrn   0/1     ContainerCreating   0          49s   <none>   cloud-dn02   <none>           <none>
webapp-59d9889648-qk662   0/1     ContainerCreating   0          29s   <none>   cloud-dn02   <none>           <none>
[root@cloud-mn01 ~]# 
  3. Remove the taint and observe the effect
[root@cloud-mn01 ~]# kubectl taint node cloud-dn01 node-health=red:NoSchedule-
node/cloud-dn01 untainted
[root@cloud-mn01 ~]# kubectl describe node cloud-dn01 | grep Taint
Taints:             <none>
[root@cloud-mn01 ~]# kubectl delete deployment webapp
deployment.apps "webapp" deleted
[root@cloud-mn01 ~]# kubectl create deployment webapp --image nginx
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl scale deployment webapp --replicas=3
deployment.apps/webapp scaled

# Pods are now spread across the nodes

[root@cloud-mn01 ~]# kubectl get pods -o wide
NAME                      READY   STATUS              RESTARTS   AGE   IP       NODE         NOMINATED NODE   READINESS GATES
webapp-59d9889648-25v4c   0/1     ContainerCreating   0          6s    <none>   cloud-dn02   <none>           <none>
webapp-59d9889648-6mkgv   0/1     ContainerCreating   0          12s   <none>   cloud-dn02   <none>           <none>
webapp-59d9889648-sv5gd   0/1     ContainerCreating   0          6s    <none>   cloud-dn01   <none>           <none>
[root@cloud-mn01 ~]# 
  4. Tolerations

This toleration means the Pod can tolerate nodes tainted with "node-health=red:PreferNoSchedule".

spec:
  tolerations:
  - key: "node-health"
    operator: "Equal"
    value: "red"
    effect: "PreferNoSchedule"

3.2 Controller

  • Overview
  1. A Controller is an object that manages and runs containers on the cluster;
  2. Operations on Pods, such as scaling and rolling upgrades, are carried out through Controllers;
  3. Pods and Controllers are associated through labels.
# Pod
labels:
  app: nginx

# Controller
selector:
  matchLabels:
    app: nginx

3.2.1 Deployment (stateless)

Deploys stateless applications, manages Pods and ReplicaSets, and supports deployments and rolling upgrades.

  1. Deploy a stateless application
[root@cloud-mn01 ~]# kubectl create deployment webapp --image=nginx --dry-run=client -o yaml > webapp.yaml
[root@cloud-mn01 ~]# kubectl apply -f webapp.yaml 
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
webapp-59d9889648-8lw5z   0/1     ContainerCreating   0          5s
[root@cloud-mn01 ~]# 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: webapp
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webapp
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
  2. Expose it externally
[root@cloud-mn01 ~]# kubectl expose deployment webapp --port=80 --type=NodePort --dry-run=client -o yaml > webapp-svc.yaml
[root@cloud-mn01 ~]# kubectl apply -f webapp-svc.yaml 
service/webapp created
[root@cloud-mn01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        8d
webapp       NodePort    10.96.140.221   <none>        80:30935/TCP   2s
[root@cloud-mn01 ~]# 
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: webapp
  name: webapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: webapp
  type: NodePort
status:
  loadBalancer: {}
  3. Upgrade the application
## Pin the container image version
[root@cloud-mn01 ~]# grep image webapp.yaml
      - image: nginx:1.14
[root@cloud-mn01 ~]# kubectl apply -f webapp.yaml 
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl describe pod webapp | grep Image
    Image:          nginx:1.14
    Image ID:       docker-pullable://nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
[root@cloud-mn01 ~]# 
## Upgrade
[root@cloud-mn01 ~]# kubectl set image deployment webapp nginx=nginx:1.15
deployment.apps/webapp image updated
[root@cloud-mn01 ~]# kubectl describe pod webapp | grep Image
    Image:          nginx:1.15
    Image ID:       docker-pullable://nginx@sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68
[root@cloud-mn01 ~]# 
  4. Roll back
## Check the rollout status
[root@cloud-mn01 ~]# kubectl rollout status deployment webapp
deployment "webapp" successfully rolled out
## View the rollout history
[root@cloud-mn01 ~]# kubectl rollout history deployment webapp
deployment.apps/webapp 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

## Roll back to the previous revision
[root@cloud-mn01 ~]# kubectl rollout undo deployment webapp
deployment.apps/webapp rolled back
## Roll back to a specific revision
[root@cloud-mn01 ~]# kubectl rollout undo deployment webapp --to-revision=2
deployment.apps/webapp rolled back
[root@cloud-mn01 ~]# 
  5. Scale
[root@cloud-mn01 ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
webapp-74d879b68f-ltcrv   1/1     Running   0          107s
[root@cloud-mn01 ~]# kubectl scale deployment webapp --replicas=5
deployment.apps/webapp scaled
[root@cloud-mn01 ~]# kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
webapp-74d879b68f-4pvm7   0/1     ContainerCreating   0          2s
webapp-74d879b68f-727zj   1/1     Running             0          2s
webapp-74d879b68f-brnrh   0/1     ContainerCreating   0          2s
webapp-74d879b68f-lf8ch   0/1     ContainerCreating   0          2s
webapp-74d879b68f-ltcrv   1/1     Running             0          2m11s
[root@cloud-mn01 ~]# 

3.2.2 StatefulSet (stateful)

Used to deploy stateful applications.

  1. Stateless
# All Pods are treated as identical
# No ordering requirements
# No need to care which node a Pod runs on
# Can be scaled up and down freely
  2. Stateful
# Each Pod is treated as distinct
# Stable, unique network identity and persistent storage
# Guaranteed startup order, e.g. a MySQL primary/replica setup
  3. Headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  # A headless Service is one whose clusterIP is None
  clusterIP: None
  selector:
    app: nginx

---

apiVersion: apps/v1
# kind: a stateful set of containers
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
  4. Deploy, inspect, and delete the stateful application
[root@cloud-mn01 ~]# kubectl apply -f sts.yaml 
service/nginx created
statefulset.apps/nginx-statefulset created
[root@cloud-mn01 ~]# kubectl get statefulset
NAME                READY   AGE
nginx-statefulset   3/3     2m7s
[root@cloud-mn01 ~]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          73s
nginx-statefulset-1   1/1     Running   0          53s
nginx-statefulset-2   1/1     Running   0          27s
[root@cloud-mn01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   9d
nginx        ClusterIP   None         <none>        80/TCP    75s
[root@cloud-mn01 ~]# kubectl delete statefulset --all
statefulset.apps "nginx-statefulset" deleted
[root@cloud-mn01 ~]# kubectl delete svc nginx
service "nginx" deleted
[root@cloud-mn01 ~]#
  5. Differences between Deployment and StatefulSet

Every Pod deployed by a StatefulSet has a unique, stable identity (a quick verification sketch follows).

# A DNS name is generated from the hostname according to a fixed pattern

# Format: <hostname>.<service name>.<namespace>.svc.cluster.local

# Example: nginx-statefulset-0.nginx.default.svc.cluster.local
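A quick, hedged way to verify this (the pod name dns-check is illustrative and the StatefulSet above must still be running) is to resolve that name from a temporary Pod inside the cluster:

kubectl run -it --rm dns-check --image=busybox:1.28 --restart=Never -- \
  nslookup nginx-statefulset-0.nginx    # should resolve to that Pod's IP via the headless service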

3.2.3 DaemonSet (daemon)

Deploys a daemon, ensuring every node runs a copy of the same Pod. Typical use case: installing a data collection agent on every node.

  1. YAML
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-test 
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: logs
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: varlog
          mountPath: /tmp/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
  2. Deploy and inspect
[root@cloud-mn01 ~]# kubectl apply -f ds.yaml 
daemonset.apps/ds-test created
[root@cloud-mn01 ~]# kubectl get daemonset
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds-test   2         2         2       2            2           <none>          7m22s
[root@cloud-mn01 ~]# kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
ds-test-dxd5b   1/1     Running   0          7m33s   10.244.2.18   cloud-dn02   <none>           <none>
ds-test-lp9js   1/1     Running   0          7m33s   10.244.1.16   cloud-dn01   <none>           <none>
[root@cloud-mn01 ~]#
  3. Exec into the container and observe
# Exec into the container
[root@cloud-mn01 ~]# kubectl exec -it ds-test-lp9js bash
root@ds-test-lp9js:/# ls /tmp/log        
anaconda	   boot.log-20210727  cron	     firewalld		 maillog	    pods     secure-20210727   tuned
audit		   boot.log-20210728  cron-20210727  grubby		 maillog-20210727   qemu-ga  spooler	       wtmp
boot.log	   btmp		      dmesg	     grubby_prune_debug  messages	    rhsm     spooler-20210727  yum.log
boot.log-20210720  containers	      dmesg.old      lastlog		 messages-20210727  secure   tallylog
root@ds-test-lp9js:/# 

# Simulate a log file being written on the node
[root@cloud-dn01 ~]# echo "this is a test" > /var/log/test.log
[root@cloud-dn01 ~]# 

# Check from inside the Pod (changes to files on the node are visible in the Pod)
root@ds-test-lp9js:/# cat /tmp/log/test.log
this is a test
root@ds-test-lp9js:/# 

3.2.4 Job (one-off tasks)

  1. YAML
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
  2. Create
[root@cloud-mn01 ~]# kubectl create -f job.yaml 
job.batch/pi created
[root@cloud-mn01 ~]# kubectl get pods
NAME            READY   STATUS              RESTARTS   AGE
ds-test-dxd5b   1/1     Running             0          44m
ds-test-lp9js   1/1     Running             0          44m
pi-lmzjq        0/1     ContainerCreating   0          19s
[root@cloud-mn01 ~]# kubectl get pods
NAME            READY   STATUS      RESTARTS   AGE
ds-test-dxd5b   1/1     Running     0          45m
ds-test-lp9js   1/1     Running     0          45m
pi-lmzjq        0/1     Completed   0          2m1s
[root@cloud-mn01 ~]# kubectl get jobs
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           81s        2m3s
[root@cloud-mn01 ~]# 
  3. View the result
[root@cloud-mn01 ~]# kubectl logs pi-lmzjq
3.1415926535897932...
[root@cloud-mn01 ~]# 
  4. Delete
[root@cloud-mn01 ~]# kubectl delete -f job.yaml 
job.batch "pi" deleted
[root@cloud-mn01 ~]# 

3.2.5 CronJob (scheduled tasks)

  1. YAML
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
  2. Create, inspect, and delete
[root@cloud-mn01 ~]# kubectl apply -f cronjob.yaml 
cronjob.batch/hello created
[root@cloud-mn01 ~]# kubectl get pods
NAME                     READY   STATUS      RESTARTS   AGE
ds-test-dxd5b            1/1     Running     0          51m
ds-test-lp9js            1/1     Running     0          51m
hello-1627443540-4rjl5   0/1     Completed   0          25s
[root@cloud-mn01 ~]# kubectl get cronjobs
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     0        47s             3m10s
[root@cloud-mn01 ~]# kubectl logs hello-1627443540-4rjl5
Wed Jul 28 03:39:23 UTC 2021
Hello from the Kubernetes cluster
[root@cloud-mn01 ~]# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
ds-test-dxd5b            1/1     Running             0          53m
ds-test-lp9js            1/1     Running             0          53m
hello-1627443540-4rjl5   0/1     Completed           0          2m12s
hello-1627443600-bw4kw   0/1     Completed           0          71s
hello-1627443660-nwql2   0/1     ContainerCreating   0          11s
[root@cloud-mn01 ~]# kubectl delete -f cronjob.yaml 
cronjob.batch "hello" deleted
[root@cloud-mn01 ~]# 

3.2.6 Secret (sensitive data)

Sensitive data is stored in etcd and exposed to Pod containers, for example as environment variables or as a mounted Volume.

  1. Create the Secret (values are base64-encoded)
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: YWJjZDEyMzQuLg==
[root@cloud-mn01 ~]# kubectl create -f secret.yaml 
secret/mysecret created
[root@cloud-mn01 ~]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-29kgd   kubernetes.io/service-account-token   3      10d
mysecret              Opaque                                2      13s
[root@cloud-mn01 ~]# 
  2. Consume as environment variables
apiVersion: v1
kind: Pod
metadata:
  name: secret-var
spec:
  containers:
  - name: nginx
    image: nginx
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: password
[root@cloud-mn01 ~]# kubectl apply -f secret_var.yaml 
pod/secret-var created
[root@cloud-mn01 ~]# kubectl get pods
NAME         READY   STATUS              RESTARTS   AGE
secret-var   0/1     ContainerCreating   0          10s
[root@cloud-mn01 ~]# kubectl exec -it secret-var bash
root@secret-var:/# echo $SECRET_USERNAME
admin
root@secret-var:/# echo $SECRET_PASSWORD
abcd1234..
root@secret-var:/# 
  3. Consume as a mounted volume
apiVersion: v1
kind: Pod
metadata:
  name: secret-vol
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
[root@cloud-mn01 ~]# kubectl apply -f secret-vol.yaml 
pod/secret-vol created
[root@cloud-mn01 ~]# kubectl get pods
NAME         READY   STATUS              RESTARTS   AGE
secret-var   1/1     Running             0          2m44s
secret-vol   0/1     ContainerCreating   0          5s
[root@cloud-mn01 ~]# kubectl exec -it secret-vol bash
root@secret-vol:/# cat /etc/foo/username
adminroot@secret-vol:/# cat /etc/foo/password
abcd1234..root@secret-vol:/# 

3.2.7 ConfigMap (non-sensitive configuration)

  1. Create from a configuration file
[root@cloud-mn01 ~]# cat redis.properties 
redis.host=simwor.com
redis.port=6379
redis.password=abcd1234..
[root@cloud-mn01 ~]# kubectl create configmap redis-config --from-file=redis.properties
configmap/redis-config created
[root@cloud-mn01 ~]# kubectl get configmap
NAME           DATA   AGE
redis-config   1      9s
[root@cloud-mn01 ~]# kubectl describe configmap redis-config
Name:         redis-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
redis.properties:
----
redis.host=simwor.com
redis.port=6379
redis.password=abcd1234..

Events:  <none>
[root@cloud-mn01 ~]# 
  2. Consume as a volume
[root@cloud-mn01 ~]# cat configmap-vol.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: configmap-vol
spec:
  containers:
    - name: busybox
      image: busybox
      command: [ "/bin/sh","-c","cat /etc/config/redis.properties" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: redis-config
  restartPolicy: Never
[root@cloud-mn01 ~]# kubectl apply -f configmap-vol.yaml 
pod/configmap-vol created
[root@cloud-mn01 ~]# kubectl logs configmap-vol
redis.host=simwor.com
redis.port=6379
redis.password=abcd1234..
[root@cloud-mn01 ~]# 
  3. Create key-value pairs from YAML
[root@cloud-mn01 ~]# cat myconfig.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: default
data:
  special.level: info
  special.type: hello
[root@cloud-mn01 ~]# kubectl apply -f myconfig.yaml 
configmap/myconfig created
[root@cloud-mn01 ~]# kubectl describe configmap myconfig
Name:         myconfig
Namespace:    default
Labels:       <none>
Annotations:  
Data
====
special.level:
----
info
special.type:
----
hello
Events:  <none>
[root@cloud-mn01 ~]# 
  4. Consume as environment variables
[root@cloud-mn01 ~]# cat configmap-val.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: configmap-val
spec:
  containers:
    - name: busybox
      image: busybox
      command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
      env:
        - name: LEVEL
          valueFrom:
            configMapKeyRef:
              name: myconfig
              key: special.level
        - name: TYPE
          valueFrom:
            configMapKeyRef:
              name: myconfig
              key: special.type
  restartPolicy: Never
[root@cloud-mn01 ~]# kubectl apply -f configmap-val.yaml 
pod/configmap-val created
[root@cloud-mn01 ~]# kubectl logs configmap-val
info hello
[root@cloud-mn01 ~]# 

3.3 Service

Services provide service discovery and load balancing for Pods.


  • Associating Pods with a Service
# Pod
labels:
  app: nginx

# Service
selector:
  app: nginx
  • Common types
  1. ClusterIP: for access from inside the cluster (a minimal sketch follows)
  2. NodePort: for access from outside the cluster
  3. LoadBalancer: for external access through a public cloud load balancer
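A minimal ClusterIP sketch (the NodePort variant was already shown in section 2.1; the name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nginx-internal        # hypothetical name
spec:
  type: ClusterIP             # the default type; reachable only inside the cluster
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80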

3.4 Security

Every request for a K8s resource goes through the api-server and must pass three stages: authentication, authorization, and admission control.

  • The three stages
  1. Authentication: HTTPS client-certificate authentication based on the cluster CA, HTTP bearer-token authentication, or HTTP basic authentication with username and password;
  2. Authorization: RBAC (Role-Based Access Control);
  3. Admission control: a list of admission controllers; the request is admitted only if it passes them.
  • RBAC
  1. Roles: Role grants access within a specific namespace, ClusterRole grants access across all namespaces;
  2. Role bindings: RoleBinding binds a role to subjects, ClusterRoleBinding binds a cluster role to subjects;
  3. Subjects: users, groups, and service accounts.
  • Namespaces
  1. Create a namespace
[root@cloud-mn01 ~]# kubectl create namespace roledemo
namespace/roledemo created
[root@cloud-mn01 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   11d
kube-node-lease   Active   11d
kube-public       Active   11d
kube-system       Active   11d
roledemo          Active   13s
[root@cloud-mn01 ~]# 
  2. Create a Pod in that namespace
[root@cloud-mn01 ~]# kubectl create deployment webapp --image=nginx -n roledemo
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl get pod -n roledemo
NAME                      READY   STATUS    RESTARTS   AGE
webapp-59d9889648-b8mwl   1/1     Running   0          61s
[root@cloud-mn01 ~]# 
  3. Create a role
[root@cloud-mn01 ~]# cat role.yaml 
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: roledemo
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
[root@cloud-mn01 ~]# kubectl create -f role.yaml 
role.rbac.authorization.k8s.io/pod-reader created
[root@cloud-mn01 ~]# kubectl get role -n roledemo
NAME         CREATED AT
pod-reader   2021-07-30T01:36:03Z
[root@cloud-mn01 ~]# 
  4. Bind the role to a subject (a verification sketch follows the transcript)
[root@cloud-mn01 ~]# cat role-binding.yaml 
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: roledemo
subjects:
- kind: User
  name: lucy # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role #this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
[root@cloud-mn01 ~]# kubectl create -f role-binding.yaml 
rolebinding.rbac.authorization.k8s.io/read-pods created
[root@cloud-mn01 ~]# kubectl get rolebinding -n roledemo
NAME        ROLE              AGE
read-pods   Role/pod-reader   19s
[root@cloud-mn01 ~]# 
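As an optional, hedged sanity check, you can ask the API server whether the bound user now has the expected permissions; the --as flag impersonates the user:

kubectl auth can-i list pods --namespace roledemo --as lucy      # expected: yes
kubectl auth can-i delete pods --namespace roledemo --as lucy    # expected: no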

4. Advanced Topics

4.1 Ingress

  • Motivation
  1. A Service of type NodePort provides external access by exposing a port on the node;
  2. NodePort opens that port on every node, and each port number can only be used once;
  3. In practice, access is by domain name, with different domains routed to different backend services
  • Relationship to Pods
  1. Pods and the ingress are linked through a Service;
  2. The ingress is the unified entry point; a Service associates it with a group of Pods.


  • Usage
  1. Create an application and expose it
[root@cloud-mn01 ~]# kubectl create deployment webapp --image=nginx
deployment.apps/webapp created
[root@cloud-mn01 ~]# kubectl expose deployment webapp --port=80 --target-port=80 --type=NodePort
service/webapp exposed
[root@cloud-mn01 ~]# 
  2. Deploy the ingress controller
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: lizhenliang/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container
[root@cloud-mn01 ~]# kubectl apply -f ingress-controller.yaml 
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
[root@cloud-mn01 ~]# kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-766fb9f77-l2jbh   0/1     Running   0          47s
[root@cloud-mn01 ~]# 
  3. Create an ingress rule
[root@cloud-mn01 ~]# cat ingress-rule.yaml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.ingressdemo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp
          servicePort: 80
[root@cloud-mn01 ~]# kubectl apply -f ingress-rule.yaml 
ingress.networking.k8s.io/example-ingress created
[root@cloud-mn01 ~]# kubectl get ingress
NAME              CLASS    HOSTS                     ADDRESS   PORTS   AGE
example-ingress   <none>   example.ingressdemo.com             80      11s
[root@cloud-mn01 ~]# 
  4. Access the Pod by domain name through the ingress
[root@cloud-mn01 ~]# kubectl get pods -o wide -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
nginx-ingress-controller-766fb9f77-l2jbh   1/1     Running   0          14m   192.168.1.202   cloud-dn02   <none>           <none>
[root@cloud-mn01 ~]# kubectl get ingress
NAME              CLASS    HOSTS                     ADDRESS   PORTS   AGE
example-ingress   <none>   example.ingressdemo.com             80      8m57s
[root@cloud-mn01 ~]# 

The ingress controller is running on cloud-dn02 (192.168.1.202), so adding "192.168.1.202 example.ingressdemo.com" to the Windows hosts file makes the application reachable by domain name.


4.2 Helm

  • Motivation
  1. Deploying an application the ordinary way means writing a set of YAML files (deployment, service, ingress);
  2. A cloud-native microservice project may contain dozens of applications, which means maintaining a large number of YAML files with no convenient version management;
  3. Helm manages these YAML files as a unit, enables efficient reuse, and provides application-level version management
  • Overview

Helm is a package manager for Kubernetes, analogous to Linux package managers such as yum/apt; it makes it easy to deploy pre-packaged YAML files to Kubernetes.

  1. helm: a command-line client used to create, package, publish, and manage Kubernetes application charts.
  2. Chart: an application description; a collection of files describing the related k8s resources.
  3. Release: a deployed instance of a Chart; running a chart with Helm produces a release, which creates the actual resource objects in k8s.
  • Architecture


  • Installation
  1. Unpack
[root@cloud-mn01 ~]# tar -zxf helm-v3.0.0-linux-amd64.tar.gz 
[root@cloud-mn01 ~]# ll
total 11800
-rw-r--r-- 1 root root 12082866 Sep  7  2020 helm-v3.0.0-linux-amd64.tar.gz
drwxr-xr-x 2 3434 3434       50 Nov 13  2019 linux-amd64
[root@cloud-mn01 ~]# mv linux-amd64/helm /usr/bin
  2. Add repositories
[root@cloud-mn01 ~]# helm repo add stable https://charts.helm.sh/stable
"stable" has been added to your repositories
[root@cloud-mn01 ~]# helm repo add kubelog https://charts.kubelog.com/stable
"kubelog" has been added to your repositories
[root@cloud-mn01 ~]# 
[root@cloud-mn01 ~]# 
[root@cloud-mn01 ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "kubelog" chart repository
Update Complete. ⎈ Happy Helming!
[root@cloud-mn01 ~]# helm repo list
NAME   	URL                              
stable 	https://charts.helm.sh/stable    
kubelog	https://charts.kubelog.com/stable
[root@cloud-mn01 ~]# 
  • Common commands

Command      Description
create       Create a chart with the given name
dependency   Manage chart dependencies
get          Download a release; subcommands: all, hooks, manifest, notes, values
history      Fetch release history
install      Install a chart
list         List releases
package      Package a chart directory into a chart archive
pull         Download a chart from a remote repository and optionally unpack it locally (e.g. helm pull stable/mysql --untar)
repo         Add, list, remove, update, and index chart repositories; subcommands: add, index, list, remove, update
rollback     Roll back to a previous revision
search       Search for charts by keyword; subcommands: hub, repo
show         Show chart details; subcommands: all, chart, readme, values
status       Show the status of a named release
template     Render templates locally
uninstall    Uninstall a release
upgrade      Upgrade a release
version      Show the helm client version
  • Quick application deployment
  1. Deploy the application
[root@cloud-mn01 ~]# helm search repo weave
NAME               	CHART VERSION	APP VERSION	DESCRIPTION                                       
kubelog/weave-cloud	0.3.9        	1.4.0      	DEPRECATED - Weave Cloud is a add-on to Kuberne...
kubelog/weave-scope	1.1.12       	1.12.0     	DEPRECATED - A Helm chart for the Weave Scope c...
stable/weave-cloud 	0.3.9        	1.4.0      	DEPRECATED - Weave Cloud is a add-on to Kuberne...
stable/weave-scope 	1.1.12       	1.12.0     	DEPRECATED - A Helm chart for the Weave Scope c...

[root@cloud-mn01 ~]# helm install ui stable/weave-scope
NAME: ui
LAST DEPLOYED: Fri Jul 30 12:57:02 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You should now be able to access the Scope frontend in your web browser, by
using kubectl port-forward:

kubectl -n default port-forward $(kubectl -n default get endpoints \
ui-weave-scope -o jsonpath='{.subsets[0].addresses[0].targetRef.name}') 8080:4040

then browsing to http://localhost:8080/.
For more details on using Weave Scope, see the Weave Scope documentation:

https://www.weave.works/docs/scope/latest/introducing/

[root@cloud-mn01 ~]# helm list
NAME	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART             	APP VERSION
ui  	default  	1       	2021-07-30 12:57:02.102575712 +0800 CST	deployed	weave-scope-1.1.12	1.12.0     

[root@cloud-mn01 ~]# helm status ui
NAME: ui
LAST DEPLOYED: Fri Jul 30 12:57:02 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You should now be able to access the Scope frontend in your web browser, by
using kubectl port-forward:

kubectl -n default port-forward $(kubectl -n default get endpoints \
ui-weave-scope -o jsonpath='{.subsets[0].addresses[0].targetRef.name}') 8080:4040

then browsing to http://localhost:8080/.
For more details on using Weave Scope, see the Weave Scope documentation:

https://www.weave.works/docs/scope/latest/introducing/
[root@cloud-mn01 ~]#
  2. Expose it externally
[root@cloud-mn01 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   11d
ui-weave-scope   ClusterIP   10.105.66.201   <none>        80/TCP    5m36s
[root@cloud-mn01 ~]# kubectl edit svc ui-weave-scope

# spec: type: ClusterIP -> NodePort
spec:
  clusterIP: 10.105.66.201
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: weave-scope
    component: frontend
    release: ui
  sessionAffinity: None
  type: NodePort

service/ui-weave-scope edited
[root@cloud-mn01 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP        11d
ui-weave-scope   NodePort    10.105.66.201   <none>        80:30657/TCP   6m24s
[root@cloud-mn01 ~]# 
  3. View


  • Deploying with a custom chart
  1. Create a chart skeleton
[root@cloud-mn01 ~]# helm create mychart
Creating mychart
[root@cloud-mn01 ~]# ll mychart/
total 8
drwxr-xr-x 2 root root    6 Jul 30 13:10 charts
-rw-r--r-- 1 root root  905 Jul 30 13:10 Chart.yaml
drwxr-xr-x 3 root root  146 Jul 30 13:10 templates
-rw-r--r-- 1 root root 1490 Jul 30 13:10 values.yaml
[root@cloud-mn01 ~]# 

# Chart.yaml - configuration and metadata for this chart
# templates - the template YAML files
# values.yaml - global variables available to the template files
  2. Create the templates
[root@cloud-mn01 mychart]# kubectl create deployment webapp --image=nginx --dry-run=client -o yaml > templates/deployment.yaml 

[root@cloud-mn01 mychart]# kubectl expose deployment webapp --port=80 --target-port=80 --type=NodePort --dry-run=client -o yaml > templates/service.yaml 
Error from server (NotFound): deployments.apps "webapp" not found
[root@cloud-mn01 mychart]# kubectl create deployment webapp --image=nginx
[root@cloud-mn01 mychart]# kubectl expose deployment webapp --port=80 --target-port=80 --type=NodePort --dry-run=client -o yaml > templates/service.yaml 
[root@cloud-mn01 mychart]# kubectl delete deployment webapp
deployment.apps "webapp" deleted
[root@cloud-mn01 mychart]#
  3. Install the custom chart
[root@cloud-mn01 ~]# helm install webapp mychart
NAME: webapp
LAST DEPLOYED: Fri Jul 30 13:25:03 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mychart,app.kubernetes.io/instance=webapp" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
[root@cloud-mn01 ~]# kubectl get deployments
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
weave-scope-cluster-agent-ui   1/1     1            1           28m
weave-scope-frontend-ui        1/1     1            1           28m
webapp                         0/1     1            0           13s
[root@cloud-mn01 ~]# kubectl get pods
NAME                                            READY   STATUS              RESTARTS   AGE
weave-scope-agent-ui-4mdsb                      1/1     Running             0          28m
weave-scope-agent-ui-cps6j                      1/1     Running             0          28m
weave-scope-agent-ui-lh4mh                      1/1     Running             0          28m
weave-scope-cluster-agent-ui-7498b8d4f4-7gj92   1/1     Running             0          28m
weave-scope-frontend-ui-649c7dcd5d-kwbtn        1/1     Running             0          28m
webapp-59d9889648-m78pb                         0/1     ContainerCreating   0          17s
[root@cloud-mn01 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP        11d
ui-weave-scope   NodePort    10.105.66.201   <none>        80:30657/TCP   28m
webapp           NodePort    10.97.72.40     <none>        80:32391/TCP   21s
[root@cloud-mn01 ~]# 
  4. Upgrade
[root@cloud-mn01 ~]# vi mychart/templates/deployment.yaml 
[root@cloud-mn01 ~]# grep replicas mychart/templates/deployment.yaml 
  replicas: 3
[root@cloud-mn01 ~]# helm upgrade webapp mychart
Release "webapp" has been upgraded. Happy Helming!
NAME: webapp
LAST DEPLOYED: Fri Jul 30 13:26:31 2021
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mychart,app.kubernetes.io/instance=webapp" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
[root@cloud-mn01 ~]# kubectl get pods
NAME                                            READY   STATUS              RESTARTS   AGE
weave-scope-agent-ui-4mdsb                      1/1     Running             0          29m
weave-scope-agent-ui-cps6j                      1/1     Running             0          29m
weave-scope-agent-ui-lh4mh                      1/1     Running             0          29m
weave-scope-cluster-agent-ui-7498b8d4f4-7gj92   1/1     Running             0          29m
weave-scope-frontend-ui-649c7dcd5d-kwbtn        1/1     Running             0          29m
webapp-59d9889648-5zwxr                         0/1     ContainerCreating   0          5s
webapp-59d9889648-m78pb                         1/1     Running             0          93s
webapp-59d9889648-wp946                         0/1     ContainerCreating   0          5s
[root@cloud-mn01 ~]# 
  • Reusing templates efficiently

Define variables in values.yaml and have the fields that vary in the templates take their values from those variables; rendering the templates dynamically makes them highly reusable.
  1. Define variables
[root@cloud-mn01 ~]# cat mychart/values.yaml 
image: nginx
label: nginx
port: 80
[root@cloud-mn01 ~]# 
  2. Use the variables in the templates
[root@cloud-mn01 ~]# cat mychart/templates/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: {{ .Values.label}}
  name: {{ .Release.Name}}
spec:
  replicas: 3
  selector:
    matchLabels:
      app: {{ .Values.label}}
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: {{ .Values.label}}
    spec:
      containers:
      - image: {{ .Values.image}}
        name: nginx
        resources: {}
status: {}
[root@cloud-mn01 ~]# 
[root@cloud-mn01 ~]# cat mychart/templates/service.yaml 
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: {{ .Values.label}}
  name: {{ .Release.Name}}
spec:
  ports:
  - port: {{ .Values.port}}
    protocol: TCP
    targetPort: 80
  selector:
    app: {{ .Values.label}}
  type: NodePort
status:
  loadBalancer: {}
[root@cloud-mn01 ~]# 
  3. Install the application
[root@cloud-mn01 ~]# helm install webapp1 --dry-run mychart
NAME: webapp1
LAST DEPLOYED: Fri Jul 30 13:47:25 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
HOOKS:
MANIFEST:
---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: webapp1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
status:
  loadBalancer: {}
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: webapp1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}

[root@cloud-mn01 ~]# helm install webapp1 mychart
NAME: webapp1
LAST DEPLOYED: Fri Jul 30 13:47:58 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

[root@cloud-mn01 ~]# kubectl get deployment webapp1
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
webapp1   0/3     3            0           12s
[root@cloud-mn01 ~]# kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
weave-scope-agent-ui-4mdsb                      1/1     Running   0          52m
weave-scope-agent-ui-cps6j                      1/1     Running   0          52m
weave-scope-agent-ui-lh4mh                      1/1     Running   0          52m
weave-scope-cluster-agent-ui-7498b8d4f4-7gj92   1/1     Running   0          52m
weave-scope-frontend-ui-649c7dcd5d-kwbtn        1/1     Running   0          52m
webapp-59d9889648-5zwxr                         1/1     Running   0          22m
webapp-59d9889648-m78pb                         1/1     Running   0          24m
webapp-59d9889648-wp946                         1/1     Running   0          22m
webapp1-f89759699-6js79                         1/1     Running   0          85s
webapp1-f89759699-pwn5s                         1/1     Running   0          85s
webapp1-f89759699-wdrwd                         1/1     Running   0          85s
[root@cloud-mn01 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP        11d
ui-weave-scope   NodePort    10.105.66.201   <none>        80:30657/TCP   52m
webapp           NodePort    10.97.72.40     <none>        80:32391/TCP   24m
webapp1          NodePort    10.97.85.235    <none>        80:31916/TCP   88s
[root@cloud-mn01 ~]# 

4.3 Persistent Storage

  • NFS
  1. Configure NFS

NFS is deployed here on the management node; nfs-utils must also be installed on every worker node.

[root@cloud-mn01 ~]# yum install nfs-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.ustc.edu.cn
 * updates: mirrors.163.com
Package 1:nfs-utils-1.3.0-0.68.el7.1.x86_64 already installed and latest version
Nothing to do
[root@cloud-mn01 ~]# cat /etc/exports
/data/nfs *(rw,no_root_squash)
[root@cloud-mn01 ~]# mkdir -p /data/nfs
[root@cloud-mn01 ~]# systemctl start nfs
  2. Use NFS as persistent storage
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
        - name: wwwroot
          nfs:
            server: 192.168.1.101
            path: /data/nfs
[root@cloud-mn01 ~]# kubectl apply -f nfs-test.yaml 
deployment.apps/nfs-test created
[root@cloud-mn01 ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
nfs-test-5c77cd745-qhqk7   1/1     Running   0          50s
[root@cloud-mn01 ~]# kubectl get deployment
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nfs-test   1/1     1            1           58s
[root@cloud-mn01 ~]# kubectl expose deployment nfs-test --port=80 --target-port=80 --type=NodePort
service/nfs-test exposed
[root@cloud-mn01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        11d
nfs-test     NodePort    10.106.16.61   <none>        80:30716/TCP   7s

[root@cloud-mn01 ~]# echo "Hello NFS" > /data/nfs/index.html
[root@cloud-mn01 ~]# curl cloud-dn01:30716
Hello NFS
[root@cloud-mn01 ~]# 
  • PV

NFS exposes the server address and directory directly; it is preferable to request storage through a PV and consume it through a PVC, so the workload does not need to know where the storage server is.

  1. Overview
    1.1 PV (PersistentVolume): an abstraction over storage resources that exposes them for consumption;
    1.2 PVC (PersistentVolumeClaim): gives Pods persistent storage by claiming a PV, without caring about the PV's details;
    1.3 The flow is as follows:


  2. Create the PV and PVC
[root@cloud-mn01 ~]# cat pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /data/nfs
    server: 192.168.1.101
    
[root@cloud-mn01 ~]# cat pvc.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pvc-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: my-pvc

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
[root@cloud-mn01 ~]# 
[root@cloud-mn01 ~]# kubectl apply -f pv.yaml 
persistentvolume/my-pv created
[root@cloud-mn01 ~]# kubectl apply -f pvc.yaml 
deployment.apps/pvc-test created
persistentvolumeclaim/my-pvc created
[root@cloud-mn01 ~]# kubectl get pv,pvc
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
persistentvolume/my-pv   5Gi        RWX            Retain           Bound    default/my-pvc                           14s

NAME                           STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/my-pvc   Bound    my-pv    5Gi        RWX                           11s
[root@cloud-mn01 ~]# 
  3. Verification
[root@cloud-mn01 ~]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
pvc-test-58b7bf955f-blpvb   1/1     Running   0          100s
[root@cloud-mn01 ~]# kubectl get deployments
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
pvc-test   1/1     1            1           109s

[root@cloud-mn01 ~]# echo "Hello PV" > /data/nfs/index.html 
[root@cloud-mn01 ~]# kubectl exec -it pvc-test-58b7bf955f-blpvb cat /usr/share/nginx/html/index.html
Hello PV
[root@cloud-mn01 ~]# 
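Once the test is done, the claim and the volume can be cleaned up in that order (a sketch; with the Retain reclaim policy shown above, the files under /data/nfs are kept):

kubectl delete -f pvc.yaml   # removes the pvc-test Deployment and the PVC
kubectl delete -f pv.yaml    # removes the PV object; the data on the NFS export is retained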

4.4 Cluster Monitoring

  • Monitoring metrics
  1. Cluster level: node resource utilization, node count, running Pods
  2. Pod level: container metrics, application metrics
  • Monitoring platform

Prometheus + Grafana: the former periodically scrapes monitoring data, the latter renders it as readable dashboards.

  1. Prometheus: periodically scrapes the state of monitored components over HTTP, so no intrusive integration is needed; targets can opt in through annotations, as sketched after this list;
  2. Grafana: an analysis and visualization tool that supports many data sources.
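The scrape configuration used below relies on the common prometheus.io/* annotations: any Service (or Pod) carrying them is discovered automatically by the matching job. A minimal sketch, where the service name and port are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: my-app                      # hypothetical application service
  annotations:
    prometheus.io/scrape: "true"    # matched by the 'kubernetes-service-endpoints' job in the ConfigMap below
    prometheus.io/port: "8080"      # hypothetical metrics port, rewritten into the scrape address by relabeling
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080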
  • Deployment
[root@cloud-mn01 ~]# kubectl apply -f node-exportor.yaml 
daemonset.apps/node-exporter created
service/node-exporter created

[root@cloud-mn01 prometheus]# kubectl apply -f rbac-setup.yaml 
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
[root@cloud-mn01 prometheus]# kubectl apply -f configmap.yaml 
configmap/prometheus-config created
[root@cloud-mn01 prometheus]# kubectl apply -f prometheus.deploy.yml 
deployment.apps/prometheus created
[root@cloud-mn01 prometheus]# kubectl apply -f prometheus.svc.yml 
service/prometheus created
[root@cloud-mn01 ~]# kubectl get deployments -n kube-system
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
coredns      2/2     2            2           12d
prometheus   1/1     1            1           60s
[root@cloud-mn01 ~]# 

[root@cloud-mn01 grafana]# kubectl apply -f grafana-deploy.yaml 
deployment.apps/grafana-core created
[root@cloud-mn01 grafana]# kubectl apply -f grafana-svc.yaml 
service/grafana created
[root@cloud-mn01 grafana]# kubectl apply -f grafana-ing.yaml 
ingress.extensions/grafana created
[root@cloud-mn01 grafana]# kubectl get deployments -n kube-system
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
coredns        2/2     2            2           12d
grafana-core   0/1     1            0           21s
prometheus     1/1     1            1           4m3s
[root@cloud-mn01 grafana]# 
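Before moving on, wait for all three workloads to become ready (grafana-core above is still starting). A quick check:

kubectl get pods -n kube-system | grep -E 'prometheus|grafana|node-exporter'

The manifests applied above are reproduced below for reference.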
[root@cloud-mn01 ~]# cat node-exportor.yaml 
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter

[root@cloud-mn01 ~]# 
[root@cloud-mn01 ~]# 
[root@cloud-mn01 ~]# cat prometheus/rbac-setup.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
[root@cloud-mn01 ~]# cat prometheus/configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval:     15s
      evaluation_interval: 15s
    scrape_configs:

    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https

    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
[root@cloud-mn01 ~]# cat prometheus/prometheus.deploy.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus    
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config   
[root@cloud-mn01 ~]# cat prometheus/prometheus.svc.yml 
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus
[root@cloud-mn01 ~]# 
[root@cloud-mn01 ~]# 
[root@cloud-mn01 ~]# cat grafana/grafana-deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
      component: core
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:4.2.0
        name: grafana-core
        imagePullPolicy: IfNotPresent
        # env:
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var
      volumes:
      - name: grafana-persistent-storage
        emptyDir: {}
[root@cloud-mn01 ~]# cat grafana/grafana-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
    - port: 3000
  selector:
    app: grafana
    component: core
[root@cloud-mn01 ~]# cat grafana/grafana-ing.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
   name: grafana
   namespace: kube-system
spec:
   rules:
   - host: k8s.grafana
     http:
       paths:
       - path: /
         backend:
          serviceName: grafana
          servicePort: 3000
[root@cloud-mn01 ~]# 
  • Configuration
  1. Check the exposed ports
[root@cloud-mn01 ~]# kubectl get svc -n kube-system
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
grafana         NodePort    10.99.175.201   <none>        3000:32531/TCP           107s
kube-dns        ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   12d
node-exporter   NodePort    10.106.35.20    <none>        9100:31672/TCP           9m53s
prometheus      NodePort    10.96.200.111   <none>        9090:30003/TCP           5m27s
[root@cloud-mn01 ~]# 
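Before configuring Grafana it is worth confirming that metrics are actually flowing: node-exporter answers on its NodePort, and the Prometheus UI (port 30003 above) should show the targets as UP. A quick sanity check, using a node IP from the cluster plan:

curl -s http://192.168.1.201:31672/metrics | head
# Prometheus targets page: http://192.168.1.201:30003/targets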
  2. Log in to Grafana

http://192.168.1.201:32531/login (default username admin, password admin; 32531 is the Grafana NodePort shown above)

(figure: Grafana login page)

  3. Add a Grafana data source

(figures: adding the Prometheus data source in Grafana)
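When adding the Prometheus data source, the URL can point either at the ClusterIP listed above or at the service's in-cluster DNS name; in Grafana's proxy access mode the URL is resolved from inside the cluster:

http://10.96.200.111:9090
# or, via kube-dns:
http://prometheus.kube-system.svc.cluster.local:9090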

  4. Import a monitoring dashboard

Dashboards -> Import -> Grafana.net Dashboard -> 315

(figure: importing dashboard 315 from Grafana.net)

  5. Result

(figure: Kubernetes cluster monitoring dashboard rendered in Grafana)
