k8s Study Notes

I. Pre-installation Preparation

(Install Docker first; every machine in the cluster needs it.)

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, as well as some distributions without a package manager.
  • 2 GB or more of RAM per machine (less than this will limit the memory available to your applications).
  • 2 or more CPU cores.
  • Full network connectivity between all machines in the cluster (public or private network is fine).
  • Unique hostname, MAC address, and product_uuid for every node. See here for more details.
  • Certain ports must be open on the machines. See here for more details.
  • Swap disabled. You must disable swap for the kubelet to work properly.

Run the following on every machine

# Give each machine in the cluster its own hostname; avoid duplicates
hostnamectl set-hostname xxxx

# Set SELinux to permissive mode (effectively disabling it) (a Linux security setting)
# Disable temporarily
sudo setenforce 0
# Disable permanently
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Check current memory usage (-m shows values in MiB)
free -m

# Turn off swap
# Temporarily
swapoff -a
# Permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the sysctl settings
sudo sysctl --system
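
A quick hedged check that the settings above took effect (standard commands; both values should print 1):

# the br_netfilter module should be loaded and the two bridge sysctls set to 1
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables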

II. Installation

 1. Install kubelet, kubeadm, and kubectl

Run on every machine

# Tell the system where to download Kubernetes packages from
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Install the packages
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

# Start kubelet and enable it at boot
# After starting, systemctl status kubelet will show it endlessly flipping between starting and stopping; it is stuck in a loop waiting for instructions from kubeadm, which is expected at this point
sudo systemctl enable --now kubelet
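
To watch that wait-for-kubeadm loop yourself, a hedged sketch using standard systemd commands:

# kubelet keeps restarting until kubeadm init/join hands it a configuration
systemctl status kubelet
journalctl -u kubelet -f    # Ctrl+C to stop following the log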

2. Bootstrap the cluster with kubeadm

1. Pull the images each machine needs

Run on every machine

# Apart from kubelet, every component runs as a container image; kubelet is what pulls and runs those images
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
   
chmod +x ./images.sh && ./images.sh
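
Optionally verify the pull (a hedged check; the registry prefix is the one used in the script above):

# all seven images should show up locally
docker images | grep registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images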

2. Initialize the master node

# On every machine, add a hosts mapping for the master
echo "192.168.31.27  cluster-endpoint" >> /etc/hosts



# Initialize the control plane
# Run only on the master node
# The IP on the second line must be the master's address; the domain on the third line must be the master's hostname mapping
kubeadm init \
--apiserver-advertise-address=192.168.31.27 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.169.0.0/16

# None of the network ranges may overlap; Docker already occupies 172.17.0.1/16 after installation

# On the master, this command checks whether the install succeeded

kubectl get nodes

After it finishes, save the output; you will need it later.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
# Step 1: just copy and run these
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

# Step 2: a Pod network plugin still has to be installed
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

# Add another master (control-plane) node
  kubeadm join cluster-endpoint:6443 --token 8yjd6q.r660tz3f0myr529a \
    --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

# Add a worker node; this token is valid for 24 hours
kubeadm join cluster-endpoint:6443 --token 8yjd6q.r660tz3f0myr529a \
    --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6
# Download the calico network plugin manifest
curl https://docs.projectcalico.org/manifests/calico.yaml -O
# kubectl version shows v1.20.9, which corresponds to calico v3.20
curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O

# During kubeadm init, --pod-network-cidr defaults to 192.168.0.0/16 but was changed to 192.169.0.0/16, so the matching setting in calico.yaml must be updated (a hedged sketch of the edit follows)
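
A hedged sketch of that edit (the exact lines and their indentation depend on the calico.yaml version you downloaded):

# in calico.yaml, find the commented-out pool setting:
#     # - name: CALICO_IPV4POOL_CIDR
#     #   value: "192.168.0.0/16"
# uncomment it, keep the surrounding indentation, and set the value to the Pod CIDR used at init:
#     - name: CALICO_IPV4POOL_CIDR
#       value: "192.169.0.0/16"
vi calico.yaml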

# Install calico into the cluster
kubectl apply -f calico.yaml

# See which applications are deployed in the cluster; what Docker calls containers, k8s runs as Pods
# -A shows all namespaces; without it only the default namespace is listed
# -w watches the output, e.g. you can see a Pod start initializing
# watch -n 1 kubectl get pods -A   re-checks the status every second
# kubectl get pod -owide shows more Pod detail, including the IP
kubectl get pods -A

3. Join worker nodes

# Run the join command you saved earlier on each worker machine
kubeadm join cluster-endpoint:6443 --token 8yjd6q.r660tz3f0myr529a \
    --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6

If it hangs here, turning off the firewall on the master node fixes it.

[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
        [WARNING Hostname]: hostname "k8s-node1" could not be reached
        [WARNING Hostname]: hostname "k8s-node1": lookup k8s-node1 on 192.168.31.1:53: no such host

What if the token has expired? Generate a new one (run on the master node):

kubeadm token create --print-join-command
kubeadm join cluster-endpoint:6443 --token qwzp8v.qfwfeh7x3pdc3a1r     --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6 

3. Deploy the Dashboard

1. Install from the master node

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

2. Expose an access port

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort

# Find the assigned port and open it in the security group
kubectl get svc -A |grep kubernetes-dashboard

Access: https://<any cluster node IP>:<port> (mine is 30427)

https://139.198.165.238:30427  

3. Create an access account

# Create an access account; prepare a yaml file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dash.yaml

4. Get the login token

# Get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"


eyJhbGciOiJSUzI1NiIsImtpZCI6IklYTTRxZHNTb0lkclltRnN0aDY2OXJ3RzlhUkxucjNISG1tbW44X3VFdVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWR6aHE0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjYzY0ODdiYy1mMWFhLTQwN2ItOTFkZC0yN2I3ODdlZGU2MjQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.d9rUEo5u0-DRYnXfUn3nRhVTncCWDsijRQYQwTmeNdL0U8Dv8k_yUrJ4W1kV2AP9VArt-pv4U3eXM2ts875CT-3L6vpg6JE42WDtJy4ama92NLiX4n7HFdugThhoowAV53Ac_6O4YaTc7o-TROplowLkHZ4hDjo9OYo1u21QhhGfq9uGkBz6jsvUhCe5oTpxFmmjimUN3_yUsUFf6nwS0dWk_d986A-de0hLfj4-wC1_soWpFVIK7j0wjHk2brQbultH07YPsXb-c_brixl0QvsUqtCka9OUxSQ1nlgCqoVVWK30RwSw7GbDkzh798zfkONu_ofHejw_srxvmeqoPw

III. Hands-on

1. Ways to create resources

  • Command line
  • YAML

2、Namespace

Namespaces divide and isolate cluster resources. By default they isolate only resources, not the network (isolating the network requires extra configuration).

# List the namespaces in the cluster
kubectl get ns          # ns is short for namespace

# List Pods in a given namespace; without -n it is default, -A means all. Resources created without a namespace go into default
kubectl get pods -n kubernetes-dashboard

# Create a custom namespace
kubectl create ns hello

# Delete a custom namespace. Do not delete the system ones; deleting default is refused. Deleting a namespace also deletes the resources inside it.
kubectl delete ns hello

Create a namespace from YAML

apiVersion: v1
kind: Namespace
metadata:
  name: hello

If it was created from YAML, it is best deleted with the same YAML

kubectl apply -f hello.yaml

kubectl delete -f hello.yaml

3. Pod

A group of running containers; the Pod is the smallest unit of an application in Kubernetes.

(k8s wraps Docker containers one more layer into a Pod; a Pod can hold one container or several that form a group, making up one atomic Pod.)

Command-line approach

# Create a Pod
kubectl run mynginx --image=nginx

# Show the Pod description
kubectl describe pod mynginx

Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned default/mynginx to k8s-node2
  Normal  Pulling    13m   kubelet            Pulling image "nginx"
  Normal  Pulled     12m   kubelet            Successfully pulled image "nginx" in 49.9333842s
  Normal  Created    12m   kubelet            Created container mynginx
  Normal  Started    12m   kubelet            Started container mynginx

# The Pod was scheduled onto the worker node k8s-node2; underneath it is still a Docker container, visible with docker ps

# Delete the Pod
kubectl delete pod mynginx
# -n specifies the namespace
#kubectl delete pod mynginx -n default
# Delete several Pods at once by separating the names with spaces
kubectl delete pod myapp mynginx -n default

Create a Pod from YAML

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mynginx
  name: mynginx
  namespace: default
spec:
  containers:
  - image: nginx
    name: mynginx

kubectl apply -f pod.yaml

kubectl delete -f pod.yaml

# View Pod logs; only Pods have logs, so the pod keyword is not needed. -f follows (streams) the log
kubectl logs mynginx
kubectl logs -f mynginx

# k8s assigns every Pod its own IP
# --pod-network-cidr=192.169.0.0/16 comes from the master-node init settings
# Reach it with the Pod IP plus the port of the container running inside the Pod
# Any machine in the cluster, and any application, can reach this Pod through its assigned IP
# At this point it is not reachable from outside the cluster
# curl 192.169.169.132
kubectl get pod -owide
NAME      READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
mynginx   1/1     Running   0          4m17s   192.169.169.132   k8s-node2   <none>           <none>

# Get a shell inside the Pod; you can also use the Exec button in the dashboard
kubectl exec -it mynginx -- /bin/bash

When creating a Pod from the dashboard, pick the namespace first, otherwise you must specify it in the YAML.

The dashboard pages offer logs, describe, delete, exec and similar operations, mirroring the commands above.

 Running multiple containers in one Pod

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat

# Check the IPs
kubectl get pod -owide
NAME      READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
myapp     2/2     Running   0          3m53s   192.169.36.66     k8s-node1   <none>           <none>
mynginx   1/1     Running   0          36m     192.169.169.132   k8s-node2   <none>           <none>

# Access nginx
curl 192.169.36.66

# Access tomcat
curl 192.169.36.66:8080

# Inside the Pod, the containers reach each other over 127.0.0.1
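
A hedged sketch of that localhost access (assumes a curl binary is available inside the nginx container image):

# from the nginx container, the tomcat container in the same Pod answers on localhost:8080
kubectl exec -it myapp -c nginx -- curl 127.0.0.1:8080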

 

 Test: starting two nginx containers in one Pod (port conflict)

# myapp-2 fails
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
myapp     2/2     Running   0          19m
myapp-2   1/2     Error     1          51s
mynginx   1/1     Running   0          51m

# Troubleshooting
kubectl describe pod myapp-2
# Check the logs, either from the command line or from the dashboard
# -c specifies the container name inside the Pod; required when there is more than one container
# Normal
kubectl logs -c nginx01 myapp-2
# Address already in use; k8s keeps retrying
kubectl logs -c nginx02 myapp-2

Note (--pod-network-cidr):

When the master node was initialized in II-2-2, --pod-network-cidr=192.169.0.0/16 was set, which is why all Pod IPs assigned during this study start with 192.169 (the Pod network).

Question to think about: what if you really need two nginx containers in one Pod? A custom port? How would you customize it?

4、Deployment

Controls Pods, giving them multiple replicas, self-healing, scaling and similar capabilities.

 1. Self-healing

# Create a Pod the plain way
kubectl run mynginx --image=nginx

# Create a Pod via a Deployment (deploy for short)
kubectl create deployment mytomcat --image=tomcat:8.5.68

# Compare the two approaches: k8s self-healing
# After kubectl delete pod mynginx, kubectl get pod shows mynginx is really gone
# With a Deployment the Pod gets a random name, e.g. mytomcat-6f5f895f4f-668dp; delete it and a new one starts immediately, like a restart after a crash (self-healing)

# List Deployments (deploy for short)
# -n namespace
kubectl get deployment

# Delete a Deployment (deploy for short)
kubectl delete deployment -n default mytomcat

2. Multiple replicas

Deploy from the command line

# Create three replicas at once
kubectl create deploy my-dep --image=nginx --replicas=3

 Deploy from YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx

Create from the dashboard form

 3. Scaling up and down

# Scale up
kubectl scale deploy/my-dep --replicas=5

# Scale down; Pods are picked and terminated until the count matches
kubectl scale deploy/my-dep --replicas=2

# Scaling can also be done by editing the YAML
# Just change replicas under spec
kubectl edit deploy my-dep

 The scale action in the dashboard does the same thing

 4. Failover

NAME                      READY   STATUS    RESTARTS   AGE   IP                NODE        NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-5wp9t   1/1     Running   0          28m   192.169.169.134   k8s-node2   <none>           <none>
my-dep-5b7868d854-cnlxs   1/1     Running   0          28m   192.169.36.70     k8s-node1   <none>           <none>
my-dep-5b7868d854-djbfq   1/1     Running   0          28m   192.169.169.135   k8s-node2   <none>           <none>

# Self-healing
# docker stop xxx on the node hosting my-dep-5b7868d854-cnlxs simulates a crash of that container
# k8s restarts a new one; docker ps -a still shows the old container in the Exited state
kubectl get pod -w
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-5wp9t   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running   0          29m
my-dep-5b7868d854-djbfq   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   0/1     Completed   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running     1          29m

# Failover
# After shutting node1 down manually and waiting roughly 5 minutes, cnlxs is terminated and a new Pod k9977 starts on node2: this is failover
# Until node1 comes back, cnlxs stays in Terminating; it only finishes terminating once the node is up again
kubectl get pod -w
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-5wp9t   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running   0          29m
my-dep-5b7868d854-djbfq   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   0/1     Completed   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running     1          29m
my-dep-5b7868d854-cnlxs   1/1     Running     1          37m
my-dep-5b7868d854-cnlxs   1/1     Terminating   1          42m
my-dep-5b7868d854-k9977   0/1     Pending       0          0s
my-dep-5b7868d854-k9977   0/1     Pending       0          0s
my-dep-5b7868d854-k9977   0/1     ContainerCreating   0          0s
my-dep-5b7868d854-k9977   0/1     ContainerCreating   0          9s
my-dep-5b7868d854-k9977   1/1     Running             0          11s

kubectl get pod -owide
NAME                      READY   STATUS        RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-5wp9t   1/1     Running       0          46m     192.169.169.134   k8s-node2   <none>           <none>
my-dep-5b7868d854-cnlxs   0/1     Terminating   1          46m     <none>            k8s-node1   <none>           <none>
my-dep-5b7868d854-djbfq   1/1     Running       0          46m     192.169.169.135   k8s-node2   <none>           <none>
my-dep-5b7868d854-k9977   1/1     Running       0          3m54s   192.169.169.137   k8s-node2   <none>           <none>

5. Rolling update

Zero-downtime style upgrades: instead of stopping the old Pods' containers outright, a new one is started for each old one that is shut down.

# Get the deploy as YAML and look at - image: nginx; the Pod description also shows the image version
kubectl get deploy my-dep -oyaml

# nginx=nginx:1.16.1: the part before = is the container name, the part after = is the new image version
# In practice updates are usually done through the YAML instead
kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record

kubectl get pod -w
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-5wp9t   1/1     Running   0          18h
my-dep-5b7868d854-djbfq   1/1     Running   0          18h
my-dep-5b7868d854-k9977   1/1     Running   0          17h
my-dep-6b48cbf4f9-sgnfc   0/1     Pending   0          0s
my-dep-6b48cbf4f9-sgnfc   0/1     Pending   0          0s
my-dep-6b48cbf4f9-sgnfc   0/1     ContainerCreating   0          0s
my-dep-6b48cbf4f9-sgnfc   0/1     ContainerCreating   0          0s
my-dep-6b48cbf4f9-sgnfc   1/1     Running             0          40s
my-dep-5b7868d854-k9977   1/1     Terminating         0          17h
my-dep-6b48cbf4f9-tfpb8   0/1     Pending             0          0s
my-dep-6b48cbf4f9-tfpb8   0/1     Pending             0          0s
my-dep-6b48cbf4f9-tfpb8   0/1     ContainerCreating   0          0s
my-dep-5b7868d854-k9977   1/1     Terminating         0          17h
my-dep-6b48cbf4f9-tfpb8   0/1     ContainerCreating   0          2s
my-dep-6b48cbf4f9-tfpb8   1/1     Running             0          3s
my-dep-5b7868d854-djbfq   1/1     Terminating         0          18h
my-dep-6b48cbf4f9-kndkc   0/1     Pending             0          0s
my-dep-6b48cbf4f9-kndkc   0/1     Pending             0          0s
my-dep-6b48cbf4f9-kndkc   0/1     ContainerCreating   0          0s
my-dep-5b7868d854-k9977   0/1     Terminating         0          17h
my-dep-5b7868d854-djbfq   1/1     Terminating         0          18h
my-dep-6b48cbf4f9-kndkc   0/1     ContainerCreating   0          1s
my-dep-5b7868d854-djbfq   0/1     Terminating         0          18h
my-dep-5b7868d854-djbfq   0/1     Terminating         0          18h
my-dep-5b7868d854-djbfq   0/1     Terminating         0          18h
my-dep-5b7868d854-k9977   0/1     Terminating         0          17h
my-dep-5b7868d854-k9977   0/1     Terminating         0          17h
my-dep-6b48cbf4f9-kndkc   1/1     Running             0          17s
my-dep-5b7868d854-5wp9t   1/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   1/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   0/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   0/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   0/1     Terminating         0          18h

In real use, how is a rolling update written in YAML? How do you express the difference between the two revisions? (A hedged sketch follows.)
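
A hedged sketch of the YAML-driven way: the only difference between two revisions is the image field, so bumping it and re-applying triggers the same rolling update (my-dep.yaml is a hypothetical file holding the Deployment shown earlier):

# edit my-dep.yaml and change the image, e.g.
#   - image: nginx        ->    - image: nginx:1.16.1
kubectl apply -f my-dep.yaml
kubectl rollout status deploy/my-dep    # blocks until the rollout completes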

6. Rollback

# View the rollout history; anything applied with --record is recorded here
kubectl rollout history deploy/my-dep

# Inspect one revision in detail
kubectl rollout history deploy/my-dep --revision=2

# Roll back to the previous revision; again one Pod comes up as one goes down
kubectl rollout undo deploy/my-dep

# Roll back to a specific revision
kubectl rollout undo deploy/my-dep --to-revision=2

7. More

Besides Deployment, k8s has other workload resource types such as StatefulSet, DaemonSet and Job. Collectively these are called workloads.

StatefulSet: stateful replica set (e.g. middleware)

Deployment: stateless replica set (e.g. an application jar)

(For example mysql or redis: if one machine goes down and the workload starts on another, the earlier data must still be there; that is what "stateful" means.)

DaemonSet: daemon set (e.g. a log collector; exactly one copy runs on every machine)

Workload Resources | Kubernetes

5、Service

Service discovery and load balancing for Pods: an abstraction that exposes a group of Pods as one network service.

# Expose the deploy: the Service listens on 8000 and forwards to port 80 inside the Pods
kubectl expose deploy my-dep --port=8000 --target-port=80

# The equivalent YAML
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  selector:
    app: my-dep
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80

# View Services
kubectl get service

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    7d19h
my-dep       ClusterIP   10.96.66.208   <none>        8000/TCP   3d17h

# Note: the Service selects its group of Pods by the deploy's labels (app:{name} by default when not set); --show-labels shows the labels

# Find Pods by label
kubectl get pod -l app=my-dep

# Load balancing
# This IP only works inside the cluster
# curl serviceIp:servicePort
curl 10.96.66.208:8000

# curl <service>.<namespace>.svc:<servicePort>; this DNS name only resolves inside containers
# kubectl create deploy my-tomcat-dep --image=tomcat, then exec into that container to run the curl
# Other Pods can reach this group of Pods (the deploy) via the DNS name; the host nodes cannot
curl my-dep.default.svc:8000

1、ClusterIP

The Service created above defaults to ClusterIP, i.e. cluster-internal access.

# The default type is ClusterIP
kubectl expose deploy my-dep --port=8000 --target-port=80
kubectl expose deploy my-dep --port=8000 --target-port=80 --type=ClusterIP

# The equivalent YAML
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: ClusterIP

Testing service discovery:

my-dep currently has 3 replicas. Scale it down to 2 and one Pod goes offline; curl tests show the Service notices this and stops sending traffic to that Pod. Scale back up to 3 and the Service notices the new Pod and starts sending it traffic again. (A test sketch follows.)
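
A hedged sketch of that test (10.96.66.208:8000 is the ClusterIP and port shown earlier; a real run will differ):

# keep curling the Service while scaling; requests keep succeeding as Pods come and go
kubectl scale deploy/my-dep --replicas=2
while true; do curl -s 10.96.66.208:8000 > /dev/null && echo ok; sleep 1; done
# in another terminal, scale back up; the new Pod starts receiving traffic again
kubectl scale deploy/my-dep --replicas=3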

2、NodePort

--type=NodePort makes the Service reachable from outside the cluster as well

# Start the test by deleting the existing my-dep svc
kubectl delete svc -n default my-dep
# Recreate the svc in NodePort mode
kubectl expose deploy my-dep --port=8000 --target-port=80 --type=NodePort

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP          7d20h
my-dep       NodePort    10.96.6.70   <none>        8000:31787/TCP   6s

# Compared with ClusterIP mode, the PORT(S) column gains an extra port, 31787; it is used for access from outside the cluster and is opened on every node
# Ports like 31787 are chosen by k8s at random from the 30000-32767 range
# Test from an external machine; the ClusterIP behaviour still works as well
# External access uses a node's server IP, not the ClusterIP; don't mix them up
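
A hedged example (31787 is the random NodePort from the output above; 192.168.31.27 is one node's IP used earlier in these notes):

# from outside the cluster, use any node's IP plus the NodePort, not the ClusterIP
curl http://192.168.31.27:31787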

Note (--service-cidr):

In II-2-2 the master node was initialized with --service-cidr=10.96.0.0/16, which is why every svc created by expose gets an address starting with 10.96 (the Service network).

6、Ingress

The unified gateway in front of Services, built on top of nginx. ingress --> service --> pod

1. Installation

# Download the yaml manifest
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml

# Change the image
vi deploy.yaml
# Set the image value to the following:
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0

# Check the result of the installation
kubectl get pod,svc -n ingress-nginx

# Finally, don't forget to open the ports exposed by the svc

If the download fails, use this manifest instead

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
      - v1beta1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

 At the moment there are problems when this starts up

ingress-nginx-admission-create-gj8xw

W0214 04:01:01.251242 1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
{"err":"secrets \"ingress-nginx-admission\" not found","level":"info","msg":"no secret found","source":"k8s/k8s.go:106","time":"2023-02-14T04:01:01Z"}
{"level":"info","msg":"creating new secret","source":"cmd/create.go:23","time":"2023-02-14T04:01:01Z"}

ingress-nginx-admission-patch-jw27k

W0214 04:01:15.196024 1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
{"level":"info","msg":"patching webhook configurations 'ingress-nginx-admission' mutating=false, validating=true, failurePolicy=Fail","source":"k8s/k8s.go:39","time":"2023-02-14T04:01:15Z"}
W0214 04:01:15.220946 1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W0214 04:01:15.250645 1 warnings.go:67] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
{"level":"info","msg":"Patched hook(s)","source":"k8s/k8s.go:96","time":"2023-02-14T04:01:15Z"}

I don't yet know exactly what to change or why; my current guess is that it relates to versions or the VM network. I'll keep studying and come back to it later. If anyone knows, please point me in the right direction.

# After installing ingress, two Services are created
kubectl get svc -A
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.24.178   <none>        80:30021/TCP,443:31712/TCP   5m1s
ingress-nginx-controller-admission   ClusterIP   10.96.2.223    <none>        443/TCP                      5m6s

# They start with 10.96, matching the master-node init setting
# --service-cidr=10.96.0.0/16 \

# The controller Service is NodePort; 80:30021 means http requests use port 30021 and https requests use port 31712
# http://<server ip>:30021
# https://<server ip>:31712
http://192.168.31.27:30021
https://192.168.31.27:31712

2. Usage

Official docs: Welcome - NGINX Ingress Controller

It really is just nginx under the hood.

Set up a test environment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000

# Once applied, the hello-server and nginx-demo Services are visible
# Note: both Services are ClusterIP, reachable only inside the cluster; test with clusterip+port
# curl 10.96.13.124:8000
# curl 10.96.128.132:8000

[root@k8s-master ~]# kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-server   ClusterIP   10.96.13.124    <none>        8000/TCP         5h31m
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP          9d
my-dep         NodePort    10.96.6.70      <none>        8000:31787/TCP   28h
nginx-demo     ClusterIP   10.96.128.132   <none>        8000/TCP         5h31m

3. Domain-based routing

Set the Ingress forwarding rules

apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.test.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.test.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

# Apply the Ingress forwarding rules
kubectl apply -f ingress-rules.yaml

# Since this study uses VMs without a real domain, simple testing is done by editing the VM's hosts file
vi /etc/hosts
192.168.31.27  cluster-endpoint hello.test.com demo.test.com

# Then test locally with curl hello.test.com:32140 and curl demo.test.com:32140
# 32140 is the http NodePort opened by ingress; traffic first enters the ingress, whose forwarding rules
# decide which Service receives it, and finally a concrete Pod handles the request

[root@k8s-master ~]# curl http://hello.test.com:32140/
Hello World!
[root@k8s-master ~]# curl http://demo.test.com:32140/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
(the rest of the page is omitted, it adds nothing)

# Change the forwarding rule for nginx-demo: the path goes from "/" to "/nginx"
[root@k8s-master ~]# kubectl get ing
NAME               CLASS   HOSTS                          ADDRESS          PORTS   AGE
ingress-host-bar   nginx   hello.test.com,demo.test.com   192.168.31.193   80      26m
[root@k8s-master ~]# kubectl edit ing ingress-host-bar


# Start testing
[root@k8s-master ~]# curl demo.test.com:32140
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>


[root@k8s-master ~]# curl demo.test.com:32140/nginx
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>

# Result: both requests return 404, but one shows an nginx version number and the other does not.
# Explanation: after the change the rule is /nginx. A request to / matches no rule, so the ingress itself rejects
# it, and that 404 carries no version number. A request to /nginx is let through to the real Service; the
# underlying Pod looks for a resource called nginx, does not find it, and returns 404 with the version number

# Exec into any nginx-demo Pod and run echo 111 > nginx under /usr/share/nginx/html; when a later request to
# /nginx is load-balanced onto that Pod it no longer 404s, confirming the forwarding rule works (sketch below)
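
A hedged sketch of that check (the Pod name is a placeholder; take a real one from kubectl get pod):

# create the nginx file inside one nginx-demo Pod, then re-test
kubectl exec -it nginx-demo-xxxxx -- sh -c 'echo 111 > /usr/share/nginx/html/nginx'
curl demo.test.com:32140/nginx    # returns 111 whenever this particular Pod serves the request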

4. Path rewriting

Just add metadata.annotations and adjust the path

For the exact rules, see Rewrite - NGINX Ingress Controller

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.test.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.test.com"
    http:
      paths:
      - pathType: Prefix
        path: "/(/|$)(.*)"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

# kubectl edit works too
kubectl apply -f ingress-rules.yaml

# After the change the path goes from / to /nginx(/|$)(.*)
# curl demo.test.com:32140/nginx now shows Welcome to nginx
# If you added the nginx file inside a Pod earlier, curl demo.test.com:32140/nginx/nginx re-tests that case

5. Rate limiting

Reference: Annotations - NGINX Ingress Controller

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "demo2.test.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

# Note: wait until the ingress has been assigned an address before testing
kubectl apply -f ingress-rules-2.yaml

# With limit-rps set, only one request per second is allowed; curl too fast and you get a 503
[root@k8s-master ~]# curl demo2.test.com:32140
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>

# pathType can be Exact or Prefix
# Exact matches exactly: demo2.test.com matches, demo2.test.com/a does not
# Prefix matches by prefix: both demo.test.com/ and demo.test.com/a match

Summary:

The k8s network model: external traffic hits the ingress, the ingress load-balances it across Services, and each Service load-balances across its Pods. In front of the ingress there can be yet another load balancer that spreads external traffic across the ingress itself.

7. Storage abstraction

Background:

Setting k8s aside for a moment: with plain Docker, most images you run, such as nginx, mysql or tomcat, need volume mounts so that certain directories live on the host and are easy to modify.

Inside k8s, though, suppose there are three Pods spread across three servers (master, node1, node2) and node2 suddenly dies. k8s fails over, so node2's Pod may restart on node1 or on the master, but the mounted data does not follow it, so effectively an empty new Pod starts; mounting everything on individual hosts also gets messy. To solve this, k8s manages storage in one place, called the storage layer.

k8s does not dictate which technology the storage layer uses; Glusterfs, NFS, CephFS and others all work. These notes use NFS (Network File System).

1. Environment preparation

All nodes

# Install on every machine
yum install -y nfs-utils

Master node

# On the NFS server (master), export the /nfs/data/ directory; * means any host may access it
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# Create the directory
mkdir -p /nfs/data
# Enable rpcbind (remote binding)
systemctl enable rpcbind --now
# Enable the NFS server
systemctl enable nfs-server --now
# Make the export configuration take effect
exportfs -r
ro: read-only access
rw: read-write access
sync: all data is written to the share at request time (synchronous)
async: NFS may answer the request before the data is written (asynchronous)
secure: NFS sends over secure TCP/IP ports below 1024
insecure: NFS sends over ports above 1024
wdelay: if several users write to the NFS directory, group the writes together (default)
no_wdelay: write immediately when several users write to the NFS directory; not needed when async is used
hide: do not share subdirectories of the NFS share
no_hide: share the subdirectories of the NFS share
subtree_check: when sharing a subdirectory such as /usr/bin, force NFS to check parent-directory permissions (default)
no_subtree_check: the opposite of the above; do not check parent-directory permissions
all_squash: map the UID and GID of shared files to the anonymous user; suitable for public directories
no_all_squash: keep the UID and GID of shared files (default)
root_squash: map all requests from root to the same privileges as the anonymous user (default)
no_root_squash: root has full administrative access to the shared root directory
anonuid=xxx: specify the UID of the anonymous user in the NFS server's /etc/passwd

Worker nodes

# Check which directories the NFS server exports
showmount -e 192.168.31.27

[root@k8s-node1 ~]# showmount -e 192.168.31.27
Export list for 192.168.31.27:
/nfs/data *


# Mount the directory shared by the NFS server onto the local path /nfs/data
mkdir -p /nfs/data
# remote-ip:remote-dir  local-mount-point
mount -t nfs 192.168.31.27:/nfs/data /nfs/data

# Write a test file
echo "hello nfs server" > /nfs/data/test.txt

# A file written on a worker is also visible on the master; the two stay in sync (a quick check follows)
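
A quick hedged check of the sync: read the file back on a different machine from the one that wrote it.

# e.g. after writing test.txt on a worker, read it on the NFS server (or the other way round)
cat /nfs/data/test.txt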

Mounting data the native way

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          nfs:
            server: 192.168.31.27
            path: /nfs/data/nginx-pv

Notes:

1. The nginx-pv directory must be created in advance

2. containers.volumeMounts.name must match volumes.name; that is how the mount and the volume are paired (a quick check follows)
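
A hedged check that the NFS mount works (the Pod name is a placeholder; take a real one from kubectl get pod):

# write a file through the mount inside the Pod, then look for it on the NFS server
kubectl exec -it nginx-pv-demo-xxxxx -- sh -c 'echo hello-nfs > /usr/share/nginx/html/index.html'
cat /nfs/data/nginx-pv/index.html      # run this on 192.168.31.27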

 2. PV & PVC

Problems with creating Pods and mounting the k8s storage layer the way section 1 did:

1. Directories such as nginx-pv have to be created by hand every time

2. When the Pod is deleted, the mounted directory is not, which wastes space

3. There is no capacity limit

PV: Persistent Volume, stores the data an application needs to persist in a designated location

PVC: Persistent Volume Claim, declares the persistent-volume specification the application needs

Static provisioning

1. Create the PV pool

mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03

2. Create the PVs

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 192.168.31.27
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 192.168.31.27
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 192.168.31.27

[root@k8s-master ~]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available           nfs                     16s
pv02-1gi   1Gi        RWX            Retain           Available           nfs                     15s
pv03-3gi   3Gi        RWX            Retain           Available           nfs                     9s

3、Create a PVC

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs
[root@k8s-master ~]# kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pv02-1gi   1Gi        RWX            nfs            7s


[root@k8s-master ~]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available                       nfs                     2m35s
pv02-1gi   1Gi        RWX            Retain           Bound       default/nginx-pvc   nfs                     2m34s
pv03-3gi   3Gi        RWX            Retain           Available                       nfs                     2m28s

After creating the PVC, kubectl get pv shows that the 1Gi volume is now occupied (Bound).

storageClassName works like a grouping label: a claim is only matched against PVs with the same storageClassName when looking for suitable space.

Test: delete the PVC, then apply it again

[root@k8s-master ~]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available                       nfs                     4m18s
pv02-1gi   1Gi        RWX            Retain           Released    default/nginx-pvc   nfs                     4m17s
pv03-3gi   3Gi        RWX            Retain           Available                       nfs                     4m11s
[root@k8s-master ~]# kubectl apply -f k8s-pvc.yaml
persistentvolumeclaim/nginx-pvc created
[root@k8s-master ~]# kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pv03-3gi   3Gi        RWX            nfs            6s


# After re-applying, the new claim binds the 3Gi PV, while the 1Gi PV stays in the Released state,
# meaning its claim was deleted and the volume has been released
# Although released, it cannot be bound again; with this setup you would have to delete the PV and create a new one
# That is because the reclaim policy is Retain: the volume is released but its data and the PV object are kept
# You can edit the PV (or choose it at creation time) to use the Recycle policy instead; then, some time after the claim is deleted, the PV becomes Available again
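
As an alternative to deleting and recreating the PV, a Released PV with the Retain policy can usually be made Available again by clearing its stale claimRef; a minimal sketch:

# Remove the stale claim reference so the Released PV returns to Available
kubectl patch pv pv02-1gi --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'
# Check the status again
kubectl get pv pv02-1gi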

4、Create a Pod bound to the PVC

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: nginx-pvc

This is static provisioning: the PV pool is prepared before it is used, so space can still be wasted. The alternative is dynamic provisioning, which needs no pre-created pool and creates a PV sized exactly as requested, which is usually the better choice.

Dynamic provisioning

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to keep an archived copy of the volume's contents when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.31.27 ## your own NFS server address
            - name: NFS_PATH
              value: /nfs/data  ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.31.27
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
# kubectl get storageclass (short name: sc)
# kubectl get sc
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  102m

# Compared with the static example, the PVC below omits storageClassName, so the default StorageClass (nfs-storage) is used

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
# After applying you can see that a 200Mi claim was created and that a matching 200Mi volume was dynamically provisioned for it

[root@k8s-master ~]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc2   Bound    pvc-5a80ac52-b7da-4d3c-93e0-8100db9b0552   200Mi      RWX            nfs-storage    6s
[root@k8s-master ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-5a80ac52-b7da-4d3c-93e0-8100db9b0552   200Mi      RWX            Delete           Bound    default/nginx-pvc2   nfs-storage             11s
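
The dynamically provisioned claim is used exactly like the static one. A minimal sketch of a Deployment mounting nginx-pvc2; every name except the PVC is made up for illustration:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy-pvc2        # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc2
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc2
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: nginx-pvc2  # the PVC created above
EOF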

3、ConfigMap

ConfigMap extracts application configuration into a separate object, and the configuration can be updated automatically.

We use redis to test it.

1、Create the redis configuration file

vi redis.conf
appendonly yes

2、Turn that configuration file into a ConfigMap

# Create a ConfigMap (short name: cm)
kubectl create cm redis-conf --from-file=redis.conf
kubectl edit cm redis-conf

apiVersion: v1
data:
  redis.conf: |
    appendonly yes
kind: ConfigMap
metadata:
  creationTimestamp: "2023-02-17T02:12:21Z"
  name: redis-conf
  namespace: default
  resourceVersion: "432347"
  uid: 2bf47603-939c-41eb-9b2c-2b52bcf4f3da

# data holds the real content: each key defaults to the file name and each value is that file's contents
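
Besides --from-file, a ConfigMap can also be built from literal key/value pairs; a small sketch with made-up names:

# Create a ConfigMap directly from literals instead of a file
kubectl create cm demo-conf --from-literal=log-level=info --from-literal=max-conn=100
# Inspect it; the literals appear under data just like file contents do
kubectl get cm demo-conf -o yaml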

3、Create the Pod

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:
      - redis-server
      - "/redis-master/redis.conf"  #指的是redis容器内部的位置
    ports:
    - containerPort: 6379
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: redis-conf
        items:
        - key: redis.conf
          path: redis.conf

Explanation:

Overall, when redis starts, its internal configuration file is tied to the corresponding entry in the ConfigMap.

1、spec.containers.command is the command executed inside the redis container:

        redis-server /redis-master/redis.conf

2、spec.containers.volumeMounts declares two mounts: one named data, mounted at /data, and one named config, mounted at /redis-master. The details are under spec.volumes: data is an emptyDir, while config comes from the ConfigMap named redis-conf. items.key refers to the redis.conf key under data (visible with kubectl edit cm), and path is the name the file gets inside the container. Since the config volume is mounted at /redis-master and the file is named redis.conf, the full path is /redis-master/redis.conf.

# Test: check the configuration file inside the redis container
root@redis:/redis-master# cat redis.conf 
appendonly yes

# Edit the cm and check whether the change is synced into the redis container
kubectl edit cm redis-conf

# The change is synced after roughly a minute

# Enter redis with redis-cli
127.0.0.1:6379> config get appendonly
1) "appendonly"
2) "yes"
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) ""

# requirepass has not taken effect; it only applies after the Pod is deleted and recreated (with a Deployment the deleted Pod would be recreated automatically)
# This is because redis itself does not hot-reload its configuration; k8s has already synced the file into the Pod (a restart sketch follows)
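
A hedged sketch of picking up the new configuration. The file name redis-pod.yaml and the Deployment name are assumptions:

# Bare Pod: delete it and re-apply so redis re-reads /redis-master/redis.conf
kubectl delete pod redis
kubectl apply -f redis-pod.yaml                    # assumed file name for the Pod manifest above
# If redis were managed by a Deployment instead, a rolling restart would achieve the same
kubectl rollout restart deployment redis-deploy    # hypothetical Deployment name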

4、Secret

A Secret is an object type for holding sensitive information such as passwords, OAuth tokens and SSH keys. Putting this information in a Secret is safer and more flexible than putting it in a Pod definition or a container image.

Secrets work the same way as ConfigMaps; the difference is that a Secret holds sensitive information while a ConfigMap holds configuration.

## Command format
kubectl create secret docker-registry regcred \
  --docker-server=<your registry server> \
  --docker-username=<your username> \
  --docker-password=<your password> \
  --docker-email=<your email address>

# --docker-server has a default value of https://index.docker.io/v1/

kubectl create secret docker-registry docker-secret \
--docker-username=zhangsan \
--docker-password=123123123 \
--docker-email=123123123@qq.com

[root@k8s-master ~]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-rw64m   kubernetes.io/service-account-token   3      10d
docker-secret         kubernetes.io/dockerconfigjson        1      21s

kubectl edit secret docker-secret

You will see that the account, password and e-mail under data are base64-encoded (encoded rather than truly encrypted).
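
A quick way to see what is actually stored; .dockerconfigjson is the standard data key for this Secret type, and the dot has to be escaped in jsonpath:

# Decode the stored docker config
kubectl get secret docker-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d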
apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: zhangsan/test:v1.0
  imagePullSecrets:
  - name: docker-secret

When pulling a private image, the Pod only has to reference the Secret by name through imagePullSecrets, so the credentials never need to be typed again: secure and convenient.


*The notes above were written while following 雷丰阳's video course and include some of my own understanding; corrections are welcome.

This is as far as my k8s study goes for now. It is all fairly basic and much has not been covered yet; I will continue updating after further study.
