Hands-On Introduction to Kubernetes

I. Components

(1) Master (control-plane) node

apiserver
Unified entry point to the cluster; exposes a RESTful API and persists cluster state to etcd
scheduler
Node scheduling; selects the node on which to place an application
controller-manager
Runs the cluster's routine background tasks; one controller per resource type
etcd
The storage system that holds all cluster data

(2) Node (worker) node

kubelet
The master's agent on each node; manages the containers on that machine
kube-proxy
Provides network proxying, load balancing, and related functions
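
Once the cluster from section III is up, you can see most of these components for yourself, for example:

kubectl get pods -n kube-system -o wide	# on kubeadm clusters the control-plane components and kube-proxy run here as pods
kubectl get nodes	# each node's kubelet reports in here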

II. Core k8s Concepts

(1) Pod

  • The smallest deployable unit
  • A group of one or more containers
  • Containers in a pod share the network
  • Pod lifecycles are short-lived

(2) Controller

  • Ensure the expected number of pod replicas (ReplicaSet)
  • Stateless application deployment (Deployment)
  • Stateful application deployment (StatefulSet)
  • Ensure every node runs a copy of the same pod (DaemonSet)
  • One-off and scheduled tasks (Job / CronJob)

(3) Service

  • Defines access rules for a set of pods

III. Building a k8s Cluster

(1) Using the cluster bootstrap tool kubeadm

1. Prerequisites
- One or more machines running CentOS 7.x x86_64
- Hardware: 2 GB+ RAM, 2+ CPUs, 30 GB+ disk
- Full network connectivity between all machines in the cluster
- Internet access (images must be pulled)
- Swap disabled
2. System initialization

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux

sed -i 's/enforcing/disabled/' /etc/selinux/config	# permanent, takes effect after reboot (SELINUX=permissive would also do)
setenforce 0	# temporary, current session

Disable swap

swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent

Allow iptables to see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

Set the hostname

hostnamectl set-hostname <hostname>

Add hosts entries on the master

cat >> /etc/hosts << EOF
10.0.0.10 k8s-master01
10.0.0.20 k8s-node01
10.0.0.21 k8s-node02
EOF

Pass bridged IPv4 traffic to iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply
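
Optional quick check that the module and sysctls took effect:

lsmod | grep br_netfilter	# module loaded?
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables	# both should print 1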

Time synchronization

yum -y install ntpdate
ntpdate time.windows.com
3. Install Docker, kubeadm, kubelet, and kubectl on all nodes

Kubernetes here uses Docker as the default container runtime (CRI), so install Docker first.

Install Docker

Remove old versions
yum remove docker \
             docker-client \
             docker-client-latest \
             docker-common \
             docker-latest \
             docker-latest-logrotate \
             docker-logrotate \
             docker-engine

Install the gcc toolchain
yum -y install gcc
yum -y install gcc-c++

Install yum-utils (provides yum-config-manager)
yum install -y yum-utils

Configure the stable repo (a domestic mirror is recommended in China)
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Refresh the yum package index
yum makecache fast

Install Docker CE (pick one of the two lines)
yum -y install docker-ce docker-ce-cli containerd.io	# latest version by default
yum -y install docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6	# or pin specific versions

Start Docker (and enable it at boot)
systemctl enable docker --now

Verify
docker info

Aliyun registry mirror

This also adds Docker's production-relevant cgroup driver setting (systemd)
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://gz8jbxo9.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Verify the configuration
docker info

Install kubelet, kubeadm, kubectl

Add the yum repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF


sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

sudo systemctl enable --now kubelet
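
Note: until kubeadm init (or kubeadm join) runs, kubelet has no cluster configuration yet and keeps restarting; that is expected. You can observe it with:

systemctl status kubelet	# a restart loop before cluster initialization is normal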

(2) Bootstrapping the cluster with kubeadm

Pull the images each machine needs

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh

docker images	#verify the pulls

(3) Initialize the control-plane node

#On all machines, add a hosts mapping for the master's endpoint (substitute your own IP)
echo "10.0.0.10 cluster-endpoint" >> /etc/hosts
ping cluster-endpoint	#a successful ping means it works

#Initialize the control plane. Note: nothing may follow the trailing backslashes,
#so the flag explanations are listed below the command instead of inline.
kubeadm init \
--apiserver-advertise-address=10.0.0.10 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

#--apiserver-advertise-address: the master node's IP
#--control-plane-endpoint: the domain mapped above
#--service-cidr: the Service IP range
#--pod-network-cidr: the Pod IP range
#None of these ranges may overlap with each other or with the host network

Be sure to save the output below; you will need it later.

On success:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token opruux.2gtd5h2mm8tuiuga \
    --discovery-token-ca-cert-hash sha256:ce7da72a702d49f3ec945dc633ed2549505503da39afc960b41339f20a78c57d \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token opruux.2gtd5h2mm8tuiuga \
    --discovery-token-ca-cert-hash sha256:ce7da72a702d49f3ec945dc633ed2549505503da39afc960b41339f20a78c57d
Following the printed instructions, copy and run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check: for now the node's status is NotReady
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   9m53s   v1.20.9

Install the network add-on
Calico (see the Calico site). Note that the --pod-network-cidr=192.168.0.0/16 used above matches Calico's default pod CIDR.

curl https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O

kubectl apply -f calico.yaml
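
Calico takes a minute or two to come up; one way to watch it (the k8s-app=calico-node label is the one used in the Calico manifest):

kubectl get pods -n kube-system -l k8s-app=calico-node -w	# wait for Running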

Handy commands

List all cluster nodes ("nodes" and "node" are interchangeable)
kubectl get nodes

Create resources in the cluster from a config file
kubectl apply -f xxx.yaml

List what is deployed across the cluster ("pods" and "pod" are interchangeable)
kubectl get pods -A

Add -w to watch in real time
kubectl get pod -A -w

Or refresh once per second
watch -n 1 kubectl get pod -A

The master node is now Ready:

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   35m   v1.20.9

(4) Join worker nodes

#The token is valid for 24 hours; be careful to copy the worker join command, not the control-plane one
kubeadm join cluster-endpoint:6443 --token opruux.2gtd5h2mm8tuiuga \
    --discovery-token-ca-cert-hash sha256:ce7da72a702d49f3ec945dc633ed2549505503da39afc960b41339f20a78c57d

This node has joined the cluster:	#this line means the join succeeded

[root@k8s-node02 ~]# kubectl get nodes	#kubectl does not work on worker nodes (no kubeconfig there)
The connection to the server localhost:8080 was refused - did you specify the right host or port?

[root@k8s-master01 ~]# kubectl get nodes	#on the master, the join is complete
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master01   Ready    control-plane,master   46m     v1.20.9
k8s-node01     Ready    <none>                 5m36s   v1.20.9
k8s-node02     Ready    <none>                 5m33s   v1.20.9

Self-healing

reboot	#reboot the machines
#Then reconnect to each machine and check that everything comes back up
kubectl get nodes
kubectl get pod -A

Create a new token

The token above is only valid for 24 hours; once it expires, create a new one to join more nodes:
kubeadm token create --print-join-command	#run on the master
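
To inspect existing tokens and how long they remain valid, for example:

kubeadm token list	# the TTL column shows the remaining validity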

IV. Deploying the Dashboard

(1) Deploy

The official Kubernetes web UI: https://github.com/kubernetes/dashboard

Install the UI
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

If your connection is too slow to download it:

Create the file yourself and apply it
vi dashboard.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Apply it
kubectl apply -f dashboard.yaml

(2) Access

Expose a port

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Type /type to find "type: ClusterIP" and change it to "type: NodePort"
Save and quit with :wq
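
A non-interactive alternative to the edit above (same effect, using the service name and namespace from the manifest):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'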

kubectl get svc -A |grep kubernetes-dashboard	#find the port
443:31642/TCP

Access: https://<any-node-IP>:<port>, e.g. https://10.0.0.10:31642
The browser will warn that the connection is not private; click Advanced, then Proceed.

(3) Create an access account

#Create an access account from a yaml file
vi dash.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard


Apply it
kubectl apply -f dash.yaml

(4) Token access

#Retrieve the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

eyJhbGciOiJSUzI1NiIsImtpZCI6InlDRHIxNWVrUUFPNTlra0JFU0xmSUtRdGFvZFJhOVFKRlF3cm9TVVp2cWsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXZqOGpiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4NWU2ZTI2ZS03MzkyLTQ5YmEtYTg5Ny01ODJmYzMzY2ExYjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.qh8ddT8LARzrxRWBSNkK14I4BXkmOUfdUlr8eP51-4eYdk0AhL2qMtnGxNrFaRZL6eKy2UvE6_ExIfncphIjn28NYOS_7jF-vloEwCCNDnaUvdWwVF4OjJidqWK8UbPHhdPYx-IM-TOG2qpDXrYOdfujXJCmAcHNWAChcCEtW8qnWYIEs2NFL4-z_s7qRbmjDRS2QzXcdqOrQrnZweRWLzzFdDS0hoFmY3YxdqBtsEcqyNWyVdHUlLvyQPerjzlckSWzHuayhyIIKJLWN6AZMTpFpRsvGTHbLNEbu9fuGsY5FimUfdYsb2_mSPJc0FmbzMpU8M9JLj8ITVkB_ztcTg

(5) Log in

Paste the generated token into the Dashboard's token field.

(6) The UI

After logging in, the Dashboard overview is displayed.

V. Core k8s in Practice

(1) Ways to create resources

  • Command line
  • YAML

(2) Namespace

Namespaces partition and isolate cluster resources. By default they isolate resources only, not the network.

List namespaces (namespace, or ns for short)
[root@k8s-master01 ~]# kubectl get ns
NAME                   STATUS   AGE
default                Active   151m
kube-node-lease        Active   151m
kube-public            Active   151m
kube-system            Active   151m
kubernetes-dashboard   Active   77m

List the applications deployed in a namespace
kubectl get pod -n kubernetes-dashboard	#-n takes the namespace to inspect

Create a namespace; ns is followed by the name to create
kubectl create ns hello

Create via yaml
vi hello.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: hello
  
kubectl apply -f hello.yaml	#applying the file creates it

Delete a namespace; ns is followed by the name to delete
kubectl delete ns hello	#this deletes every application in the namespace, so use it with care

Delete via yaml
kubectl delete -f hello.yaml
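
Optional tip: instead of adding -n hello to every command, you can change your context's default namespace:

kubectl config set-context --current --namespace=hello	# switch back with --namespace=default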

(3) Pod

A Pod is a group of running containers, and the smallest deployable unit of an application in Kubernetes.

1. Create a pod: kubectl run
#run directly, giving a name and an image to pull; with no namespace given it goes into default
kubectl run mynginx --image=nginx

#List Pods in the default namespace
kubectl get pod

#Show full pod details; k8s assigns every pod an IP
kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
mynginx   1/1     Running   0          8m43s   192.168.58.196   k8s-node02   <none>           <none>

# Access <pod-ip>:<container-port>; the port defaults to 80 if omitted
curl 192.168.58.196	# any machine in the cluster, and any app in it, can reach the Pod via its IP

#Get a shell inside the pod
[root@k8s-master01 ~]# kubectl exec -it mynginx -- /bin/bash
root@mynginx:/# ls
bin  boot  dev	docker-entrypoint.d  docker-entrypoint.sh  etc	home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@mynginx:/# cd /usr/share/nginx/html/
root@mynginx:/usr/share/nginx/html# echo "1111" > index.html
root@mynginx:/usr/share/nginx/html# exit
[root@k8s-node02 ~]# curl 192.168.58.196
1111

# Describe: shows which node the container runs on, any errors, etc.
kubectl describe pod <your-pod-name>

# Delete; defaults to the default namespace; add -n <namespace> for others
kubectl delete pod <pod-name>

# View a Pod's logs; add -f to follow them in real time
kubectl logs <pod-name>

#You can also run these on each node to inspect the containers directly
docker ps |grep mynginx
docker images
2. Create a pod: yaml
#yaml form
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mynginx
  name: mynginx
#  namespace: default
spec:
  containers:
  - image: nginx
    name: mynginx

kubectl apply -f pod.yaml	#apply
kubectl delete -f pod.yaml	#delete

3. Create a pod: Dashboard

Create it from the Dashboard UI.

If the UI is left idle for a while it logs you out automatically; log back in with the token generated earlier.

4. Create a pod with multiple containers
vi multicontainer-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat

kubectl apply -f multicontainer-pod.yaml	#apply
kubectl get pod	#check
NAME      READY   STATUS    RESTARTS   AGE
myapp     2/2     Running   0          22m
mynginx   1/1     Running   0          54m

To reach nginx, curl <pod-ip>:80; for tomcat, curl <pod-ip>:8080 (this also works from the Dashboard).
For nginx to reach tomcat (or vice versa), exec into either container and curl 127.0.0.1:<port>, because they sit in the same pod and share the network namespace; see the sketch below.
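
For example (a sketch; it assumes curl is available in the tomcat image, as the transcripts later in this section suggest):

kubectl exec -it myapp -c tomcat -- curl -s 127.0.0.1:80	# from the tomcat container to nginx in the same pod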

At this point, though, the application still cannot be reached from outside the cluster.

(4) Deployment

Manages Pods, giving them multiple replicas, self-healing, scaling, and more.

# Clear all Pods, then compare the effect of the two commands below
kubectl delete pod myapp mynginx -n default	#remove what was created above

kubectl run mynginx --image=nginx	#the earlier way: a bare pod
kubectl create deployment mytomcat --image=tomcat:8.5.68	#create a managed deployment
#self-healing

Example
[root@k8s-master01 ~]# kubectl get pod	#check
NAME                        READY   STATUS    RESTARTS   AGE
mynginx                     1/1     Running   0          2m21s
mytomcat-6f5f895f4f-st44k   1/1     Running   0          69s

kubectl delete pod mynginx	#delete the bare pod
kubectl get pod	#check: it is gone for good
kubectl delete pod mytomcat-6f5f895f4f-st44k	#delete the deployment-managed pod
kubectl get pod	#check: the old pod is gone, but a new one is created immediately

Delete the deployment
kubectl get deployment	#list deployments
kubectl get deploy	#short form
kubectl delete deploy mytomcat	#deleting it also deletes the pods it created
1. Multiple replicas

Method 1

kubectl create deploy my-dep --image=nginx --replicas=3
	#my-dep is the deployment name, --image the image, --replicas the replica count
kubectl get deploy -o wide	#check the deployment
kubectl get pod -o wide		#check the pods
kubectl delete deploy my-dep	#delete

Method 2

Create the same Deployment from the Dashboard UI form.
Method 3

#yaml form
vi my-dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx

kubectl apply -f my-dep.yaml
kubectl get deploy
kubectl get pod
2. Scaling

Method 1

Scale up
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-5lffm   1/1     Running   0          7m31s
my-dep-5b7868d854-7r2sv   1/1     Running   0          7m31s
my-dep-5b7868d854-mcgfd   1/1     Running   0          7m31s
#set replicas to 5: two more pods are created automatically (default namespace)
kubectl scale deploy/my-dep --replicas=5

Scale down
#set replicas to 2: three pods are removed automatically
kubectl scale deploy/my-dep --replicas=2
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-7r2sv   1/1     Running   0          9m57s
my-dep-5b7868d854-mcgfd   1/1     Running   0          9m57s

Method 2

kubectl edit deploy my-dep
Type /replicas and change replicas: 2 to replicas: 4
Save and quit with :wq

[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-7r2sv   1/1     Running   0          15m
my-dep-5b7868d854-hpsxx   1/1     Running   0          2m23s
my-dep-5b7868d854-ltwpg   1/1     Running   0          2m23s
my-dep-5b7868d854-mcgfd   1/1     Running   0          15m


3. Self-healing & failover

Self-healing

#Check which node each container runs on
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-7r2sv   1/1     Running   0          27m   192.168.58.202   k8s-node02   <none>           <none>
my-dep-5b7868d854-hpsxx   1/1     Running   0          14m   192.168.85.205   k8s-node01   <none>           <none>
my-dep-5b7868d854-ltwpg   1/1     Running   0          14m   192.168.85.206   k8s-node01   <none>           <none>
my-dep-5b7868d854-mcgfd   1/1     Running   0          27m   192.168.58.201   k8s-node02   <none>           <none>

#On node01
[root@k8s-node01 ~]# docker ps |grep my-dep-5b7868d854-hpsxx	#find the container
bbbfeb93ed2d   nginx                                                        "/docker-entrypoint.…"   15 minutes ago   Up 15 minutes             k8s_nginx_my-031b1f-8f42-48e5-9a89-c7bf95359857_0
874fda112edf   registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause:3.2   "/pause"                 15 minutes ago   Up 15 minutes             k8s_POD_my-de1b1f-8f42-48e5-9a89-c7bf95359857_0

#Stop that container by ID
[root@k8s-node01 ~]# docker stop bbbfeb93ed2d
bbbfeb93ed2d

#Check from the master: one pod is down
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                      READY   STATUS      RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-7r2sv   1/1     Running     0          29m   192.168.58.202   k8s-node02   <none>           <none>
my-dep-5b7868d854-hpsxx   0/1     Completed   0          16m   192.168.85.205   k8s-node01   <none>           <none>
my-dep-5b7868d854-ltwpg   1/1     Running     0          16m   192.168.85.206   k8s-node01   <none>           <none>
my-dep-5b7868d854-mcgfd   1/1     Running     0          29m   192.168.58.201   k8s-node02   <none>           <none>

#Check again a moment later: it has come back up automatically
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-7r2sv   1/1     Running   0          30m   192.168.58.202   k8s-node02   <none>           <none>
my-dep-5b7868d854-hpsxx   1/1     Running   1          17m   192.168.85.205   k8s-node01   <none>           <none>
my-dep-5b7868d854-ltwpg   1/1     Running   0          17m   192.168.85.206   k8s-node01   <none>           <none>
my-dep-5b7868d854-mcgfd   1/1     Running   0          30m   192.168.58.201   k8s-node02   <none>           <none>

Failover
Shut down either node01 or node02 entirely.
For example, stop node01: the pods on that machine die, and after roughly five minutes two replacement pods are started on node02. The delay exists because an unreachable node is often just a transient network blip, so Kubernetes waits out a tolerance threshold before starting new pods elsewhere.

4. Rolling update

Method 1

#Check the current pods
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-7r2sv   1/1     Running   0          55m
my-dep-5b7868d854-ltwpg   1/1     Running   0          42m
my-dep-5b7868d854-mcgfd   1/1     Running   0          55m

#Find which image the my-dep deployment uses; -o yaml outputs yaml
kubectl get deploy my-dep -o yaml
#locate the container name and the image
spec:
      containers:
      - image: nginx	#the image
        imagePullPolicy: Always
        name: nginx	#the container name

#Set the new image
kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record
#deploy/my-dep: the deployment named my-dep
#nginx: the container name
#nginx:1.16.1: the new image to roll out
#--record: record this update in the rollout history
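
Optionally block until the rollout finishes:

kubectl rollout status deploy/my-dep	# waits until the new replicas are fully rolled out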

#Watch old pods being replaced by new ones: start one, kill one, repeat until done
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS              RESTARTS   AGE
my-dep-5b7868d854-7r2sv   1/1     Running             0          62m
my-dep-5b7868d854-ltwpg   1/1     Running             0          49m
my-dep-5b7868d854-mcgfd   1/1     Running             0          62m
my-dep-6b48cbf4f9-zrtnx   0/1     ContainerCreating   0          23s
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-6b48cbf4f9-hf7vk   1/1     Running   0          2m5s
my-dep-6b48cbf4f9-lfpsm   1/1     Running   0          107s
my-dep-6b48cbf4f9-zrtnx   1/1     Running   0          2m53s

Method 2

Edit the deployment directly: kubectl edit deployment/my-dep
5. Rollback
#View the rollout history
[root@k8s-master01 ~]# kubectl rollout history deployment/my-dep
deployment.apps/my-dep
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record=true
#revision 1 was produced when my-dep was first deployed
#revision 2 came from the rolling update; its CHANGE-CAUSE is recorded because --record was used

#Inspect the details of a specific revision
[root@k8s-master01 ~]# kubectl rollout history deployment/my-dep --revision=2
deployment.apps/my-dep with revision #2
Pod Template:
  Labels:	app=my-dep
	pod-template-hash=6b48cbf4f9
  Annotations:	kubernetes.io/change-cause: kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record=true
  Containers:
   nginx:
    Image:	nginx:1.16.1
    Port:	<none>
    Host Port:	<none>
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>

#Roll back (to the previous revision)
kubectl rollout undo deployment/my-dep

#Roll back (to a specific revision)
kubectl rollout undo deployment/my-dep --to-revision=2

kubectl rollout undo deployment/my-dep --to-revision=1	#back to revision 1
#this is itself a rolling update
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS              RESTARTS   AGE
my-dep-5b7868d854-7896m   0/1     ContainerCreating   0          2s
my-dep-6b48cbf4f9-hf7vk   1/1     Running             0          17m
my-dep-6b48cbf4f9-lfpsm   1/1     Running             0          17m
my-dep-6b48cbf4f9-zrtnx   1/1     Running
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-246m2   1/1     Running   0          44s
my-dep-5b7868d854-7896m   1/1     Running   0          80s
my-dep-5b7868d854-l78nb   1/1     Running   0          62s

#Verify the rollback: the image is back to plain nginx
[root@k8s-master01 ~]# kubectl get deployment/my-dep -o yaml |grep image
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"my-dep"},"name":"my-dep","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"my-dep"}},"template":{"metadata":{"labels":{"app":"my-dep"}},"spec":{"containers":[{"image":"nginx","name":"nginx"}]}}}}
                f:imagePullPolicy: {}
                f:image: {}
      - image: nginx
        imagePullPolicy: Always

More
Besides Deployment, k8s has StatefulSet, DaemonSet, Job, and other resource types, collectively called workloads.
Deploy stateful applications with StatefulSet and stateless ones with Deployment.
https://kubernetes.io/zh/docs/concepts/workloads/controllers/

(5) Service (svc)

Service discovery and load balancing for Pods: an abstraction that exposes a set of Pods as a network service.

Setup

(You can also do all of this from the Dashboard, which is simpler.)

#List the pods
[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-246m2   1/1     Running   0          64m
my-dep-5b7868d854-7896m   1/1     Running   0          64m
my-dep-5b7868d854-l78nb   1/1     Running   0          64m

#Enter the first pod and write 111 into nginx's index page
[root@k8s-master01 ~]# kubectl exec -it my-dep-5b7868d854-246m2 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@my-dep-5b7868d854-246m2:/# cd /usr/share/nginx/html/
root@my-dep-5b7868d854-246m2:/usr/share/nginx/html# echo "111" > index.html
root@my-dep-5b7868d854-246m2:/usr/share/nginx/html# exit

#Enter the second pod and write 222
[root@k8s-master01 ~]# kubectl exec -it my-dep-5b7868d854-7896m /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@my-dep-5b7868d854-7896m:/# cd /usr/share/nginx/html/
root@my-dep-5b7868d854-7896m:/usr/share/nginx/html# echo "222" > index.html
root@my-dep-5b7868d854-7896m:/usr/share/nginx/html# exit

#Enter the third pod and write 333
[root@k8s-master01 ~]# kubectl exec -it my-dep-5b7868d854-l78nb /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@my-dep-5b7868d854-l78nb:/# cd /usr/share/nginx/html/
root@my-dep-5b7868d854-l78nb:/usr/share/nginx/html# echo "333" > index.html
root@my-dep-5b7868d854-l78nb:/usr/share/nginx/html# exit

#Check each pod's IP
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-246m2   1/1     Running   0          68m   192.168.58.205   k8s-node02   <none>           <none>
my-dep-5b7868d854-7896m   1/1     Running   0          69m   192.168.58.203   k8s-node02   <none>           <none>
my-dep-5b7868d854-l78nb   1/1     Running   0          68m   192.168.58.204   k8s-node02   <none>           <none>

#Verify access
[root@k8s-master01 ~]# curl 192.168.58.205:80
111
[root@k8s-master01 ~]# curl 192.168.58.203
222
[root@k8s-master01 ~]# curl 192.168.58.204
333
1. Method 1: kubectl expose

Expose a port

#First delete any existing service named my-dep
kubectl delete service my-dep	#delete command

[root@k8s-master01 ~]# kubectl expose deploy my-dep --port=8000 --target-port=80
service/my-dep exposed
#expose: expose the deployment as a service
#deploy my-dep: the deployment named my-dep
#--port=8000: the service's port
#--target-port=80: the container port behind it

#List the services
[root@k8s-master01 ~]# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP    9h
my-dep       ClusterIP   10.96.2.99   <none>        8000/TCP   5m51s

Access

#requests are load balanced across the pods
[root@k8s-master01 ~]# curl 10.96.2.99:8000
222
[root@k8s-master01 ~]# curl 10.96.2.99:8000
222
[root@k8s-master01 ~]# curl 10.96.2.99:8000
111
[root@k8s-master01 ~]# curl 10.96.2.99:8000
222
[root@k8s-master01 ~]# curl 10.96.2.99:8000
333
2. Method 2: yaml

Check the labels

#yaml form
#delete the earlier service
[root@k8s-master01 ~]# kubectl delete service my-dep
#check the pod labels
[root@k8s-master01 ~]# kubectl get pod --show-labels
NAME                      READY   STATUS    RESTARTS   AGE   LABELS
my-dep-5b7868d854-246m2   1/1     Running   0          83m   app=my-dep,pod-template-hash=5b7868d854
my-dep-5b7868d854-7896m   1/1     Running   0          84m   app=my-dep,pod-template-hash=5b7868d854
my-dep-5b7868d854-l78nb   1/1     Running   0          84m   app=my-dep,pod-template-hash=5b7868d854

Write the yaml and apply it

vi svc_pod.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  selector:
    app: my-dep
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80

[root@k8s-master01 ~]# kubectl apply -f svc_pod.yaml
service/my-dep created
[root@k8s-master01 ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    10h
my-dep       ClusterIP   10.96.142.202   <none>        8000/TCP   6s

Access

[root@k8s-master01 ~]# curl 10.96.142.202:8000
222
[root@k8s-master01 ~]# curl 10.96.142.202:8000
111
[root@k8s-master01 ~]# curl 10.96.142.202:8000
222
[root@k8s-master01 ~]# curl 10.96.142.202:8000
111
[root@k8s-master01 ~]# curl 10.96.142.202:8000
222
[root@k8s-master01 ~]# curl 10.96.142.202:8000
111
[root@k8s-master01 ~]# curl 10.96.142.202:8000
111
[root@k8s-master01 ~]# curl 10.96.142.202:8000
333

Access it from a new deployment

#create a tomcat
[root@k8s-master01 ~]# kubectl create deploy my-tomcat --image=tomcat

#from inside any pod, the service ip:port gives load-balanced access
[root@k8s-master01 ~]# kubectl exec -it my-tomcat-b4c9b6565-58x77 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl 10.96.142.202:8000
333
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl 10.96.142.202:8000
111
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl 10.96.142.202:8000
111
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl 10.96.142.202:8000
222

#Access by DNS name (substitute your own)
<service-name>.<namespace>.svc	(full form: <service-name>.<namespace>.svc.cluster.local)
my-dep.default.svc

root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl my-dep.default.svc:8000
222
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl my-dep.default.svc:8000
333
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl my-dep.default.svc:8000
111

Both methods above give load-balanced access inside the cluster via the service's ip:port.

3. ClusterIP

A cluster-internal IP, reachable only inside the cluster; same effect as above.

# equivalent to omitting --type entirely
kubectl expose deployment my-dep --port=8000 --target-port=80 --type=ClusterIP

#yaml form
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: ClusterIP

Service discovery
Scale the pods down to two, and requests only ever return two of the contents;
for example, kill the pod serving 111 and you only get 222 or 333.
Scale back up to three and requests return three contents again;
but since the new pod's nginx index page was never customized, the responses are now 222, 333, or the default nginx welcome page.

4. NodePort

Opens a port on every node, so the service can be reached from outside via <node-ip>:<port>.

The NodePort is picked at random from the 30000-32767 range.

#Delete the svc created above
[root@k8s-master01 ~]# kubectl delete svc my-dep
[root@k8s-master01 ~]# kubectl expose deploy my-dep --port=8000 --target-port=80 --type=NodePort
service/my-dep exposed
#expose: expose the deployment as a service
#deploy my-dep: the deployment named my-dep
#--port=8000: the service's port
#--target-port=80: the container port behind it
#--type=NodePort: also open a node port so the service is reachable from outside the cluster

[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          10h
my-dep       NodePort    10.96.181.157   <none>        8000:31216/TCP   2m17s
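
The 31216 above was chosen at random. If you need a fixed port in the 30000-32767 range, one option is to patch it explicitly (30080 below is just an example value):

kubectl patch svc my-dep --type='json' -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":30080}]'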

Access

#load-balanced access from inside the cluster
[root@k8s-master01 ~]# curl 10.96.181.157:8000
222
[root@k8s-master01 ~]# curl 10.96.181.157:8000
333
[root@k8s-master01 ~]# curl 10.96.181.157:8000
111

#Access from inside the tomcat pod
#by IP
[root@k8s-master01 ~]# kubectl exec -it my-tomcat-b4c9b6565-58x77 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl 10.96.181.157:8000
111
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl 10.96.181.157:8000
333
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl 10.96.181.157:8000
222

#by DNS name
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl my-dep.default.svc:8000
222
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl my-dep.default.svc:8000
111
root@my-tomcat-b4c9b6565-58x77:/usr/local/tomcat# curl my-dep.default.svc:8000
333

Browser access: from outside the cluster, open http://<any-node-IP>:31216.

The service DNS name only resolves inside pods; on the node itself it fails, for ClusterIP and NodePort services alike:

[root@k8s-master01 ~]# curl my-dep.default.svc:8000
curl: (6) Could not resolve host: my-dep.default.svc; Unknown error

(6) Ingress (ing)

The unified gateway entry point in front of Services.

Official docs:
https://kubernetes.github.io/ingress-nginx/

It is built on nginx.

Install
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml

#Change the image
vi deploy.yaml
#set the image to the value below:
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0
#be careful to change only that line; when done, save with :wq
spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0
          
#Apply
kubectl apply -f deploy.yaml
# Check the result
kubectl get pod,svc -n ingress-nginx
[root@k8s-master01 ~]# kubectl get pod,svc -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-ztq4l        0/1     Completed   0          9m44s
pod/ingress-nginx-admission-patch-2tx2c         0/1     Completed   2          9m44s
pod/ingress-nginx-controller-65bf56f7fc-hqct7   1/1     Running     0          9m44s

NAME                                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.96.167.46   <none>        80:32446/TCP,443:32288/TCP   9m44s
service/ingress-nginx-controller-admission   ClusterIP   10.96.248.86   <none>        443/TCP                      9m44s

If the download fails, use the following file instead

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
      - v1beta1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
Usage

Any node IP works:
https://10.0.0.10:32288
http://10.0.0.10:32446
(the ports are the NodePorts shown above; both return 404 by default)

Test environment

Apply the following yaml to prepare a test environment

vi test.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000

kubectl apply -f test.yaml	#apply

[root@k8s-master01 ~]# kubectl apply -f test.yaml
deployment.apps/hello-server created
deployment.apps/nginx-demo created
service/nginx-demo created
service/hello-server created
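To confirm the test workloads are up before adding Ingress rules (a sketch):

kubectl get deploy hello-server nginx-demo	#both should reach 2/2 ready
kubectl get svc hello-server nginx-demo	#both are ClusterIP Services on port 8000, cluster-internal only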
1、Domain-based access
vi ingress-rule.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.server.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "hello.demo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

[root@k8s-master01 ~]# kubectl apply -f ingress-rule.yaml	#apply
ingress.networking.k8s.io/ingress-host-bar created
[root@k8s-master01 ~]# kubectl get ingress	#view the rules
NAME               CLASS   HOSTS                             ADDRESS     PORTS   AGE
ingress-host-bar   nginx   hello.server.com,hello.demo.com   10.0.0.21   80      116s

Configure the local hosts file to map hello.server.com and hello.demo.com to the ingress ADDRESS shown above, then access both domains through the controller's http NodePort.
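For example, on the client machine (illustrative; adjust the IP and NodePort to your environment):

cat >> /etc/hosts << EOF
10.0.0.21 hello.server.com
10.0.0.21 hello.demo.com
EOF
curl http://hello.server.com:32446/	#answered by hello-server
curl http://hello.demo.com:32446/	#answered by the nginx-demo pods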

2、Domain-based access, variant two (changing the path)

By default, requests are served from the root path.

#view the ingress
[root@k8s-master01 ~]# kubectl get ingress
NAME               CLASS   HOSTS                             ADDRESS     PORTS   AGE
ingress-host-bar   nginx   hello.server.com,hello.demo.com   10.0.0.21   80      5h43m
#edit in place
[root@k8s-master01 ~]# kubectl edit ing ingress-host-bar
  - host: hello.demo.com
    http:
      paths:
      - backend:
          service:
            name: nginx-demo
            port:
              number: 8000
        path: /nginx	#changed / to /nginx
#update the rule file as well
[root@k8s-master01 ~]# vi ingress-rule.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.server.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "hello.demo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"	# 把请求会转给下面的服务,下面的服务一定要能处理这个路径,不能处理就是404
        backend:
          service:
            name: nginx-demo	## e.g. a Java app would use path rewriting to strip the /nginx prefix
            port:
              number: 8000
#apply
[root@k8s-master01 ~]# kubectl apply -f ingress-rule.yaml
ingress.networking.k8s.io/ingress-host-bar configured

Access

hello.demo.com/nginx now returns 404, because nginx has no content at that path yet.

#exec into the nginx pod
[root@k8s-master01 ~]# kubectl exec -it nginx-demo-7d56b74b84-8mbl9 /bin/bash
#create an nginx directory
root@nginx-demo-7d56b74b84-8mbl9:/usr/share/nginx/html# mkdir nginx
#write 111 into index.html
root@nginx-demo-7d56b74b84-8mbl9:/usr/share/nginx/html# echo "111" > nginx/index.html

Access again: hello.demo.com/nginx now returns 111.

3、Path rewriting
#building on the rule above; the lines below are the additions
annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    path: "/nginx(/|$)(.*)"
#in effect, the /nginx prefix is cut off and only the captured remainder reaches the backend
    
[root@k8s-master01 ~]# vi ingress-rule.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.server.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "hello.demo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

#apply
[root@k8s-master01 ~]# kubectl apply -f ingress-rule.yaml
ingress.networking.k8s.io/ingress-host-bar configured

Access

Because the /nginx prefix is stripped off, hello.demo.com/nginx now serves the nginx welcome page (the backend sees the root path).
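How the capture groups in /nginx(/|$)(.*) map onto the rewrite target /$2 (a sketch of the rule's behavior, not controller output):

#/nginx          -> $1=""   $2=""        -> rewritten to /
#/nginx/         -> $1="/"  $2=""        -> rewritten to /
#/nginx/a.html   -> $1="/"  $2="a.html"  -> rewritten to /a.html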

4、Rate limiting
#rule overview
name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"	#allow roughly 1 request per second
- pathType: Prefix	#prefix mode: any request whose path starts with the given path matches
- pathType: Exact	#exact mode: only the exact host plus path matches; other paths do not

vi ingress-rule-2.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.haha.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

#apply
[root@k8s-master01 ~]# kubectl apply -f ingress-rule-2.yaml
ingress.networking.k8s.io/ingress-limit-rate created

#view
[root@k8s-master01 ~]# kubectl get ing
NAME                 CLASS   HOSTS                             ADDRESS     PORTS   AGE
ingress-host-bar     nginx   hello.server.com,hello.demo.com   10.0.0.21   80      6h27m
ingress-limit-rate   nginx   hello.haha.com                    10.0.0.21   80      119s

Add hello.haha.com to the local hosts file (pointing at a node IP), then access it through the controller's http NodePort; refreshing faster than about once per second gets rejected by the rate limit.
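To exercise the limit from a shell (a sketch; assumes hello.haha.com resolves to a node IP via /etc/hosts):

for i in $(seq 1 5); do curl -s -o /dev/null -w "%{http_code}\n" http://hello.haha.com:32446/; done
#requests beyond the per-second allowance are answered with 503 by the nginx ingress controller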

(七)Storage abstraction

Environment preparation
1、All nodes
#install on every machine
yum -y install nfs-utils
2、Master node
#on the NFS server (the master here)
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
#exports the /nfs/data/ directory
#*	any host may mount /nfs/data/
#insecure	accept connections from non-privileged ports
#rw	read-write access
#sync	write changes to disk before replying
#no_root_squash	do not map remote root to an anonymous user

#create the directory
mkdir -p /nfs/data

#enable rpcbind (RPC port mapping) now and at boot
systemctl enable rpcbind --now

#enable the NFS server now and at boot
systemctl enable nfs-server --now

#make the export configuration take effect
exportfs -r

#verify that /nfs/data is exported
[root@k8s-master01 ~]# exportfs
/nfs/data     	<world>
3、Worker nodes

Run the following on every worker node.

#list which remote directories can be mounted
showmount -e 10.0.0.10	#use the NFS server's IP
[root@k8s-node01 ~]# showmount -e 10.0.0.10
Export list for 10.0.0.10:
/nfs/data *	#this directory can be mounted


#mount the shared directory on the NFS server to a local path
mkdir -p /nfs/data	#the local path may be named differently

#mount over the NFS network file system
#10.0.0.10 identifies the server to mount from
#the first /nfs/data is the remote exported directory
#the second /nfs/data is the local directory
mount -t nfs 10.0.0.10:/nfs/data /nfs/data
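To make the mount survive reboots (an optional sketch):

echo "10.0.0.10:/nfs/data /nfs/data nfs defaults 0 0" >> /etc/fstab
mount -a	#confirm the fstab entry mounts cleanly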

# write a test file
[root@k8s-master01 ~]# echo "hello nfs server" > /nfs/data/test.txt
[root@k8s-master01 ~]# cat /nfs/data/test.txt
hello nfs server

#check on node01
[root@k8s-node01 ~]# cat /nfs/data/test.txt
hello nfs server

#check on node02
[root@k8s-node02 ~]# cat /nfs/data/test.txt
hello nfs server

#append a line on node01
[root@k8s-node01 ~]# echo "111" >> /nfs/data/test.txt
[root@k8s-node01 ~]# cat /nfs/data/test.txt
hello nfs server
111

#check on the master
[root@k8s-master01 ~]# cat /nfs/data/test.txt
hello nfs server
111

#check on node02
[root@k8s-node02 ~]# cat /nfs/data/test.txt
hello nfs server
111
4、Mounting data the native way
#mount the containers' /usr/share/nginx/html onto /nfs/data/nginx-pv on the NFS server

vi deploy_html.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          nfs:
            server: 10.0.0.10
            path: /nfs/data/nginx-pv

#create the target directory first
[root@k8s-master01 ~]# cd /nfs/data/
[root@k8s-master01 data]# mkdir nginx-pv

#apply
[root@k8s-master01 ~]# kubectl apply -f deploy_html.yaml 
deployment.apps/nginx-pv-demo created
#view; wait until the pods are Running
[root@k8s-master01 ~]# kubectl get pod
NAME                             READY   STATUS    RESTARTS   AGE
hello-server-6cbb679d85-46sgx    1/1     Running   1          24h
hello-server-6cbb679d85-sfvm4    1/1     Running   1          24h
my-dep-5b7868d854-8nhm6          1/1     Running   2          28h
my-dep-5b7868d854-q8z4r          1/1     Running   2          28h
my-dep-5b7868d854-s9vrc          1/1     Running   2          28h
my-tomcat-b4c9b6565-4xhwh        1/1     Running   2          28h
nginx-demo-7d56b74b84-4fd9z      1/1     Running   1          24h
nginx-demo-7d56b74b84-8mbl9      1/1     Running   1          24h
nginx-pv-demo-7bc5cc5ff4-74rj9   1/1     Running   0          41s
nginx-pv-demo-7bc5cc5ff4-gc47n   1/1     Running   0          41s
#test
[root@k8s-master01 ~]# echo "hello" > /nfs/data/nginx-pv/index.html
[root@k8s-master01 ~]# cat /nfs/data/nginx-pv/index.html 
hello

#exec into the first nginx pod and check
[root@k8s-master01 ~]# kubectl exec -it nginx-pv-demo-7bc5cc5ff4-74rj9 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-pv-demo-7bc5cc5ff4-74rj9:/# cat /usr/share/nginx/html/index.html 
hello
root@nginx-pv-demo-7bc5cc5ff4-74rj9:/# exit	#quit the pod shell
exit

#exec into the second nginx pod and check
[root@k8s-master01 ~]# kubectl exec -it nginx-pv-demo-7bc5cc5ff4-gc47n /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-pv-demo-7bc5cc5ff4-gc47n:/# cat /usr/share/nginx/html/index.html 
hello

With this approach, even after you deliberately delete the Pod, the data still remains on the NFS server.

PV&PVC

PV:持久卷(Persistent Volume),将应用需要持久化的数据保存到指定位置 PVC:持久卷申明(Persistent
Volume Claim),申明需要使用的持久卷规格

1、Create a PV pool

Static provisioning

#create three empty directories on the NFS server
mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03

Create the PVs; change the IP to your NFS server's IP.

vi pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m	#any name works
spec:
  capacity:	#declared capacity
    storage: 10M	#only 10M here
  accessModes:
    - ReadWriteMany	#readable and writable from multiple nodes
  storageClassName: nfs	#an arbitrary class name
  nfs:
    path: /nfs/data/01	#one of the directories created above
    server: 10.0.0.10
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 10.0.0.10
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 10.0.0.10

#apply
[root@k8s-master01 ~]# kubectl apply -f pv.yaml 
persistentvolume/pv01-10m created
persistentvolume/pv02-1gi created
persistentvolume/pv03-3gi created

#view the resources; at this point every status is Available
[root@k8s-master01 ~]# kubectl get persistentvolume
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available           nfs                     59s
pv02-1gi   1Gi        RWX            Retain           Available           nfs                     59s
pv03-3gi   3Gi        RWX            Retain           Available           nfs                     59s
2、Create and bind a PVC

Create the PVC

vi pvc.yaml
kind: PersistentVolumeClaim	#the claim ("application form")
apiVersion: v1
metadata:
  name: nginx-pvc	#any name works
spec:
  accessModes:
    - ReadWriteMany	#request multi-node read-write access
  resources:
    requests:
      storage: 200Mi	#request 200Mi of space
  storageClassName: nfs	#must match the class name used above

#apply
[root@k8s-master01 ~]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/nginx-pvc created

#view
[root@k8s-master01 ~]# kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pv03-3gi   3Gi        RWX            nfs            2m2s

#view the PVs: one is now Bound — pv02-1gi, the smallest PV able to satisfy the 200Mi request (pv01's 10M is too small)
[root@k8s-master01 ~]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available                       nfs                     6m58s
pv02-1gi   1Gi        RWX            Retain           Bound       default/nginx-pvc   nfs                     6m58s
pv03-3gi   3Gi        RWX            Retain           Available                       nfs                     6m58s
#delete the claim, then view again: its PV's status becomes Released
[root@k8s-master01 ~]# kubectl delete -f pvc.yaml 
persistentvolumeclaim "nginx-pvc" deleted
[root@k8s-master01 ~]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available                       nfs                     9m17s
pv02-1gi   1Gi        RWX            Retain           Released    default/nginx-pvc   nfs                     9m17s
pv03-3gi   3Gi        RWX            Retain           Available                       nfs                     9m17s


#apply again and view: the Released PV has not been scrubbed yet and cannot be reused, so the new claim binds a different PV
[root@k8s-master01 ~]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/nginx-pvc created
[root@k8s-master01 ~]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available                       nfs                     10m
pv02-1gi   1Gi        RWX            Retain           Released    default/nginx-pvc   nfs                     10m
pv03-3gi   3Gi        RWX            Retain           Bound       default/nginx-pvc   nfs                     10m
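If you do want to reuse a Released PV instead of binding a new one, a common approach (a sketch, not part of the original walkthrough) is to clear its claimRef so it returns to Available:

kubectl patch pv pv02-1gi --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'
kubectl get pv pv02-1gi	#status should now be Available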

Create a Pod that binds the PVC

#this declares that the mount should use the storage behind the claim created above
vi deploy_pvc_pod.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          persistentVolumeClaim:	#use a claim
            claimName: nginx-pvc	#the claim's name
#apply
[root@k8s-master01 ~]# kubectl apply -f deploy_pvc_pod.yaml 
deployment.apps/nginx-deploy-pvc created
#view; the newly created pods are the nginx-deploy-pvc ones
[root@k8s-master01 ~]# kubectl get pod
NAME                                READY   STATUS              RESTARTS   AGE
hello-server-6cbb679d85-46sgx       1/1     Running             1          24h
hello-server-6cbb679d85-sfvm4       1/1     Running             1          24h
my-dep-5b7868d854-8nhm6             1/1     Running             2          28h
my-dep-5b7868d854-q8z4r             1/1     Running             2          28h
my-dep-5b7868d854-s9vrc             1/1     Running             2          28h
my-tomcat-b4c9b6565-4xhwh           1/1     Running             2          28h
nginx-demo-7d56b74b84-4fd9z         1/1     Running             1          24h
nginx-demo-7d56b74b84-8mbl9         1/1     Running             1          24h
nginx-deploy-pvc-79fc8558c7-7vcd5   0/1     ContainerCreating   0          51s
nginx-deploy-pvc-79fc8558c7-jngj4   0/1     ContainerCreating   0          51s
nginx-pv-demo-7bc5cc5ff4-74rj9      1/1     Running             0          53m
nginx-pv-demo-7bc5cc5ff4-gc47n      1/1     Running             0          53m

#I hit an error here (Events from kubectl describe pod)
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    47s                default-scheduler  Successfully assigned default/nginx-deploy-pvc-79fc8558c7-svvz8 to k8s-node01
  Warning  FailedMount  14s (x7 over 47s)  kubelet            MountVolume.SetUp failed for volume "pv03-3gi" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 10.0.0.10:/nfs/data/03 /var/lib/kubelet/pods/6b1afd9b-efb2-48ae-afb1-2e082d2a700e/volumes/kubernetes.io~nfs/pv03-3gi
Output: mount.nfs: mounting 10.0.0.10:/nfs/data/03 failed, reason given by server: No such file or directory

#add the missing directory; I apparently forgot to create it earlier
[root@k8s-master01 ~]# cd /nfs/data/
[root@k8s-master01 data]# ls
nginx-pv  test.txt
[root@k8s-master01 data]# mkdir 03

#delete the previous deployment
[root@k8s-master01 ~]# kubectl delete deploy nginx-deploy-pvc
deployment.apps "nginx-deploy-pvc" deleted

#re-apply and it works
[root@k8s-master01 ~]# kubectl apply -f deploy_pvc_pod.yaml
#test
[root@k8s-master01 ~]# cd /nfs/data/03/
[root@k8s-master01 03]# echo "111" > index.html

#check from inside a pod
[root@k8s-master01 data]# kubectl exec -it nginx-deploy-pvc-79fc8558c7-lcp7x /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-deploy-pvc-79fc8558c7-lcp7x:/# cd /usr/share/nginx/html/
root@nginx-deploy-pvc-79fc8558c7-lcp7x:/usr/share/nginx/html# cat /usr/share/nginx/html/index.html 
111

If you download something larger than 3G into /nfs/data/03, exceeding the declared capacity will cause errors.

#here you can see the declared size is 3Gi
[root@k8s-master01 03]# kubectl get pv,pvc
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pv01-10m   10M        RWX            Retain           Available                       nfs                     84m
persistentvolume/pv02-1gi   1Gi        RWX            Retain           Released    default/nginx-pvc   nfs                     84m
persistentvolume/pv03-3gi   3Gi        RWX            Retain           Bound       default/nginx-pvc   nfs                     84m

NAME                              STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nginx-pvc   Bound    pv03-3gi   3Gi        RWX            nfs            74m
Dynamic provisioning of the PV pool: everything above was created in advance; with dynamic provisioning, a volume of the size you request is created and bound automatically.
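For contrast, a minimal sketch of what dynamic provisioning looks like (assumes an NFS provisioner such as nfs-subdir-external-provisioner has been installed; the class name is illustrative and the provisioner string must match your installation):

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic	#illustrative name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner	#must match the installed provisioner
EOF
#a PVC whose storageClassName is nfs-dynamic then gets a matching PV created and bound automatically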
ConfigMap (cm for short)

Extracts application configuration out of the image, and updates can propagate automatically.

PV and PVC mount directories; to mount configuration files, use a ConfigMap.

Redis example
vi redis.conf
#tells redis to persist its data (append-only mode)
appendonly yes
1、Turn the config file just written into a ConfigMap
# create the configuration; redis.conf is stored in the cluster's etcd
kubectl create cm redis-conf --from-file=redis.conf
#this wraps the file into what k8s regards as a ConfigMap
#redis-conf is just the name chosen here; any other name works

#view
[root@k8s-master01 ~]# kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      2d4h
redis-conf         1      4s
#the ConfigMap's actual storage location is the cluster's etcd database

#the local file can be deleted now; it only served as the source for the ConfigMap
[root@k8s-master01 ~]# rm -rf redis.conf 

#inspect the ConfigMap
[root@k8s-master01 ~]# kubectl get cm redis-conf -o yaml
apiVersion: v1
data:
  redis.conf: |
    appendonly yes
kind: ConfigMap
metadata:
  creationTimestamp: "2023-05-19T11:53:51Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:redis.conf: {}
    manager: kubectl-create
    operation: Update
    time: "2023-05-19T11:53:51Z"
  name: redis-conf
  namespace: default
  resourceVersion: "140987"
  uid: 7b3dfe56-2891-4c28-bb53-e173b6261075


#the same object trimmed to its useful fields
apiVersion: v1
data:	#data holds the real content, as key/value pairs
  redis.conf: |		#the key is the file name given above; the value is the file's content
    appendonly yes
kind: ConfigMap	#the resource type
metadata:
  name: redis-conf	#the name chosen above
  namespace: default
2、Create the Pod
vi redis.yaml

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:	#start with a custom command
      - redis-server
      - "/redis-master/redis.conf"  #path inside the redis container
    ports:
    - containerPort: 6379
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: redis-conf
        items:
        - key: redis.conf
          path: redis.conf


#apply
[root@k8s-master01 ~]# kubectl apply -f redis.yaml 
pod/redis created


#view
[root@k8s-master01 ~]# kubectl get pod |grep redis
redis                               1/1     Running   0          29s

#inspect the mounted file from outside; add the namespace explicitly, otherwise default is assumed
[root@k8s-master01 ~]# kubectl exec -it redis -n default cat /redis-master/redis.conf
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
appendonly yes	#the content we wrote earlier is there

#since it lives in the default namespace, -n default can also be omitted
[root@k8s-master01 ~]# kubectl exec -it redis cat /redis-master/redis.conf
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
appendonly yes
3、Check the default configuration
kubectl exec -it redis -- redis-cli

127.0.0.1:6379> CONFIG GET appendonly
127.0.0.1:6379> CONFIG GET requirepass
4、Modify the ConfigMap

Edit

#view
[root@k8s-master01 ~]# kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      2d6h
redis-conf         1      118m

#edit the cm
[root@k8s-master01 ~]# kubectl edit cm redis-conf
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  redis.conf: |
    appendonly yes
    requirepass 123456	#add this line
kind: ConfigMap
metadata:
  creationTimestamp: "2023-05-19T11:53:51Z"
  name: redis-conf
  namespace: default
  resourceVersion: "140987"
  uid: 7b3dfe56-2891-4c28-bb53-e173b6261075

#check inside the pod; syncing the change in takes tens of seconds
[root@k8s-master01 ~]# kubectl exec -it redis -n default cat /redis-master/redis.conf
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
appendonly yes
requirepass 123456
5、Check whether the configuration took effect
[root@k8s-master01 ~]# kubectl exec -it redis -- redis-cli
127.0.0.1:6379> CONFIG GET appendonly
1) "appendonly"
2) "yes"

#redis must be restarted before this takes effect; if it were managed by a Deployment, deleting the pod so a new one is created would also work
127.0.0.1:6379> CONFIG GET requirepass
1) "requirepass"
2) ""

Checking whether the mounted file content has been updated:
after the ConfigMap is modified, the config file inside the Pod changes accordingly.

The live config value, however, did not change, because the Pod must be restarted to pick up updated values from the associated ConfigMap.
Reason: the middleware running in our Pod has no hot-reload capability of its own.

Secret

The Secret object type is used to hold sensitive information such as passwords, OAuth tokens, and SSH keys. Putting such information in a Secret is safer and more flexible than putting it in a Pod definition or a container image.

#e.g. pulling a private image fails without credentials, so a pull secret must be provided
##command format
kubectl create secret docker-registry regcred \
  --docker-server=<your registry server> \
  --docker-username=<your username> \
  --docker-password=<your password> \
  --docker-email=<your email>

#after creation, an extra secret shows up in the list
#e.g. naming it xxx-docker means xxx-docker is what you will see
[root@k8s-master01 ~]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-5fmhm   kubernetes.io/service-account-token   3      2d6h

#view the secret's contents
kubectl get secret xxx-docker -o yaml
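To see what is actually stored (a sketch; xxx-docker is the illustrative name above):

kubectl get secret xxx-docker -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
#the registry credentials come back as plain JSON, which is why a Secret alone is not strong protection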
vi mypod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: xxx/xxxnginx:v1.0
  imagePullSecrets:	#reference the pull secret here
  - name: xxx-docker	#the secret's name is the one created above

#apply
kubectl apply -f mypod.yaml

#view
kubectl get pod

六、Summary

Daemon Sets		#daemon sets
Deployments		#stateless replica sets, used to deploy stateless services
Stateful Sets	#stateful replica sets; e.g. mysql and redis are stateful

Whichever one you use, Pods get created, and Pods can reach one another directly because every Pod has its own IP.
But accessing Pods by IP is fragile: when a Pod restarts, its IP may change.
So a Service is deployed on top, load-balancing across the Pods selected by its labels.
On top of Services sits Ingress, which routes to Services by domain name, can rate-limit, and can trim paths, for example stripping the prefix in front of the rest of the path.
To mount a directory, use a PVC: request some space and a PV volume is provided and bound for you.
To mount configuration files, use a ConfigMap: edits made outside are synced into the mounted files inside the containers.
For credentials, use a Secret; note it is only base64-encoded, which is not very secure and easily decoded.
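On that last point, base64 is trivially reversible, for example:

echo -n "123456" | base64	#-> MTIzNDU2
echo -n "MTIzNDU2" | base64 -d	#-> 123456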