Kubernetes (Part 1): Getting Started with k8s Cluster Setup

Based on the course by Li Weimin (Qianfeng Education).

Introduction

Kubernetes was created by Google in 2014 and is the open-source version of Borg, the large-scale container management system Google has run internally for more than a decade. It is an open-source container cluster management platform that automates deployment, scaling, and maintenance of containerized applications. Its goal is to foster an ecosystem of components and tools that ease the burden of running applications in public or private clouds.


Uniform environment configuration


Note: build a VMware image and clone it, to avoid the pain of installing every machine by hand.

Disable swap (kubelet refuses to start with swap enabled by default)

swapoff -a

Prevent swap from being enabled again at boot

# In /etc/fstab, comment out the swap line, e.g.:
# /dev/mapper/ubuntu--vg-swap_1 none            swap    sw              0       0
vi /etc/fstab
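
If you prefer a one-liner instead of editing by hand (a sketch; it comments out every line mentioning swap, so check the file afterwards):

# Comment out the swap entry in /etc/fstab
sudo sed -i '/swap/ s/^/#/' /etc/fstab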

Disable the firewall

ufw disable

Configure DNS

# Uncomment the DNS line and add a DNS server such as 114.114.114.114, then reboot

vi /etc/systemd/resolved.conf
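
A sketch of the relevant section after editing, assuming the stock layout of /etc/systemd/resolved.conf on Ubuntu 18.04:

[Resolve]
DNS=114.114.114.114

# Apply without a full reboot (optional)
systemctl restart systemd-resolved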

Install Docker

# Update the package index
sudo apt-get update
# Install required dependencies
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Add the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Add the repository
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Update the package index again
sudo apt-get -y update
# Install Docker CE
sudo apt-get -y install docker-ce

Verify

docker version
Client: Docker Engine - Community
 Version:           19.03.4
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        9013bf583a
 Built:             Fri Oct 18 15:53:51 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.4
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       9013bf583a
  Built:            Fri Oct 18 15:52:23 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Configure a registry mirror (accelerator)

Add the following to /etc/docker/daemon.json:

{
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ]
}
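
Optionally, and this goes beyond the original steps, the same file can also set Docker's cgroup driver to systemd, which silences the "cgroupfs" warning kubeadm prints later:

{
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}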

Verify that the mirror is active

sudo systemctl restart docker
docker info
...
# The following lines indicate the mirror is configured successfully
Registry Mirrors:
 https://registry.docker-cn.com/
...

Change the hostname

Hostnames must be unique within the same LAN, so each machine needs to be renamed. The steps below are for Ubuntu 18.04; on 16.04 or earlier you can simply edit the name in /etc/hostname.

Check the current hostname

hostnamectl

Set a new hostname

# Set it with hostnamectl; here kubernetes-master is the new hostname
hostnamectl set-hostname kubernetes-master

Edit cloud.cfg

If the cloud-init package is installed, the cloud.cfg file also needs to be changed. This package is usually installed by default to handle cloud instance initialization.

# Only if this file exists
vi /etc/cloud/cloud.cfg

# This setting defaults to false; change it to true
preserve_hostname: true

Verify

root@ubuntu:~# hostnamectl
   Static hostname: kubernetes-master
Transient hostname: ubuntu
         Icon name: computer-vm
           Chassis: vm
        Machine ID: e10f7dfb5ddbed0998fef8705dc11573
           Boot ID: 75fe751b96b6495b96ca92dd9379efef
    Virtualization: vmware
  Operating System: Ubuntu 18.04.2 LTS
            Kernel: Linux 4.4.0-142-generic
      Architecture: x86-64
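
The kubeadm output later warns that the hostname cannot be resolved ("lookup kubernetes-master ... no such host"). A simple workaround, which is an addition to the original steps, is to map all nodes in /etc/hosts on every machine, using the IPs from this walkthrough:

# Append to /etc/hosts on every node
192.168.211.110 kubernetes-master
192.168.211.121 kubernetes-node-01
192.168.211.122 kubernetes-node-02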

Install the three required Kubernetes tools: kubeadm, kubelet, kubectl

Configure the package repository

# Install system tools
apt-get update && apt-get install -y apt-transport-https
# Add the GPG key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
# Add the repository. Note: the Ubuntu 18.04 codename is bionic, but the Aliyun mirror does not provide it yet, so we reuse the 16.04 (xenial) repo
cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

Install kubeadm, kubelet, and kubectl

# Install
apt-get update  
apt-get install -y kubelet kubeadm kubectl

# Enable kubelet at boot and start it now
systemctl enable kubelet && systemctl start kubelet

  • kubeadm: initializes the Kubernetes cluster

  • kubectl: the Kubernetes command-line tool, used to deploy and manage applications, inspect resources, and create, delete, and update components

  • kubelet: responsible for starting Pods and containers

Set the time zone and synchronize the clock

dpkg-reconfigure tzdata

# Choose Asia/Shanghai

# Install ntpdate
apt-get install ntpdate

# Synchronize the system clock with a network time server
ntpdate cn.pool.ntp.org

# Write the system time to the hardware clock
hwclock --systohc

# Confirm the time
date

Reboot and shut down

# Remember: reboot first, then shut down
reboot
shutdown now



Create the Kubernetes master node

Configure a static IP

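A rough sketch of setting a static IP with netplan on Ubuntu 18.04 (the interface name ens33 and the gateway 192.168.211.2 are assumptions; adjust them to your VMware network, and use .121/.122 for the two worker nodes):

# /etc/netplan/50-cloud-init.yaml (the file name may differ)
network:
  version: 2
  ethernets:
    ens33:
      addresses: [192.168.211.110/24]
      gateway4: 192.168.211.2
      nameservers:
        addresses: [114.114.114.114]

# Apply the configuration
netplan apply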

Create and edit the kubeadm configuration (kubeadm.yml)
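
A baseline file can be generated first and then edited (a sketch; the directory /usr/local/kubernetes/cluster is taken from the shell prompt shown later in this post):

mkdir -p /usr/local/kubernetes/cluster
cd /usr/local/kubernetes/cluster
# Print the default init configuration into kubeadm.yml, then edit it as shown below
kubeadm config print init-defaults > kubeadm.yml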

# Edit the configuration so it contains the following
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Change this to the master node's IP
  advertiseAddress: 192.168.211.110
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kubernetes-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
# Google's registry is not reachable from mainland China, so use the Aliyun mirror
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# Set the version to match the installed kubeadm (v1.16.3 in this walkthrough)
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  # Use Calico's default Pod network CIDR
  podSubnet: "192.168.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
# Enable IPVS mode for kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
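
IPVS mode relies on a few kernel modules. Loading them explicitly is not part of the original steps, but a minimal sketch looks like this (on older kernels the conntrack module is named nf_conntrack_ipv4):

# Load the IPVS-related kernel modules on every node
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
# Verify
lsmod | grep ip_vs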

List and pull the required images

# List the required images
kubeadm config images list --config kubeadm.yml
# Pull the images
kubeadm config images pull --config kubeadm.yml

registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.3 # API server: the front door for all cluster operations
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.16.3 # controllers, e.g. automatically recreating failed Pods
registry.aliyuncs.com/google_containers/kube-scheduler:v1.16.3 # decides which node each Pod runs on
registry.aliyuncs.com/google_containers/kube-proxy:v1.16.3 # network proxy
registry.aliyuncs.com/google_containers/pause:3.1 
registry.aliyuncs.com/google_containers/etcd:3.3.15-0 # cluster key-value store
registry.aliyuncs.com/google_containers/coredns:1.6.2 # cluster DNS

Initialize the master node

kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log

The output looks like this:
##############################################################
root@kubernetes-master:/usr/local/kubernetes/cluster# kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
	[WARNING Hostname]: hostname "kubernetes-master" could not be reached
	[WARNING Hostname]: hostname "kubernetes-master": lookup kubernetes-master on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.211.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.211.110 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.211.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 63.013185 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
73559de6abcadd6ea91b8b7ebd29c3be58b3ed441ebbb99c3a9709da8457a643
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.211.110:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e84858a746767cb660aa546a5966692865181ab52b43e781d421bcd552f85400 

#############################################################################################


# Then follow the instructions printed at the end of the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join the worker (slave) nodes to the cluster

Clone two virtual machines that already have kubeadm, kubelet, and kubectl installed, change their hostnames, and make sure swap and the firewall are disabled on both.

  • Run the following command on both node hosts to join them to the master:
kubeadm join 192.168.211.110:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e84858a746767cb660aa546a5966692865181ab52b43e781d421bcd552f85400 
	
######## On success the output looks like this #############
	[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
	[WARNING Hostname]: hostname "kubernetes-node-02" could not be reached
	[WARNING Hostname]: hostname "kubernetes-node-02": lookup kubernetes-node-02 on 8.8.8.8:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Notes:

  • token
    • it appears in the log printed when the master was installed
    • it can be printed with kubeadm token list
    • if the token has expired, create a new one with kubeadm token create (or use the one-step command shown after this list)
  • discovery-token-ca-cert-hash
    • the sha256 value appears in the log printed when the master was installed
    • it can be recomputed with openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
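
If you do not want to assemble the command by hand, a complete join command with a fresh token can be generated on the master (a standard kubeadm subcommand, not shown in the original post):

# Run on the master: prints a ready-to-use "kubeadm join ..." command
kubeadm token create --print-join-command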

Verify that the nodes joined

Back on the master:

root@kubernetes-master:/usr/local/kubernetes/cluster# kubectl get nodes

# Both nodes have joined
NAME                 STATUS     ROLES    AGE   VERSION
kubernetes-master    NotReady   master   74m   v1.16.3
kubernetes-node-01   NotReady   <none>   11m   v1.16.3
kubernetes-node-02   NotReady   <none>   11m   v1.16.3

Check Pod status

kubectl get pod -n kube-system -o wide

############# Output ###################

root@kubernetes-master:/usr/local/kubernetes/cluster# kubectl get pod -n kube-system -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP                NODE                 NOMINATED NODE   READINESS GATES
coredns-58cc8c89f4-gsl5c                    0/1     Pending   0          78m   <none>            <none>               <none>           <none>
coredns-58cc8c89f4-klblw                    0/1     Pending   0          78m   <none>            <none>               <none>           <none>
etcd-kubernetes-master                      1/1     Running   0          20m   192.168.211.110   kubernetes-master    <none>           <none>
kube-apiserver-kubernetes-master            1/1     Running   0          21m   192.168.211.110   kubernetes-master    <none>           <none>
kube-controller-manager-kubernetes-master   1/1     Running   0          20m   192.168.211.110   kubernetes-master    <none>           <none>
kube-proxy-kstvt                            1/1     Running   0          15m   192.168.211.122   kubernetes-node-02   <none>           <none>
kube-proxy-lj5pd                            1/1     Running   0          78m   192.168.211.110   kubernetes-master    <none>           <none>
kube-proxy-tvh8s                            1/1     Running   0          15m   192.168.211.121   kubernetes-node-01   <none>           <none>
kube-scheduler-kubernetes-master            1/1     Running   0          20m   192.168.211.110   kubernetes-master    <none>           <none>

Configure the Pod network

For detailed configuration options, see the Calico documentation (docs.projectcalico.org).

Install the Calico network plugin

# Run on the master
kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml

############### Output #####################
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

If you hit "not found"-style errors, it is probably a Calico version mismatch with your cluster; try a newer manifest version.

Check that the installation succeeded


# Run
watch kubectl get pods --all-namespaces
############# Output ##################

NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-74c9747c46-z9dsb    1/1     Running   0          93m
kube-system   calico-node-gdcd5                           1/1     Running   0          93m
kube-system   calico-node-tsnfm                           1/1     Running   0          93m
kube-system   calico-node-w4wvt                           1/1     Running   0          93m
kube-system   coredns-58cc8c89f4-gsl5c                    1/1     Running   0          3h23m
kube-system   coredns-58cc8c89f4-klblw                    1/1     Running   0          3h23m
kube-system   etcd-kubernetes-master                      1/1     Running   0          145m
kube-system   kube-apiserver-kubernetes-master            1/1     Running   0          146m
kube-system   kube-controller-manager-kubernetes-master   1/1     Running   0          146m
kube-system   kube-proxy-kstvt                            1/1     Running   0          141m
kube-system   kube-proxy-lj5pd                            1/1     Running   0          3h23m
kube-system   kube-proxy-tvh8s                            1/1     Running   0          140m
kube-system   kube-scheduler-kubernetes-master            1/1     Running   0          146m

# All Pods must reach Running; pulling the images can take a while

Your first Kubernetes container

Basic commands

# Check component status
kubectl get cs


# Output
NAME                 STATUS    MESSAGE             ERROR
# scheduler: assigns Pods to nodes
scheduler            Healthy   ok                  
# controller-manager: runs the controllers that bring workloads back to the desired state, e.g. after a node goes down
controller-manager   Healthy   ok                  
# etcd: the cluster's key-value store
etcd-0               Healthy   {"health":"true"} 

####################################################

# Check master status
kubectl cluster-info
# Output
# Master endpoint
Kubernetes master is running at https://192.168.211.110:6443
# DNS endpoint
KubeDNS is running at https://192.168.211.110:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

#####################################################

# Check node status
kubectl get nodes
# Output; STATUS Ready means the node is healthy
NAME                 STATUS   ROLES    AGE     VERSION
kubernetes-master    Ready    master   3h37m   v1.16.3
kubernetes-node-01   Ready    <none>   154m    v1.16.3
kubernetes-node-02   Ready    <none>   154m    v1.16.3

Run a container

# Use kubectl to create two Nginx Pods (the Pod is the smallest unit Kubernetes runs) listening on port 80
# --replicas=2 starts two replicas
kubectl run nginx --image=nginx --replicas=2 --port=80
# Output:
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created

Check the Pods

kubectl get pods

# Output; STATUS Running means the Pods are up
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5578584966-shvgb   1/1     Running   0          7m27s
nginx-5578584966-th5nm   1/1     Running   0          7m27s

List the deployments

kubectl get deployment

# Output
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           10m

Expose the service so users can reach it (similar to publishing a port in Docker)


# Expose nginx on port 80 with a Service of type LoadBalancer (load balancing)
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Output
service/nginx exposed

List the services

kubectl get service

# Output
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        3h48m
# nginx is published and exposed on NodePort 32603
nginx        LoadBalancer   10.111.238.197   <pending>     80:32603/TCP   84s

Show service details

kubectl describe service nginx

# Output:

Name:                     nginx
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     LoadBalancer
IP:                       10.111.238.197
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32603/TCP
Endpoints:                192.168.140.65:80,192.168.141.193:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Test that the Nginx deployment is reachable

Open in a browser, using a node IP and the NodePort shown above:

http://192.168.211.121:32603/


http://192.168.211.122:32603/

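You can also check from the shell (node IP and NodePort taken from the service output above):

# Should return HTTP/1.1 200 OK with the nginx Server header
curl -I http://192.168.211.121:32603/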

Stop the service

kubectl delete deployment nginx

# Output
deployment.apps "nginx" deleted
kubectl delete service nginx

# Output
service "nginx" deleted

Summary

Job: a containerized task that runs to completion; a CronJob wraps a Job with rules for running it at a specified time.
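
As an illustration (a minimal sketch, not covered in the original post; the name hello and the busybox image are arbitrary), a CronJob that runs a task every five minutes on Kubernetes 1.16 looks like this:

# cronjob.yml
apiVersion: batch/v1beta1        # CronJob is still batch/v1beta1 on Kubernetes 1.16
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"        # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo hello from Kubernetes"]
          restartPolicy: OnFailure

# Create it
kubectl apply -f cronjob.yml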
