Overview
Pod controllers and their purpose
A Pod controller, also known as a workload, is an intermediate layer that manages Pods and ensures they match the desired state. When a Pod fails, the controller tries to restart it according to the restart policy; if restarting does not help, it recreates the Pod.
Types of Pod controllers
1. ReplicaSet: creates the specified number of Pod replicas on the user's behalf, ensures the replica count matches the desired state, and supports scaling out and in.
ReplicaSet consists of three main components:
(1) the number of Pod replicas the user expects
(2) a label selector that determines which Pods it manages
(3) a Pod template used to create new Pods when the current count falls short
It helps users manage stateless Pod resources and keeps the count exactly at the user-defined target, but ReplicaSet is not the controller to use directly; use a Deployment instead.
2. Deployment: works on top of ReplicaSet to manage stateless applications, and is currently the best controller for that job. It supports rolling updates and rollbacks, and provides declarative configuration.
Together, ReplicaSet and Deployment have gradually taken over the role of the earlier ReplicationController (RC).
3. DaemonSet: ensures that every node in the cluster runs exactly one replica of a specific Pod; typically used for system-level background tasks, e.g. the log collectors of an ELK stack.
Characteristics: the service is stateless
the service must run as a daemon
4. StatefulSet: manages stateful applications.
5. Job: exits as soon as the task completes; the Pod is not restarted or rebuilt.
6. CronJob: periodic task control; the task does not need to keep running in the background.
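To see which of these workload resources a cluster exposes, kubectl api-resources can list them by API group; a quick check, assuming a working kubectl context:
kubectl api-resources --api-group=apps     #Deployment, ReplicaSet, DaemonSet, StatefulSet (and ControllerRevision)
kubectl api-resources --api-group=batch    #Job, CronJob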
The relationship between Pods and controllers
controllers: objects that manage and run Pods on the cluster; a controller is associated with its Pods through a label selector.
Controllers are how application operations such as scaling and upgrading are carried out on Pods.
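As a small illustration of that label association, the selector a controller uses can be replayed by hand (app=nginx is the label used by the Deployment example below):
kubectl get pods -l app=nginx --show-labels    #the Pods a controller selecting app=nginx would manage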
Deployment + ReplicaSet
Deploys stateless applications (no live data that must be persisted)
Creates and manages ReplicaSets, keeping the number of Pod replicas consistent with the desired value
Creates and deletes the Pods it manages; multiple Pod replicas are created and started in parallel, and the default upgrade strategy is a rolling update
Example
vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
kubectl create -f nginx-deployment.yaml
kubectl get pods,deploy,rs
View the controller's live configuration (e.g. with kubectl get deployment nginx-deployment -o yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-04-19T08:13:50Z"
  generation: 1
  labels:
    app: nginx    #label on the Deployment resource
  name: nginx-deployment
  namespace: default
  resourceVersion: "167208"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment
  uid: d9d3fef9-20d2-4196-95fb-0e21e65af24a
spec:
  progressDeadlineSeconds: 600
  replicas: 3    #desired number of Pods, defaults to 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%          #during an upgrade, at most 25% of the desired Pod count (rounded up; here 1) may be started above the desired count; an absolute number is also allowed
      maxUnavailable: 25%    #during an upgrade, at most 25% of the desired Pod count (rounded down; here 0) may be unavailable at any time; an absolute number is also allowed
    type: RollingUpdate      #rolling upgrade
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx    #label that associates the Pod replicas
    spec:
      containers:
      - image: nginx:1.15.4            #image name
        imagePullPolicy: IfNotPresent  #image pull policy
        name: nginx
        ports:
        - containerPort: 80            #port the container listens on
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always            #container restart policy
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
......
View the revision history
kubectl rollout history deployment/nginx-deployment
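Building on the history command, a short sketch of triggering a rolling update and rolling it back (nginx:1.16.0 is just an example target tag):
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.0    #trigger a rolling update
kubectl rollout status deployment/nginx-deployment                  #watch it complete
kubectl rollout undo deployment/nginx-deployment                    #roll back to the previous revision
kubectl rollout undo deployment/nginx-deployment --to-revision=1    #or roll back to a specific revision from the history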
StatefulSet
Deploys stateful applications (with live data that must be persisted)
Each Pod has a unique, stable name
Each Pod can have its own dedicated persistent storage (implemented with the PVC template volumeClaimTemplates)
Requires a Headless Service created in advance (a headless Service is a Service whose clusterIP is None); the StatefulSet references it by name in the serviceName field
Pods in the K8S cluster can resolve the PodIP of a Pod managed by the StatefulSet via <pod name>.<service name>.<namespace> (implemented with the Headless Service and CoreDNS)
Creating, rolling, scaling out and scaling in Pod replicas all happen in order (controlled by the spec.podManagementPolicy field; the default is OrderedReady, while Parallel manages Pods in parallel; see the fragment after this list)
Creation and scale-out proceed in ascending order (Pod ordinals 0 to n-1); rolling updates and scale-in proceed in descending order (Pod ordinals n-1 to 0)
Service types: 4 regular types (ClusterIP, NodePort, LoadBalancer, ExternalName) + 1 special type (Headless Service)
Common use case: databases
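As mentioned above, the ordering behavior is set by spec.podManagementPolicy; a minimal fragment (placed in the StatefulSet spec) that switches to parallel management:
spec:
  podManagementPolicy: Parallel    #default is OrderedReady; Parallel creates and deletes all Pods at once, with no ordering guarantees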
Official documentation:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
Official example:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
From the example above we can see that a StatefulSet is made up of the following parts:
●Headless Service: generates resolvable DNS records for the Pod identifiers.
●volumeClaimTemplates: provides each Pod with dedicated, fixed storage via statically or dynamically provisioned PVs.
●StatefulSet: manages the Pods themselves.
Why headless?
In a Deployment, Pods have no stable names; each name contains a random string and there is no ordering. A StatefulSet requires ordering, and every Pod's name must be fixed. When a node goes down, the rebuilt Pod keeps the same identifier; the Pod name is the unique identifier by which a Pod is recognized, so it must be stable and unique.
To keep the identifiers stable, a headless Service is needed so that DNS resolves directly to the Pods, and each Pod must be given a unique name.
Why volumeClaimTemplates?
Most stateful replica sets use persistent storage; in a distributed system the data on each node differs, so each node needs its own dedicated storage. In a Deployment, a volume defined in the Pod template is shared by all Pods, whereas in a StatefulSet no two Pods may share the same volume, so creating Pods from a plain Pod template does not fit. This is why volumeClaimTemplates exists: when the StatefulSet creates a Pod, it automatically generates a PVC for it, which requests and binds a PV, giving the Pod its own dedicated volume.
Application scenarios
●Highly dynamic: Pods drift to other nodes
●Frequent releases: iterate in small steps, ship a working version first and optimize it gradually afterwards
●Auto scaling: scale out to more replicas whenever a traffic spike (e.g. a big sale) hits
Service discovery in K8S is DNS-based: the cluster automatically associates a Service's name with its CLUSTER-IP, so services are discovered by the cluster automatically.
Plugins that implement DNS in K8S
●skyDNS: before Kubernetes 1.3
●kubeDNS: Kubernetes 1.3 to Kubernetes 1.11
●CoreDNS: Kubernetes 1.11 onwards
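To check which DNS plugin a cluster actually runs (CoreDNS keeps the legacy k8s-app=kube-dns label for compatibility):
kubectl get pods -n kube-system -l k8s-app=kube-dns    #the CoreDNS (or kube-dns) Pods
kubectl get svc -n kube-system kube-dns                #the cluster DNS Service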
Inspect the StatefulSet definition
kubectl explain statefulset
KIND: StatefulSet
VERSION: apps/v1
DESCRIPTION:
StatefulSet represents a set of pods with consistent identities. Identities
are defined as:
- Network: A single stable DNS and hostname.
- Storage: As many VolumeClaims as requested. The StatefulSet guarantees
that a given network identity will always map to the same storage identity.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <Object>
spec <Object>
Spec defines the desired identities of pods in this set.
status <Object>
Status is the current status of Pods in this StatefulSet. This data may be
out of date by some window of time.
kubectl explain statefulset.spec
KIND: StatefulSet
VERSION: apps/v1
RESOURCE: spec <Object>
DESCRIPTION:
Spec defines the desired identities of pods in this set.
A StatefulSetSpec is the specification of a StatefulSet.
FIELDS:
podManagementPolicy <string>
podManagementPolicy controls how pods are created during initial scale up,
when replacing pods on nodes, or when scaling down. The default policy is
`OrderedReady`, where pods are created in increasing order (pod-0, then
pod-1, etc) and the controller will wait until each pod is ready before
continuing. When scaling down, the pods are removed in the opposite order.
The alternative policy is `Parallel` which will create pods in parallel to
match the desired scale without waiting, and on scale down will delete all
pods at once.
replicas <integer>
replicas is the desired number of replicas of the given Template. These are
replicas in the sense that they are instantiations of the same Template,
but individual replicas also have a consistent identity. If unspecified,
defaults to 1.
revisionHistoryLimit <integer>
revisionHistoryLimit is the maximum number of revisions that will be
maintained in the StatefulSet's revision history. The revision history
consists of all revisions not represented by a currently applied
StatefulSetSpec version. The default value is 10.
selector <Object> -required-
selector is a label query over pods that should match the replica count. It
must match the pod template's labels. More info:
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
serviceName <string> -required-
serviceName is the name of the service that governs this StatefulSet. This
service must exist before the StatefulSet, and is responsible for the
network identity of the set. Pods get DNS/hostnames that follow the
pattern: pod-specific-string.serviceName.default.svc.cluster.local where
"pod-specific-string" is managed by the StatefulSet controller.
template <Object> -required-
template is the object that describes the pod that will be created if
insufficient replicas are detected. Each pod stamped out by the StatefulSet
will fulfill this Template, but have a unique identity from the rest of the
StatefulSet.
updateStrategy <Object>
updateStrategy indicates the StatefulSetUpdateStrategy that will be
employed to update Pods in the StatefulSet when a revision is made to
Template.
volumeClaimTemplates <[]Object>
volumeClaimTemplates is a list of claims that pods are allowed to
reference. The StatefulSet controller is responsible for mapping network
identities to claims in a way that maintains the identity of a pod. Every
claim in this list must have at least one matching (by name) volumeMount in
one container in the template. A claim in this list takes precedence over
any volumes in the template, with the same name.
FIELDS:
   podManagementPolicy	<string>    #Pod management policy
   replicas	<integer>    #number of replicas
   revisionHistoryLimit	<integer>    #revision history limit
   selector	<Object> -required-    #label selector, required
   serviceName	<string> -required-    #name of the governing Service, required
   template	<Object> -required-    #Pod template, required
   updateStrategy	<Object>    #update strategy
   volumeClaimTemplates	<[]Object>    #volume claim templates (optional)
Defining a StatefulSet in a manifest
As noted above, a complete StatefulSet consists of a Headless Service, a StatefulSet, and a volumeClaimTemplate, as in the following manifest:
vim stateful-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    app: myapp-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-svc
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
      annotations:    #for dynamic PV provisioning, this annotation names the StorageClass the PVC should use; omit it when binding to the static PVs created below
        volume.beta.kubernetes.io/storage-class: nfs-client-storageclass
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi
Walkthrough: because a StatefulSet depends on a pre-existing Headless Service, we first define a Headless Service named myapp-svc, which creates DNS records for each associated Pod. We then define a StatefulSet named myapp, which creates 3 Pod replicas from the Pod template and, via volumeClaimTemplates, requests a dedicated 2Gi volume for each Pod from the PVs defined below.
Create the PVs
#On the NFS server node, 192.168.75.40
mkdir -p /data/volumes/v{1,2,3,4,5}
vim /etc/exports
/data/volumes/v1 192.168.75.0/24(rw,no_root_squash)
/data/volumes/v2 192.168.75.0/24(rw,no_root_squash)
/data/volumes/v3 192.168.75.0/24(rw,no_root_squash)
/data/volumes/v4 192.168.75.0/24(rw,no_root_squash)
/data/volumes/v5 192.168.75.0/24(rw,no_root_squash)
systemctl restart rpcbind
systemctl restart nfs
exportfs -arv
showmount -e
Define the PVs
vim pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: 192.168.75.40
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: 192.168.75.40
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: 192.168.75.40
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: 192.168.75.40
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: 192.168.75.40
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
kubectl apply -f pv-demo.yaml
kubectl get pv
Create the StatefulSet
vim stateful-demo.yaml    #reuse the stateful-demo.yaml manifest defined in the previous section, unchanged
kubectl apply -f stateful-demo.yaml
kubectl get svc    #view the headless Service myapp-svc
kubectl get sts    #view the StatefulSet
kubectl get pvc    #view the PVC bindings
kubectl get pv     #view the PV bindings
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                                       6m
pv002   2Gi        RWO            Retain           Bound       default/myappdata-myapp-0                           6m
pv003   2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                           6m
pv004   2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-2                           6m
pv005   2Gi        RWO,RWX        Retain           Available                                                       6m
kubectl get pods    #view the Pod information
#When you delete a StatefulSet, it makes no guarantees about terminating its Pods. To terminate the Pods in an ordered and graceful way, scale the StatefulSet down to 0 before deleting it.
kubectl scale statefulset myapp --replicas=0    #the StatefulSet is named myapp; myappdata-myapp-N are its PVCs
kubectl delete -f stateful-demo.yaml
#The PVCs still exist at this point; when the Pods are recreated, they re-bind to their original PVCs
kubectl apply -f stateful-demo.yaml
kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    2Gi        RWO                           5m
myappdata-myapp-1   Bound    pv003    2Gi        RWO,RWX                       5m
myappdata-myapp-2   Bound    pv004    2Gi        RWO,RWX
Rolling updates
#The StatefulSet controller deletes and recreates each Pod in the StatefulSet. It proceeds in the same order as Pod termination (from the largest ordinal to the smallest), updating one Pod at a time, and waits until an updated Pod is Running and Ready before updating its predecessor. The rolling update below therefore runs in the order 2, 1, 0.
vim stateful-demo.yaml    #change the image version to v2
.....
image: ikubernetes/myapp:v2
....
kubectl apply -f stateful-demo.yaml
kubectl get pods -w    #watch the rolling update in progress
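A related knob: the StatefulSet updateStrategy supports a partition for staged, canary-style rollouts. A minimal fragment for the StatefulSet spec; with 3 replicas and partition: 2, only myapp-2 is updated:
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2    #only Pods with an ordinal >= 2 are updated; lower ordinals stay on the old revision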
For every Pod that gets created, the Pod's own name can be resolved.
As the resolution above shows, inside a container a Pod's name can be resolved to an IP. The resolved domain name has the following format:
(pod_name).(service_name).(namespace_name).svc.cluster.local
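A quick way to verify this from inside the cluster, using a throwaway busybox Pod (busybox:1.28 is commonly used because its nslookup behaves well):
kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup myapp-0.myapp-svc.default.svc.cluster.local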
Summary
Stateless:
1) A Deployment treats all of its Pods as identical
2) No ordering requirements
3) No need to care which node a Pod runs on
4) Can be scaled out and in freely
Stateful:
1) Instances differ; each has its own identity and metadata, e.g. etcd, zookeeper
2) Instances are not interchangeable, and the application relies on external storage
Regular Service vs. headless Service:
service: an access policy for a group of Pods; provides a cluster-IP for in-cluster communication, plus load balancing and service discovery.
Headless Service: a headless Service needs no cluster-IP; instead, DNS records resolve directly to the IPs of the backing Pods.
Scaling out and in
kubectl scale sts myapp --replicas=4    #scale out to 4 replicas
kubectl get pods -w    #watch the scale-out
kubectl get pv    #view the PV bindings
kubectl patch sts myapp -p '{"spec":{"replicas":2}}'    #scale in by patching
kubectl get pods -w    #watch the scale-in
DaemonSet
Typically used to deploy daemon-level stateless applications
In principle, creates one identical Pod replica on every node in the K8S cluster, whenever the node joins the cluster (subject to taints and to nodes made unschedulable with cordon; see the tolerations fragment after the example below)
A DaemonSet configuration does not need the Pod replica count field replicas
Some typical uses of a DaemonSet
●Run a cluster storage daemon on every node, e.g. glusterd or ceph.
●Run a log collection daemon on every node, e.g. fluentd or logstash.
●Run a monitoring daemon on every node, e.g. Prometheus Node Exporter, collectd, the Datadog agent, the New Relic agent, or Ganglia gmond.
Application scenario: agents
Official example (monitoring):
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
Example
vim ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tt
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
kubectl apply -f ds.yaml
#The DaemonSet creates one Pod on every node
kubectl get pods
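Nodes with taints (for example the master) are skipped unless the Pod template tolerates the taint. A minimal sketch of the tolerations to add under the Pod template's spec (the taint key is node-role.kubernetes.io/master on older clusters and node-role.kubernetes.io/control-plane on newer ones):
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule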
Job
Deploys one-off, short-lived task Pods; the Pod does not run continuously, and the task must end with the container exiting successfully and not restarting
The container restart policy in a Job must not be set to Always; Never is generally recommended
If the task fails and the Pod's container exits abnormally, the Job recreates the Pod and retries the task up to the number of times given by the backoffLimit field (default 6)
Example
vim job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
Parameter notes
.spec.template.spec.restartPolicy has three possible values: OnFailure, Never and Always (the default). It describes the restart policy of the containers in the Pod. In a Job it may only be set to OnFailure or Never, otherwise the Job would run without end.
.spec.backoffLimit sets the number of retries after the Job fails; the default is 6. Unless the Pod fails or a container exits abnormally, the Job keeps retrying, governed by .spec.backoffLimit as described above; once .spec.backoffLimit is reached, the Job is marked as failed.
#Pull the perl image on all nodes; the image is large, so downloading it in advance is recommended
docker pull perl
kubectl apply -f job.yaml
kubectl get pods
#The result is printed to the container log
kubectl logs pi-zvrl9
#Clean up the Job resources
kubectl delete -f job.yaml
#backoffLimit
vim job-limit.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: busybox
spec:
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh", "-c", "sleep 10;date;exit 1"]
      restartPolicy: Never
  backoffLimit: 2
kubectl apply -f job-limit.yaml
kubectl get job,pods
kubectl describe job busybox
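The Pods a Job creates carry a job-name label, so the failed attempts can be listed directly; with restartPolicy: Never every retry is a new Pod:
kubectl get pods -l job-name=busybox    #one failed Pod per attempt, up to the backoffLimit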
CronJob
Deploys short-lived task Pods periodically; the Pod does not run continuously, and the task must end with the container exiting successfully and not restarting
The Pod container restart policy must not be set to Always; Never is generally recommended
The schedule field sets the task's schedule, in the format "minute hour day-of-month month day-of-week" (a few sample values follow this list)
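A few schedule values for reference (standard five-field cron syntax):
schedule: "*/5 * * * *"    #every 5 minutes
schedule: "0 3 * * *"      #every day at 03:00
schedule: "0 0 * * 0"      #every Sunday at midnight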
Example
#Print hello every minute
vim cronjob.yaml
apiVersion: batch/v1beta1    #CronJob is batch/v1 on Kubernetes 1.21+; batch/v1beta1 applies to older clusters
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
kubectl create -f cronjob.yaml
kubectl get cronjob
kubectl get pods
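Each scheduled run spawns a Job, which in turn spawns a Pod whose log holds the output. The Pod name below is the one that appears in the error message at the end of this section; substitute the name from your own kubectl get pods output:
kubectl get jobs                         #one Job per run
kubectl logs hello-1621587780-c7v54      #print the output of one run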
Other configurable CronJob parameters
spec:
  concurrencyPolicy: Allow    #declares how overlapping runs of tasks created by the CronJob are handled (the concurrency rules only apply to tasks created by the same CronJob). The spec may declare only one of the following:
●Allow (default): the CronJob allows tasks to run concurrently.
●Forbid: the CronJob does not allow concurrent runs; if a new run is due while the previous one has not finished, the CronJob skips the new run.
●Replace: if a new run is due while the previous one has not finished, the CronJob replaces the currently running task with the new one.
  startingDeadlineSeconds: 15    #the deadline, in seconds, for starting a task that missed its scheduled time for any reason. Past the deadline, the CronJob does not start the task and marks it failed. If unset, the task has no deadline.
  successfulJobsHistoryLimit: 3    #how many successfully completed tasks to keep (default 3)
  failedJobsHistoryLimit: 1    #how many failed tasks to keep (default 1)
  suspend: true    #if set to true, all subsequent runs are suspended; this does not affect runs that have already started. Defaults to false.
  schedule: '*/1 * * * *'    #required field, the schedule. In this example the task runs once per minute
  jobTemplate:    #required field, the Job template. This is similar to the Job example above
#If you hit the error: Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log hello-1621587780-c7v54)
#Fix: bind the cluster-admin role
kubectl create clusterrolebinding system:anonymous --clusterrole=cluster-admin --user=system:anonymous