Operator-SDK: Collecting Node Request Information with a Custom CRD
This is a small information-collection demo: a custom CRD and controller, built with the Operator SDK, that collect per-Node request information, mainly the amount of CPU and memory requested on each node. Part of the code follows the source of kubectl describe.
Cluster Information
First run kubectl get node to look at the cluster: one master node and three slave nodes.
$ kubectl get node
NAME                    STATUS   ROLES                  AGE   VERSION
10.100.100.130-slave    Ready    <none>                 49d   v1.21.5-hc.1
10.100.100.131-master   Ready    control-plane,master   50d   v1.21.5-hc.1
10.100.100.144-slave    Ready    <none>                 49d   v1.21.5-hc.1
10.100.100.147-slave    Ready    <none>                 49d   v1.21.5-hc.1
Controller Configuration
The Operator-SDK is used to create the CRD Noderequest; a sketch of the scaffolded API types follows below.
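The post does not list the generated API types; the sketch below shows roughly what api/v1alpha1/noderequest_types.go looks like after scaffolding. The Spec and Status fields here are illustrative assumptions, not the project's actual fields:

// api/v1alpha1/noderequest_types.go -- hypothetical sketch of the scaffolded types
package v1alpha1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NoderequestSpec defines the desired state of Noderequest.
type NoderequestSpec struct {
    // NodeName is an assumed example field; the demo does not rely on any spec fields.
    NodeName string `json:"nodeName,omitempty"`
}

// NoderequestStatus defines the observed state of Noderequest.
type NoderequestStatus struct {
    // Assumed example fields for the computed totals.
    CPURequests    string `json:"cpuRequests,omitempty"`
    MemoryRequests string `json:"memoryRequests,omitempty"`
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// Noderequest is the Schema for the noderequests API.
type Noderequest struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   NoderequestSpec   `json:"spec,omitempty"`
    Status NoderequestStatus `json:"status,omitempty"`
}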
The SetupWithManager() function in controllers/noderequest_controller.go specifies how the controller is built to watch a CR and the other resources that the controller owns and manages.
For(&v1.Node{}) specifies Node as the primary resource to watch. For every Add/Update/Delete event on a Node, the reconcile loop is sent a reconcile Request (a namespace/name key) for that Node object.
Owns(&v1.Pod{}) specifies Pod as a secondary resource to watch. For every Add/Update/Delete event on a Pod, the event handler maps the event to a reconcile Request for the Pod's owner.
Watches(&source.Kind{Type: &v1.Pod{}}, &handler.EnqueueRequestForObject{}) additionally brings Pod changes into the watch scope, enqueuing a Request for the Pod itself.
// SetupWithManager sets up the controller with the Manager.
func (r *NoderequestReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        //For(&noderequestv1alpha1.Noderequest{}).
        For(&v1.Node{}).
        Owns(&v1.Pod{}).
        Watches(&source.Kind{Type: &v1.Pod{}}, &handler.EnqueueRequestForObject{}).
        WithEventFilter(watchPodChange()).
        Complete(r)
}
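For completeness, the reconciler still has to be registered with the manager; this is a sketch of the standard operator-sdk scaffold wiring in main.go (setupLog, mgr and the Scheme field come from the scaffold and are assumed here):

// main.go (excerpt): registering the reconciler with the manager,
// following the standard operator-sdk scaffold.
if err = (&controllers.NoderequestReconciler{
    Client: mgr.GetClient(),
    Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
    setupLog.Error(err, "unable to create controller", "controller", "Noderequest")
    os.Exit(1)
}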
Filtering Events with Predicates in the Operator SDK
Events are generated by the Sources assigned to the resources the controller watches. EventHandlers turn these events into requests and hand them to Reconcile(). Predicates let the controller filter events before they reach the EventHandlers. Filtering is useful because the controller usually only cares about particular kinds of events, and it cuts down unnecessary work against the API server, since Reconcile() is only called for events that the EventHandlers actually turn into requests.
WithEventFilter(watchPodChange()) registers the custom watchPodChange() function, which decides which Add/Update/Delete events are acted on:
func watchPodChange() predicate.Predicate {
    return predicate.Funcs{
        UpdateFunc: func(e event.UpdateEvent) bool {
            // A Node has no namespace: drop Node updates (mostly heartbeats)
            // and only let Pod updates through.
            if e.ObjectOld.GetNamespace() == "" {
                return false
            } else {
                fmt.Println("update: ", e.ObjectOld.GetName())
                return true
            }
        },
        DeleteFunc: func(e event.DeleteEvent) bool {
            // watch delete
            fmt.Println("delete: ", e.Object.GetName())
            return e.DeleteStateUnknown
        },
        CreateFunc: func(e event.CreateEvent) bool {
            // watch create
            //fmt.Println("create: ", e.Object.GetName())
            // Only Node creations (no namespace) pass; Pod creations are
            // covered by the update events that follow them.
            if e.Object.GetNamespace() == "" {
                return true
            } else {
                return false
            }
        },
    }
}
Update
Because every Node periodically sends heartbeats to the API server for its health checks, each Node produces a large number of update events. To track only the request changes caused by Pod changes, these Node updates have to be filtered out. The update handler therefore needs to tell whether the changed object is a Node or a Pod; the obvious difference is that a Node has no Namespace, so the check is e.ObjectOld.GetNamespace() == "".
All update events for any watched resource are passed to Funcs.UpdateFunc() and are filtered out whenever the function evaluates to false. If no predicate function is registered for an event type, events of that type are not filtered.
Delete
When a Pod delete event is seen, the handler prints the Pod's name and returns e.DeleteStateUnknown, which is true only when the delete event was missed and the object's final state is unknown. All delete events for any watched resource are passed to Funcs.DeleteFunc() and are filtered out whenever the function evaluates to false; again, if no predicate function is registered for an event type, events of that type are not filtered.
Create
When a Pod is created, the create event is accompanied by several update events, so Pod create events do not need to be handled separately; that work is already done in the update path. The create handler therefore only watches Node creation: e.Object.GetNamespace() == "" determines whether the created object is a Node, and when it returns true the request information is recomputed.
Reconcile loop
Reconcile is responsible for enforcing the desired CR state on the actual state of the system. It runs every time an event occurs for a watched CR or resource, and returns a result according to whether those states match.
A variable name is declared to hold the name of the Node:
var name client.ObjectKey
An if check then determines whether the incoming request refers to a Node or a Pod; the distinction is that a Node's Namespace is empty while a Pod's Namespace is not.
When the request refers to a Pod, i.e. req.NamespacedName.Namespace != "", the Pod is fetched into pod := &v1.Pod{}; if the Get fails, the error is printed to the console with fmt.Println("ERROR[GetPod]:", err) and the request is dropped.
name.Name = pod.Spec.NodeName then records the name of the Node the Pod is running on.
The Node is fetched into node := &v1.Node{}, and pod.Status.Phase is checked to see whether the Pod is Running. The reason for this check is that a newly created Pod is not yet Running when its create event fires; it only reaches the Running state on the last of the update events that follow. Without the check, recomputation would be triggered far too often; with it, most intermediate Pod events are dropped and the Node's CPU and memory figures are only refreshed once the Pod is actually running.
func (r *NoderequestReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    _ = log.FromContext(ctx)
    // your logic here
    var name client.ObjectKey // get node name
    if req.NamespacedName.Namespace != "" {
        pod := &v1.Pod{}
        err := r.Client.Get(ctx, req.NamespacedName, pod)
        if err != nil {
            fmt.Println("ERROR[GetPod]:", err)
            return ctrl.Result{}, nil
        }
        name.Name = pod.Spec.NodeName // use for.pod
        fmt.Println(name.Name)
        node := &v1.Node{}
        err = r.Client.Get(ctx, name, node)
        if err != nil {
            fmt.Println("ERROR[GetNode]:", err)
            return ctrl.Result{}, nil
        }
        if pod.Status.Phase == "Running" {
            compute(ctx, req, name, r, node)
        }
    } else {
        name.Name = req.NamespacedName.Name // use for.node
        fmt.Println(name.Name)
        node := &v1.Node{}
        err := r.Client.Get(ctx, name, node)
        if err != nil {
            fmt.Println("ERROR[GetNode]:", err)
            return ctrl.Result{}, nil
        }
        compute(ctx, req, name, r, node)
    }
    return ctrl.Result{}, nil
}
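The listings in this post omit the controller file's import block and RBAC markers; a sketch of what they would look like is shown below. The import paths are the usual controller-runtime and core v1 packages, and the markers assume the controller only needs read access to Pods and Nodes:

import (
    "context"
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/handler"
    "sigs.k8s.io/controller-runtime/pkg/log"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
    "sigs.k8s.io/controller-runtime/pkg/source"
)

// RBAC markers so the manager's ServiceAccount may read Pods and Nodes:
//+kubebuilder:rbac:groups="",resources=nodes,verbs=get;list;watch
//+kubebuilder:rbac:groups="",resources=pods,verbs=get;list;watch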
Whenever a Node's request information has to be refreshed, the name of the Node being updated is printed first, and then its requests are recomputed.
The compute Function
The compute function calculates a Node's requests; the code follows the source of kubectl describe. It uses the name passed in to decide which Node's information needs to be recomputed.
Viewing Node Information with describe
First, use describe to inspect the master node, so the computed results can later be compared against it; part of the output is omitted for brevity:
$ kubectl describe node 10.100.100.131-master
Name:               10.100.100.131-master
Roles:              control-plane,master
...
Addresses:
  InternalIP:  10.100.100.131
  Hostname:    10.100.100.131-master
Capacity:
  cpu:                4
  ephemeral-storage:  51175Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8111548Ki
  pods:               63
Allocatable:
  cpu:                4
  ephemeral-storage:  48294789041
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7599548Ki
  pods:               63
...
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                2632m (65%)   5110m (127%)
  memory             2237Mi (30%)  7223Mi (97%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)
Getting All Pods
compute is the core of the calculation, so its implementation is worth walking through in detail. First, pods := &v1.PodList{} is used to fetch all Pods. The client.InNamespace("") ListOption selects Pods by namespace; when the namespace is "", i.e. empty, Pods from every namespace are returned.
opts := []client.ListOption{
    client.InNamespace(""),
}
The node's allocatable CPU and memory are then read to serve as the denominator of the calculation: allocatable starts from node.Status.Capacity and is replaced by node.Status.Allocatable when that field is populated.
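In the full compute listing below these two steps look like this; note the fallback from Capacity to Allocatable:

err := r.Client.List(ctx, pods, opts...)
if err != nil {
    fmt.Println("ERROR[List]:", err)
}
// Prefer the node's Allocatable values; fall back to Capacity when they are absent.
allocatable := node.Status.Capacity
if len(node.Status.Allocatable) > 0 {
    allocatable = node.Status.Allocatable
}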
Collecting the CPU and Memory Requests of All Pods
The next part follows the source of kubectl describe. for _, pod := range pods.Items loops over all Pods, but an if check first decides which Pods are counted: Pods in the Succeeded or Failed phase are excluded, exactly as the describe source does when it computes requests. In addition, the Node name that was passed in ensures only Pods on that Node are counted; Pods on other Nodes are ignored. The if condition is:
if pod.Status.Phase != "Succeeded" && pod.Status.Phase != "Failed" && pod.Spec.NodeName == name.Name
Taking the CPU request as an example, the container's requested CPU is divided by the allocatable CPU and expressed as a percentage, which gives the share of the node this container requests; memory is handled the same way:
fractionCpuReq := float64(container.Resources.Requests.Cpu().MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100
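The memory request fraction follows the same pattern, using absolute byte values instead of millicores (this line also appears in the full compute listing below):

fractionMemoryReq := float64(container.Resources.Requests.Memory().Value()) / float64(allocatable.Memory().Value()) * 100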
To make it easy to check that every Pod is counted correctly, the per-container values are printed to the console. A container whose CPU and memory requests are both 0 contributes nothing to the final result, and only containers where at least one of the two is non-zero affect the totals, so containers with both values at 0 are skipped in the printout:
if container.Resources.Requests.Cpu().String() != "0" || container.Resources.Requests.Memory().String() != "0" {
    fmt.Printf("ReqC: %s(%d%%)\tReqM: %s(%d%%)\tLimC: %s(%d%%)\tLimM: %s(%d%%)\n",
        container.Resources.Requests.Cpu().String(),
        int64(fractionCpuReq),
        container.Resources.Requests.Memory().String(),
        int64(fractionMemoryReq),
        container.Resources.Limits.Cpu().String(),
        int64(fractionCpuLimits),
        container.Resources.Limits.Memory().String(),
        int64(fractionMemoryLimits),
    )
}
The addResourceList Function
The purpose of addResourceList is to accumulate the resource values of every Pod that needs to be counted into a resource list. The key name ranges over the resource names, chiefly cpu and memory, which keeps the CPU and memory sums separate:
// addResourceList adds the resources in new to list
func addResourceList(list, new v1.ResourceList) {
    for name, quantity := range new {
        if value, ok := list[name]; !ok {
            list[name] = quantity.DeepCopy()
        } else {
            value.Add(quantity)
            list[name] = value
        }
    }
}
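As a quick illustration of how addResourceList merges two resource lists, here is a small standalone sketch (the quantities are made up):

// Standalone illustration (not part of the controller):
list := v1.ResourceList{
    v1.ResourceCPU: resource.MustParse("100m"),
}
addResourceList(list, v1.ResourceList{
    v1.ResourceCPU:    resource.MustParse("250m"),
    v1.ResourceMemory: resource.MustParse("128Mi"),
})
// list now holds cpu: 350m and memory: 128Mi.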
Sum
The Pod list is iterated over and each Pod's requests and limits are added to the totals. A limit of 0 means no limit is set, so pod overhead is only added to limits that are non-zero, and terminated (Succeeded/Failed) Pods are excluded as described above. podReqs, podLimits := v1.ResourceList{}, v1.ResourceList{} hold the requests and limits of the entry currently being processed; on every iteration they are filled and then merged into the totals. The sum code is:
// sum
podReqs, podLimits := v1.ResourceList{}, v1.ResourceList{}
addResourceList(podReqs, container.Resources.Requests)
addResourceList(podLimits, container.Resources.Limits)
// Add overhead for running a pod to the sum of requests and to non-zero limits:
if pod.Spec.Overhead != nil {
    addResourceList(podReqs, pod.Spec.Overhead)
    for name, quantity := range pod.Spec.Overhead {
        if value, ok := podLimits[name]; ok && !value.IsZero() {
            value.Add(quantity)
            podLimits[name] = value
        }
    }
}
reqs and limits are defined to hold the summed requests and limits of all Pods that take part in the calculation:
reqs, limits := map[v1.ResourceName]resource.Quantity{}, map[v1.ResourceName]resource.Quantity{}
Each podReqs and podLimits is then merged into reqs and limits. Taking requests as an example, podReqName identifies whether the entry is cpu or memory, and its value is added to the corresponding total:
for podReqName, podReqValue := range podReqs {
    if value, ok := reqs[podReqName]; !ok {
        reqs[podReqName] = podReqValue.DeepCopy()
    } else {
        value.Add(podReqValue)
        reqs[podReqName] = value
    }
}
for podLimitName, podLimitValue := range podLimits {
    if value, ok := limits[podLimitName]; !ok {
        limits[podLimitName] = podLimitValue.DeepCopy()
    } else {
        value.Add(podLimitValue)
        limits[podLimitName] = value
    }
}
At this point cpuReqs, cpuLimits, memoryReqs and memoryLimits hold the total requested CPU and memory and the total CPU and memory limits:
cpuReqs, cpuLimits, memoryReqs, memoryLimits := reqs[v1.ResourceCPU], limits[v1.ResourceCPU], reqs[v1.ResourceMemory], limits[v1.ResourceMemory]
Taking cpuReqs as an example, the total is divided by the allocatable CPU and expressed as a percentage:
fractionCpuReqs = float64(cpuReqs.MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100
The other values follow the same pattern:
if allocatable.Cpu().MilliValue() != 0 {
    fractionCpuReqs = float64(cpuReqs.MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100
    fractionCpuLimits = float64(cpuLimits.MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100
}
fractionMemoryReqs := float64(0)
fractionMemoryLimits := float64(0)
if allocatable.Memory().Value() != 0 {
    fractionMemoryReqs = float64(memoryReqs.Value()) / float64(allocatable.Memory().Value()) * 100
    fractionMemoryLimits = float64(memoryLimits.Value()) / float64(allocatable.Memory().Value()) * 100
}
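As a sanity check against the describe output above: the master node's allocatable CPU is 4, i.e. 4000m, so cpuReqs of 2632m gives 2632 / 4000 × 100 ≈ 65.8, which is printed as 65% after the int64 cast, matching the describe figure.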
The full source of compute is shown below:
func compute(ctx context.Context, req ctrl.Request, name client.ObjectKey, r *NoderequestReconciler, node *v1.Node) {
    pods := &v1.PodList{} // get all pods
    opts := []client.ListOption{
        client.InNamespace(""),
    }
    err := r.Client.List(ctx, pods, opts...)
    if err != nil {
        fmt.Println("ERROR[List]:", err)
    }
    allocatable := node.Status.Capacity
    if len(node.Status.Allocatable) > 0 {
        allocatable = node.Status.Allocatable
    }
    reqs, limits := map[v1.ResourceName]resource.Quantity{}, map[v1.ResourceName]resource.Quantity{}
    // get request cpu & mem
    for _, pod := range pods.Items {
        if pod.Status.Phase != "Succeeded" && pod.Status.Phase != "Failed" && pod.Spec.NodeName == name.Name {
            for _, container := range pod.Spec.Containers {
                // pod
                fractionCpuReq := float64(container.Resources.Requests.Cpu().MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100
                fractionMemoryReq := float64(container.Resources.Requests.Memory().Value()) / float64(allocatable.Memory().Value()) * 100
                fractionCpuLimits := float64(container.Resources.Limits.Cpu().MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100
                fractionMemoryLimits := float64(container.Resources.Limits.Memory().Value()) / float64(allocatable.Memory().Value()) * 100
                if container.Resources.Requests.Cpu().String() != "0" || container.Resources.Requests.Memory().String() != "0" {
                    fmt.Printf("ReqC: %s(%d%%)\tReqM: %s(%d%%)\tLimC: %s(%d%%)\tLimM: %s(%d%%)\n",
                        container.Resources.Requests.Cpu().String(),
                        int64(fractionCpuReq),
                        container.Resources.Requests.Memory().String(),
                        int64(fractionMemoryReq),
                        container.Resources.Limits.Cpu().String(),
                        int64(fractionCpuLimits),
                        container.Resources.Limits.Memory().String(),
                        int64(fractionMemoryLimits),
                    )
                }
                // sum
                podReqs, podLimits := v1.ResourceList{}, v1.ResourceList{}
                addResourceList(podReqs, container.Resources.Requests)
                addResourceList(podLimits, container.Resources.Limits)
                // Add overhead for running a pod to the sum of requests and to non-zero limits:
                if pod.Spec.Overhead != nil {
                    addResourceList(podReqs, pod.Spec.Overhead)
                    for name, quantity := range pod.Spec.Overhead {
                        if value, ok := podLimits[name]; ok && !value.IsZero() {
                            value.Add(quantity)
                            podLimits[name] = value
                        }
                    }
                }
                for podReqName, podReqValue := range podReqs {
                    if value, ok := reqs[podReqName]; !ok {
                        reqs[podReqName] = podReqValue.DeepCopy()
                    } else {
                        value.Add(podReqValue)
                        reqs[podReqName] = value
                    }
                }
                for podLimitName, podLimitValue := range podLimits {
                    if value, ok := limits[podLimitName]; !ok {
                        limits[podLimitName] = podLimitValue.DeepCopy()
                    } else {
                        value.Add(podLimitValue)
                        limits[podLimitName] = value
                    }
                }
            }
        }
    }
fmt.Printf("Resource\tRequests\tLimits\n")
fmt.Printf("--------\t--------\t------\n")
cpuReqs, cpuLimits, memoryReqs, memoryLimits := reqs[v1.ResourceCPU], limits[v1.ResourceCPU], reqs[v1.ResourceMemory], limits[v1.ResourceMemory]
fractionCpuReqs := float64(0)
fractionCpuLimits := float64(0)
if allocatable.Cpu().MilliValue() != 0 {
fractionCpuReqs = float64(cpuReqs.MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100
fractionCpuLimits = float64(cpuLimits.MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100
}
fractionMemoryReqs := float64(0)
fractionMemoryLimits := float64(0)
if allocatable.Memory().Value() != 0 {
fractionMemoryReqs = float64(memoryReqs.Value()) / float64(allocatable.Memory().Value()) * 100
fractionMemoryLimits = float64(memoryLimits.Value()) / float64(allocatable.Memory().Value()) * 100
}
fmt.Printf("%s\t%s (%d%%)\t%s (%d%%)\n", v1.ResourceCPU, cpuReqs.String(), int64(fractionCpuReqs), cpuLimits.String(), int64(fractionCpuLimits))
fmt.Printf("%s\t%s (%d%%)\t%s (%d%%)\n", v1.ResourceMemory, memoryReqs.String(), int64(fractionMemoryReqs), memoryLimits.String(), int64(fractionMemoryLimits))
fmt.Println("--------------------------------------------")
}
Final Results
The output shows the requests and limits of the three slave nodes and the master node, in a format similar to that of describe; along with the final totals, the per-container values that took part in the calculation are printed as well:
10.100.100.130-slave
ReqC: 100m(2%) ReqM: 128Mi(0%) LimC: 500m(12%) LimM: 512Mi(3%)
ReqC: 0(0%) ReqM: 200Mi(1%) LimC: 0(0%) LimM: 0(0%)
ReqC: 100m(2%) ReqM: 25Mi(0%) LimC: 100m(2%) LimM: 25Mi(0%)
ReqC: 100m(2%) ReqM: 128Mi(0%) LimC: 100m(2%) LimM: 128Mi(0%)
ReqC: 102m(2%) ReqM: 180Mi(1%) LimC: 250m(6%) LimM: 180Mi(1%)
ReqC: 10m(0%) ReqM: 20Mi(0%) LimC: 20m(0%) LimM: 60Mi(0%)
ReqC: 1(25%) ReqM: 2Gi(13%) LimC: 2(50%) LimM: 4Gi(26%)
ReqC: 100m(2%) ReqM: 25Mi(0%) LimC: 100m(2%) LimM: 25Mi(0%)
ReqC: 100m(2%) ReqM: 25Mi(0%) LimC: 100m(2%) LimM: 25Mi(0%)
ReqC: 100m(2%) ReqM: 20Mi(0%) LimC: 100m(2%) LimM: 30Mi(0%)
ReqC: 100m(2%) ReqM: 128Mi(0%) LimC: 100m(2%) LimM: 128Mi(0%)
ReqC: 250m(6%) ReqM: 250Mi(1%) LimC: 0(0%) LimM: 0(0%)
Resource Requests Limits
-------- -------- ------
cpu 2062m (51%) 3370m (84%)
memory 3177Mi (20%) 5209Mi (33%)
--------------------------------------------
10.100.100.131-master
ReqC: 0(0%) ReqM: 200Mi(2%) LimC: 0(0%) LimM: 0(0%)
ReqC: 100m(2%) ReqM: 25Mi(0%) LimC: 100m(2%) LimM: 25Mi(0%)
ReqC: 102m(2%) ReqM: 180Mi(2%) LimC: 250m(6%) LimM: 180Mi(2%)
ReqC: 10m(0%) ReqM: 20Mi(0%) LimC: 20m(0%) LimM: 60Mi(0%)
ReqC: 100m(2%) ReqM: 100Mi(1%) LimC: 200m(5%) LimM: 200Mi(2%)
ReqC: 200m(5%) ReqM: 256Mi(3%) LimC: 1(25%) LimM: 2Gi(27%)
ReqC: 100m(2%) ReqM: 0(0%) LimC: 0(0%) LimM: 0(0%)
ReqC: 250m(6%) ReqM: 250Mi(3%) LimC: 0(0%) LimM: 0(0%)
ReqC: 500m(12%) ReqM: 128Mi(1%) LimC: 1(25%) LimM: 256Mi(3%)
ReqC: 50m(1%) ReqM: 128Mi(1%) LimC: 100m(2%) LimM: 256Mi(3%)
ReqC: 10m(0%) ReqM: 20Mi(0%) LimC: 20m(0%) LimM: 40Mi(0%)
ReqC: 10m(0%) ReqM: 20Mi(0%) LimC: 20m(0%) LimM: 40Mi(0%)
ReqC: 100m(2%) ReqM: 150Mi(2%) LimC: 100m(2%) LimM: 150Mi(2%)
ReqC: 100m(2%) ReqM: 128Mi(1%) LimC: 100m(2%) LimM: 128Mi(1%)
ReqC: 250m(6%) ReqM: 0(0%) LimC: 0(0%) LimM: 0(0%)
ReqC: 100m(2%) ReqM: 128Mi(1%) LimC: 200m(5%) LimM: 256Mi(3%)
ReqC: 200m(5%) ReqM: 256Mi(3%) LimC: 1(25%) LimM: 2Gi(27%)
ReqC: 100m(2%) ReqM: 128Mi(1%) LimC: 500m(12%) LimM: 512Mi(6%)
ReqC: 100m(2%) ReqM: 100Mi(1%) LimC: 0(0%) LimM: 0(0%)
ReqC: 200m(5%) ReqM: 0(0%) LimC: 0(0%) LimM: 0(0%)
ReqC: 50m(1%) ReqM: 20Mi(0%) LimC: 500m(12%) LimM: 1Gi(13%)
Resource Requests Limits
-------- -------- ------
cpu 2632m (65%) 5110m (127%)
memory 2237Mi (30%) 7223Mi (97%)
--------------------------------------------
10.100.100.144-slave
ReqC: 102m(2%) ReqM: 180Mi(1%) LimC: 250m(6%) LimM: 180Mi(1%)
ReqC: 10m(0%) ReqM: 20Mi(0%) LimC: 20m(0%) LimM: 60Mi(0%)
ReqC: 500m(12%) ReqM: 4Gi(26%) LimC: 2(50%) LimM: 8Gi(52%)
ReqC: 100m(2%) ReqM: 128Mi(0%) LimC: 500m(12%) LimM: 512Mi(3%)
ReqC: 200m(5%) ReqM: 512Mi(3%) LimC: 1(25%) LimM: 1Gi(6%)
ReqC: 100m(2%) ReqM: 128Mi(0%) LimC: 100m(2%) LimM: 128Mi(0%)
ReqC: 1(25%) ReqM: 2Gi(13%) LimC: 2(50%) LimM: 4Gi(26%)
ReqC: 100m(2%) ReqM: 25Mi(0%) LimC: 100m(2%) LimM: 25Mi(0%)
ReqC: 100m(2%) ReqM: 25Mi(0%) LimC: 100m(2%) LimM: 25Mi(0%)
ReqC: 100m(2%) ReqM: 128Mi(0%) LimC: 200m(5%) LimM: 256Mi(1%)
ReqC: 250m(6%) ReqM: 250Mi(1%) LimC: 0(0%) LimM: 0(0%)
ReqC: 50m(1%) ReqM: 100Mi(0%) LimC: 50m(1%) LimM: 100Mi(0%)
ReqC: 0(0%) ReqM: 200Mi(1%) LimC: 0(0%) LimM: 0(0%)
ReqC: 100m(2%) ReqM: 25Mi(0%) LimC: 100m(2%) LimM: 25Mi(0%)
ReqC: 100m(2%) ReqM: 70Mi(0%) LimC: 0(0%) LimM: 170Mi(1%)
Resource Requests Limits
-------- -------- ------
cpu 2812m (70%) 6420m (160%)
memory 7935Mi (51%) 14793Mi (95%)
--------------------------------------------
10.100.100.147-slave
ReqC: 102m(2%) ReqM: 180Mi(1%) LimC: 250m(6%) LimM: 180Mi(1%)
ReqC: 10m(0%) ReqM: 20Mi(0%) LimC: 20m(0%) LimM: 60Mi(0%)
ReqC: 250m(6%) ReqM: 250Mi(1%) LimC: 0(0%) LimM: 0(0%)
ReqC: 100m(2%) ReqM: 100Mi(0%) LimC: 200m(5%) LimM: 200Mi(1%)
ReqC: 100m(2%) ReqM: 70Mi(0%) LimC: 0(0%) LimM: 170Mi(1%)
ReqC: 100m(2%) ReqM: 20Mi(0%) LimC: 100m(2%) LimM: 30Mi(0%)
ReqC: 200m(5%) ReqM: 512Mi(3%) LimC: 1(25%) LimM: 1Gi(6%)
ReqC: 100m(2%) ReqM: 128Mi(0%) LimC: 500m(12%) LimM: 512Mi(3%)
Resource Requests Limits
-------- -------- ------
cpu 962m (24%) 2070m (51%)
memory 1280Mi (8%) 2176Mi (14%)
--------------------------------------------
Comparing this with the result obtained earlier from the describe command, the master node matches exactly:
Resource Requests Limits
-------- -------- ------
cpu 2632m (65%) 5110m (127%)
memory 2237Mi (30%) 7223Mi (97%)
There is also a Pod in the cluster that is updated periodically; the controller detects those events correctly and recomputes the corresponding Node:
update: velero-745bf958b4-xpfw8
update: velero-745bf958b4-xpfw8
10.100.100.131-master
ReqC: 0(0%) ReqM: 200Mi(2%) LimC: 0(0%) LimM: 0(0%)
ReqC: 100m(2%) ReqM: 25Mi(0%) LimC: 100m(2%) LimM: 25Mi(0%)
ReqC: 102m(2%) ReqM: 180Mi(2%) LimC: 250m(6%) LimM: 180Mi(2%)
ReqC: 10m(0%) ReqM: 20Mi(0%) LimC: 20m(0%) LimM: 60Mi(0%)
ReqC: 200m(5%) ReqM: 256Mi(3%) LimC: 1(25%) LimM: 2Gi(27%)
ReqC: 100m(2%) ReqM: 100Mi(1%) LimC: 200m(5%) LimM: 200Mi(2%)
ReqC: 100m(2%) ReqM: 0(0%) LimC: 0(0%) LimM: 0(0%)
ReqC: 250m(6%) ReqM: 250Mi(3%) LimC: 0(0%) LimM: 0(0%)
ReqC: 500m(12%) ReqM: 128Mi(1%) LimC: 1(25%) LimM: 256Mi(3%)
ReqC: 50m(1%) ReqM: 128Mi(1%) LimC: 100m(2%) LimM: 256Mi(3%)
ReqC: 100m(2%) ReqM: 128Mi(1%) LimC: 100m(2%) LimM: 128Mi(1%)
ReqC: 10m(0%) ReqM: 20Mi(0%) LimC: 20m(0%) LimM: 40Mi(0%)
ReqC: 10m(0%) ReqM: 20Mi(0%) LimC: 20m(0%) LimM: 40Mi(0%)
ReqC: 100m(2%) ReqM: 150Mi(2%) LimC: 100m(2%) LimM: 150Mi(2%)
ReqC: 250m(6%) ReqM: 0(0%) LimC: 0(0%) LimM: 0(0%)
ReqC: 100m(2%) ReqM: 128Mi(1%) LimC: 200m(5%) LimM: 256Mi(3%)
ReqC: 200m(5%) ReqM: 256Mi(3%) LimC: 1(25%) LimM: 2Gi(27%)
ReqC: 100m(2%) ReqM: 128Mi(1%) LimC: 500m(12%) LimM: 512Mi(6%)
ReqC: 100m(2%) ReqM: 100Mi(1%) LimC: 0(0%) LimM: 0(0%)
ReqC: 200m(5%) ReqM: 0(0%) LimC: 0(0%) LimM: 0(0%)
ReqC: 50m(1%) ReqM: 20Mi(0%) LimC: 500m(12%) LimM: 1Gi(13%)
Resource Requests Limits
-------- -------- ------
cpu 2632m (65%) 5110m (127%)
memory 2237Mi (30%) 7223Mi (97%)
--------------------------------------------