Reading the Kubernetes Source: controller-manager (Part 2)

I previously wrote a post on how the controller manager starts up; if you are interested, have a look at that one first.
Today let's dig into the watch mechanism, tracing it from the startup of the CM (controller-manager).
The code here is based on Kubernetes 1.6, where startup differs slightly from older releases: controllers used to be started one by one at separate call sites, but they are now registered together in a single map:

func newControllerInitializers() map[string]InitFunc {
    controllers := map[string]InitFunc{}
    controllers["endpoint"] = startEndpointController
    controllers["replicationcontroller"] = startReplicationController
    controllers["podgc"] = startPodGCController
    controllers["resourcequota"] = startResourceQuotaController
    controllers["namespace"] = startNamespaceController
    controllers["serviceaccount"] = startServiceAccountController
    controllers["garbagecollector"] = startGarbageCollectorController
    controllers["daemonset"] = startDaemonSetController
    controllers["job"] = startJobController
    controllers["deployment"] = startDeploymentController
    controllers["replicaset"] = startReplicaSetController
    controllers["horizontalpodautoscaling"] = startHPAController
    controllers["disruption"] = startDisruptionController
    controllers["statefuleset"] = startStatefulSetController
    controllers["cronjob"] = startCronJobController
    controllers["certificatesigningrequests"] = startCSRController
    controllers["ttl"] = startTTLController
    controllers["bootstrapsigner"] = startBootstrapSignerController
    controllers["tokencleaner"] = startTokenCleanerController

    return controllers
}
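
Every entry in this map is an InitFunc with the signature func(ctx ControllerContext) (bool, error). As an illustration of the shape only (startFooController, NewFooController and ConcurrentFooSyncs are made-up names; real entries such as startEndpointController follow the same pattern):

// Hypothetical InitFunc, sketched after the real start functions in
// cmd/kube-controller-manager/app; the Foo-named identifiers are invented.
func startFooController(ctx ControllerContext) (bool, error) {
    // Shared informers and API clients both come out of the ControllerContext.
    go NewFooController(
        ctx.InformerFactory.Core().V1().Pods(),
        ctx.ClientBuilder.ClientOrDie("foo-controller"),
    ).Run(int(ctx.Options.ConcurrentFooSyncs), ctx.Stop)
    // true means "started"; returning false makes the caller log
    // "Skipping" and continue with the next controller.
    return true, nil
}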

The map is then walked in a loop and each controller is started:

    for controllerName, initFn := range controllers {
        if !ctx.IsControllerEnabled(controllerName) {
            glog.Warningf("%q is disabled", controllerName)
            continue
        }

        time.Sleep(wait.Jitter(s.ControllerStartInterval.Duration, ControllerStartJitter))

        glog.V(1).Infof("Starting %q", controllerName)
        started, err := initFn(ctx)
        if err != nil {
            glog.Errorf("Error starting %q", controllerName)
            return err
        }
        if !started {
            glog.Warningf("Skipping %q", controllerName)
            continue
        }
        glog.Infof("Started %q", controllerName)
    }
    ...
    sharedInformers.Start(stop)
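
Note the wait.Jitter call in the loop: it staggers the controllers' start times so they do not all hammer the API server at once. A standalone illustration of its behavior (the durations are arbitrary; in recent trees the package lives at k8s.io/apimachinery/pkg/util/wait):

package main

import (
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

func main() {
    base := 100 * time.Millisecond
    for i := 0; i < 3; i++ {
        // Jitter returns base plus a random extra of up to maxFactor*base,
        // i.e. each value lands somewhere in [100ms, 200ms).
        fmt.Println(wait.Jitter(base, 1.0))
    }
}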

After all the controllers have been started, sharedInformers.Start(stop) is called; this is the Start method that launches the informers.

type SharedInformerFactory interface {
    Start(stopCh <-chan struct{})
    InformerFor(obj runtime.Object, newFunc NewInformerFunc) cache.SharedIndexInformer
}

The interface is shown above; it lives in kubernetes/pkg/client/informers/informers_generated/externalversions/internalinterfaces/factory_interfaces.go, and its implementation is below (kubernetes/staging/src/k8s.io/client-go/informers/factory.go):

func (f *sharedInformerFactory) Start(stopCh <-chan struct{}) {
    f.lock.Lock()
    defer f.lock.Unlock()

    for informerType, informer := range f.informers {
        if !f.startedInformers[informerType] {
            go informer.Run(stopCh)
            f.startedInformers[informerType] = true
        }
    }
}
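
To make Start concrete: f.informers is filled in lazily by InformerFor, which every typed accessor (e.g. Core().V1().Pods()) calls the first time it is used. A usage sketch, assuming a client-go of roughly this vintage and the usual imports (k8s.io/client-go/informers, k8s.io/client-go/kubernetes, k8s.io/client-go/tools/cache):

// Usage sketch, not CM source: wire one informer up through the factory.
func startPodInformer(clientset kubernetes.Interface, stopCh <-chan struct{}) cache.SharedIndexInformer {
    factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)

    // Goes through InformerFor and registers the informer in f.informers;
    // nothing is running yet.
    podInformer := factory.Core().V1().Pods().Informer()

    // Start launches every registered informer in its own goroutine,
    // exactly as in the loop above.
    factory.Start(stopCh)

    // Block until the initial List has populated the local cache.
    if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) {
        panic("pod informer cache never synced")
    }
    return podInformer
}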

Back in the factory's Start: it iterates over the registered informers and launches each one's Run in its own goroutine. Run belongs to the SharedInformer interface, defined in kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:

type SharedInformer interface {
    // AddEventHandler adds an event handler to the shared informer using the shared informer's resync
    // period.  Events to a single handler are delivered sequentially, but there is no coordination
    // between different handlers.
    AddEventHandler(handler ResourceEventHandler)
    // AddEventHandlerWithResyncPeriod adds an event handler to the shared informer using the
    // specified resync period.  Events to a single handler are delivered sequentially, but there is
    // no coordination between different handlers.
    AddEventHandlerWithResyncPeriod(handler ResourceEventHandler, resyncPeriod time.Duration)
    // GetStore returns the Store.
    GetStore() Store
    // GetController gives back a synthetic interface that "votes" to start the informer
    GetController() Controller
    // Run starts the shared informer, which will be stopped when stopCh is closed.
    Run(stopCh <-chan struct{})
    // HasSynced returns true if the shared informer's store has synced.
    HasSynced() bool
    // LastSyncResourceVersion is the resource version observed when last synced with the underlying
    // store. The value returned is not synchronized with access to the underlying store and is not
    // thread-safe.
    LastSyncResourceVersion() string
}
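
Of these methods, AddEventHandler is the one controllers use to consume the event stream. Continuing the pod-informer sketch from above (the handler bodies are just placeholders; with a current client-go, Pod comes from k8s.io/api/core/v1):

// Register callbacks; events for one handler are delivered sequentially,
// as the interface comment promises.
podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: func(obj interface{}) {
        pod := obj.(*v1.Pod)
        fmt.Printf("pod added: %s/%s\n", pod.Namespace, pod.Name)
    },
    UpdateFunc: func(oldObj, newObj interface{}) {
        fmt.Println("pod updated")
    },
    DeleteFunc: func(obj interface{}) {
        // On missed deletes obj can be a cache.DeletedFinalStateUnknown
        // tombstone rather than the object itself.
        fmt.Println("pod deleted")
    },
})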

The implementing type is sharedIndexInformer, in the same file; its Run method:

func (s *sharedIndexInformer) Run(stopCh <-chan struct{}) {
    defer utilruntime.HandleCrash()

    fifo := NewDeltaFIFO(MetaNamespaceKeyFunc, nil, s.indexer)

    cfg := &Config{
        Queue:            fifo,
        ListerWatcher:    s.listerWatcher,
        ObjectType:       s.objectType,
        FullResyncPeriod: s.resyncCheckPeriod,
        RetryOnError:     false,
        ShouldResync:     s.processor.shouldResync,

        Process: s.HandleDeltas,
    }

    func() {
        s.startedLock.Lock()
        defer s.startedLock.Unlock()

        s.controller = New(cfg)
        s.controller.(*controller).clock = s.clock
        s.started = true
    }()

    s.stopCh = stopCh
    s.cacheMutationDetector.Run(stopCh)
    s.processor.run(stopCh)
    s.controller.Run(stopCh)
}
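
The key wiring in the Config above is Process: s.HandleDeltas: every batch of deltas popped off the DeltaFIFO first updates the indexer (the cache behind GetStore) and is then fanned out to the registered handlers. A simplified sketch of that flow, not the verbatim source (the real method also tracks which notifications are resyncs; see shared_informer.go):

// Simplified sketch of what HandleDeltas does for each popped item.
func handleDeltasSketch(s *sharedIndexInformer, obj interface{}) error {
    for _, d := range obj.(Deltas) { // deltas are processed oldest first
        switch d.Type {
        case Sync, Added, Updated:
            if old, exists, err := s.indexer.Get(d.Object); err == nil && exists {
                s.indexer.Update(d.Object)
                s.processor.distribute(updateNotification{oldObj: old, newObj: d.Object}, false)
            } else {
                s.indexer.Add(d.Object)
                s.processor.distribute(addNotification{newObj: d.Object}, false)
            }
        case Deleted:
            s.indexer.Delete(d.Object)
            s.processor.distribute(deleteNotification{oldObj: d.Object}, false)
        }
    }
    return nil
}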

The part to focus on here is the controller's Run method:

func (c *controller) Run(stopCh <-chan struct{}) {
    defer utilruntime.HandleCrash()
    go func() {
        <-stopCh
        c.config.Queue.Close()
    }()
    r := NewReflector(
        c.config.ListerWatcher,
        c.config.ObjectType,
        c.config.Queue,
        c.config.FullResyncPeriod,
    )
    r.ShouldResync = c.config.ShouldResync
    r.clock = c.clock

    c.reflectorMutex.Lock()
    c.reflector = r
    c.reflectorMutex.Unlock()

    r.RunUntil(stopCh)

    wait.Until(c.processLoop, time.Second, stopCh)
}
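
RunUntil is the producer side; the wait.Until(c.processLoop, time.Second, stopCh) line at the end is the consumer. processLoop simply pops items off the DeltaFIFO and feeds them to the Process callback (HandleDeltas); in this era of client-go's controller.go it looks roughly like this:

func (c *controller) processLoop() {
    for {
        // Pop blocks until an item is available, then invokes the Process
        // callback on it before removing it from the queue.
        obj, err := c.config.Queue.Pop(PopProcessFunc(c.config.Process))
        if err != nil {
            if c.config.RetryOnError {
                // This is the safe way to re-enqueue.
                c.config.Queue.AddIfNotPresent(obj)
            }
        }
    }
}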

Inside RunUntil we finally reach ListAndWatch:

func (r *Reflector) RunUntil(stopCh <-chan struct{}) {
    glog.V(3).Infof("Starting reflector %v (%s) from %s", r.expectedType, r.resyncPeriod, r.name)
    go wait.Until(func() {
        if err := r.ListAndWatch(stopCh); err != nil {
            utilruntime.HandleError(err)
        }
    }, r.period, stopCh)
}
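
ListAndWatch does exactly what the name promises: one full List to seed the store and capture a resourceVersion, then a long-lived Watch from that version, with every event funneled into the store (the DeltaFIFO built in sharedIndexInformer.Run above). A heavily simplified sketch, not the verbatim source (error handling, resync timers and re-list logic omitted; lw is any cache.ListerWatcher, and meta/metav1 come from k8s.io/apimachinery):

// Simplified shape of Reflector.ListAndWatch.
func listAndWatchSketch(lw cache.ListerWatcher, stopCh <-chan struct{}) error {
    // 1. One full LIST seeds the local store and yields a resourceVersion.
    list, err := lw.List(metav1.ListOptions{})
    if err != nil {
        return err
    }
    listMeta, err := meta.ListAccessor(list)
    if err != nil {
        return err
    }
    rv := listMeta.GetResourceVersion()

    // 2. A long-running WATCH then streams every change after that version.
    w, err := lw.Watch(metav1.ListOptions{ResourceVersion: rv})
    if err != nil {
        return err
    }
    defer w.Stop()
    for {
        select {
        case <-stopCh:
            return nil
        case event, ok := <-w.ResultChan():
            if !ok {
                // Watch closed; the real Reflector re-lists and starts over.
                return nil
            }
            // The real code turns each event into an Add/Update/Delete on
            // the DeltaFIFO; acknowledged but dropped here.
            _ = event
        }
    }
}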

This brings us back to the kubelet watch covered in the previous post; refer to that post for the rest of the story.
