Kubernetes Concepts - ReplicationController

ReplicationController

NOTE: A Deployment that configures a ReplicaSet is now the recommended way to set up replication.

ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.

How a ReplicationController Works

If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated. For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade. For this reason, you should use a ReplicationController even if your application requires only a single pod. A ReplicationController is similar to a process supervisor, but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods across multiple nodes.

ReplicationController is often abbreviated to “rc” or “rcs” in discussion, and as a shortcut in kubectl commands.

A simple case is to create one ReplicationController object to reliably run one instance of a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated service, such as web servers.

Running an example ReplicationController

This example ReplicationController config runs three copies of the nginx web server.

replication.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Run the example by downloading the example file and then running this command:

$ kubectl create -f ./replication.yaml
replicationcontroller "nginx" created

Check on the status of the ReplicationController using this command:

$ kubectl describe replicationcontrollers/nginx
Name:         nginx
Namespace:    default
Selector:     app=nginx
Labels:       app=nginx
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  FirstSeen  LastSeen  Count  From                     SubobjectPath  Type    Reason            Message
  ---------  --------  -----  ----                     -------------  ----    ------            -------
  20s        20s       1      {replication-controller }                Normal  SuccessfulCreate  Created pod: nginx-qrm3m
  20s        20s       1      {replication-controller }                Normal  SuccessfulCreate  Created pod: nginx-3ntk0
  20s        20s       1      {replication-controller }                Normal  SuccessfulCreate  Created pod: nginx-4ok8v

Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:

Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed

To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:

$ pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
$ echo $pods
nginx-3ntk0 nginx-4ok8v nginx-qrm3m

Here, the selector is the same as the selector for the ReplicationController (seen in the kubectl describe output, and in a different form in replication.yaml). The --output=jsonpath option specifies an expression that gets just the name from each pod in the returned list.

Writing a ReplicationController Spec

As with all other Kubernetes config, a ReplicationController needs apiVersion, kind, and metadata fields. For general information about working with config files, see object management.

A ReplicationController also needs a .spec section.

Pod Template

The .spec.template is the only required field of the .spec.

The .spec.template is a pod template. It has exactly the same schema as a pod, except it is nested and does not have an apiVersion or kind.

In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See pod selector.

Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.

For local container restarts, ReplicationControllers delegate to an agent on the node, for example the Kubelet or Docker.
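The template constraints above can be summarized in a minimal sketch (the label and image are illustrative, not prescribed):

```yaml
# Minimal .spec.template sketch for a ReplicationController.
template:
  metadata:
    labels:
      app: nginx            # must not overlap other controllers' selectors
  spec:
    restartPolicy: Always   # optional; Always is the default and the only legal value
    containers:
    - name: nginx
      image: nginx
```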

Labels on the ReplicationController

The ReplicationController can itself have labels (.metadata.labels). Typically, you would set these the same as the .spec.template.metadata.labels; if .metadata.labels is not specified then it defaults to .spec.template.metadata.labels. However, they are allowed to be different, and the .metadata.labels do not affect the behavior of the ReplicationController.

Pod Selector

The .spec.selector field is a label selector. A ReplicationController manages all the pods with labels that match the selector. It does not distinguish between pods that it created or deleted and pods that another person or process created or deleted. This allows the ReplicationController to be replaced without affecting the running pods.

If specified, the .spec.template.metadata.labels must be equal to the .spec.selector, or it will be rejected by the API. If .spec.selector is unspecified, it will be defaulted to .spec.template.metadata.labels.
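As a sketch, a spec whose selector and template labels agree (the label key/value is illustrative):

```yaml
spec:
  replicas: 3
  selector:
    app: nginx      # must equal .spec.template.metadata.labels;
  template:         # if omitted, it defaults to those labels
    metadata:
      labels:
        app: nginx
```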

Also you should not normally create any pods whose labels match this selector, either directly, with another ReplicationController, or with another controller such as Job. If you do so, the ReplicationController thinks that it created the other pods. Kubernetes does not stop you from doing this.

If you do end up with multiple controllers that have overlapping selectors, you will have to manage the deletion yourself (see below).

Multiple Replicas

You can specify how many pods should run concurrently by setting .spec.replicas to the number of pods you would like to have running concurrently. The number running at any time may be higher or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully shutdown, and a replacement starts early.

If you do not specify .spec.replicas, then it defaults to 1.

Working with ReplicationControllers

Deleting a ReplicationController and its Pods

To delete a ReplicationController and all its pods, use kubectl delete. Kubectl will scale the ReplicationController to zero and wait for it to delete each pod before deleting the ReplicationController itself. If this kubectl command is interrupted, it can be restarted.

When using the REST API or go client library, you need to do the steps explicitly (scale replicas to 0, wait for pod deletions, then delete the ReplicationController).

Deleting just a ReplicationController

You can delete a ReplicationController without affecting any of its pods.

Using kubectl, specify the --cascade=false option to kubectl delete.

When using the REST API or go client library, simply delete the ReplicationController object.

Once the original is deleted, you can create a new ReplicationController to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old pods. However, it will not make any effort to make existing pods match a new, different pod template. To update pods to a new spec in a controlled way, use a rolling update.
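The replacement flow might look like this against the nginx example above (a sketch; it requires a running cluster):

```shell
# Delete only the controller; its pods keep running.
kubectl delete rc nginx --cascade=false

# Recreate it (or a successor with the same selector); because
# .spec.selector still matches app=nginx, the new controller adopts
# the old pods rather than creating new ones.
kubectl create -f ./replication.yaml
```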

Isolating pods from a ReplicationController

Pods may be removed from a ReplicationController’s target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
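For example, using one of the pod names from the describe output above (a sketch; the replacement label is hypothetical):

```shell
# Change the pod's app label so it no longer matches the RC's selector.
# The RC immediately creates a replacement; the isolated pod keeps
# running under the new label and can be debugged at leisure.
kubectl label pod nginx-qrm3m app=nginx-debug --overwrite
```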

Common usage patterns

Rescheduling

As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (for example, due to an action by another control agent).

Scaling

The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the replicas field.
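For instance, with the nginx example above, either command updates the replicas field:

```shell
# Scale up to 5 replicas...
kubectl scale rc nginx --replicas=5

# ...or patch the field directly; the effect is the same.
kubectl patch rc nginx -p '{"spec":{"replicas":2}}'
```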

Rolling updates

The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.

As explained in #1353, the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.

Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.

The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.

Rolling update is implemented in the client tool kubectl rolling-update. Visit kubectl rolling-update task for more concrete examples.
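A minimal invocation might look like this (the image tag is illustrative):

```shell
# kubectl creates a second RC from the new image, scales it up while
# scaling the old RC down one pod at a time, then deletes the old RC.
kubectl rolling-update nginx --image=nginx:1.9.1
```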

Multiple release tracks

In addition to running multiple releases of an application while a rolling update is in progress, it’s common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.

For instance, a service might target all pods with tier in (frontend), environment in (prod). Now say you have 10 replicated pods that make up this tier. But you want to be able to ‘canary’ a new version of this component. You could set up a ReplicationController with replicas set to 9 for the bulk of the replicas, with labels tier=frontend, environment=prod, track=stable, and another ReplicationController with replicas set to 1 for the canary, with labels tier=frontend, environment=prod, track=canary. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc.
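The two tracks described above might be written as follows (controller names and images are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-stable
spec:
  replicas: 9
  selector:
    tier: frontend
    environment: prod
    track: stable
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: stable
    spec:
      containers:
      - name: frontend
        image: frontend:stable
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary
spec:
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: frontend:canary
```

A service selecting only tier=frontend, environment=prod spans both tracks, so roughly one tenth of its traffic reaches the canary.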

Using ReplicationControllers with Services

Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic goes to the old version, and some goes to the new version.

A ReplicationController will never terminate on its own, but it isn’t expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services.


Writing programs for Replication

Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the RabbitMQ work queues, as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.


Responsibilities of the ReplicationController

The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, readiness and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.

The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in #492), which would change its replicas field. We will not add scheduling policies (for example, spreading) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation (#170).

The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The “macro” operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like Asgard managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.


API Object

Replication controller is a top-level resource in the Kubernetes REST API. More details about the API object can be found at: ReplicationController API object.

Alternatives to ReplicationController

ReplicaSet

ReplicaSet is the next-generation ReplicationController that supports the new set-based label selector. It’s mainly used by Deployment as a mechanism to orchestrate pod creation, deletion and updates. Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all.
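A sketch of the same nginx example as a ReplicaSet with a set-based selector (the apps/v1 API group is assumed; older clusters used a beta group):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchExpressions:   # set-based selector; not supported by RC
    - key: app
      operator: In
      values: [nginx]
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```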

Deployment is a higher-level API object that updates its underlying Replica Sets and their Pods in a similar fashion as kubectl rolling-update. Deployments are recommended if you want this rolling update functionality, because unlike kubectl rolling-update, they are declarative, server-side, and have additional features.

Bare Pods

Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (for example, Kubelet or Docker).

Job

Use a Job instead of a ReplicationController for pods that are expected to terminate on their own (that is, batch jobs).

DaemonSet

Use a DaemonSet instead of a ReplicationController for pods that provide a machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied to a machine lifetime: the pod needs to be running on the machine before other pods start, and it is safe to terminate when the machine is otherwise ready to be rebooted/shutdown.

For more information

Read Run Stateless Application Replication Controller.

Reposted from: https://www.cnblogs.com/MyLifeMyWay/p/8533478.html
