[kubeflow] Building a training-operator-like project from scratch

I have been reading the kubeflow/training-operator source code recently and wondering how to build a similar project from scratch. After digging through a lot of material online (and piling up a dense wall of Chrome tabs), I am writing down my learning process here. The code lives at https://github.com/hanjialeOK/simple-operator

Introduction

training-operator is essentially a set of CRDs plus their controllers, covering tfjob, pytorchjob, mpijob, mxjob, paddlejob and so on. Here we focus on tfjob and pytorchjob. Kubeflow has dedicated documentation for each job type: TFJob and PyTorchJob.

TFJob is named that way because it is tailored to TensorFlow distributed training. TensorFlow's classic distributed training uses a parameter-server architecture and communicates on port 2222, so the pod roles in a tfjob are PS and Worker, with port 2222 open by default. Take tf_job_mnist.yaml as an example: once it is running, kubectl get all outputs the following. Every PS and Worker gets a headless service with the same name as its pod, exposing port 2222 for communication. The pods themselves do not need to care about the communication logic, because that is decided by the user's TensorFlow distributed training code.

➜  dockerfiles kubectl get all 
NAME                                    READY   STATUS    RESTARTS   AGE
pod/dist-mnist-for-e2e-test-ps-0        1/1     Running   0          110s
pod/dist-mnist-for-e2e-test-ps-1        1/1     Running   0          110s
pod/dist-mnist-for-e2e-test-worker-0    1/1     Running   0          110s
pod/dist-mnist-for-e2e-test-worker-1    1/1     Running   0          110s
pod/dist-mnist-for-e2e-test-worker-2    1/1     Running   0          110s
pod/dist-mnist-for-e2e-test-worker-3    1/1     Running   0          110s

NAME                                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/dist-mnist-for-e2e-test-ps-0       ClusterIP   None         <none>        2222/TCP   110s
service/dist-mnist-for-e2e-test-ps-1       ClusterIP   None         <none>        2222/TCP   110s
service/dist-mnist-for-e2e-test-worker-0   ClusterIP   None         <none>        2222/TCP   110s
service/dist-mnist-for-e2e-test-worker-1   ClusterIP   None         <none>        2222/TCP   110s
service/dist-mnist-for-e2e-test-worker-2   ClusterIP   None         <none>        2222/TCP   110s
service/dist-mnist-for-e2e-test-worker-3   ClusterIP   None         <none>        2222/TCP   110s

tfjob injects a TF_CONFIG environment variable into every PS and Worker pod.

TF_CONFIG={"cluster":{"ps":["dist-mnist-for-e2e-test-ps-0.default.svc:2222","dist-mnist-for-e2e-test-ps-1.default.svc:2222"],"worker":["dist-mnist-for-e2e-test-worker-0.default.svc:2222","dist-mnist-for-e2e-test-worker-1.default.svc:2222","dist-mnist-for-e2e-test-worker-2.default.svc:2222","dist-mnist-for-e2e-test-worker-3.default.svc:2222"]},"task":{"type":"worker","index":0},"environment":"cloud"}

When running TensorFlow distributed training, the training code can load this environment variable; for example, dist.mnist.py does the following:

tf_config = json.loads(os.environ.get('TF_CONFIG') or '{}')
# pull ps_spec and worker_spec out of tf_config
# ...
cluster = tf.train.ClusterSpec({"ps": ps_spec, "worker": worker_spec})
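
On the operator side, it is the controller that computes this TF_CONFIG value for every replica. Below is a minimal Go sketch of how such a value could be assembled; it is only an illustration (the buildTFConfig helper and its parameters are invented for this example), not the actual training-operator code, and it assumes each replica is reachable at <job>-<role>-<index>.<namespace>.svc:2222 through its headless service.

package main

import (
	"encoding/json"
	"fmt"
)

// tfConfig mirrors the shape of the TF_CONFIG value shown above.
type tfConfig struct {
	Cluster map[string][]string `json:"cluster"`
	Task    struct {
		Type  string `json:"type"`
		Index int    `json:"index"`
	} `json:"task"`
	Environment string `json:"environment"`
}

// buildTFConfig assembles TF_CONFIG for one replica of a job, given the number
// of replicas per role and this replica's own role and index.
func buildTFConfig(job, namespace string, replicas map[string]int, taskType string, taskIndex int) (string, error) {
	cfg := tfConfig{Cluster: map[string][]string{}, Environment: "cloud"}
	for role, n := range replicas {
		for i := 0; i < n; i++ {
			// Each headless service is named after its pod and listens on 2222.
			cfg.Cluster[role] = append(cfg.Cluster[role],
				fmt.Sprintf("%s-%s-%d.%s.svc:2222", job, role, i, namespace))
		}
	}
	cfg.Task.Type = taskType
	cfg.Task.Index = taskIndex
	b, err := json.Marshal(cfg)
	return string(b), err
}

func main() {
	v, _ := buildTFConfig("dist-mnist-for-e2e-test", "default",
		map[string]int{"ps": 2, "worker": 4}, "worker", 0)
	fmt.Println("TF_CONFIG=" + v) // roughly matches the value shown above
}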

Likewise, pytorchjob is tailored to PyTorch distributed training. PyTorch distributed training uses a master-worker architecture and port 23456 by default, so the replica roles in a pytorchjob are Master and Worker, with the Master's port 23456 open by default. There can be only one Master; otherwise the pytorchjob reports an error. Take simple.yaml as an example: once it is running, kubectl get all outputs the following. Every Master and Worker gets a headless service with the same name as its pod, exposing port 23456 for communication.

➜  dockerfiles kubectl get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/pytorch-simple-master-0              1/1     Running   0          3m33s
pod/pytorch-simple-worker-0              1/1     Running   0          3m33s

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/pytorch-simple-master-0   ClusterIP   None             <none>        23456/TCP   3m33s
service/pytorch-simple-worker-0   ClusterIP   None             <none>        23456/TCP   3m33s

Each pod gets the environment variables below. Every worker only needs to know the master's service name and port, because the master never pushes messages to a specific worker; instead, all workers subscribe to the master.

MASTER_ADDR=pytorch-simple-master-0  # name of the master's headless service
MASTER_PORT=23456                    # port of the master's service
WORLD_SIZE=2                         # total number of masters + workers
RANK=0                               # index of this replica
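
On the controller side, a minimal sketch of how these variables could be injected into a replica's pod template (illustrative only; the setPyTorchEnv helper and its jobName/rank/worldSize parameters are assumptions for this example, and the master service name follows the <job>-master-0 pattern shown above):

package controller

import (
	"fmt"
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

// setPyTorchEnv appends the rendezvous variables to every container in the pod
// spec, assuming the master's headless service is named "<job>-master-0".
func setPyTorchEnv(podSpec *corev1.PodSpec, jobName string, rank, worldSize int) {
	env := []corev1.EnvVar{
		{Name: "MASTER_ADDR", Value: fmt.Sprintf("%s-master-0", jobName)},
		{Name: "MASTER_PORT", Value: "23456"},
		{Name: "WORLD_SIZE", Value: strconv.Itoa(worldSize)},
		{Name: "RANK", Value: strconv.Itoa(rank)},
	}
	for i := range podSpec.Containers {
		podSpec.Containers[i].Env = append(podSpec.Containers[i].Env, env...)
	}
}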

mnist.py is the PyTorch training code; it is worth a look.

Main part

2020.9.23: there are two schools of thought for implementing a CRD, and both show up in tf-operator:

  • tf-operator.v1 is the version currently in use, hand-written directly on top of informers and clients.
  • training-operator.v1 is the design for the next major version; it is not yet stable and has not been released. training-operator.v1 is written on top of controller-runtime. Of course the two share a lot of code, so the shared part was extracted into github.com/kubeflow/common. kubeflow/common is used not only for tf-operator's own iteration, but also to integrate operators for other machine learning frameworks.

The passage above comes from tf-operator源码分析 (a tf-operator source-code analysis article). But it is 2023/8/17 now: training-operator has become mature enough, and the latest version has already merged the kubeflow/common code into the training-operator main branch, so the rest of this article revolves around training-operator. The back waves of the Yangtze really do push the front waves on…

kubebuilder

Back to the topic: this article is about how to build a training-operator-like project from scratch. The article 自定义资源对象与控制器的实现 · K8S 实践 04 (Implementing custom resources and controllers · K8S practice 04) helped me enormously; I basically followed its flow.

An operator = CRD + controller: the CRD is just a set of Go structs, and the controller is what drives those resources. Upstream Kubernetes provides two tools, kubebuilder and code-generator, to help you build your own operator. kubebuilder quickly generates the operator scaffolding; once the CRD's Go structs are written, code-generator can generate the corresponding clientset, informers, listers and deepcopy code, and the only thing left for you to write is the Reconcile function in the controller.
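
To make "the only thing left is the Reconcile function" concrete, here is a minimal sketch of what such a function looks like in a controller-runtime based project. It is illustrative only; the struct name and import path match the layout we end up with later in this article, and the real training-operator Reconcile does far more.

package controller

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"

	kubefloworgv1 "github.com/hanjialeok/simple-operator/pkg/apis/kubeflow.org/v1"
)

// TFJobReconciler reconciles TFJob objects.
type TFJobReconciler struct {
	client.Client
}

// Reconcile is called whenever a watched TFJob (or an object it owns) changes.
func (r *TFJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	// Fetch the TFJob that triggered this event.
	tfjob := &kubefloworgv1.TFJob{}
	if err := r.Get(ctx, req.NamespacedName, tfjob); err != nil {
		// It may already have been deleted; nothing left to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Compare tfjob.Spec with the pods/services that actually exist, create or
	// delete resources as needed, and update tfjob.Status. That loop body is
	// the real work of the operator.
	logger.Info("reconciling", "tfjob", req.NamespacedName)
	return ctrl.Result{}, nil
}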

You can use the kubectl api-resources command to list all resources.

NAME                              SHORTNAMES            APIVERSION                             NAMESPACED   KIND 
cronjobs                          cj                    batch/v1                               true         CronJob
jobs                                                    batch/v1                               true         Job
jobs                              vcjob,vj              batch.volcano.sh/v1alpha1              true         Job
commands                                                bus.volcano.sh/v1alpha1                true         Command                                                                                                               
scaleins                                                kai.alibabacloud.com/v1alpha1          true         ScaleIn
scaleouts                                               kai.alibabacloud.com/v1alpha1          true         ScaleOut
trainingjobs                                            kai.alibabacloud.com/v1alpha1          true         TrainingJob
mpijobs                                                 kubeflow.org/v1                        true         MPIJob
mxjobs                                                  kubeflow.org/v1                        true         MXJob
paddlejobs                                              kubeflow.org/v1                        true         PaddleJob
pytorchjobs                                             kubeflow.org/v1                        true         PyTorchJob
tfjobs                                                  kubeflow.org/v1                        true         TFJob
xgboostjobs                                             kubeflow.org/v1                        true         XGBoostJob
...

The APIVERSION shown above is group/version, and KIND is the name of the Go struct; these concepts come up again below.

A type inside an API group is called a GroupVersionKind (GVK), and a resource of an API group is called a GroupVersionResource (GVR). Each GVK maps to one Go type / struct in the code. Concretely, the conversion between a GVK and its Go struct is handled by runtime.Scheme: after writing the CR's Go struct, you register it into a global runtime.Scheme instance by calling something like AddToScheme(), and from then on the scheme takes care of translating between the two.

type Scheme struct {
    // versionMap allows one to figure out the go type of an object with
    // the given version and name.
    gvkToType map[schema.GroupVersionKind]reflect.Type

    // typeToGroupVersion allows one to find metadata for a given go object.
    // The reflect.Type we index by should *not* be a pointer.
    typeToGVK map[reflect.Type][]schema.GroupVersionKind
    ... }

From 自定义资源对象与控制器的实现 · K8S 实践 04
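
In a kubebuilder project this registration boils down to a few generated lines. The sketch below is trimmed from what kubebuilder scaffolds in groupversion_info.go plus the way main.go consumes it, folded into one runnable file for illustration:

package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	"sigs.k8s.io/controller-runtime/pkg/scheme"
)

var (
	// GroupVersion identifies our API group and version.
	GroupVersion = schema.GroupVersion{Group: "kubeflow.org", Version: "v1"}

	// SchemeBuilder collects the Go types belonging to this group-version;
	// the generated types file calls SchemeBuilder.Register(&TFJob{}, &TFJobList{}).
	SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

	// AddToScheme registers those types into a runtime.Scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)

func main() {
	s := runtime.NewScheme()
	utilruntime.Must(clientgoscheme.AddToScheme(s)) // built-in types (Pod, Service, ...)
	utilruntime.Must(AddToScheme(s))                // our CRD types
}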

Take client.Get as an example. Inside Get, the &core.Pod{} object is first mapped to its GVK, the GVK is then mapped to the corresponding GVR, and the GVR is what is used to talk to the API server over the REST API.

pod := &core.Pod{}		// via reflection the client resolves the Pod's Go type, hence its GVK, picks the matching client/informer, and then fetches the real object by its namespaced name
err := r.Client.Get(ctx, req.NamespacedName, pod)

The official kubebuilder documentation is here.

The workflow for implementing an Operator with kubebuilder is as follows:

  • Create a new project directory and initialize it with kubebuilder init;
  • Create the API group and CR with kubebuilder create, and write the corresponding Go structs;
  • Use code-generator to generate clientset, informers, listers and deepcopy code for the CR;
  • Implement the reconcile loop in the controller so that CR instances converge from their current state to the desired state;
  • Build, test, and release the Operator.

From 自定义资源对象与控制器的实现 · K8S 实践 04

First, download the latest kubebuilder. The latest version at the time of writing is 3.11.1; since I am on Linux, I pick the kubebuilder_linux_amd64 build. It is a pre-compiled binary, so just add execute permission after downloading and rename it for convenience:

chmod +x kubebuilder_linux_amd64
mv kubebuilder_linux_amd64 kubebuilder

Next, let's scaffold our training-operator. Modern Go projects manage dependencies with Go modules, and the traditional $GOPATH/src layout is rarely used any more, so we use Go modules too.

First create the simple-operator directory and initialize the Go module:

mkdir simple-operator && cd simple-operator
go mod init github.com/hanjialeok/simple-operator

This creates a go.mod file. Now we can initialize the project with kubebuilder. I keep the kubebuilder binary in my ~/ directory; if that is annoying you can drop it into /usr/local/bin instead.

Let's look at the help for kubebuilder init first. --domain is the domain for API groups, and --owner appears in the copyright header at the top of the generated code.

➜  dockerfiles ~/kubebuilder init --help                                              
Initialize a new project including the following files:
  - a "go.mod" with project dependencies
  - a "PROJECT" file that stores project configuration
  - a "Makefile" with several useful make targets for the project
  - several YAML files for project deployment under the "config" directory
  - a "cmd/main.go" file that creates the manager that will run the project controllers

Usage:
  kubebuilder init [flags]

Examples:
  # Initialize a new project with your domain and name in copyright
  kubebuilder init --plugins go/v4 --domain example.org --owner "Your name"

  # Initialize a new project defining a specific project version
  kubebuilder init --plugins go/v4 --project-version 3


Flags:
      --domain string            domain for groups (default "my.domain")
      --fetch-deps               ensure dependencies are downloaded (default true)
  -h, --help                     help for init
      --license string           license to use to boilerplate, may be one of 'apache2', 'none' (default "apache2")
      --owner string             owner to add to the copyright
      --project-name string      name of this project
      --project-version string   project version (default "3")
      --repo string              name to use for go module (e.g., github.com/user/repo), defaults to the go package of the current working directory.
      --skip-go-version-check    if specified, skip checking the Go version

Global Flags:
      --plugins strings   plugin keys to be used for this subcommand execution

We run the following command:

~/kubebuilder init --domain kubeflow.org --owner "hanjialeok"

which outputs:

Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.15.0
Update dependencies:
$ go mod tidy
Next: define a resource with:
$ kubebuilder create api 

The generated directory tree looks like this:

➜  test-operator tree .          
.
├── cmd
│   └── main.go  // program entry point
├── config
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   └── rbac
│       ├── auth_proxy_client_clusterrole.yaml
│       ├── auth_proxy_role_binding.yaml
│       ├── auth_proxy_role.yaml
│       ├── auth_proxy_service.yaml
│       ├── kustomization.yaml
│       ├── leader_election_role_binding.yaml
│       ├── leader_election_role.yaml
│       ├── role_binding.yaml
│       └── service_account.yaml
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt  // boilerplate header for generated code
├── Makefile  // drives the project's build, test, code generation and deployment
├── PROJECT
└── README.md

7 directories, 24 files

The Makefile at the project root gives us a set of targets for development, build and deployment:

➜  test-operator make help    

Usage:
  make <target>

General
  help             Display this help.

Development
  manifests        Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
  generate         Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
  fmt              Run go fmt against code.
  vet              Run go vet against code.
  test             Run tests.

Build
  build            Build manager binary.
  run              Run a controller from your host.
  docker-build     Build docker image with the manager.
  docker-push      Push docker image with the manager.
  docker-buildx    Build and push docker image for the manager for cross-platform support

Deployment
  install          Install CRDs into the K8s cluster specified in ~/.kube/config.
  uninstall        Uninstall CRDs from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
  deploy           Deploy controller to the K8s cluster specified in ~/.kube/config.
  undeploy         Undeploy controller from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.

Build Dependencies
  kustomize        Download kustomize locally if necessary. If wrong version is installed, it will be removed before downloading.
  controller-gen   Download controller-gen locally if necessary. If wrong version is installed, it will be overwritten.
  envtest          Download envtest-setup locally if necessary.

Once the CRD's Go structs are defined and the controller logic is written, we can run make <target> to generate the related code.

Next, we create the CRD we need with kubebuilder. --group, --version and --kind set the GVK.

~/kubebuilder create api --group kubeflow.org --version v1 --kind TFJob

Answer y to both prompts; the output is:

Create Resource [y/n]
y
Create Controller [y/n]
y
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
api/v1/tfjob_types.go
api/v1/groupversion_info.go
internal/controller/suite_test.go
internal/controller/tfjob_controller.go
Update dependencies:
$ go mod tidy
Running make:
$ make generate
mkdir -p /home/hanjiale.123/test-operator/bin
test -s /home/hanjiale.123/test-operator/bin/controller-gen && /home/hanjiale.123/test-operator/bin/controller-gen --version | grep -q v0.12.0 || \
GOBIN=/home/hanjiale.123/test-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.12.0
/home/hanjiale.123/test-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
$ make manifests

The directory tree now looks like this:

➜  test-operator tree .                                                                                                                                                                                                                                              
.                                                                                                                                                                                                                                                                             
├── api
│   └── v1
│       ├── groupversion_info.go
│       ├── tfjob_types.go  // write the CRD's Go struct here
│       └── zz_generated.deepcopy.go  // deepcopy code generated from the types file
├── bin
│   └── controller-gen
├── cmd
│   └── main.go  // program entry point
├── config
│   ├── crd
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_tfjobs.yaml
│   │       └── webhook_in_tfjobs.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── service_account.yaml
│   │   ├── tfjob_editor_role.yaml
│   │   └── tfjob_viewer_role.yaml
│   └── samples
│       ├── kubeflow.org_v1_tfjob.yaml
│       └── kustomization.yaml
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt  // boilerplate header for generated code
├── internal
│   └── controller
│       ├── suite_test.go
│       └── tfjob_controller.go  // implement the Reconcile logic here
├── Makefile
├── PROJECT
└── README.md

15 directories, 38 files

Next we need to fill in the TFJob definition in api/v1/tfjob_types.go. We rename tfjob_types.go to tensorflow_types.go and simply copy the two files common_types.go and tensorflow_types.go over from training-operator. common_types.go defines the common pieces shared by tfjob, pytorchjob and friends, while tensorflow_types.go defines the TFJob struct itself.

To keep things simple, watch out for this init function in tensorflow_types.go when copying; the marked line must not be copied over:

func init() {
	SchemeBuilder.Register(&TFJob{}, &TFJobList{})
	SchemeBuilder.SchemeBuilder.Register(addTensorflowDefaultingFuncs)  // do not copy this line, otherwise it drags in a lot of extra code
}
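
For orientation, the copied tensorflow_types.go defines roughly the following (a heavily trimmed sketch: the kubebuilder markers are omitted, and ReplicaType, ReplicaSpec, RunPolicy and JobStatus come from common_types.go):

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// TFJob is the Schema for the tfjobs API.
type TFJob struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   TFJobSpec `json:"spec,omitempty"`
	Status JobStatus `json:"status,omitempty"`
}

// TFJobSpec describes the desired replicas of each role.
type TFJobSpec struct {
	// RunPolicy holds policies shared by all job kinds (clean-up, backoff, ...).
	RunPolicy RunPolicy `json:"runPolicy"`

	// TFReplicaSpecs maps a replica type ("PS", "Worker", ...) to its spec.
	TFReplicaSpecs map[ReplicaType]*ReplicaSpec `json:"tfReplicaSpecs"`
}

// TFJobList contains a list of TFJob.
type TFJobList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []TFJob `json:"items"`
}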

Then we run make generate to refresh the deepcopy code:

make generate

Generating code with code-generator

Next we adjust the file layout to resemble training-operator's:

  • Create a cmd directory and move the entry point main.go into cmd/simple-operator. If we support other commands later, they also go under cmd;
  • Create a pkg directory with two subdirectories, apis and controller. Under apis, place the code that used to live under api/v1 as kubeflow.org/v1; under controller, place the code that used to live under internal/controller. This layout makes it easier to add more API groups and controllers later.

After these changes, do not forget to update the import paths in the affected files accordingly (see the example below).
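
For example, wherever the API package used to be imported as github.com/hanjialeok/simple-operator/api/v1 (in cmd/simple-operator/main.go and in the controller), the import now points at the new location; the alias name is up to you:

import (
	// module path from our go.mod + the new pkg/apis layout
	kubefloworgv1 "github.com/hanjialeok/simple-operator/pkg/apis/kubeflow.org/v1"
)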

The file tree now looks like this:

➜  test-operator tree .                                                                                                                                                                                                                                                       
.                                                                                                                                                                                                                                                                             
├── bin                                                                                                                                                                                                                                                                       
│   └── controller-gen   
├── cmd
│   └── simple-operator
│       └── main.go
├── config
│   ├── crd
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_tfjobs.yaml
│   │       └── webhook_in_tfjobs.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── service_account.yaml
│   │   ├── tfjob_editor_role.yaml
│   │   └── tfjob_viewer_role.yaml
│   └── samples
│       ├── kubeflow.org_v1_tfjob.yaml
│       └── kustomization.yaml
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── Makefile
├── pkg
│   ├── apis
│   │   └── kubeflow.org
│   │       └── v1
│   │           ├── common_types.go
│   │           ├── groupversion_info.go
│   │           ├── tensorflow_types.go
│   │           └── zz_generated.deepcopy.go
│   └── controller
│       ├── suite_test.go
│       └── tfjob_controller.go
├── PROJECT
└── README.md

17 directories, 39 files

make generate still works at this point, and the deepcopy code is generated as before.

Next we use code-generator to generate the clientset, informer and lister code. We add a few files under the hack directory.

  • generate-groups.sh and generate-internal-groups.sh come from code-generator release-1.27; reading them, it is easy to see that generate-groups.sh ultimately just calls generate-internal-groups.sh.
  • update-codegen.sh and verify-codegen.sh also come from code-generator release-1.27. This approach was marked as deprecated by upstream Kubernetes five months ago, but it is still the mainstream one. You adapt these two scripts to your own project; their job is to invoke generate-groups.sh and generate-internal-groups.sh.
  • tools.go comes from the upstream sample-controller; it makes go mod tidy download code-generator locally, which is convenient (its content is shown after the commands below).

cd hack
wget https://raw.githubusercontent.com/kubernetes/sample-controller/master/hack/tools.go
wget https://raw.githubusercontent.com/kubernetes/code-generator/release-1.27/hack/update-codegen.sh
wget https://raw.githubusercontent.com/kubernetes/code-generator/release-1.27/hack/verify-codegen.sh
wget https://raw.githubusercontent.com/kubernetes/code-generator/release-1.27/generate-groups.sh
wget https://raw.githubusercontent.com/kubernetes/code-generator/release-1.27/generate-internal-groups.sh
# add execute permission to all the scripts
chmod +x *.sh
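
tools.go itself is only a few lines; it looks roughly like this (the exact comment wording may differ in the version you download):

//go:build tools
// +build tools

// Package tools declares build-time dependencies as blank imports so that
// `go mod` keeps them in go.mod and downloads them, even though no regular
// code imports them.
package tools

import _ "k8s.io/code-generator"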

Let's look at the usage of the generate-internal-groups.sh script:

./generate-internal-groups.sh --help

which prints:

Usage: generate-internal-groups.sh <generators> <output-package> <internal-apis-package> <extensiona-apis-package> <groups-versions> ...

  <generators>        the generators comma separated to run (deepcopy,defaulter,conversion,client,lister,informer,openapi) or "all".
  <output-package>    the output package name (e.g. github.com/example/project/pkg/generated).
  <int-apis-package>  the internal types dir (e.g. github.com/example/project/pkg/apis).
  <ext-apis-package>  the external types dir (e.g. github.com/example/project/pkg/apis or githubcom/example/apis).
  <groups-versions>   the groups and their versions in the format "groupA:v1,v2 groupB:v1 groupC:v2", relative
                      to <api-package>.
  ...                 arbitrary flags passed to all generator binaries.

Examples:
  generate-internal-groups.sh all                           github.com/example/project/pkg/client github.com/example/project/pkg/apis github.com/example/project/pkg/apis "foo:v1 bar:v1alpha1,v1beta1"
  generate-internal-groups.sh deepcopy,defaulter,conversion github.com/example/project/pkg/client github.com/example/project/pkg/apis github.com/example/project/apis     "foo:v1 bar:v1alpha1,v1beta1"

The logic for calling generate-internal-groups.sh lives in update-codegen.sh, which needs to be adapted to our project; training-operator/hack/update-codegen.sh is a good reference. Below is my modified version.

#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail

SCRIPT_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
ROOT_PKG=github.com/hanjialeok/simple-operator

GET_PKG_LOCATION() {
  pkg_name="${1:-}"

  pkg_location="$(go list -m -f '{{.Dir}}' "${pkg_name}" 2>/dev/null)"
  if [ "${pkg_location}" = "" ]; then
    echo "${pkg_name} is missing. Running 'go mod download'."

    go mod download
    pkg_location=$(go list -m -f '{{.Dir}}' "${pkg_name}")
  fi
  echo "${pkg_location}"
}

# Grab code-generator version from go.sum
CODEGEN_PKG="$(GET_PKG_LOCATION "k8s.io/code-generator")"
echo ">> Using ${CODEGEN_PKG}"

# Ensure we can execute.
chmod +x ${CODEGEN_PKG}/generate-groups.sh
chmod +x ${CODEGEN_PKG}/generate-internal-groups.sh

# code-generator does work with go.mod but makes assumptions about
# the project living in `$GOPATH/src`. To work around this and support
# any location; create a temporary directory, use this as an output
# base, and copy everything back once generated.
TEMP_DIR=$(mktemp -d)
cleanup() {
    echo ">> Removing ${TEMP_DIR}"
    rm -rf ${TEMP_DIR}
}
trap "cleanup" EXIT SIGINT

echo ">> Temporary output directory ${TEMP_DIR}"

# generate the code with:
# --output-base    because this script should also be able to run inside the vendor dir of
#                  k8s.io/kubernetes. The output-base is needed for the generators to output into the vendor dir
#                  instead of the $GOPATH directly. For normal projects this can be dropped.

${CODEGEN_PKG}/generate-internal-groups.sh "deepcopy,defaulter,client,lister,informer,openapi" \
    github.com/hanjialeok/simple-operator/pkg/client \
    github.com/hanjialeok/simple-operator/pkg/apis \
    github.com/hanjialeok/simple-operator/pkg/apis \
    kubeflow.org:v1 \
    --output-base "${TEMP_DIR}" \
    --go-header-file ${SCRIPT_ROOT}/hack/boilerplate.go.txt

# Copy everything back.
cp -a "${TEMP_DIR}/${ROOT_PKG}/." "${SCRIPT_ROOT}/"

Because generate-internal-groups.sh can now handle deepcopy, defaulter and openapi as well, everything can be done with generate-internal-groups.sh alone, without the extra gymnastics in training-operator's version. You can edit the "deepcopy,defaulter,client,lister,informer,openapi" list to choose which files get generated. The GET_PKG_LOCATION() function locates the downloaded code-generator module and the generate-groups.sh / generate-internal-groups.sh scripts inside it, so the copies we just downloaded into hack/ are not actually the ones being invoked.

Run go mod tidy first; thanks to the import in tools.go this automatically downloads code-generator. Then run the script:

./update-codegen.sh

which outputs:

>> Using /home/hanjiale.123/go/pkg/mod/k8s.io/code-generator@v0.27.2
>> Temporary output directory /tmp/tmp.vvVPWdJyI8
Generating deepcopy funcs
Generating defaulters
Generating clientset for kubeflow.org:v1 at github.com/hanjialeok/simple-operator/pkg/client/clientset
Generating listers for kubeflow.org:v1 at github.com/hanjialeok/simple-operator/pkg/client/listers
Generating informers for kubeflow.org:v1 at github.com/hanjialeok/simple-operator/pkg/client/informers
Generating OpenAPI definitions for kubeflow.org:v1 at github.com/hanjialeok/simple-operator/pkg/client/openapi
API rule violation: list_type_missing,k8s.io/apimachinery/pkg/apis/meta/v1,APIGroup,ServerAddressByClientCIDRs
API rule violation: list_type_missing,k8s.io/apimachinery/pkg/apis/meta/v1,APIGroup,Versions
API rule violation: list_type_missing,k8s.io/apimachinery/pkg/apis/meta/v1,APIGroupList,Groups
API rule violation: list_type_missing,k8s.io/apimachinery/pkg/apis/meta/v1,APIResource,Categories
...
API rule violation: names_match,k8s.io/apimachinery/pkg/runtime,Unknown,ContentEncoding
API rule violation: names_match,k8s.io/apimachinery/pkg/runtime,Unknown,ContentType
>> Removing /tmp/tmp.vvVPWdJyI8

When it finishes, run go mod tidy to refresh the dependencies. Note that the deepcopy code generated by code-generator is identical to the one generated by make generate. Our pkg directory now looks like this:

➜  pkg tree .
.
├── apis
│   └── kubeflow.org
│       └── v1
│           ├── common_types.go
│           ├── groupversion_info.go
│           ├── tensorflow_types.go
│           ├── zz_generated.deepcopy.go
│           └── zz_generated.defaults.go
├── client
│   ├── clientset
│   │   └── versioned
│   │       ├── clientset.go
│   │       ├── fake
│   │       │   ├── clientset_generated.go
│   │       │   ├── doc.go
│   │       │   └── register.go
│   │       ├── scheme
│   │       │   ├── doc.go
│   │       │   └── register.go
│   │       └── typed
│   │           └── kubeflow.org
│   │               └── v1
│   │                   ├── doc.go
│   │                   ├── fake
│   │                   │   ├── doc.go
│   │                   │   ├── fake_kubeflow.org_client.go
│   │                   │   └── fake_tfjob.go
│   │                   ├── generated_expansion.go
│   │                   ├── kubeflow.org_client.go
│   │                   └── tfjob.go
│   ├── informers
│   │   └── externalversions
│   │       ├── factory.go
│   │       ├── generic.go
│   │       ├── internalinterfaces
│   │       │   └── factory_interfaces.go
│   │       └── kubeflow.org
│   │           ├── interface.go
│   │           └── v1
│   │               ├── interface.go
│   │               └── tfjob.go
│   ├── listers
│   │   └── kubeflow.org
│   │       └── v1
│   │           ├── expansion_generated.go
│   │           └── tfjob.go
│   └── openapi
│       └── zz_generated.openapi.go
└── controller
    ├── suite_test.go
    └── tfjob_controller.go

22 directories, 29 files

If you look closely, the generated clientset, informer and lister code reports errors, because the following definitions are missing:

v1.SchemeGroupVersion
v1.Resource

The fix is simply to copy training-operator/pkg/apis/kubeflow.org/v1/register.go into our pkg/apis/kubeflow.org/v1 directory; its content is exactly what we need. My guess is that a version difference is why this code was not generated into groupversion_info.go.

package v1

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// SchemeGroupVersion is group version used to register these objects.
var SchemeGroupVersion = GroupVersion

// Resource takes an unqualified resource and returns a Group-qualified GroupResource.
func Resource(resource string) schema.GroupResource {
	return GroupVersion.WithResource(resource).GroupResource()
}

Good. All that is left now is the logic of the Reconcile function in the controller!

In later articles we will look at how controller-runtime works under the hood: [kubeflow] controller-runtime源码解析, and at how the Reconcile function in training-operator actually runs: [kubeflow] training-operator源码解析
