GitHub repository
I had been reading about Kubernetes custom resources for a while, but only reading, never actually practicing. This article is based mainly on another blog post, which I have simplified: all I want is for external clients to reach my service via nodeIP+port, and for the resources involved to be managed under one unified lifecycle.
1. Initialize a custom resource with kubebuilder
For installing kubebuilder, see my earlier blog post.
1. Create a new directory under GOPATH/src, cd into it, and scaffold the custom resource: generate the controller and the types, then the webhook-related files.
[root@master src]# mkdir servicemanager
[root@master src]# cd servicemanager/
[root@master servicemanager]# kubebuilder init --domain servicemanager.io
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.5.0
Update go.mod:
$ go mod tidy
Running make:
$ make
/usr/local/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
Next: define a resource with:
$ kubebuilder create api
[root@master servicemanager]# kubebuilder create api --group servicemanager --version v1 --kind ServiceManager
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing scaffold for you to edit...
api/v1/servicemanager_types.go
controllers/servicemanager_controller.go
Running make:
$ make
/usr/local/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
[root@master servicemanager]# kubebuilder create webhook --group servicemanager --version v1 --kind ServiceManager --defaulting --programmatic-validation
Writing scaffold for you to edit...
api/v1/servicemanager_webhook.go
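For orientation, the generated api/v1/servicemanager_webhook.go contains roughly the stubs below (a trimmed sketch of the kubebuilder v2 scaffold; the webhook markers and logger line are omitted, and the empty bodies get filled in later):

package v1

import (
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/webhook"
)

// SetupWebhookWithManager registers the defaulting/validating webhooks with the manager.
func (r *ServiceManager) SetupWebhookWithManager(mgr ctrl.Manager) error {
	return ctrl.NewWebhookManagedBy(mgr).
		For(r).
		Complete()
}

var _ webhook.Defaulter = &ServiceManager{}

// Default implements webhook.Defaulter; defaulting logic goes here.
func (r *ServiceManager) Default() {}

var _ webhook.Validator = &ServiceManager{}

// ValidateCreate, ValidateUpdate and ValidateDelete implement webhook.Validator.
func (r *ServiceManager) ValidateCreate() error                   { return nil }
func (r *ServiceManager) ValidateUpdate(old runtime.Object) error { return nil }
func (r *ServiceManager) ValidateDelete() error                   { return nil }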
The generated directory structure looks like this:
.
├── api
│   └── v1
│       ├── groupversion_info.go        // GVK info and the scheme-building helpers live here
│       ├── servicemanager_types.go     // the custom CRD types; this is the file we modify
│       ├── servicemanager_webhook.go   // webhook-related code
│       └── zz_generated.deepcopy.go    // generated deep-copy methods
├── bin
│   └── manager                         // the compiled manager binary
├── config                              // everything that will eventually be kubectl apply'd, split by function into directories; some of it can be customized
│   ├── certmanager
│   │   ├── certificate.yaml
│   │   ├── kustomization.yaml
│   │   └── kustomizeconfig.yaml
│   ├── crd                             // CRD configuration
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_servicemanagers.yaml
│   │       └── webhook_in_servicemanagers.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   ├── manager_webhook_patch.yaml
│   │   └── webhookcainjection_patch.yaml
│   ├── manager                         // the manager Deployment lives here
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus                      // metrics exposure
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac                            // RBAC authorization
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── servicemanager_editor_role.yaml
│   │   └── servicemanager_viewer_role.yaml
│   ├── samples                         // a sample custom-resource YAML
│   │   └── servicemanager_v1_servicemanager.yaml
│   └── webhook                         // the webhook Service, which receives webhook requests forwarded by the API server
│       ├── kustomization.yaml
│       ├── kustomizeconfig.yaml
│       └── service.yaml
├── controllers
│   ├── servicemanager_controller.go    // the core CRD controller logic lives here
│   └── suite_test.go
├── Dockerfile                          // Dockerfile for building the controller image
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go                             // program entry point
├── Makefile                            // build targets
└── PROJECT                             // project metadata
2. Modify servicemanager_types.go

type ServiceManagerSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Category can only be one of two values: Deployment or Statefulset.
	// The marker below restricts the field to exactly those two values.
	// +kubebuilder:validation:Enum=Deployment;Statefulset
	Category string `json:"category,omitempty"`
	// Label selector
	Selector map[string]string `json:"selector,omitempty"`
	// Pod template used by the generated Deployment/StatefulSet
	Template corev1.PodTemplateSpec `json:"template,omitempty"`
	// Replica count; at most 10
	// +kubebuilder:validation:Maximum=10
	Replicas *int32 `json:"replicas,omitempty"`
	// Service port; at most 65535
	// +kubebuilder:validation:Maximum=65535
	Port *int32 `json:"port,omitempty"`
	// Target port on the pods; at most 65535
	// +kubebuilder:validation:Maximum=65535
	Targetport *int32 `json:"targetport,omitempty"`
}
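Since the stated goal is reaching the service via nodeIP+port, Port and Targetport will eventually feed a NodePort Service created by the controller. Here is a minimal sketch of that mapping, using a hypothetical helper named buildService (not part of the scaffold) placed next to the types, and assuming Port and Targetport have already been defaulted/validated as non-nil:

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// buildService (hypothetical) maps Spec.Port/Spec.Targetport onto a NodePort
// Service so the workload is reachable from outside via nodeIP+port.
func buildService(sm *ServiceManager) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      sm.Name,
			Namespace: sm.Namespace,
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: sm.Spec.Selector,
			Ports: []corev1.ServicePort{{
				Port:       *sm.Spec.Port, // assumes non-nil, see lead-in
				TargetPort: intstr.FromInt(int(*sm.Spec.Targetport)),
			}},
		},
	}
}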
// ServiceManagerStatus defines the observed state of ServiceManager
type ServiceManagerStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
	Replicas         int32                   `json:"replicas,omitempty"`
	LastUpdateTime   metav1.Time             `json:"last_update_time,omitempty"`
	DeploymentStatus appsv1.DeploymentStatus `json:"deployment_status,omitempty"`
	ServiceStatus    corev1.ServiceStatus    `json:"service_status,omitempty"`
}
// Here, Spec and Status are both plain member fields of ServiceManager; unlike
// Pod.Status, Status is not automatically a subresource. Without the marker below,
// calling Status().Update() from the controller fails with the error:
// "the server could not find the requested resource".
// To mirror the built-in Kubernetes design, follow the status-subresource convention
// by adding the +kubebuilder:subresource:status marker:
// - users may only set the spec of a CRD instance;
// - the status is changed only by the controller.
// (A sketch of the controller-side status update follows after the types below.)
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:subresource:scale:selectorpath=.spec.selector,specpath=.spec.replicas,statuspath=.status.replicas
// ServiceManager is the Schema for the servicemanagers API
type ServiceManager struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ServiceManagerSpec   `json:"spec,omitempty"`
	Status ServiceManagerStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true

// ServiceManagerList contains a list of ServiceManager
type ServiceManagerList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []ServiceManager `json:"items"`
}

func init() {
	SchemeBuilder.Register(&ServiceManager{}, &ServiceManagerList{})
}
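With the status subresource enabled, here is a minimal sketch of the controller-side status update mentioned above, as it would appear inside Reconcile (assuming the scaffolded ServiceManagerReconciler with its embedded client.Client, where sm is the fetched ServiceManager):

// Status is written through the status subresource client. With the
// +kubebuilder:subresource:status marker in place this succeeds; without it,
// the API server rejects it ("the server could not find the requested resource").
sm.Status.Replicas = *sm.Spec.Replicas // assumes Replicas is non-nil (defaulted or validated)
sm.Status.LastUpdateTime = metav1.Now()
if err := r.Status().Update(ctx, &sm); err != nil {
	return ctrl.Result{}, err
}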