Building a k8s operator with Kubebuilder: implementing custom controller and webhook logic

Reposted from: https://labdoc.cc/article/56/

Prerequisites

  • goland
  • docker
  • kubernetes
  • kubectl
  • kubebuilder
  • kustomize (make install)
  • cert-manager
  • helm (to install the traefik ingress)

Create the project

mkdir kubebuilder-demo
cd kubebuilder-demo
goland .

Initialize a new Go module in the current directory

go mod init github.com/kuberbuilder-demo

Initialize the project

kubebuilder init --domain=labdoc.cc

Generate the API scaffolding

kubebuilder create api --group ingress --version v1beta1 --kind App

#Create Resource [y/n]
#y
#Create Controller [y/n]
#y

Custom controller logic

First, we define our custom resource, here called App. The idea is that a development team declares an App resource, and our custom controller then automatically creates the Deployment, Service, Ingress, and other resources for it based on its configuration.

Edit app_types.go

api/v1beta1/app_types.go

Define it as follows:

type AppSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file
    
    //+kubebuilder:default=false
    EnableIngress bool   `json:"enable_ingress,omitempty"`
    EnableService bool   `json:"enable_service"`
    Replicas      int32  `json:"replicas"`
    Image         string `json:"image"`
}

Here Image, Replicas, and EnableService are required, while EnableIngress may be omitted.

Regenerate the CRD manifests

make manifests
# /Volumes/Data/Dev/Kubernetes/operator/k8s-operator-demo/kubebuilder-demo/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases

The custom fields are now visible in ./config/crd/bases/ingress.labdoc.cc_apps.yaml:

          spec:
            description: AppSpec defines the desired state of App
            properties:
              enable_ingress:
                type: boolean
              enable_service:
                type: boolean
              image:
                type: string
              replicas:
                format: int32
                type: integer
            required:
              - enable_service
              - image
              - replicas

Implement the Reconcile logic

$ tree
.
├── app_controller.go
├── suite_test.go
├── template
│   ├── deployment.yml
│   ├── ingress.yml
│   └── service.yml
└── utils
    └── resource.go

controllers/app_controller.go

  1. Handling the App
	logger := log.FromContext(ctx)
	app := &ingressv1beta1.App{}
	// fetch the App from the cache
	if err := r.Get(ctx, req.NamespacedName, app); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
  2. Handling the Deployment

    Previously we built resource objects by constructing Go structs directly, but for complex objects
    this is tedious and error-prone. Instead, we define each resource as a Go template, substitute the values that need to change, deserialize the result into a Go struct, and let client-go create or update the resource for us.
    The deployment, service, and ingress templates live in controllers/template, and utils carries out the steps above.

    //1. Handle the Deployment
	deployment := utils.NewDeployment(app)
	if err := controllerutil.SetControllerReference(app, deployment, r.Scheme); err != nil {
		return ctrl.Result{}, err
	}
	// look up an existing Deployment with the same name
	d := &appsv1.Deployment{}
	if err := r.Get(ctx, req.NamespacedName, d); err != nil {
		if errors.IsNotFound(err) {
			// create the Deployment
			if err := r.Create(ctx, deployment); err != nil {
				logger.Error(err, "create deploy failed")
				return ctrl.Result{}, err
			}
		} else {
			// unexpected (non-NotFound) error: requeue and retry
			return ctrl.Result{}, err
		}
	} else {
		// Note: this branch can trigger reconciles repeatedly.
		// Cause: SetupWithManager below watches Deployments, so every Deployment
		//       update (whether made here or by the controller manager) fires
		//       another reconcile, creating a loop.
		// Options:
		// 1. Drop the Owns() watches on Deployment, Ingress, and Service in
		//    SetupWithManager. They only exist so that manually deleted resources
		//    are recreated, which normally should not happen; keep them as needed.
		// 2. Guard the update: only call it when app.Spec.Replicas or
		//    app.Spec.Image actually differs from the live Deployment.

		if app.Spec.Replicas != *d.Spec.Replicas || app.Spec.Image != d.Spec.Template.Spec.Containers[0].Image {
			logger.Info("update deployment", "app.spec", app.Spec)
			if err := r.Update(ctx, deployment); err != nil {
				return ctrl.Result{}, err
			}
		}
	}
  3. Handling the Service
	//2. Handle the Service
	service := utils.NewService(app)
	if err := controllerutil.SetControllerReference(app, service, r.Scheme); err != nil {
		return ctrl.Result{}, err
	}
	// look up the Service
	s := &corev1.Service{}
	if err := r.Get(ctx, types.NamespacedName{Name: app.Name, Namespace: app.Namespace}, s); err != nil {
		if errors.IsNotFound(err) && app.Spec.EnableService {
			if err := r.Create(ctx, service); err != nil {
				logger.Error(err, "create service failed")
				return ctrl.Result{}, err
			}
		}
		// unexpected (non-NotFound) error: requeue and retry
		if !errors.IsNotFound(err) && app.Spec.EnableService {
			return ctrl.Result{}, err
		}
	} else {
		if app.Spec.EnableService {
			logger.Info("skip update")
		} else {
			if err := r.Delete(ctx, s); err != nil {
				return ctrl.Result{}, err
			}
		}
	}
  4. Handling the Ingress
	//3. Handle the Ingress; the ingress configuration may be empty
	//TODO use an admission webhook to validate: if ingress is enabled, service must be enabled too
	//TODO use an admission webhook to default the value to false
	//FIXME: returning early here means an existing Ingress is never cleaned up
	if !app.Spec.EnableService {
		return ctrl.Result{}, nil
	}
	ingress := utils.NewIngress(app)
	if err := controllerutil.SetControllerReference(app, ingress, r.Scheme); err != nil {
		return ctrl.Result{}, err
	}
	i := &netv1.Ingress{}
	if err := r.Get(ctx, types.NamespacedName{Name: app.Name, Namespace: app.Namespace}, i); err != nil {
		if errors.IsNotFound(err) && app.Spec.EnableIngress {
			if err := r.Create(ctx, ingress); err != nil {
				logger.Error(err, "create ingress failed")
				return ctrl.Result{}, err
			}
		}
		if !errors.IsNotFound(err) && app.Spec.EnableIngress {
			return ctrl.Result{}, err
		}
	} else {
		if app.Spec.EnableIngress {
			logger.Info("skip update")
		} else {
			if err := r.Delete(ctx, i); err != nil {
				return ctrl.Result{}, err
			}
		}
	}
  5. Recreating the Service, Ingress, or Deployment when it is deleted

SetupWithManager registers the watches; controller-runtime then calls Reconcile automatically:

func (r *AppReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// recreate the Service, Ingress, or Deployment automatically when one is deleted
	return ctrl.NewControllerManagedBy(mgr).
		For(&ingressv1beta1.App{}).
		Owns(&appsv1.Deployment{}).
		Owns(&netv1.Ingress{}).
		Owns(&corev1.Service{}).
		Complete(r)
}
  6. Imports
import (
	"context"
	"github.com/kuberbuilder-demo/controllers/utils"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	netv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"

	ingressv1beta1 "github.com/kuberbuilder-demo/api/v1beta1"
)
  7. utils/resource.go
    Template helpers: render the templates under controllers/template and unmarshal them into the corresponding resource objects.
package utils

import (
	"bytes"
	"github.com/kuberbuilder-demo/api/v1beta1"
	appv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	netv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/yaml"
	"text/template"
)

func parseTemplate(templateName string, app *v1beta1.App) []byte {
	tmpl, err := template.ParseFiles("controllers/template/" + templateName + ".yml")
	if err != nil {
		panic(err)
	}
	b := new(bytes.Buffer)
	err = tmpl.Execute(b, app)
	if err != nil {
		panic(err)
	}
	return b.Bytes()
}

func NewDeployment(app *v1beta1.App) *appv1.Deployment {
	d := &appv1.Deployment{}
	err := yaml.Unmarshal(parseTemplate("deployment", app), d)
	if err != nil {
		panic(err)
	}
	return d
}

func NewIngress(app *v1beta1.App) *netv1.Ingress {
	i := &netv1.Ingress{}
	err := yaml.Unmarshal(parseTemplate("ingress", app), i)
	if err != nil {
		panic(err)
	}
	return i
}

func NewService(app *v1beta1.App) *corev1.Service {
	s := &corev1.Service{}
	err := yaml.Unmarshal(parseTemplate("service", app), s)
	if err != nil {
		panic(err)
	}
	return s
}
  8. Templates
  • template/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.ObjectMeta.Name}}
  namespace: {{.ObjectMeta.Namespace}}
  labels:
    app: {{.ObjectMeta.Name}}
spec:
  replicas: {{.Spec.Replicas}}
  selector:
    matchLabels:
      app: {{.ObjectMeta.Name}}
  template:
    metadata:
      labels:
        app: {{.ObjectMeta.Name}}
    spec:
      containers:
        - name: {{.ObjectMeta.Name}}
          image: {{.Spec.Image}}
          ports:
            - containerPort: 8080
  • template/ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{.ObjectMeta.Name}}
  namespace: {{.ObjectMeta.Namespace}}
spec:
  rules:
    - host: {{.ObjectMeta.Name}}.labdoc.cc
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{.ObjectMeta.Name}}
                port:
                  number: 8080
  ingressClassName: traefik
  • template/service.yml
apiVersion: v1
kind: Service
metadata:
  name: {{.ObjectMeta.Name}}
  namespace: {{.ObjectMeta.Namespace}}
spec:
  selector:
    app: {{.ObjectMeta.Name}}
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 80
  9. Add RBAC markers
    Place these directly above the Reconcile method:
//+kubebuilder:rbac:groups=ingress.labdoc.cc,resources=apps,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=ingress.labdoc.cc,resources=apps/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=ingress.labdoc.cc,resources=apps/finalizers,verbs=update
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups="",resources=services,verbs=get;list;watch;create;update;patch;delete

Testing

Install the CRD


# Install the CRD into the cluster; if your network is slow, put a matching kustomize binary under bin/ first
make install

# verify
kubectl get crd
kubectl get apps.ingress.labdoc.cc

Run the controller

go run github.com/kuberbuilder-demo

Deploy a sample App
config/samples/ingress_v1beta1_app.yaml

apiVersion: ingress.labdoc.cc/v1beta1
kind: App
metadata:
   labels:
      app.kubernetes.io/name: app
      app.kubernetes.io/instance: app-sample
      app.kubernetes.io/part-of: kubebuilder-demo
      app.kubernetes.io/managed-by: kustomize
      app.kubernetes.io/created-by: kubebuilder-demo
   name: app-sample
spec:
   # TODO(user): Add fields here
   image: nginx:latest
   replicas: 3
   enable_ingress: false  # defaults to false; requirement: the webhook inverts this value, and when true, enable_service must also be true
   enable_service: false
kubectl apply -f config/samples/ingress_v1beta1_app.yaml

kubectl get app

kubectl get deployment
#NAME         READY   UP-TO-DATE   AVAILABLE   AGE
#app-sample   3/3     3            3           6m32s

kubectl get ingress

Edit the parameters in config/samples/ingress_v1beta1_app.yaml and verify that the changes take effect

kubectl apply -f config/samples/ingress_v1beta1_app.yaml

Verify

kubectl get deployment
#   NAME         READY   UP-TO-DATE   AVAILABLE   AGE
#   app-sample   3/2     3            3           14m

kubectl get svc
#   NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
#   app-sample   ClusterIP   10.102.215.4   <none>        8080/TCP   4m2s
#   kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    4d17h

kubectl get svc/app-sample -o yaml
#  ownerReferences:
#  - apiVersion: ingress.labdoc.cc/v1beta1
#    ...
#    kind: App

kubectl get ingress
#   NAME         CLASS     HOSTS                  ADDRESS   PORTS   AGE
#   app-sample   traefik   app-sample.labdoc.cc             80      5s

kubectl get svc/traefik

#   NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
#   traefik   LoadBalancer   10.102.136.32   localhost     80:30302/TCP,443:31186/TCP   16m

Update DNS or /etc/hosts

127.0.0.1   app-sample.labdoc.cc

Visit http://app-sample.labdoc.cc:30302

Install an ingress controller

We use traefik as the ingress controller here.

cat <<EOF > traefik_values.yaml
ingressClass:
  enabled: true
  isDefaultClass: true # make this the default ingress class
EOF

helm repo add traefik https://helm.traefik.io/traefik
helm install traefik traefik/traefik -f traefik_values.yaml

Dashboard

kubectl port-forward $(kubectl get pods --selector "app.kubernetes.io/name=traefik" --output=name) 9000:9000 --address=$YOUR_IP

Deployment

Install dependencies

go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest

Note: before deploying, make sure the RBAC markers in controllers/app_controller.go include:

//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups="",resources=services,verbs=get;list;watch;create;update;patch;delete
# build the image
IMG=ju4t/app-controller:v0.0.1 make docker-build

# push it
IMG=ju4t/app-controller:v0.0.1 make docker-push

# deploy
IMG=ju4t/app-controller:v0.0.1 make deploy

Open issues

  • enable_ingress defaults to false; a webhook should set it to the inverse value
  • when enable_ingress is true, enable_service must also be true

Custom webhook logic

This section resolves the open issues above.

Generate the webhook scaffolding

kubebuilder create webhook --group ingress --version v1beta1 --kind App --defaulting --programmatic-validation

This adds the following to main.go:

	if err = (&ingressv1beta1.App{}).SetupWebhookWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create webhook", "webhook", "App")
		os.Exit(1)
	}

It also generates the following files, chiefly:

  • api/v1beta1/app_webhook.go: the webhook handler, where we add our business logic

  • api/v1beta1/webhook_suite_test.go: tests

  • config/certmanager: generates a self-signed certificate for the webhook server's HTTPS endpoint

  • config/webhook: registers the webhook with Kubernetes

  • config/crd/patches: injects the caBundle for conversion webhooks

  • config/default/manager_webhook_patch.yaml: lets the manager Deployment serve webhook requests

  • config/default/webhookcainjection_patch.yaml: injects the caBundle for the webhook server

caBundle injection is performed by cert-manager's ca-injector component.

Update the configuration

config/default/kustomization.yaml

To support webhooks, we need to enable the relevant sections in config/default/kustomization.yaml; see the comments below for details.

# Adds namespace to all resources.
namespace: kubebuilder-demo-system

# Value of this field is prepended to the
# names of all resources, e.g. a deployment named
# "wordpress" becomes "alices-wordpress".
# Note that it should also match with the prefix (text before '-') of the namespace
# field above.
namePrefix: kubebuilder-demo-

# Labels to add to all resources and selectors.
#commonLabels:
#  someName: someValue

bases:
- ../crd
- ../rbac
- ../manager
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
- ../webhook
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
#- ../prometheus

patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
# If you want your controller-manager to expose the /metrics
# endpoint w/o any authn/z, please comment the following line.
- manager_auth_proxy_patch.yaml



# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
- manager_webhook_patch.yaml

# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
# 'CERTMANAGER' needs to be enabled to use ca injection
- webhookcainjection_patch.yaml

# the following config is for teaching kustomize how to do var substitution
vars:
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
  objref:
    kind: Certificate
    group: cert-manager.io
    version: v1
    name: serving-cert # this name should match the one in certificate.yaml
  fieldref:
    fieldpath: metadata.namespace
- name: CERTIFICATE_NAME
  objref:
    kind: Certificate
    group: cert-manager.io
    version: v1
    name: serving-cert # this name should match the one in certificate.yaml
- name: SERVICE_NAMESPACE # namespace of the service
  objref:
    kind: Service
    version: v1
    name: webhook-service
  fieldref:
    fieldpath: metadata.namespace
- name: SERVICE_NAME
  objref:
    kind: Service
    version: v1
    name: webhook-service

Webhook business logic

api/v1beta1/app_webhook.go

  1. Set the default for enable_ingress
func (r *App) Default() {
	applog.Info("default", "name", r.Name)

	// requirement: invert the incoming value
	r.Spec.EnableIngress = !r.Spec.EnableIngress
}
  2. Validate enable_service
    ValidateCreate and ValidateUpdate share the same logic, so it is factored out into validateApp
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (r *App) ValidateCreate() error {
	applog.Info("validate create", "name", r.Name)

	// TODO(user): fill in your validation logic upon object creation.
	return r.validateApp()
}

// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *App) ValidateUpdate(old runtime.Object) error {
	applog.Info("validate update", "name", r.Name)

	// same validation logic as ValidateCreate
	return r.validateApp()
}

// shared by ValidateCreate and ValidateUpdate
func (r *App) validateApp() error {
	// validation logic: ingress requires service
	if !r.Spec.EnableService && r.Spec.EnableIngress {
		return apierrors.NewInvalid(GroupVersion.WithKind("App").GroupKind(), r.Name,
			field.ErrorList{
				field.Invalid(field.NewPath("enable_service"),
					r.Spec.EnableService,
					"enable_service should be true when enable_ingress is true"),
			},
		)
	}
	return nil
}
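The two hooks compose: the mutating webhook runs first and flips enable_ingress, then the validating webhook checks the result. A standalone sketch of that chain, with plain functions standing in for the real webhook handlers (the types and function names here are illustrative, not the generated ones):

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for the relevant spec fields.
type AppSpec struct {
	EnableIngress bool
	EnableService bool
}

// defaultSpec mirrors Default(): invert enable_ingress.
func defaultSpec(s *AppSpec) { s.EnableIngress = !s.EnableIngress }

// validateSpec mirrors validateApp(): ingress requires service.
func validateSpec(s AppSpec) error {
	if s.EnableIngress && !s.EnableService {
		return errors.New("enable_service should be true when enable_ingress is true")
	}
	return nil
}

func main() {
	// enable_ingress:false is flipped to true; enable_service:false then fails validation.
	s := AppSpec{EnableIngress: false, EnableService: false}
	defaultSpec(&s)
	fmt.Println(s.EnableIngress, validateSpec(s) != nil) // true true
}
```

This matches the verification matrix at the end of the article: a spec submitted with both flags false is mutated to enable_ingress=true and then rejected, while enable_ingress=true with enable_service=false is mutated to enable_ingress=false and accepted.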
  3. Imports
import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	// ...
)

Install cert-manager

# pin version 1.8.0; 1.11.0 had issues during testing
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml

kubectl get ns cert-manager

Local testing

  1. Add the local-testing overlay

config/dev/kustomization.yaml

bases:
   - ../default

patches:
   - patch: |
        - op: "remove"
          path: "/spec/dnsNames"
     target:
        kind: Certificate
   - patch: |
        - op: "add"
          path: "/spec/ipAddresses"
          value: ["192.168.8.3"]
     target:
        kind: Certificate
   - patch: |
        - op: "add"
          path: "/webhooks/0/clientConfig/url"
          value: "https://192.168.8.3:9443/mutate-ingress-labdoc-cc-v1beta1-app"
     target:
        kind: MutatingWebhookConfiguration
   - patch: |
        - op: "add"
          path: "/webhooks/0/clientConfig/url"
          value: "https://192.168.8.3:9443/validate-ingress-labdoc-cc-v1beta1-app"
     target:
        kind: ValidatingWebhookConfiguration
   - patch: |
        - op: "remove"
          path: "/webhooks/0/clientConfig/service"
     target:
        kind: MutatingWebhookConfiguration
   - patch: |
        - op: "remove"
          path: "/webhooks/0/clientConfig/service"
     target:
        kind: ValidatingWebhookConfiguration

Makefile

# add a dev environment target
.PHONY: dev
dev: manifests kustomize ## Deploy dev from config/dev.
	cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
	$(KUSTOMIZE) build config/dev | kubectl apply -f -
.PHONY: undev
undev: manifests kustomize ## Undeploy dev from config/dev.
	$(KUSTOMIZE) build config/dev | kubectl delete --ignore-not-found=$(ignore-not-found) -f -
  2. Extract the serving certificate into a local directory
    Deploy to the cluster:
make dev
mkdir certs
kubectl get secrets webhook-server-cert -n  kubebuilder-demo-system -o jsonpath='{..tls\.crt}' |base64 -d > certs/tls.crt
kubectl get secrets webhook-server-cert -n  kubebuilder-demo-system -o jsonpath='{..tls\.key}' |base64 -d > certs/tls.key
  3. Modify main.go so the webhook server uses the extracted certificate
   mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:                 scheme,
		MetricsBindAddress:     metricsAddr,
		Port:                   9443,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "df182d26.labdoc.cc",
		// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
		// when the Manager ends. This requires the binary to immediately end when the
		// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
		// speeds up voluntary leader transitions as the new leader don't have to wait
		// LeaseDuration time first.
		//
		// In the default scaffold provided, the program ends immediately after
		// the manager stops, so would be fine to enable this option. However,
		// if you are doing or is intended to do any operation such as perform cleanups
		// after the manager stops then its usage might be unsafe.
		// LeaderElectionReleaseOnCancel: true,
	})
	
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}

Replace it with:

    options := ctrl.Options{
		Scheme:                 scheme,
		MetricsBindAddress:     metricsAddr,
		Port:                   9443,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "df182d26.labdoc.cc",
	}

	if os.Getenv("ENVIRONMENT") == "DEV" {
		path, err := os.Getwd()
		if err != nil {
			setupLog.Error(err, "unable to get work dir")
			os.Exit(1)
		}
		options.CertDir = path + "/certs"
	}

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), options)

	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}
  4. Deploy and test

Set the environment variable in GoLand:

ENVIRONMENT=DEV
# deploying once is enough
make dev
mkdir certs
kubectl get secrets webhook-server-cert -n  kubebuilder-demo-system -o jsonpath='{..tls\.crt}' |base64 -d > certs/tls.crt
kubectl get secrets webhook-server-cert -n  kubebuilder-demo-system -o jsonpath='{..tls\.key}' |base64 -d > certs/tls.key

Run go build github.com/kuberbuilder-demo from GoLand

# start the service
export ENVIRONMENT=DEV
go run github.com/kuberbuilder-demo

Deploy an App to verify

apiVersion: ingress.labdoc.cc/v1beta1
kind: App
metadata:
  name: app-sample
spec:
  image: nginx:latest
  replicas: 3
  enable_ingress: true  # adjust to test
  enable_service: false # adjust to test
kubectl apply -f config/samples/ingress_v1beta1_app.yaml
kubectl delete -f config/samples/ingress_v1beta1_app.yaml
  5. Clean up the environment
make undev

Production deployment

  1. Deploy
  • Edit the Dockerfile
# 1. speed up go mod download
RUN go env -w GOPROXY=https://mirrors.aliyun.com/goproxy/

# 2. copy controllers/template/*.yml
COPY --from=builder /workspace/controllers/template/ /controllers/template/
COPY --from=builder /workspace/manager .

# 3. for debugging, consider switching the base image;
# you can then enter the container with: kubectl exec -it xxx -- sh
# FROM gcr.io/distroless/static:nonroot
FROM gcr.io/distroless/base-debian11:debug

  • Build, push, and deploy
IMG=ju4t/app-controller:v0.0.1 make docker-build docker-push

IMG=ju4t/app-controller:v0.0.1 make deploy
  2. Verify
    config/samples/ingress_v1beta1_app.yaml
kubectl apply -f config/samples/ingress_v1beta1_app.yaml
kubectl delete -f config/samples/ingress_v1beta1_app.yaml
apiVersion: ingress.labdoc.cc/v1beta1
kind: App
metadata:
  name: app-sample
spec:
  image: nginx:latest
  replicas: 3
  enable_ingress: false # will be flipped to true
  enable_service: false # will be rejected
apiVersion: ingress.labdoc.cc/v1beta1
kind: App
metadata:
  name: app-sample
spec:
  image: nginx:latest
  replicas: 3
  enable_ingress: false # will be flipped to true
  enable_service: true  # accepted
apiVersion: ingress.labdoc.cc/v1beta1
kind: App
metadata:
  name: app-sample
spec:
  image: nginx:latest
  replicas: 3
  enable_ingress: true  # will be flipped to false
  enable_service: false # accepted

Learn more

https://book.kubebuilder.io/
