Trying Out Karmada 1.5

Karmada (Kubernetes Armada) is a Kubernetes management system that lets you run your cloud-native applications across multiple Kubernetes clusters and clouds, without any changes to the applications themselves.

Karmada's key features include centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.

Karmada is a sandbox project of the Cloud Native Computing Foundation (CNCF).

Prerequisites

  1. Download Go
cd /tmp
wget https://go.dev/dl/go1.20.2.linux-amd64.tar.gz
  2. Extract it to /usr/local/
sudo tar zxvf go1.20.2.linux-amd64.tar.gz -C /usr/local/
  3. Configure the environment variables
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
source ~/.bashrc
  4. Verify the installation
$ go version
go version go1.20.2 linux/amd64
  5. Install the required commands
sudo dnf install -y make
  6. Install the kind command
cd /tmp
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.17.0/kind-linux-amd64
chmod +x ./kind
sudo mv kind /usr/local/bin/
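You can optionally confirm the binary is on PATH and runnable (kind version simply prints the client version),

kind version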
  7. Work around the kind "too many open files" error

Reference: https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files

sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
sudo vi /etc/sysctl.conf
--- add
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
---
sudo sysctl -p
  8. Work around the 30s timeout error
vi hack/util.sh
--- change every occurrence of
(snip) --timeout=30s
--- to
(snip) --timeout=300s
---
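A one-liner can apply the same change (a sketch; it assumes every --timeout=30s occurrence in hack/util.sh is one you want to raise),

# bump all 30s waits in the local setup script to 300s
sed -i 's/--timeout=30s/--timeout=300s/g' hack/util.sh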

Concepts

[Figure: Karmada core concepts]
Source: https://karmada.io/zh/docs/core-concepts/concepts

Architecture

[Figure: Karmada architecture]
Source: https://karmada.io/zh/docs/core-concepts/architecture

Quick Start

Distribute a Deployment through Karmada.

  1. Clone this repository to your machine
git clone https://github.com/karmada-io/karmada
  2. Change into the karmada directory
cd karmada
  3. Deploy and run the Karmada control plane
export GOARCH=amd64
hack/local-up-karmada.sh

The output looks like,

(snip)
Local Karmada is running.

To start using your karmada, run:
  export KUBECONFIG=/root/.kube/karmada.config
Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.

To manage your member clusters, run:
  export KUBECONFIG=/root/.kube/members.config
Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.
  4. Manually merge the .kube/karmada.config and .kube/members.config files into a single .kube/config so the contexts can be managed conveniently with kubectx
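A minimal way to do the merge (a sketch; it assumes both files sit under $HOME/.kube and that overwriting an existing $HOME/.kube/config is acceptable),

# merge the two kubeconfigs and flatten them into a single config file
KUBECONFIG=$HOME/.kube/karmada.config:$HOME/.kube/members.config \
  kubectl config view --flatten > $HOME/.kube/config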

  5. Check the member cluster information

$ kubectl --context karmada-apiserver get cluster
NAME      VERSION   MODE   READY   AGE
member1   v1.26.0   Push   True    7m33s
member2   v1.26.0   Push   True    7m28s
member3   v1.26.0   Pull   True    7m18s
  6. Check the Karmada control-plane components
$ kubectl --context karmada-host get pods -A
NAMESPACE            NAME                                                   READY   STATUS    RESTARTS        AGE
karmada-system       etcd-0                                                 1/1     Running   0               8m52s
karmada-system       karmada-aggregated-apiserver-6b7b7b5657-72ckb          1/1     Running   0               8m23s
karmada-system       karmada-aggregated-apiserver-6b7b7b5657-zvfpw          1/1     Running   0               8m23s
karmada-system       karmada-apiserver-77974ccbff-9wmbw                     1/1     Running   0               8m36s
karmada-system       karmada-controller-manager-75fb96496f-gcd58            1/1     Running   0               8m14s
karmada-system       karmada-controller-manager-75fb96496f-zc8zr            1/1     Running   0               8m14s
karmada-system       karmada-descheduler-7c59d4b4f6-hdgdm                   1/1     Running   0               8m14s
karmada-system       karmada-descheduler-7c59d4b4f6-pl9tf                   1/1     Running   0               8m14s
karmada-system       karmada-kube-controller-manager-795c547674-8gzxm       1/1     Running   0               8m23s
karmada-system       karmada-scheduler-64dfd6bbc4-fr6jp                     1/1     Running   0               8m14s
karmada-system       karmada-scheduler-64dfd6bbc4-rhgz4                     1/1     Running   0               8m14s
karmada-system       karmada-scheduler-estimator-member1-5bdd4ff65f-9b87h   1/1     Running   0               8m
karmada-system       karmada-scheduler-estimator-member1-5bdd4ff65f-fz4kl   1/1     Running   0               8m
karmada-system       karmada-scheduler-estimator-member2-58fdf7c575-9ddk7   1/1     Running   0               7m55s
karmada-system       karmada-scheduler-estimator-member2-58fdf7c575-pzcbp   1/1     Running   0               7m55s
karmada-system       karmada-scheduler-estimator-member3-f6895d459-22jmg    1/1     Running   0               7m51s
karmada-system       karmada-scheduler-estimator-member3-f6895d459-42pc2    1/1     Running   0               7m51s
karmada-system       karmada-search-847c44696f-nzxpt                        1/1     Running   0               8m21s
karmada-system       karmada-search-847c44696f-tzcw2                        1/1     Running   0               8m21s
karmada-system       karmada-webhook-7587c9f44f-bdlnd                       1/1     Running   0               8m14s
karmada-system       karmada-webhook-7587c9f44f-cxkq2                       1/1     Running   0               8m14s
kube-system          coredns-787d4945fb-wdr99                               1/1     Running   0               10m
kube-system          coredns-787d4945fb-zdlrg                               1/1     Running   0               10m
kube-system          etcd-karmada-host-control-plane                        1/1     Running   0               10m
kube-system          kindnet-6g4dd                                          1/1     Running   0               10m
kube-system          kube-apiserver-karmada-host-control-plane              1/1     Running   0               10m
kube-system          kube-controller-manager-karmada-host-control-plane     1/1     Running   1 (10m ago)     10m
kube-system          kube-proxy-pjlg7                                       1/1     Running   0               10m
kube-system          kube-scheduler-karmada-host-control-plane              1/1     Running   1 (9m44s ago)   10m
local-path-storage   local-path-provisioner-c8855d4bb-5n5jp                 1/1     Running   0               10m
  7. Create the nginx deployment in Karmada
$ kubectl --context karmada-apiserver create -f samples/nginx/deployment.yaml
  8. Create the PropagationPolicy that distributes nginx to the member clusters (shown below for reference)
$ kubectl --context karmada-apiserver create -f samples/nginx/propagationpolicy.yaml
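The sample policy propagates the nginx Deployment to member1 and member2. Its contents look roughly like the following (a sketch; the exact name and field values are illustrative and may differ slightly between Karmada releases),

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2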
  9. Check the Deployment status from Karmada
$ kubectl --context karmada-apiserver get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           71s
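You can also confirm that the replicas actually landed on the member clusters (a quick check, assuming the merged kubeconfig from step 4 is in use),

$ kubectl --context member1 get deployment nginx
$ kubectl --context member2 get deployment nginx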

Propagating a CRD Application with Karmada

Set the kubeconfig context to karmada-apiserver,

$ kubectx karmada-apiserver

Change into the guestbook directory,

$ cd samples/guestbook

Create the guestbook CRD,

$ kubectl apply -f guestbooks-crd.yaml 

Create the ClusterPropagationPolicy that propagates the guestbook CRD to member1,

$ cat guestbooks-clusterpropagationpolicy.yaml
# clusterpropagationpolicy.yaml
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: example-policy
spec:
  resourceSelectors:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: guestbooks.webapp.my.domain
  placement:
    clusterAffinity:
      clusterNames:
        - member1
$ kubectl apply -f guestbooks-clusterpropagationpolicy.yaml

The CRD will be propagated to the member clusters according to the rules defined in the ClusterPropagationPolicy.

Note: only a ClusterPropagationPolicy can be used here, not a PropagationPolicy, because a CustomResourceDefinition is a cluster-scoped resource and can only be selected by a cluster-scoped policy.

Create the guestbook CR guestbook-sample,

$ cat guestbook.yaml
apiVersion: webapp.my.domain/v1
kind: Guestbook
metadata:
  name: guestbook-sample
spec:
  size: 2
  configMapName: test
  alias: Name
$ kubectl apply -f guestbook.yaml

Create the PropagationPolicy that propagates guestbook-sample to member1,

$ cat guestbooks-propagationpolicy.yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-policy
spec:
  resourceSelectors:
    - apiVersion: webapp.my.domain/v1
      kind: Guestbook
  placement:
    clusterAffinity:
      clusterNames:
        - member1
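Apply it just like the other policies,

$ kubectl apply -f guestbooks-propagationpolicy.yaml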

Check the status of guestbook-sample,

$ kubectl get guestbook -oyaml
apiVersion: v1
items:
- apiVersion: webapp.my.domain/v1
  kind: Guestbook
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        (snip)
    creationTimestamp: "2023-04-01T00:44:04Z"
    generation: 1
    labels:
      propagationpolicy.karmada.io/name: example-policy
      propagationpolicy.karmada.io/namespace: default
    name: guestbook-sample
    namespace: default
    resourceVersion: "5618"
    uid: fd558321-27b7-42a4-a5b5-5db39ec6e373
  spec:
    alias: Name
    configMapName: test
    size: 2
kind: List
metadata:
  resourceVersion: ""

Create an OverridePolicy that overrides the size field of guestbook-sample in member1,

$ cat guestbooks-overridepolicy.yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: guestbook-sample
spec:
  resourceSelectors:
  - apiVersion: webapp.my.domain/v1
    kind: Guestbook
  overrideRules:
  - targetCluster:
      clusterNames:
      - member1
    overriders:
      plaintext:
      - path: /spec/size
        operator: replace
        value: 4
      - path: /metadata/annotations
        operator: add
        value: {"OverridePolicy":"test"}
$ kubectl apply -f guestbooks-overridepolicy.yaml

Check the size field of guestbook-sample from the member cluster,

$ kubectl --context member1 get guestbooks -o yaml
apiVersion: v1
items:
- apiVersion: webapp.my.domain/v1
  kind: Guestbook
  metadata:
    annotations:
      OverridePolicy: test
      resourcebinding.karmada.io/name: guestbook-sample-guestbook
      resourcebinding.karmada.io/namespace: default
      resourcetemplate.karmada.io/managed-annotations: OverridePolicy,resourcebinding.karmada.io/name,resourcebinding.karmada.io/namespace,resourcetemplate.karmada.io/managed-annotations,resourcetemplate.karmada.io/managed-labels,resourcetemplate.karmada.io/uid
      resourcetemplate.karmada.io/managed-labels: propagationpolicy.karmada.io/name,propagationpolicy.karmada.io/namespace,resourcebinding.karmada.io/key,work.karmada.io/name,work.karmada.io/namespace
      resourcetemplate.karmada.io/uid: fd558321-27b7-42a4-a5b5-5db39ec6e373
    creationTimestamp: "2023-04-01T00:49:35Z"
    generation: 2
    labels:
      propagationpolicy.karmada.io/name: example-policy
      propagationpolicy.karmada.io/namespace: default
      resourcebinding.karmada.io/key: 678cf5bb5
      work.karmada.io/name: guestbook-sample-6849fdbd59
      work.karmada.io/namespace: karmada-es-member1
    name: guestbook-sample
    namespace: default
    resourceVersion: "3326"
    uid: e4c632a3-8bca-4e8c-9f1c-1a4b00fc4fc0
  spec:
    alias: Name
    configMapName: test
    size: 4
kind: List
metadata:
  resourceVersion: ""

If everything works as expected, .spec.size will have been overridden to 4.

Multi-Cloud Search with karmada-search

If you used hack/local-up-karmada.sh, karmada-search is already installed.

In the following steps, we cache Deployment resources across the member clusters. At this point, the nginx deployment from the example above has already been propagated to member1 and member2.

Create a ResourceRegistry that caches Deployments across the target clusters,

$ cat <<EOF | kubectl apply -f -
apiVersion: search.karmada.io/v1alpha1
kind: ResourceRegistry
metadata:
  name: deployment-search
spec:
  targetCluster:
    clusterNames:
      - member1
      - member2
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
EOF
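You can quickly confirm that the registry object was created (a sketch; ResourceRegistry is a cluster-scoped resource served by the karmada-apiserver),

$ kubectl get resourceregistries.search.karmada.io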

Test it through the Kubernetes API,

$ kubectl get --raw /apis/search.karmada.io/v1alpha1/search/cache/apis/apps/v1/deployments | jq
{
  "kind": "DeploymentList",
  "apiVersion": "apps/v1",
  "metadata": {},
  "items": [
    (snip)

karmada-search also supports syncing the cached resources to a backend store such as Elasticsearch or OpenSearch. By leveraging a search engine, you get fully featured full-text search by field and by tag: ranking results by score, sorting them by field, and aggregating them.

Below is an example of retrieving Kubernetes resources graphically with OpenSearch.

$ ./hack/deploy-karmada-opensearch.sh $HOME/.kube/config karmada-host

Verify the installation,

$ kubectl --context karmada-host get po -A | grep karmada-opensearch
karmada-system       karmada-opensearch-777dfbb85b-l8vxv                    1/1     Running   0              2m16s
karmada-system       karmada-opensearch-dashboards-67657cf6f7-fv24q         1/1     Running   0              2m16s

Update the ResourceRegistry to use the backend store,

$ cat <<EOF | kubectl apply -f -
apiVersion: search.karmada.io/v1alpha1
kind: ResourceRegistry
metadata:
  name: deployment-search
spec:
  backendStore:
    openSearch:
      addresses:
        - http://karmada-opensearch.karmada-system.svc:9200
  targetCluster:
    clusterNames:
      - member1
      - member2
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
EOF

Expose the dashboards service,

$ kubectl --context karmada-host port-forward svc/karmada-opensearch-dashboards 5601:5601 -nkarmada-system --address=0.0.0.0

Access the dashboards at http://NodeIP:5601


[Figure: OpenSearch Dashboards view of cached Kubernetes resources]
Image source: https://karmada.io/zh/docs/tutorials/karmada-search

Cluster Registration

Karmada supports two modes, Push and Pull, for managing member clusters. The main difference between them is how the member cluster is accessed when deploying manifests.

  • Push mode

The Karmada control plane accesses the member cluster's kube-apiserver directly to collect cluster status and deploy manifests.

Register a cluster via the CLI,

$ kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --cluster-kubeconfig=<member1 kubeconfig>

Unregister a cluster via the CLI,

$ kubectl karmada unjoin member1 --kubeconfig=<karmada kubeconfig> --cluster-kubeconfig=<member1 kubeconfig>
  • Pull mode
    The Karmada control plane does not access the member clusters directly; instead, it delegates that work to an extra component named karmada-agent.

    Each karmada-agent serves one cluster and is responsible for,

    • registering the cluster with Karmada (creating the Cluster object)

    • maintaining the cluster status and reporting it to Karmada (updating the status of the Cluster object)

    • watching manifests in the Karmada execution space (the karmada-es-<cluster name> namespace) and deploying the watched resources to the cluster the agent serves.

Create a bootstrap token in the Karmada control plane,

$ karmadactl token create --print-register-command --kubeconfig /etc/karmada/karmada-apiserver.config

Run karmadactl register in the member cluster,

$ karmadactl register 10.10.x.x:32443 --token t2jgtm.9nybj0526mjw1jbf --discovery-token-ca-cert-hash sha256:f5a5a43869bb44577dba582e794c3e3750f2050d62f1b1dc80fd3d6a371b6ed4

Unregister the cluster,

kubectl delete cluster member3

Unified Authentication

For one or a group of user principals (users, groups, or service accounts) from a member cluster, we can import them into the Karmada control plane and grant them the clusters/proxy permission, so that the member cluster can be accessed through Karmada with the permissions of those principals.

In this section, we use a service account named member1user for testing.

Create the service account in the member1 cluster,

$ kubectl --context member1 create serviceaccount member1user

Create the service account in the Karmada control plane,

$ kubectl --context karmada-apiserver create serviceaccount member1user
$ cat <<EOF | kubectl --context karmada-apiserver create -f -
apiVersion: v1
kind: Secret
metadata:
  name: member1user
  annotations:
    kubernetes.io/service-account.name: "member1user"
type: kubernetes.io/service-account-token
EOF

To grant the service account the clusters/proxy permission, apply the following RBAC YAML,

$ cat <<EOF | kubectl --context karmada-apiserver apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-proxy-clusterrole
rules:
- apiGroups:
  - 'cluster.karmada.io'
  resources:
  - clusters/proxy
  resourceNames:
  - member1
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-proxy-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-proxy-clusterrole
subjects:
  - kind: ServiceAccount
    name: member1user
    namespace: default
  # The token generated by the serviceaccount can parse the group information. Therefore, you need to specify the group information below.
  - kind: Group
    name: "system:serviceaccounts"
  - kind: Group
    name: "system:serviceaccounts:default"
EOF

Access the member1 cluster. First, retrieve the token of the member1user service account,

$ kubectl --context karmada-apiserver get secret member1user -oyaml | grep token: | awk '{print $2}' | base64 -d

Then construct a kubeconfig file member1user.config for the service account member1user,

cat <<EOF > $HOME/.kube/member1user.config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: {karmada-apiserver-address} # Replace {karmada-apiserver-address} with karmada-apiserver-address. You can find it in /root/.kube/karmada.config file.
  name: member1user
contexts:
- context:
    cluster: member1user
    user: member1user
  name: member1user
current-context: member1user
kind: Config
users:
- name: member1user
  user:
    token: {token} # Replace {token} with the token obtain above.
EOF
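If you need to look up the {karmada-apiserver-address} referenced in the comment above, one way is to list the server entries recorded in karmada.config (a sketch; it assumes the file sits at $HOME/.kube/karmada.config),

# print each cluster entry and its API server endpoint; use the karmada-apiserver one
kubectl config view --kubeconfig=$HOME/.kube/karmada.config \
  -o jsonpath='{range .clusters[*]}{.name}{": "}{.cluster.server}{"\n"}{end}'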

Run the following command to access the member1 cluster:

$ kubectl --kubeconfig $HOME/.kube/member1user.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/apis

We can see that the access succeeds. But if we run the following command,

$ kubectl --kubeconfig $HOME/.kube/member1user.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes

it will fail, because the service account member1user does not yet have any permissions in the member1 cluster.

Grant permissions to the service account in the member1 cluster,

$ cat <<EOF | kubectl --context member1 apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: member1user
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: member1user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: member1user
subjects:
  - kind: ServiceAccount
    name: member1user
    namespace: default
EOF

Run the command that failed in the previous step again,

kubectl --kubeconfig $HOME/.kube/member1user.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes

This time the access succeeds.

To be continued!
