Kubemark: a Kubernetes performance testing tool

The Kubemark tutorials you can find online are all fairly dated, so I worked through the official start-kubemark.sh script myself; the complete setup steps are as follows:

Preparation

My environment:

  • Ubuntu
  • Kubernetes 1.29 (for other versions, build the image yourself)

[Optional] Build the image yourself

# Install git; we need to clone the kubernetes source to build the matching version
sudo apt install git
# Install Go 1.21 (or whichever Go version the kubernetes code you are building requires)
# The current kubernetes master branch will presumably become 1.30
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes

# Build the kubemark binary
make WHAT='cmd/kubemark'
cp _output/bin/kubemark cluster/images/kubemark/
cd cluster/images/kubemark/
sudo make build
# Then tag the image and push it to a registry
# Or just use the one I pushed: zhizuqiu/kubemark:1.29
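
To publish it, retag the freshly built image for your own registry and push it. A minimal sketch, assuming a registry account named example-user (a placeholder); the exact local tag produced by make build depends on the Makefile's REGISTRY/IMAGE_TAG variables, so check docker images first:

# Find the image that `make build` just produced (its tag depends on the
# Makefile's REGISTRY/IMAGE_TAG variables)
docker images | grep kubemark
# Retag for your own registry and push; example-user is a placeholder
docker tag <local-image>:<local-tag> example-user/kubemark:1.29
docker push example-user/kubemark:1.29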

Prepare the k8s clusters

Following the official docs, I set up two single-node k8s clusters (a single cluster works too) and gave them names:

  • kubemark cluster: the cluster under test; the hollow nodes will show up here
  • work cluster: the cluster that runs the hollow-node-* pods; these pods register themselves as nodes of the kubemark cluster using the kubemark cluster's kubeconfig file (see the sketch after this list)
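
The rest of this guide distinguishes the two clusters by kubeconfig. A minimal sketch, assuming the two admin kubeconfigs have been saved as kubeconfig.work and kubeconfig.kubemark (file names of my own choosing; commands below that say "on the work cluster" assume the matching kubeconfig or context is active):

# Talk to the work cluster
kubectl --kubeconfig=kubeconfig.work get node
# Talk to the kubemark cluster
kubectl --kubeconfig=kubeconfig.kubemark get node
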
# Create the namespace on the work cluster:
kubectl create ns kubemark

Start the hollow nodes

On the work cluster, create a kernel-monitor.json file; this is the configuration for the node-problem-detector that runs inside each hollow node:

{
	"plugin": "filelog",
	"pluginConfig": {
		"timestamp": "dummy",
		"message": "dummy",
		"timestampFormat": "dummy"
	},
	"logPath": "/dev/null",
	"lookback": "10m",
	"bufferSize": 10,
	"source": "kernel-monitor",
	"conditions": [
		{
			"type": "KernelDeadlock",
			"reason": "KernelHasNoDeadlock",
			"message": "kernel has no deadlock"
		}
	],
	"rules": []
}

# Create the configmap on the work cluster
# Create configmap for configuring hollow- kubelet, proxy and npd.
kubectl create configmap "node-configmap" --namespace="kubemark" --from-file=kernel.monitor="kernel-monitor.json"

Prepare the kubemark cluster's kubeconfig file, kubeconfig.kubemark:
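
How you obtain this file depends on how the kubemark cluster was created. A minimal sketch, assuming a kubeadm-based cluster, where the admin kubeconfig lives at /etc/kubernetes/admin.conf:

# On the kubemark cluster's control-plane node (kubeadm layout)
sudo cat /etc/kubernetes/admin.conf > kubeconfig.kubemark
# Copy the file to wherever you run kubectl against the work cluster; the
# server address inside it must be reachable from the work cluster's pods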

# Create the secret on the work cluster
# Create secret for passing kubeconfigs to kubelet, kubeproxy and npd.
# It's unfortunate that all components share the same kubeconfig.
kubectl create secret generic "kubeconfig" --type=Opaque --namespace="kubemark" \
--from-file=kubelet.kubeconfig="kubeconfig.kubemark" \
--from-file=kubeproxy.kubeconfig="kubeconfig.kubemark" \
--from-file=npd.kubeconfig="kubeconfig.kubemark" \
--from-file=heapster.kubeconfig="kubeconfig.kubemark" \
--from-file=cluster_autoscaler.kubeconfig="kubeconfig.kubemark" \
--from-file=dns.kubeconfig="kubeconfig.kubemark"
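
A quick sanity check that both objects landed in the kubemark namespace:

# On the work cluster
kubectl -n kubemark get configmap node-configmap
kubectl -n kubemark get secret kubeconfig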

Prepare the hollow-node_template.yaml file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: hollow-node
  namespace: kubemark
  labels:
    name: hollow-node
spec:
  replicas: 2
  selector:
    name: hollow-node
  template:
    metadata:
      labels:
        name: hollow-node
    spec:
      initContainers:
      - name: init-inotify-limit
        image: busybox:1.32
        command: ['sysctl', '-w', 'fs.inotify.max_user_instances=1000']
        securityContext:
          privileged: true
      volumes:
      - name: kubeconfig-volume
        secret:
          secretName: kubeconfig
      - name: kernelmonitorconfig-volume
        configMap:
          name: node-configmap
      - name: logs-volume
        hostPath:
          path: /var/log
      - name: containerd
        hostPath:
          path: /run/containerd
      - name: no-serviceaccount-access-to-real-master
        emptyDir: {}
      containers:
      - name: hollow-kubelet
        image: zhizuqiu/kubemark:1.29
        ports:
        - containerPort: 4194
        - containerPort: 10250
        - containerPort: 10255
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command: [
          "/go-runner",
          "-log-file=/var/log/kubelet-$(NODE_NAME).log",
          "/kubemark",
          "--morph=kubelet",
          "--name=$(NODE_NAME)",
          "--kubeconfig=/kubeconfig/kubelet.kubeconfig"
        ]
        volumeMounts:
        - name: kubeconfig-volume
          mountPath: /kubeconfig
          readOnly: true
        - name: logs-volume
          mountPath: /var/log
        - name: containerd
          mountPath: /run/containerd
        resources:
          requests:
            cpu: 20m
            memory: 50M
        securityContext:
          privileged: true
      - name: hollow-proxy
        image: zhizuqiu/kubemark:1.29
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command: [
          "/go-runner",
          "-log-file=/var/log/kubeproxy-$(NODE_NAME).log",
          "/kubemark",
          "--morph=proxy",
          "--name=$(NODE_NAME)",
          "--kubeconfig=/kubeconfig/kubeproxy.kubeconfig",
        ]
        volumeMounts:
        - name: kubeconfig-volume
          mountPath: /kubeconfig
          readOnly: true
        - name: logs-volume
          mountPath: /var/log
        resources:
          requests:
            cpu: 20m
            memory: 50M
      - name: hollow-node-problem-detector
        image: zhizuqiu/node-problem-detector:v0.8.13
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command:
        - /bin/sh
        - -c
        - /node-problem-detector --system-log-monitors=/config/kernel.monitor --apiserver-override="https://192.168.1.117:443?inClusterConfig=false&auth=/kubeconfig/npd.kubeconfig" --alsologtostderr 1>>/var/log/npd-$(NODE_NAME).log 2>&1
        volumeMounts:
        - name: kubeconfig-volume
          mountPath: /kubeconfig
          readOnly: true
        - name: kernelmonitorconfig-volume
          mountPath: /config
          readOnly: true
        - name: no-serviceaccount-access-to-real-master
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
        - name: logs-volume
          mountPath: /var/log
        resources:
          requests:
            cpu: 20m
            memory: 50M
        securityContext:
          privileged: true
      # Keep the pod running on unreachable node for 15 minutes.
      # This time should be sufficient for a VM reboot and should
      # avoid recreating a new hollow node.
      # See https://github.com/kubernetes/kubernetes/issues/67120 for context.
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 900
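
One thing to adjust: the --apiserver-override address in the hollow-node-problem-detector container (https://192.168.1.117:443 above) is my kubemark cluster's apiserver, and the images are the ones I pushed; substitute your own. The apiserver address can be looked up like this:

# Print the kubemark cluster's apiserver address
kubectl --kubeconfig=kubeconfig.kubemark cluster-info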

Create it:

# Create the hollow-node pods on the work cluster
kubectl create -f hollow-node_template.yaml
# Check the hollow-node pods on the work cluster
kubectl -n kubemark get pod

[screenshot pod-list.png: the hollow-node pods running in the kubemark namespace of the work cluster]

# Check the corresponding hollow nodes on the kubemark cluster
kubectl get node

[screenshot node-list.png: the hollow nodes registered in the kubemark cluster]
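
The two hollow nodes come from replicas: 2 in the template. To simulate a larger cluster, scale the ReplicationController; each hollow-node pod requests only 60m CPU and 150M memory in total, so a modest work cluster can host many of them:

# On the work cluster: scale out to 50 hollow nodes
kubectl -n kubemark scale rc hollow-node --replicas=50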

From here on, you can create resources on the kubemark cluster and run performance tests as usual.
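
For example, a minimal smoke test (the load-test name and nginx image are placeholders; the hollow kubelet fakes the container runtime, so nothing actually runs):

# On the kubemark cluster: schedule some pods onto the hollow nodes
kubectl --kubeconfig=kubeconfig.kubemark create deployment load-test --image=nginx --replicas=20
# See which hollow nodes they were scheduled to
kubectl --kubeconfig=kubeconfig.kubemark get pod -o wide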

Kubemark configuration parameters

  • name: the hollow node's name
  • max-pods: the maximum number of pods each hollow node may run
  • node-labels: labels to put on the hollow node
  • register-with-taints: taints to register the hollow node with, as <key>=<value>:<effect>
  • use-host-image-service: whether to use the host's image service (ListImages, ImageStatus, PullImage, RemoveImage, ImageFsInfo). When set to false, image pulls and the like are never actually performed, so a pod reaches Running even with an invalid image address (see the sketch after this list)
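
These are passed as extra flags on the /kubemark command line in hollow-node_template.yaml. A minimal sketch of the hollow-kubelet invocation with some of them set (all values below are placeholders):

# Extra flags appended to the hollow-kubelet /kubemark invocation in the pod
# template ($(NODE_NAME) is expanded by Kubernetes from the env var above)
/kubemark --morph=kubelet \
  --name=$(NODE_NAME) \
  --kubeconfig=/kubeconfig/kubelet.kubeconfig \
  --max-pods=100 \
  --node-labels=kubemark=true \
  --register-with-taints=kubemark=true:NoSchedule \
  --use-host-image-service=false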

Kubemark exposes only a few of the kubelet's settings. To change anything else, edit GetHollowKubeletConfig() in kubemark's hollow_kubelet.go and rebuild.
