k8s Notes 1 -- Quickly deploying a k8s v1.15.4 cluster with kubeadm

This post is the deployment log from the author's first time learning k8s. It will be refined and extended with more kubeadm deployment material later; it is posted here for easy reference and for anyone who finds it useful.

1 k8s basics

  1. k8s architecture
  • Control and management
    A k8s cluster can be managed through three interfaces: kubectl, the UI, and the API.
  • Master
    The master runs the scheduler, apiserver, and controller-manager components.
  • Nodes
    Each node runs the kubelet, kube-proxy, and the Docker Engine.
  • Etcd Cluster
    The etcd cluster stores all of the cluster's network configuration and object state.
  2. k8s deployment options
  • kubeadm
    Recommended for beginners; it is what this post uses below.
  • Binary installation
    The most widely used approach in production; recommended once you have experience, since it makes later troubleshooting easier.
  • minikube
    Mostly used for testing.
  • yum
    Rarely used.

2 Environment preparation and kubeadm installation

One master (192.168.2.132) and two nodes (192.168.2.133-134); non-master machines need at least 1 core and 2 GB RAM, the master at least 2 cores and 2 GB RAM.
1) Disable swap
swapoff -a disables swap temporarily (until reboot).
For a permanent fix, comment out the swap entries in /etc/fstab.
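A sketch of both steps (the sed simply comments out any uncommented fstab line mentioning swap; review /etc/fstab afterwards):
```bash
swapoff -a                                        # takes effect immediately, lost on reboot
sed -ri 's/^([^#].*\bswap\b.*)$/# \1/' /etc/fstab # permanent: comment out swap entries
```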
2) Set the hostname and hosts entries
Configure the master as below; set the nodes up the same way with their own names and IPs:
/etc/hostname
k8s01
/etc/hosts
127.0.1.1 k8s01
192.168.2.132 k8s01
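On systemd systems the same setup can be scripted; a sketch (the k8s02/k8s03 name-to-IP mapping is an assumption based on the node list above):
```bash
hostnamectl set-hostname k8s01   # use k8s02 / k8s03 on the respective nodes
cat >> /etc/hosts <<'EOF'
192.168.2.132 k8s01
192.168.2.133 k8s02
192.168.2.134 k8s03
EOF
```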
3) Point apt at the Tsinghua mirror and install base packages
Write Ubuntu 16.04 (xenial) mirror entries into /etc/apt/sources.list and run apt-get update; the base packages are then installed with the command below.
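One common set of Tsinghua entries for xenial (an assumed reconstruction; the original post did not list them):
```bash
cat > /etc/apt/sources.list <<'EOF'
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-security main restricted universe multiverse
EOF
apt-get update
```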
apt-get -y install apt-transport-https ca-certificates curl software-properties-common
4) Install Docker

Step 1: install the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
Step 2: add the repository
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
Step 3: update and install Docker CE
apt-get -y update
To install a specific version, first list the available Docker CE versions:
apt-cache madison docker-ce
sudo apt-get -y install docker-ce=[VERSION]   # install syntax
apt-get -y install docker-ce=18.06.3~ce~3-0~ubuntu
Step 4: configure Docker Hub registry mirrors
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors":["https://docker.mirrors.ustc.edu.cn","http://hub-mirror.c.163.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker
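Note that kubeadm's preflight check later warns that the "cgroupfs" Docker cgroup driver is not the recommended "systemd" one. If you want to follow that recommendation, daemon.json can carry both settings; a sketch, not what the author's log actually used:
```bash
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
```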
5) Install kubeadm
Add the kubeadm apt repository:
```bash
# import the repo key first, or apt-get update will fail with NO_PUBKEY
# (key URL assumed; it is the usual path on the aliyun mirror)
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"
apt-get update
# list the 1.15.4 packages, then install that exact version
apt-cache madison kubelet kubectl kubeadm | grep '1.15.4-00'
apt install -y kubelet=1.15.4-00 kubectl=1.15.4-00 kubeadm=1.15.4-00
```
A pinned version is installed here: since this was the author's first install, it follows the versions other write-ups used rather than the latest release, to avoid compatibility issues.
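To keep a routine apt upgrade from moving these pinned versions, hold them (the 1.19 walkthrough in section 5.2 does the same):
```bash
apt-mark hold kubelet kubeadm kubectl
```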

3 Deploy the master and join nodes to the cluster

1) Initialize the master
Note: the master must have at least 2 CPU cores, otherwise kubeadm init fails. Since swap stays enabled here, the kubelet also needs --fail-swap-on=false (written to /etc/default/kubelet below) on top of the --ignore-preflight-errors=Swap flag:
kubeadm init \
  --apiserver-advertise-address=192.168.2.132 \
  --kubernetes-version=v1.15.4 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.24.0.0/16 \
  --ignore-preflight-errors=Swap
root@k8s01:/home/xg# tee /etc/default/kubelet <<-'EOF'
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
root@k8s01:/home/xg# systemctl daemon-reload && systemctl restart kubelet
root@k8s01:/home/xg# kubeadm init   --apiserver-advertise-address=192.168.2.132   --kubernetes-version=v1.15.4   --image-repository registry.aliyuncs.com/google_containers   --pod-network-cidr=10.24.0.0/16   --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.4
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s01 localhost] and IPs [192.168.2.132 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s01 localhost] and IPs [192.168.2.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.014975 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: omno4a.rgnhd0lfkoxm0yns
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.132:6443 --token omno4a.rgnhd0lfkoxm0yns \
    --discovery-token-ca-cert-hash sha256:783aac372134879f6f5daf1439c21ffe1cd651a43c9a98e00da6b89be0702276 
root@k8s01:/home/xg# 

Set up kubectl access for a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point the master is still coming up; the node shows NotReady because no pod network is running yet:

# kubectl get nodes
NAME    STATUS     ROLES    AGE   VERSION
k8s01   NotReady   master   19m   v1.15.4
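While the node is NotReady, the coredns pods stay Pending; a quick way to watch this (plain kubectl, nothing version-specific):
```bash
# coredns remains Pending/ContainerCreating until a CNI plugin is deployed
kubectl get pods -n kube-system -o wide
```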

Deploy the pod network (flannel):

# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Once flannel is applied, the master turns Ready and a series of kube-system pods start.
On the master, check which flannel image versions the manifest references, then pull them on each node:

# grep -i image kube-flannel.yml
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-s390x
        image: quay.io/coreos/flannel:v0.12.0-s390x
# docker pull quay.io/coreos/flannel:v0.12.0-amd64
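If a node cannot reach quay.io directly, the same save/load trick used later in section 5.2 works here; a sketch (192.168.2.133 is this cluster's k8s02):
```bash
# on a machine that already pulled the image
docker save -o flannel-v0.12.0-amd64.tar quay.io/coreos/flannel:v0.12.0-amd64
scp flannel-v0.12.0-amd64.tar root@192.168.2.133:/root/
# then on the node
docker load -i flannel-v0.12.0-amd64.tar
```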

2) Join the nodes to the cluster
Run the join command on each node:

# kubeadm join 192.168.2.132:6443 --token omno4a.rgnhd0lfkoxm0yns \
>     --discovery-token-ca-cert-hash sha256:783aac372134879f6f5daf1439c21ffe1cd651a43c9a98e00da6b89be0702276
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Running kubectl get nodes on the master shows that both nodes joined successfully:
# kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
k8s01   Ready    master   46m    v1.15.4
k8s02   Ready    <none>   107s   v1.15.4
k8s03   Ready    <none>   98s    v1.15.4

4 Deploy the k8s UI

Run the following on the master:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

Change 1:
In the kubernetes-dashboard Service, add type: NodePort and nodePort: 30001 (the full Service section is shown after the grep output below).
Change 2:
Pin both deployments to the master with nodeName:
# cat recommended.yaml |grep -C 2 k8s01
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: k8s01 # set to the master node
      containers:
        - name: kubernetes-dashboard
--
        k8s-app: dashboard-metrics-scraper
    spec:
      nodeName: k8s01 # set to the master node
      containers:
        - name: dashboard-metrics-scraper
The modified Service section:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml
Checking the pods now shows that all dashboard services started normally.
Create an nginx instance to test the cluster:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
The nginx service is now reachable through any node's IP on the assigned NodePort.
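To test from the command line, one can look up the assigned NodePort first; a small sketch (assumes the nginx Service created above, and uses k8s02's IP):
```bash
# find the NodePort that Kubernetes assigned to the nginx Service
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.2.133:${NODE_PORT}
```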
Dashboard note:
The author's first attempts at this frequently errored, so the workaround was to delete the kubernetes-dashboard namespace and re-apply the manifest, which recreates the namespace and restarts the pods:
kubectl delete namespace kubernetes-dashboard
kubectl apply -f recommended.yaml

Add the admin role, generate the matching client certificate, and import it into the browser:

# extract client-certificate-data
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
# extract client-key-data
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
# bundle both into a p12 for the browser
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
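The service-account token shown below comes from a dashboard-admin account; the commands that create it and print its secret are the same ones used in section 5.2:
```bash
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin
# print the secret (including the login token) for the new account
kubectl describe secrets -n kube-system \
  $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
```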
... (openssl prompts and the first part of the describe output omitted)
Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNjk0MmMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzhjM2ZiMWItZjRjYi00NThhLWFkZGQtZDBmNjA0YzFmZTNkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.a0UsU8otC0VqAU56IQt4JnfSrDtOdVPxgqDvDN10YFoWLnS4xNXU9kTJl9k6w-Dmf1nBaWMWqPnhnRNlRuAqTjk0xngsrRxQvm_eAVM050q2ceCzfArMy-xX7hvBsXwjii8Ux5kODUCf6W3_RZduyxJ_j5E6c5WDb7IrWJ8sAi3822ZwP78tbXepNU8VnQFfFZWBQs3Ew8yBr3QVz7qDpXMt6dMT6f8-wbqOV2zPNaZl6xXCrttL1H6zkkajD2iXZLcl4ggl3as9NFc1ZHP8aVQQa0KG4uaoh5sZQtZwDFMHDxCs1Q0jTFUn2oGM-RBXOJFU3MQQKkaeJH7Ku-bn3A

The export password can be anything; the author used 111111. The same password is required when importing the p12 into the browser.
Copy kubecfg.p12 from the master to your local machine for the import.
Note: copy the token on the last line in full, otherwise the dashboard may show incomplete information after login.

Configure the certificate in the browser:
Import the kubecfg.p12 generated above via Privacy and security -> Your certificates -> Import.
The dashboard then comes up normally at:
https://192.168.2.132:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy

5 Notes

5.1 coredns errors

On the author's second install, this time of version 1.19, coredns failed to come up, with errors such as:
0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Readiness probe failed: HTTP probe failed with statuscode: 503
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1e82d567d26941b05c94d29dedd2fc358838034de91b73706e6fc8b21efaaa9b" network for pod "coredns-6d56c8448f-lkdxd": networkPlugin cni failed to set up pod "coredns-6d56c8448f-lkdxd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Fix:
kubectl get pod --all-namespaces -o wide showed that coredns was not running on the master.
So reset all three nodes, re-initialize the master, set up the network, and only then join the nodes:
kubeadm reset
Note: on a first install it is best to flush the firewall with iptables -F, otherwise coredns may fail to start.
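A consolidated sketch of the redo sequence; the removal of the stale kubeconfig is an extra step a clean retry typically needs, not something from the original log:
```bash
# on all three nodes
kubeadm reset -f   # tear down the node's cluster state
iptables -F        # flush firewall rules (see the note above)
# on the master only: drop the stale kubeconfig before re-running kubeadm init
rm -rf $HOME/.kube/config
```
After this, re-run kubeadm init on the master, apply flannel, and only then join the nodes.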

5.2 kubeadm 1.19.0 installation steps

The following is the author's log of installing version 1.19.0. The approach is the same as the 1.15.4 install above, with only minor differences.

apt install -y kubelet kubectl kubeadm --allow-unauthenticated
apt-mark hold kubelet kubeadm kubectl docker-ce

Pull the flannel image on all three nodes:
docker pull quay.io/coreos/flannel:v0.12.0-amd64

docker save -o flannel-v0.12.0-amd64.tar.gz quay.io/coreos/flannel:v0.12.0-amd64
docker load -i flannel-v0.12.0-amd64.tar.gz

docker pull registry.aliyuncs.com/google_containers/etcd:3.4.9-1
docker save -o etcd-3.4.9-1.tar.gz registry.aliyuncs.com/google_containers/etcd:3.4.9-1
docker load -i etcd-3.4.9-1.tar.gz

kubeadm init \
  --apiserver-advertise-address=192.168.2.132 \
  --kubernetes-version=v1.19.0 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.0.0.0/16 \
  --ignore-preflight-errors=Swap

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

On the master (k8s01):
kubectl apply -f kube-flannel.yml

Let the nodes join only after flannel is up; otherwise coredns may end up off the master and the network misbehaves:
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.132:6443 --token t04ywd.m6hau0x92qhmqn9e \
    --discovery-token-ca-cert-hash sha256:88c94e64151a236d2cd3282da36f6b59fbb1ca90836be947fa3e5947f07b6ced

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

ca.crt:     1066 bytes
namespace:  11 bytes
token:         eyJhbGciOiJSUzI1NiIsImtpZCI6IlJ0MWRMdVlMYmtjMHYzb3hROVcxS3R1dk00VXdZeVpLSTYyUGN5RFRtVTgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tY205ZjQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjQyNGFlOGYtM2EzMi00OWFmLTljYzktNDgzODMyZjNlMzc1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.HJ1HSr52BzaPv_lqiU9yqITokd5Upvq7atIezSRLgw1ygpIjAuHTJB0i3ikTOwRyzBY_zNNuGWdiQ6z_TuDeuoKYB3hL8-wd52Ifh365lihV7_erwxT7CyB-hQ7hgpWFpKQ5GbLUiUmHJhdo43vB9i1H8NKT4xpux33K6t0H2wgEtidrvVKqS-zq1t23RjoBUSAnU9WtBsxp-sQcNcN8mZBQgZkB0FUBVfwS3QIatR00McX0QniIp-WtzVWZTsprD0ab4I2z7xyb5zKOZBpllNY_pjwqrcENh1dOg48WAYFLppcBBmDPmAzTN7YNvurP1nZHwGZp3-A-0VFC_3L2ag

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt

grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key

openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
The export password is again 111111.
Import the p12 certificate into the browser, then open the dashboard in the browser to view the cluster.

5.3 Installing metrics-server

A cluster deployed with kubeadm does not include metrics-server by default, so querying node metrics fails:
kubectl top node reports: error: Metrics API not available
Installation:

# kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

After applying, the pod did not come up:
kube-system            metrics-server-5d5c49f488-m7p2m                           0/1     CrashLoopBackOff   6          5m37s
Its events showed:
Readiness probe failed: Get "https://10.244.2.8:4443/readyz": dial tcp 10.244.2.8:4443: connect: connection refused
Per the official docs, the --kubelet-insecure-tls flag was missing, so add it to the args of the metrics-server container in the deployment.
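One way to add the flag without hand-editing the manifest is a JSON patch; a sketch, assuming metrics-server is the first (only) container in the pod spec:
```bash
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
```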

Checking again, the pod is now up and running:
# kubectl get pods -A|grep metrics
kube-system            metrics-server-56c59cf9ff-zhr6k                           1/1     Running   0          3m23s

Once metrics-server is running, kubectl top node and kubectl top pod show resource usage:
xg@xgmac ~ % kubectl top node                        
NAME                              CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
test01.i.xxx.net   1057m        4%     16872Mi         26%       
test02.i.xxx.net   1442m        6%     12243Mi         19%       
test03.i.xxx.net   749m         3%     14537Mi         22%       
xg@xgmac ~ % kubectl top pod 
NAME                     CPU(cores)   MEMORY(bytes)   
nginx-6799fc88d8-5twmz   0m           9Mi             
nginx-6799fc88d8-c578z   0m           4Mi             
nginx-6799fc88d8-mxdcl   0m           5Mi  

Reference: kubernetes-sigs/metrics-server

6 Remarks

  1. References
    Kubernetes 安装 dashboard 报错 (dashboard installation errors)
    ubuntu18.04使用kubeadm部署k8s单节点 (single-node k8s with kubeadm on Ubuntu 18.04)
    1天入门Kubernets/K8S (Kubernetes in one day)
    使用kubeadm快速部署一个Kubernetes集群(v1.18) (quickly deploy a Kubernetes cluster with kubeadm, v1.18)
    setup/cri (container runtime setup)
  2. Software versions
    OS: Ubuntu 16.04 server
    Docker: Docker version 18.06.3-ce, build d7080c1
    k8s components: kubelet=1.15.4-00 kubectl=1.15.4-00 kubeadm=1.15.4-00
  3. Config files
    Since many machines cannot reach raw.githubusercontent.com, the author has uploaded the relevant files to CSDN for anyone who needs them (search for the name below once the upload is approved):
    快速部署一套k8s-配置文件