k8s setup: installing a Kubernetes 1.25 high-availability cluster with kubeadm (Part 1)

1. Environment preparation
Memory: 4 GB
CPU: 2 cores
Disk: 50 GB
Version: v1.25
Packages and files used in this setup:
Link: https://pan.baidu.com/s/1QVdPfkNgDoA3BpvRn2W2Xw
Extraction code: n1xv

Hostname and IP address planning

172.30.0.150  k8s-master1  components: apiserver, controller-manager, scheduler, kubelet, etcd, kube-proxy, container runtime, calico, keepalived, nginx
172.30.0.151  k8s-node1    components: kube-proxy, calico, coredns, container runtime, kubelet
172.30.0.152  k8s-node2    components: kube-proxy, calico, coredns, container runtime, kubelet

Disable SELinux, the firewall, and swap, and set the hostnames.
Set the hostname on each of the three machines in turn:

hostnamectl set-hostname k8s-master1 &&  bash
hostnamectl set-hostname k8s-node1 &&  bash
hostnamectl set-hostname k8s-node2 &&  bash

Disable SELinux

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Disable the firewall

systemctl stop firewalld  &&  systemctl disable firewalld

Turn off swap and comment it out in /etc/fstab

swapoff -a
sed -i  's/^.*swap/#&/g' /etc/fstab
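
To confirm swap is really off before rebooting, a quick check (both commands are standard on CentOS 7):

swapon -s   # prints nothing once swap is disabled
free -m     # the Swap row should show 0 total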

Reboot

reboot

Configure local hosts resolution

cat >>/etc/hosts << EOF
172.30.0.150 k8s-master1
172.30.0.151 k8s-node1
172.30.0.152 k8s-node2
EOF

Time synchronization

yum install ntpdate -y
ntpdate ntp.aliyun.com
crontab -e
0 */1 * * * /usr/sbin/ntpdate ntp.aliyun.com
systemctl restart crond
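
If you prefer not to open crontab -e by hand on every node, the same hourly entry can be installed non-interactively; a sketch (it appends to whatever is already in root's crontab):

(crontab -l 2>/dev/null; echo "0 */1 * * * /usr/sbin/ntpdate ntp.aliyun.com") | crontab -
systemctl restart crond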

Configure kernel parameters

modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf

Configure the yum repositories

yum install yum-utils -y
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Add the Kubernetes repository

cat >  /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

Install the base packages

yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc \
gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip \
sudo ntp libaio-devel wget vim ncurses-devel autoconf automake \
zlib-devel  python-devel epel-release openssh-server socat \
ipvsadm conntrack telnet

Install the containerd service

yum install  containerd.io-1.6.6 -y

Configure the containerd service

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

Edit the configuration file:

vim /etc/containerd/config.toml

Change SystemdCgroup = false to SystemdCgroup = true
Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
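
If you would rather script these two edits than open vim on every node, a sed sketch (assuming the stock config.toml generated above, where each string occurs exactly once):

# switch the cgroup driver to systemd and point the pause image at the Aliyun registry
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml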

Start it

systemctl enable containerd  --now

Configure the crictl client

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

Docker is installed as well; Docker and containerd do not conflict. Docker is needed so that images can still be built from a Dockerfile.

yum install  docker-ce  -y
systemctl enable docker --now

Configure the containerd registry mirror; apply the following on all k8s nodes:

Edit /etc/containerd/config.toml with vim
Find config_path = "" and change it to:
config_path = "/etc/containerd/certs.d"

# Save and exit

mkdir /etc/containerd/certs.d/docker.io/ -p
vim /etc/containerd/certs.d/docker.io/hosts.toml

# Write the following content:
[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]

Restart containerd:
systemctl restart containerd

Configure the Docker registry mirror; apply the following on all k8s nodes.
vim /etc/docker/daemon.json
Write the following content:

{
 "registry-mirrors":["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"]
} 

Restart Docker:

systemctl restart docker
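
To confirm Docker picked up the mirrors after the restart, docker info lists them under "Registry Mirrors":

docker info | grep -A 5 -i "registry mirrors"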

Install the packages needed to initialize k8s

yum install -y kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0
systemctl enable kubelet

kubeadm: the tool used to bootstrap and initialize the k8s cluster.
kubelet: installed on every node in the cluster; it is what actually starts Pods. With a kubeadm-based install, both the control-plane and worker components run as Pods, so any node that runs Pods needs kubelet.
kubectl: used to deploy and manage applications, inspect resources, and create, delete, and update the various components.
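
Before initializing, it is worth confirming that all three components landed at 1.25.0 on every node:

kubeadm version -o short
kubelet --version
kubectl version --client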

Initialize the k8s cluster with kubeadm
Specify the container runtime endpoint

crictl config runtime-endpoint /run/containerd/containerd.sock

Run all of the steps above on every node.

[root@k8s-master ~]# vim kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.30.0.150 # IP address of the master node
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock # container runtime socket
  imagePullPolicy: IfNotPresent
  name: k8s-master # node name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # image repository
kind: ClusterConfiguration
kubernetesVersion: 1.25.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# The part below is newly appended; note that the --- separators must be included
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
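
Because mode: ipvs is set for kube-proxy above, the IPVS kernel modules should be loadable on every node (ipvsadm and conntrack were already installed earlier); if they are missing, kube-proxy falls back to iptables mode. A sketch for loading and persisting them on CentOS 7's 3.10 kernel (on kernels 4.19 and newer, replace nf_conntrack_ipv4 with nf_conntrack):

# persist the IPVS modules across reboots
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
# load them now
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done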

Set the container runtime endpoint again

crictl config runtime-endpoint /run/containerd/containerd.sock

Because kubeadm.yaml specifies imagePullPolicy: IfNotPresent, the offline image package is used here.
Upload the image archive to every node and unpack it locally (the file is in the download link at the top of this article).
Then import it:
ctr -n=k8s.io images import k8s_1.25.0.tar.gz
After the import finishes, list the images:

 [root@k8s-master ~]# crictl images
IMAGE                                                                         TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/pause                                 3.7                 221177c6082a8       311kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   v1.9.3              5185b96f0becf       14.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64                3.4.7-0             ff0da8ec66a57       104MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.5.4-0             a8a176a5d5d69       102MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.25.0             4d2edfd10d3e3       34.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.25.0             1a54c86c03a67       31.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.25.0             58a9a0c6d96f2       20.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.25.0             bef2cf3115095       15.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.8                 4873874c08efc       311kB

Start the cluster initialization

 kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

When the following appears, initialization succeeded:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.30.0.150:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:2cdbbe5124108201494b137ddf72a28d6f9ac0ac157f1a51e705efd2d97a2d9

Set up kubeconfig for kubectl (command completion follows below)

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
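
For the command completion mentioned above, a sketch (assuming bash is the login shell):

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc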

Check the master status

 [root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   2m19s   v1.25.0

Join the worker nodes to the master
On the master, print the join command:

  kubeadm token create --print-join-command


Join the nodes
Append --ignore-preflight-errors=SystemVerification at the end to skip non-essential preflight errors. The token and CA cert hash generated for your cluster will be different, so do not copy mine:

kubeadm join 172.30.0.150:6443 --token ttlr4x.q47b09cytzxek4e2 --discovery-token-ca-cert-hash sha256:2cdbbe5124108201494b137ddf72a28d6f9ac0ac157f1a51e705efd2d97a2d94 --ignore-preflight-errors=SystemVerification

When the following appears, the node has joined:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

You have new mail in /var/spool/mail/root


Check the node status on the master (the nodes are NotReady because the network plugin is not installed yet)

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   8m32s   v1.25.0
k8s-node1    NotReady   <none>          83s     v1.25.0
k8s-node2    NotReady   <none>          47s     v1.25.0

Install the Kubernetes network plugin Calico (it provides pod-to-pod networking between nodes; other CNI plugins would also work).
Copy the calico.tar.gz image archive to every node and import it manually:
ctr -n=k8s.io images import calico.tar.gz
The manifest can be downloaded from the address below, or use the copy included in my package:

 https://docs.projectcalico.org/manifests/calico.yaml
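
One thing to check before applying the manifest: in the stock manifest the default pod CIDR is 192.168.0.0/16, while kubeadm.yaml above sets podSubnet: 10.244.0.0/16. Calico can usually detect the cluster CIDR from the kubeadm configuration, but to pin it explicitly you can uncomment CALICO_IPV4POOL_CIDR; a sed sketch (assuming the upstream manifest layout, where the two lines are commented out):

sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
grep -A 1 CALICO_IPV4POOL_CIDR calico.yaml   # verify the result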

Deploy the network plugin

[root@k8s-master ~]# kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
You have new mail in /var/spool/mail/root

Once the pods are up, check the status

[root@k8s-master ~]# kubectl get pod -A 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-6744f6b6d5-zhq6d   0/1     Running   0             82s
kube-system   calico-node-k4w65                          1/1     Running   0             88s
kube-system   calico-node-l57bf                          0/1     Running   0             88s
kube-system   calico-node-xf8pb                          0/1     Running   0             88s
kube-system   coredns-7f8cbcb969-8gdjx                   1/1     Running   0             18m
kube-system   coredns-7f8cbcb969-rk2d8                   1/1     Running   0             18m
kube-system   etcd-k8s-master                            1/1     Running   0             18m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0             18m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0             18m
kube-system   kube-proxy-6txsr                           1/1     Running   0             18m
kube-system   kube-proxy-fmshc                           1/1     Running   0             11m
kube-system   kube-proxy-m6j2s                           1/1     Running   0             10m
kube-system   kube-scheduler-k8s-master                  1/1     Running   1 (55s ago)   18m
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   18m   v1.25.0
k8s-node1    Ready    <none>          11m   v1.25.0
k8s-node2    Ready    <none>          11m   v1.25.0
[root@k8s-master ~]# 

Test CoreDNS resolution

[root@k8s-master ~]#  kubectl run busybox --image docker.io/library/busybox:1.28  --image-pull-policy=IfNotPresent --restart=Never --rm -it busybox -- sh

nslookup kubernetes.default.svc.cluster.local
(the lookup should return the ClusterIP of the kubernetes service)

Run a test workload; a plain Pod running nginx will do. Save the following as nginx-test.yaml:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx-test
  namespace: default
spec:
  containers:
  - name: nginx-test
    image: nginx
    ports:
    - containerPort: 80
[root@k8s-master ~]# kubectl apply -f nginx-test.yaml 
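
A quick way to confirm the test pod came up and is reachable over the pod network (the pod IP below is whatever kubectl reports for your cluster, not a fixed value):

kubectl get pod nginx-test -o wide     # note the pod IP and the node it was scheduled to
curl -I http://<pod-ip>                # run from any node; nginx should answer with HTTP/1.1 200 OK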

That completes the base setup.
What follows is optional.
Dashboard web UI
Project address:
https://github.com/kubernetes/dashboard
The manifest is as follows:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Create it:
[root@k8s-master ~]# kubectl apply -f dashbord.yaml
Check the dashboard status:

[root@k8s-master ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7cc7856cfb-94trm   1/1     Running   0          3m29s
kubernetes-dashboard-7fb56fd5f4-f8b7h        1/1     Running   0          3m29s

Since nginx-ingress has not been deployed yet, expose the dashboard through a layer-4 NodePort Service for now.

Change the Service type to NodePort

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort

Check the dashboard's NodePort:
kubectl get svc -A | grep kubernetes-dashboard
Access it at
https://<any-node-ip>:30703/ (use the NodePort the command above reports; it was 30703 in my case)
Create an account for access and grant it permission to the API server

# Create the access account; prepare a yaml file: vi dash_sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

[root@k8s-master ~]# kubectl apply -f dash_sa.yaml
Get the access token

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
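
On Kubernetes 1.24 and later (including the 1.25 used here), a ServiceAccount no longer gets a long-lived token Secret created automatically, so the lookup above may come back empty. In that case, request a token directly:

kubectl -n kubernetes-dashboard create token admin-user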

Open Firefox, browse to the dashboard URL, and enter the token to log in.
