Installing a single-master Kubernetes 1.21.2 cluster with kubeadm


Environment:

1. Disable the firewall (steps 1 and 2 are sketched right after this list).

2. Configure /etc/hosts:

k8s-master: 192.168.248.128

k8s-node01: 192.168.248.129

k8s-node02: 192.168.248.130

3. Time synchronization: yum -y install ntpdate && ntpdate ntp1.aliyun.com
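A minimal sketch of steps 1 and 2, run on every node (assumes CentOS 7 with firewalld; the hostnames and IPs are the ones listed above):

systemctl stop firewalld && systemctl disable firewalld   # step 1: stop the firewall now and at boot
cat >> /etc/hosts << EOF                                  # step 2: hostname resolution for all three nodes
192.168.248.128 k8s-master
192.168.248.129 k8s-node01
192.168.248.130 k8s-node02
EOF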

Installation overview:

1. etcd cluster: master node only.

2. flannel: all nodes in the cluster.

3. Configure the Kubernetes master (master node only):

kubernetes-master

        services started:

        kube-apiserver, kube-scheduler, kube-controller-manager

4. Configure the Kubernetes worker nodes:

kubernetes-node

        start the docker service first;

        then start the Kubernetes services: kube-proxy, kubelet

The kubeadm workflow is:

  1. master and nodes: install docker, kubelet, kubeadm

  2. master: kubeadm init

  3. nodes: kubeadm join

Unless stated otherwise, run all of the following steps on every node.

Install Docker

# remove old versions
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine

Configure the docker repo, using the Aliyun mirror:

yum-config-manager --add-repo  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 

Refresh the yum cache:

yum makecache fast

Install the Docker dependencies:

yum -y install yum-utils device-mapper-persistent-data lvm2  gcc  gcc-c++

Install Docker:

yum -y install docker-ce docker-ce-cli containerd.io

Start Docker and enable it at boot:

systemctl start docker 
systemctl enable docker

Configure a Docker registry mirror for faster pulls inside China:

Aliyun mirror accelerator console: https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://mktue9qa.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# registry-mirrors: the Aliyun registry mirror.
# exec-opts: extra setting. Docker's default cgroup driver is cgroupfs; it must be changed to systemd, because kubelet uses systemd and the two have to match.
sudo systemctl daemon-reload       # reload systemd unit definitions
sudo systemctl restart docker      # restart docker
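To confirm that the driver change took effect, a quick check:

docker info 2>/dev/null | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd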

Disable the swap partition:

Kubernetes does not use swap, and kubeadm reports an error during installation if it is enabled.

swapoff -a   # disable swap until the next reboot
# disable it permanently:
vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jun 15 09:22:39 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=b95b6fb4-4ec8-4f74-8f24-384c53303a2f /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0         # comment out this line
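Equivalently, a hedged one-liner for the fstab edit, plus a check that swap is really off:

sed -ri '/\sswap\s/s/^/#/' /etc/fstab   # comment out any fstab entry that mounts swap
free -m                                 # after swapoff -a, the Swap row should read 0 0 0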

Kernel parameter changes

The br_netfilter module passes bridged traffic through the iptables chains; the related kernel parameters must be enabled for this forwarding to work.

[root@k8s-master ~]# yum -y install ipvsadm
[root@k8s-master ~]# modprobe br_netfilter       # load the module
[root@k8s-master ~]# lsmod |grep br_netfilter    # check that it is loaded
br_netfilter           22209  0 
bridge                136173  1 br_netfilter
[root@k8s-master ~]# echo "modprobe br_netfilter" >> /etc/profile      # reload the module at every login
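Appending modprobe to /etc/profile only reloads the module at login; a cleaner boot-time alternative (my suggestion, not in the original) is a systemd modules-load.d entry:

cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
# systemd-modules-load reads this directory at boot and loads the listed modules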

Add the network sysctl configuration

Create k8s.conf (or append the same settings to /etc/sysctl.conf):

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
# apply the settings
sysctl -p /etc/sysctl.d/k8s.conf
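A quick sanity check that the values are active:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print "= 1"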

Create the kubernetes repo, using the Aliyun mirror

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Download the GPG keys from Aliyun and import them:

[root@k8s-master ~]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@k8s-master ~]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
# import the downloaded keys:
[root@k8s-master ~]# rpm --import rpm-package-key.gpg 
[root@k8s-master ~]# rpm --import yum-key.gpg 

Verify that the configured repos work and packages are found:

[root@k8s-master docker]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
repo id                       repo name                                            status
base/7/x86_64                 CentOS-7 - Base - mirrors.aliyun.com                 10,072
docker-ce-stable/7/x86_64     Docker CE Stable - x86_64                               117
extras/7/x86_64               CentOS-7 - Extras - mirrors.aliyun.com                  498
kubernetes                    Kubernetes Repo                                         678
updates/7/x86_64              CentOS-7 - Updates - mirrors.aliyun.com               2,458
repolist: 13,823

Install the kubeadm packages

[root@k8s-master docker]# yum -y install kubelet kubeadm kubectl

Enable kubelet at boot

kubelet cannot actually run yet because the cluster has not been initialized; enabling it for autostart is enough at this stage.

[root@k8s-master ~]# systemctl enable kubelet
[root@k8s-master ~]# systemctl start kubelet

#kubelet : runs on every node in the cluster; the agent that starts Pods, containers, and other objects.
#kubeadm: the command-line tool that initializes and bootstraps the cluster.
#kubectl: the CLI for talking to the cluster; used to deploy and manage applications, inspect resources, and create, delete, and update components.
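The plain yum -y install above takes whatever version is newest in the repo; to reproduce exactly the 1.21.2 cluster this article builds, a hedged variant pins the versions (package naming assumed from the Aliyun el7 repo):

yum -y install kubelet-1.21.2 kubeadm-1.21.2 kubectl-1.21.2
rpm -q kubelet kubeadm kubectl    # confirm what got installed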

Configure a proxy address for pulling Kubernetes images

(Optional: you can instead specify an image repository at init time, which is the recommended approach; in that case leave this file unchanged.)

(If you do set a proxy, revert the change after installation and keep using the Aliyun mirror.)

Restart the docker service after the edit:

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"     #添加的代理镜像地址
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target

Set kubelet parameters

If swap is already disabled, this step can be skipped.

vim /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# If swap has not been disabled, cluster initialization fails with a swap error; edit this file and also pass the swap-ignoring flag during installation.

Pull two extra add-on images: one for networking, one for the dashboard.
docker pull quay.io/coreos/flannel:v0.14.0 # the network plugin; needed for the network setup later, an important step
docker pull registry.cn-hangzhou.aliyuncs.com/kubeapps/k8s-gcr-kubernetes-dashboard-amd64:v1.8.3 # dashboard; not needed yet, can be skipped for now

Initialize the Kubernetes cluster with kubeadm

Before initializing, you can import the required images ahead of time to speed up deployment.

Make sure the imported image tags match what the init below expects, otherwise kubeadm will pull fresh images anyway (some of them are unreachable without a proxy).
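kubeadm can list and pre-pull everything it needs by itself; a sketch using the same repository and version as the init command below:

kubeadm config images list --kubernetes-version=v1.21.2 --image-repository registry.aliyuncs.com/google_containers   # show the required images and tags
kubeadm config images pull --kubernetes-version=v1.21.2 --image-repository registry.aliyuncs.com/google_containers   # pre-pull them on this node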

# initialization
kubeadm init --kubernetes-version=1.21.2 \
--apiserver-advertise-address=192.168.248.128 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
#--image-repository               use the Aliyun mirror; kubeadm pulls from k8s.gcr.io by default, which needs a proxy and may fail
#--apiserver-advertise-address    the master node IP
#--service-cidr                   the Service network; default 10.96.0.0/12
#--pod-network-cidr               the Pod network; flannel's default is 10.244.0.0/16
[root@k8s-master ~]# kubeadm init --kubernetes-version=1.21.2 \
> --apiserver-advertise-address=192.168.248.128 \
> --image-repository registry.aliyuncs.com/google_containers \
> --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.248.128]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.248.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.248.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.005766 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7lz2zr.tuvtc9sv8rrjna54
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.248.128:6443 --token 7lz2zr.tuvtc9sv8rrjna54 \
	--discovery-token-ca-cert-hash sha256:d63e9c11783ebf919b6e6e9f36d8bd927cc44279815ffcd4890e34f079d7fadc
# create the directory and kubeconfig file as prompted above
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
################################################################################################
kubeadm join 192.168.248.128:6443 --token 7lz2zr.tuvtc9sv8rrjna54 \
	--discovery-token-ca-cert-hash sha256:d63e9c11783ebf919b6e6e9f36d8bd927cc44279815ffcd4890e34f079d7fadc  # keep this token and hash; they are needed later when the worker nodes join
################################################################################################
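The default token lifetime is 24 hours. If it has expired by the time a node joins, generate a fresh join command on the master:

[root@k8s-master ~]# kubeadm token create --print-join-command   # prints a ready-to-run kubeadm join line with a new token and the CA hash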
# list the IDs of all containers that were started
[root@k8s-master ~]# docker ps -qa
8a85962e7f0a
cbd90dfb587d
4cf3470fda6c
3cead2502816
d90eac928b7a
44574d80769b
f2b8820035e3
a9804dc37853
d6f997bda702
626e0b7ccd2a
# 10 containers in total

Download and configure the flannel network plugin

Option 1: yum -y install git && git clone https://github.com/blackmed/kubernetes-kubeadm.git
This clones the files locally; edit the image version in kube-flannel.yml so that it matches the flannel image version pulled earlier.
Option 2: download directly from GitHub.
Project page: https://github.com/flannel-io/flannel
Download the manifest from the address below (a proxy is required here):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Run this step on the master node.

# install the flannel plugin
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml 
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Join the worker nodes to the cluster (run the join command on every worker node)

[root@k8s-node01 ~]# kubeadm join 192.168.248.128:6443 --token 6drffj.rkw2gb19rv2pxs8p \
--discovery-token-ca-cert-hash sha256:70f6b916ff7cbb8a5c3854af09826322ac8bcd4b64330e6b6533f0194d265e21
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node01 ~]# 
[root@k8s-node02 ~]#  kubeadm join 192.168.248.128:6443 --token 6drffj.rkw2gb19rv2pxs8p --discovery-token-ca-cert-hash sha256:70f6b916ff7cbb8a5c3854af09826322ac8bcd4b64330e6b6533f0194d265e21
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node02 ~]# 

Check the status of the joined nodes

For now kubectl only works on the master node (a sketch after the output below shows how to run it from a worker)

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
k8s-master   Ready    control-plane,master   103m   v1.21.2
k8s-node01   Ready    <none>                 9m     v1.21.2
k8s-node02   Ready    <none>                 97s    v1.21.2
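The master-only restriction is really about credentials: kubectl runs anywhere it can find a kubeconfig. A sketch for using kubectl from a worker node (assumes root SSH between the machines):

# on the master: copy the admin kubeconfig to a worker
scp /etc/kubernetes/admin.conf root@k8s-node01:/etc/kubernetes/admin.conf
# on the worker: point kubectl at it and test
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes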

Check the componentstatus (cs)

[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
scheduler and controller-manager show Unhealthy because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/ set a default port of 0. The fix is to comment out the corresponding port line:
kube-controller-manager.yaml: comment out line 27, #- --port=0
kube-scheduler.yaml: comment out line 19, #- --port=0
Then restart kubelet on the master node (systemctl restart kubelet.service) and check again; the status is back to normal.
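A hedged one-liner form of the same edit (assumes the default manifest path shown above):

sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-controller-manager.yaml   # comment out the port flag
sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml
systemctl restart kubelet.service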
[root@k8s-master manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 
# reference for this fix: https://www.cnblogs.com/Crazy-Liu/p/14653849.html (thanks to that blogger)

Check the status of the system pods

[root@k8s-master ~]# kubectl get pods -A           # or: kubectl get pods -n kube-system
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-59d64cd4d4-5mqq5             1/1     Running   0          12m
kube-system   coredns-59d64cd4d4-jzsgv             1/1     Running   0          12m
kube-system   etcd-k8s-master                      1/1     Running   0          12m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          12m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          4m54s
kube-system   kube-flannel-ds-9r7gq                1/1     Running   0          8m17s
kube-system   kube-flannel-ds-dlrfn                1/1     Running   1          6m53s
kube-system   kube-flannel-ds-tdjvv                1/1     Running   1          7m5s
kube-system   kube-proxy-mcqn9                     1/1     Running   0          12m
kube-system   kube-proxy-nplcd                     1/1     Running   0          6m53s
kube-system   kube-proxy-vpknp                     1/1     Running   0          7m5s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          4m34s

Enable IPVS mode for kube-proxy (run on the master)

[root@k8s-master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
[root@k8s-master ~]# sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
[root@k8s-master ~]# kubectl apply -f kube-proxy-configmap.yaml
[root@k8s-master ~]# rm -f kube-proxy-configmap.yaml
[root@k8s-master ~]# kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'        # new pods start right after deletion: kube-proxy is a DaemonSet, so each deleted pod is recreated with the new config
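To confirm that IPVS is actually in use (ipvsadm was installed in the kernel-parameter step):

[root@k8s-master ~]# ipvsadm -Ln                                                        # should now list virtual servers for the Service IPs
[root@k8s-master ~]# kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs   # the proxy logs a line like "Using ipvs Proxier" (exact wording varies by version)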

Install the Dashboard (all dashboard steps run on the master node)

[root@k8s-master ~]# docker pull registry.aliyuncs.com/google_containers/dashboard:v2.3.0
[root@k8s-master ~]# docker pull  registry.aliyuncs.com/google_containers/metrics-scraper:v1.0.6
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.0/aio/deploy/recommended.yaml
# re-tag the downloaded images; the tags must match the images referenced in recommended.yaml.
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/dashboard:v2.3.0 kubernetesui/dashboard:v2.3.0
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/metrics-scraper:v1.0.6  kubernetesui/metrics-scraper:v1.0.6

The downloaded recommended.yaml does not expose the dashboard on a node port by default; modify it:

 [root@k8s-master ~]# vim recommended.yaml 
 30 ---
 31 
 32 kind: Service
 33 apiVersion: v1
 34 metadata:
 35   labels:
 36     k8s-app: kubernetes-dashboard
 37   name: kubernetes-dashboard
 38   namespace: kubernetes-dashboard
 39 spec:
 40   ports:
 41     - port: 443
 42       targetPort: 8443
 43       nodePort: 30000   # added: the node port for reaching the dashboard UI
 44   type: NodePort        # added
:set nu     # (vim) show line numbers, matching the numbering above

Create certificates

[root@k8s-master]# mkdir dashboard-certs
[root@k8s-master]# cd dashboard-certs
# create the namespace
[root@k8s-master dashboard-certs]# kubectl create namespace kubernetes-dashboard
# generate the private key
[root@k8s-master dashboard-certs]# openssl genrsa -out dashboard.key 2048
# certificate signing request
[root@k8s-master dashboard-certs]# openssl req -days 36500 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# self-sign the certificate
[root@k8s-master dashboard-certs]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# create the kubernetes-dashboard-certs secret
[root@k8s-master dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
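A quick check that the secret landed in the right namespace:

[root@k8s-master dashboard-certs]# kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard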

Add a dashboard admin account:

# create the service account
[root@k8s-master ~]# vim dashboard-admin.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
  
# apply it
[root@k8s-master ~]# kubectl apply -f dashboard-admin.yaml
# grant the account permissions
[root@k8s-master ~]# vim dashboard-admin-bind-cluster-role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1     # note the warning below
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
  
# apply it
[root@k8s-master ~]# kubectl apply -f dashboard-admin-bind-cluster-role.yaml
# Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+ and unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding

Deploy the dashboard

# install
[root@k8s-master ~]# kubectl apply -f recommended.yaml 
[root@k8s-master ~]# kubectl get pod -A|grep 'dashboard'          # check the dashboard pods that started
NAMESPACE                              NAME                        READY  STATUS   RESTARTS  AGE
kubernetes-dashboard   dashboard-metrics-scraper-856586f554-mljjf    1/1  Running     0      87s
kubernetes-dashboard   kubernetes-dashboard-5579987b5f-j6vfd         1/1  Running     0      87s
# get the login token for the dashboard (generic form, then the concrete command)
[root@k8s-master ~]# kubectl describe secrets -n <namespace> <service-account>
[root@k8s-master ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin
Name:         dashboard-admin-token-9wvhh
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: ecf9cf55-0415-444d-90fe-704246b18907

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImxLN2NydDEwTVFLb29UZ2pqTE9XTXNfcmtzV1YtbkZ1aktlV3Y3ZEZpZWcifQ.eyJpc3MiOiJrdWJ
lcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tOXd2aGgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZWNmOWNmNTUtMDQxNS00NDRkLTkwZmUtNzA0MjQ2YjE4OTA3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.hkNjZEvVhDQZZRed-RZGS9uOJQro1Vpe7-pbx_SyJuJbTr3RJq7X0981B_m4VUBhZs0Y648dloy52oXLp1GFSo2cW_FY9SpJ8cTcUbr5B_zrwSuDQSrUzB5FHCowuzXX_ltk0XuJAd_76xjwO36Y0skQlmOvpbSRJZwXmC3nlbHnl69n0j5K5oDyWqAQ0yrQ6w935WW-aKqnT9H180A3zApAtpulgd1tM-M0b9oG6YfdMQtHzWMJWYyPPsWl00nZZzIzZ3RUDNL6LO43n1rrDHqvhk7806y1rkiwtWMTSkev6D0uxIzZtJKpovSQ2ob5Z8NDtHbypv0YIea67VTfNw
# the token field above is what you paste into the dashboard login page
When copying the token, start right after "token:"; mind the whitespace between "token:" and the eyJhb... value, a classic pitfall.
Log in to the dashboard at https://<ip>:<port>, here:
              https://192.168.248.128:30000
# If the dashboard then reports that the anonymous user cannot view resources, the binding below is a temporary workaround; never use it in production
[root@k8s-master ~]# kubectl create clusterrolebinding  test:anonymous  --clusterrole=cluster-admin --user=system:anonymous
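Because that binding grants cluster-admin to every unauthenticated request, remove it as soon as it is no longer needed:

[root@k8s-master ~]# kubectl delete clusterrolebinding test:anonymous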

Cleanup (delete these when they are no longer needed)

Delete the admin ServiceAccount and ClusterRoleBinding.

[root@k8s-master ~]# kubectl -n <namespace> delete serviceaccount <name>
[root@k8s-master ~]# kubectl delete clusterrolebinding <name>        # ClusterRoleBindings are cluster-scoped, so -n is not needed
[root@k8s-master ~]# kubectl -n kubernetes-dashboard delete serviceaccount dashboard-admin
[root@k8s-master ~]# kubectl delete clusterrolebinding dashboard-admin-bind-cluster-role