Setting Up a Kubernetes 1.24.4 Cluster

Continued from Part 1: Creating a Virtual Machine on Windows.
Much of this article is drawn from material found online, but every step has been verified to work; follow the steps below and the deployment will succeed.

1. Deployment Methods

There are several ways to deploy Kubernetes:

minikube: a tool for quickly setting up a single-node Kubernetes environment
kubeadm: a tool for quickly bootstrapping a Kubernetes cluster
Binary packages: download each component's binary package from the official site and install them one by one

Here we use kubeadm.

2. Cluster Plan

A Kubernetes cluster can be deployed as one master with multiple workers, or multiple masters with multiple workers. Here we use one master with multiple workers.

Server name  Server IP      Role    CPU (minimum)  Memory (minimum)
master       192.168.19.50  master  2 cores        3 GB
slave1       192.168.19.51  node    2 cores        3 GB
slave2       192.168.19.52  node    2 cores        3 GB
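
The kubeadm commands later in this guide reach the control plane by hostname (for example master:6443), so every node must be able to resolve the hostnames in the table. If you are not using DNS, a minimal sketch is to add the entries to /etc/hosts on all three machines (the IPs are the ones from the plan above; adjust them to your environment):

cat >> /etc/hosts <<EOF
192.168.19.50 master
192.168.19.51 slave1
192.168.19.52 slave2
EOF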

3. Installing containerd

As explained in Part 1, Kubernetes 1.24 and later use containerd as the container runtime by default and no longer use Docker, so we need to install containerd first.
For the containerd/Kubernetes version compatibility matrix, see:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
https://github.com/kubernetes/kubernetes/blob/master/build/dependencies.yaml
For a standalone walkthrough of the containerd installation, see:
Installing the Containerd container runtime and the nerdctl and crictl command-line tools on CentOS 7

3.1 About the cgroup driver

Linux uses cgroups for resource isolation and control.

CentOS boots with systemd as the init system (the systemctl we use; before CentOS 7 it was /etc/rc.d/init.d), which creates a cgroup manager on top of cgroupfs. If containerd were allowed to manage cgroupfs directly as well, a second cgroup manager would be created, and a system with two cgroup managers is unstable. We therefore configure containerd to manage cgroups through systemd.
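
If you want to confirm what the node is actually using before making changes, two optional checks (a hedged aside, not a required step):

ps -p 1 -o comm=              # prints "systemd" when systemd is the init process
stat -fc %T /sys/fs/cgroup/   # "tmpfs" indicates cgroup v1, "cgroup2fs" indicates cgroup v2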

3.2 Deployment

Download and extract:

[root@master ~]# wget https://github.com/containerd/containerd/releases/download/v1.4.12/cri-containerd-cni-1.4.12-linux-amd64.tar.gz
If wget is not found, install it with yum install -y wget and run the command again.

[root@master ~]# mkdir cri-containerd-cni
[root@master ~]# tar -zxvf cri-containerd-cni-1.4.12-linux-amd64.tar.gz -C cri-containerd-cni

Copy the extracted files into the system configuration and binary directories:

[root@master ~]# cp -a cri-containerd-cni/etc/systemd/system/containerd.service /etc/systemd/system
[root@master ~]# cp -a cri-containerd-cni/etc/crictl.yaml /etc
[root@master ~]# cp -a cri-containerd-cni/etc/cni /etc
[root@master ~]# cp -a cri-containerd-cni/usr/local/sbin/runc /usr/local/sbin
[root@master ~]# cp -a cri-containerd-cni/usr/local/bin/* /usr/local/bin
[root@master ~]# cp -a cri-containerd-cni/opt/* /opt

Start containerd:

[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl enable containerd --now
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
[root@master ~]#
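
containerd should now be running. As an optional sanity check (ctr and runc were copied into /usr/local/bin and /usr/local/sbin above):

[root@master ~]# systemctl is-active containerd
[root@master ~]# ctr version
[root@master ~]# runc --version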

3.3 Configuring containerd

Generate the default configuration file:
[root@master ~]# mkdir /etc/containerd
[root@master ~]# containerd config default | tee /etc/containerd/config.toml

Edit config.toml; the required changes are annotated below:
[root@master ~]# vi /etc/containerd/config.toml
…omitted…
enable_selinux = false
selinux_category_range = 1024
#sandbox_image = "k8s.gcr.io/pause:3.2"
# comment out the line above and add the line below
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"
stats_collect_period = 10

systemd_cgroup = false
…omitted…
privileged_without_host_devices = false
base_runtime_spec = ""
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
# add the line below
SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
…omitted…
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
#endpoint = ["https://registry-1.docker.io"]
# comment out the line above and add the three lines below
endpoint = ["https://docker.mirrors.ustc.edu.cn"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
endpoint = ["https://registry.cn-hangzhou.aliyuncs.com/google_containers"]

[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = ""
…omitted…
Then restart containerd:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart containerd
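
To double-check that the edits are in place before moving on, you can simply grep the file that was just modified (optional):

[root@master ~]# grep -E 'SystemdCgroup|sandbox_image|docker.mirrors' /etc/containerd/config.toml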

3.4 Installing nerdctl

nerdctl is containerd's native command-line management tool and is compatible with the Docker CLI.
Containers and images managed by nerdctl are completely isolated from the containers and images managed by Kubernetes; they are not shared. At present, Kubernetes' containers and images can only be viewed and pulled with the crictl command-line tool.

For the nerdctl/containerd version compatibility matrix, see: https://github.com/containerd/nerdctl/blob/v0.6.1/go.mod

Download, extract, and copy the binary to a directory on the PATH; it is then ready to use.
[root@master ~]# wget https://github.com/containerd/nerdctl/releases/download/v0.6.1/nerdctl-0.6.1-linux-amd64.tar.gz
[root@master ~]#
[root@master ~]# mkdir nerdctl
[root@master ~]#
[root@master ~]# tar -zxvf nerdctl-0.6.1-linux-amd64.tar.gz -C nerdctl
nerdctl
containerd-rootless-setuptool.sh
containerd-rootless.sh
[root@master ~]#
[root@master ~]# cp -a nerdctl/nerdctl /usr/bin
[root@master ~]#
[root@master ~]# nerdctl images
REPOSITORY TAG IMAGE ID CREATED SIZE
[root@master ~]#
Note: port mapping with nerdctl run -p hostPort:containerPort does not take effect; the only workaround is to use nerdctl run --net host.
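
As a quick, hedged smoke test of nerdctl itself (the image name is arbitrary, and the flags assume this nerdctl version accepts the Docker-style options shown):

[root@master ~]# nerdctl pull nginx
[root@master ~]# nerdctl run -d --net host nginx
[root@master ~]# nerdctl ps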

3.5 Installing crictl

crictl is the command-line tool Kubernetes uses to manage the images and containers on containerd; it is mainly used for debugging.

Download and extract:
[root@master ~]# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-amd64.tar.gz
[root@master ~]# tar -zxvf crictl-v1.24.0-linux-amd64.tar.gz -C /usr/local/bin
crictl
[root@master ~]#
Create the crictl configuration file:
[root@master ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
pull-image-on-create: false
EOF
[root@master ~]#
[root@master ~]# systemctl daemon-reload

Use crictl:
[root@master ~]# crictl images
IMAGE TAG IMAGE ID SIZE
[root@master ~]#
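
A few other crictl subcommands that are useful for debugging once the cluster is up (listed here for reference):

[root@master ~]# crictl ps -a                  # all CRI containers
[root@master ~]# crictl pods                   # pod sandboxes
[root@master ~]# crictl logs <container-id>    # <container-id> is a placeholder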

3.6 Installing buildkit (optional, required for nerdctl build)

Building an image from a Dockerfile with nerdctl build fails with an error about a missing buildkit, so buildkit must be installed; if you do not need to build images, you can skip this step.
For the buildkit/containerd version compatibility matrix, see: https://github.com/moby/buildkit/blob/v0.8.3/go.mod

Download, extract, and install the buildkit client:
[root@master ~]# mkdir buildkit
[root@master ~]#
[root@master ~]# cd buildkit/
[root@master buildkit]#
[root@master buildkit]# wget https://github.com/moby/buildkit/releases/download/v0.8.3/buildkit-v0.8.3.linux-amd64.tar.gz
[root@master buildkit]#
[root@master buildkit]# tar -zxvf buildkit-v0.8.3.linux-amd64.tar.gz
bin/
bin/buildctl
bin/buildkit-qemu-aarch64
bin/buildkit-qemu-arm
bin/buildkit-qemu-i386
bin/buildkit-qemu-ppc64le
bin/buildkit-qemu-riscv64
bin/buildkit-qemu-s390x
bin/buildkit-runc
bin/buildkitd
[root@master buildkit]#
[root@master buildkit]# cp -a bin /usr/local

Write the systemd unit file for buildkitd:
[root@master buildkit]# vi /etc/systemd/system/buildkit.service
[Unit]
Description=BuildKit
Documentation=https://github.com/moby/buildkit

[Service]
ExecStart=/usr/local/bin/buildkitd --oci-worker=false --containerd-worker=true

[Install]
WantedBy=multi-user.target
Save the file.
[root@master buildkit]#

Start the buildkitd server:
[root@master buildkit]# systemctl enable buildkit --now
Created symlink from /etc/systemd/system/multi-user.target.wants/buildkit.service to /etc/systemd/system/buildkit.service.
[root@master buildkit]#
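
With buildkitd running, nerdctl build can be exercised against a throwaway Dockerfile. A minimal sketch (the directory, Dockerfile content, and tag below are made up for illustration):

[root@master buildkit]# mkdir ~/build-test && cd ~/build-test
[root@master build-test]# cat > Dockerfile <<EOF
FROM nginx
RUN echo "hello from nerdctl build" > /usr/share/nginx/html/index.html
EOF
[root@master build-test]# nerdctl build -t build-test:v1 .
[root@master build-test]# nerdctl images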

4. Installing the Kubernetes Cluster

4.1 Base environment

4.1.1 Disable SELinux

Temporarily (until reboot):
[root@master ~]# setenforce 0
[root@master ~]# getenforce
Permissive
[root@master ~]#

Permanently (requires a server reboot):
[root@master ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

4.1.2 Disable swap

The swap partition provides virtual memory: once physical memory is exhausted, disk space is used as memory, which hurts performance, so it needs to be disabled here. If it cannot be disabled, the cluster configuration parameters must be adjusted accordingly.

Temporarily:
[root@master ~]# swapoff -a
[root@master ~]# free -m
total used free shared buff/cache available
Mem: 1819 286 632 9 900 1364
Swap: 0 0 0
[root@master ~]#

Permanently (requires a server reboot):
[root@master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab

4.1.3 Bridge settings

So that iptables on each server can see bridged traffic, enable bridge filtering and IP forwarding.
Create the file /etc/modules-load.d/k8s.conf:

[root@master ~]# cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
[root@master ~]#

Create the file /etc/sysctl.d/k8s.conf:

[root@master ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master ~]#

Apply the configuration:
[root@master ~]# sysctl --system

Load the br_netfilter bridge-filtering module and the overlay network-virtualization module:
[root@master ~]# modprobe br_netfilter
[root@master ~]# modprobe overlay

Verify that the modules loaded successfully:
[root@master ~]# lsmod | grep -e br_netfilter -e overlay
br_netfilter 22256 0
bridge 151336 1 br_netfilter
overlay 91659 0
[root@master ~]#

4.1.4 Configure IPVS

Kubernetes services have two proxy modes: iptables-based and ipvs-based. ipvs performs better, but its kernel modules must be loaded manually before it can be used.
Install ipset and ipvsadm:
[root@master ~]# yum install ipset ipvsadm

Create the script /etc/sysconfig/modules/ipvs.modules with the following content:
[root@master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@master ~]#

Make the script executable, then run it:
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
[root@master ~]#

Verify that the modules loaded successfully:
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4 15053 2
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
nf_conntrack 139264 ip_vs, nf_nat, nf_nat_ipv4, xt_conntrack, nf_nat_masquerade_ipv4, nf_conntrack_netlink, nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
[root@master ~]#
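
If you prefer the systemd-modules-load mechanism already used for k8s.conf above, the same modules can also be declared there so they are loaded automatically at boot (a hedged alternative; module names match the script above, and on newer kernels nf_conntrack replaces nf_conntrack_ipv4):

[root@master ~]# cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF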

4.2 Installing kubelet, kubeadm, and kubectl

Add the yum repository:
[root@master ~]# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master ~]#

Install the packages, then enable and start kubelet:

[root@master ~]# yum install -y --setopt=obsoletes=0 kubelet-1.24.4 kubeadm-1.24.4 kubectl-1.24.4
[root@master ~]# systemctl enable kubelet --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master ~]#

Notes:
obsoletes=1 means old rpm packages are removed when they are updated; obsoletes=0 means updating does not remove them
After kubelet starts, you can follow its detailed logs with journalctl -f -u kubelet
kubelet uses systemd as its cgroup driver by default
After being started, kubelet restarts every few seconds: it is stuck in a crash loop waiting for instructions from kubeadm

4.3 Downloading the images each node needs

Check the image versions the cluster requires:
[root@master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.24.4
k8s.gcr.io/kube-controller-manager:v1.24.4
k8s.gcr.io/kube-scheduler:v1.24.4
k8s.gcr.io/kube-proxy:v1.24.4
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6
[root@master ~]#

Create the image download script images.sh, then run it. Worker nodes only need kube-proxy and pause.
[root@master ~]# tee ./images.sh <<'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.24.4
kube-controller-manager:v1.24.4
kube-scheduler:v1.24.4
kube-proxy:v1.24.4
pause:3.7
etcd:3.5.3-0
coredns:v1.8.6
)
for imageName in ${images[@]} ; do
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
EOF
[root@master ~]#
[root@master ~]# chmod +x ./images.sh && ./images.sh
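
When the script finishes, the pulled images should be visible through crictl (optional check):

[root@master ~]# crictl images | grep registry.cn-hangzhou.aliyuncs.com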

4.4 Initializing the master node (master only)

[root@master ~]# kubeadm init \
  --apiserver-advertise-address=192.168.19.50 \
  --control-plane-endpoint=master \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.24.4 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
# --apiserver-advertise-address is the master's IP address;
# --service-cidr and --pod-network-cidr are left at the default ranges

OUTPUT INFORMATION:

[init] Using Kubernetes version: v1.24.4
[preflight] Running pre-flight checks
…omitted…
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join master:6443 --token yzicfs.d50rrfxpd3a0wokb \
	--discovery-token-ca-cert-hash sha256:8548affb49155f0ba53a0ac4eb9500a060ee3b4076ad8b66bd973bab92d78103 \
	--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master:6443 --token yzicfs.d50rrfxpd3a0wokb \
	--discovery-token-ca-cert-hash sha256:8548affb49155f0ba53a0ac4eb9500a060ee3b4076ad8b66bd973bab92d78103
[root@master ~]#

Notes:

You can pass --v=6 or --v=10 (and so on) to see more detailed logs
None of the address ranges passed as parameters may overlap with each other or with the host network (for example, a pod CIDR of 192.168.0.0/16 would overlap with host addresses such as 192.168.2.x or 192.168.3.x)
--pod-network-cidr: the IP range for the pod network; the value shown above is fine
--service-cidr: the IP range for service VIPs; the default is 10.96.0.0/12, and the value shown above is fine
--apiserver-advertise-address: the IP address the API server advertises and listens on

An alternative way to run kubeadm init:

# Print the default configuration
[root@master ~]# kubeadm config print init-defaults --component-configs KubeletConfiguration
# Edit a copy of the defaults as needed and save it as kubeadm-config.yaml (serviceSubnet and podSubnet sit at the same level), then pull the images
[root@master ~]# kubeadm config images pull --config kubeadm-config.yaml
# Run the initialization
[root@master ~]# kubeadm init --config kubeadm-config.yaml

If init fails, roll back with the following commands:
[root@master ~]# kubeadm reset -f
[root@master ~]#
[root@master ~]# rm -rf /etc/kubernetes
[root@master ~]# rm -rf /var/lib/etcd/
[root@master ~]# rm -rf $HOME/.kube

4.5 Setting up .kube/config (master only)

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl reads this configuration file.
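
kubectl should now be able to reach the API server. An optional check (the master will report NotReady until the network plugin is installed in the next step):

[root@master ~]# kubectl cluster-info
[root@master ~]# kubectl get nodes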

4.6 Installing the calico network plugin (master only)

Refer to the official calico documentation.
For choosing a calico version, see:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
The plugin is deployed as a DaemonSet, so it runs on every node.

Download the calico.yaml file:
[root@master ~]# curl https://docs.projectcalico.org/archive/v3.19/manifests/calico.yaml -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 185k 100 185k 0 0 81914 0 0:00:02 0:00:02 --:--:-- 81966
[root@master ~]#

Change the following section:
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"

to the section below, where the address range is the pod-network-cidr passed to kubeadm init:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

Check which images are needed:
[root@master ~]# cat calico.yaml | grep image
image: docker.io/calico/cni:v3.19.4
image: docker.io/calico/cni:v3.19.4
image: docker.io/calico/pod2daemon-flexvol:v3.19.4
image: docker.io/calico/node:v3.19.4
image: docker.io/calico/kube-controllers:v3.19.4
[root@master ~]#

Create the image download script, then run it:
[root@master ~]# tee ./calicoImages.sh <<'EOF'
#!/bin/bash
images=(
docker.io/calico/cni:v3.19.4
docker.io/calico/pod2daemon-flexvol:v3.19.4
docker.io/calico/node:v3.19.4
docker.io/calico/kube-controllers:v3.19.4
)
for imageName in ${images[@]} ; do
crictl pull $imageName
done
EOF
[root@master ~]#
[root@master ~]# chmod +x ./calicoImages.sh && ./calicoImages.sh
This downloads four images: calico/node, calico/pod2daemon-flexvol, calico/cni, and calico/kube-controllers.

Deploy calico:
[root@master ~]# kubectl apply -f calico.yaml

Now check the cluster state on the master:
[root@master ~]#
[root@master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-57d95cb479-5zppz 0/1 Pending 0 14s
kube-system calico-node-v6zcv 1/1 Running 0 14s
kube-system coredns-7f74c56694-snzmv 1/1 Running 0 71s
kube-system coredns-7f74c56694-whh84 1/1 Running 0 71s
kube-system etcd-master 1/1 Running 0 84s
kube-system kube-apiserver-master 1/1 Running 0 84s
kube-system kube-controller-manager-master 1/1 Running 0 83s
kube-system kube-proxy-f9w7h 1/1 Running 0 71s
kube-system kube-scheduler-master 1/1 Running 0 84s
[root@master ~]#
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 105s v1.24.0
[root@master ~]#

4.7 Joining worker nodes (nodes only)

The join command below is taken from the output of the successful kubeadm init above.

[root@slave1 ~]# kubeadm join master:6443 --token yzicfs.d50rrfxpd3a0wokb \
	--discovery-token-ca-cert-hash sha256:8548affb49155f0ba53a0ac4eb9500a060ee3b4076ad8b66bd973bab92d78103
[preflight] Running pre-flight checks
…omitted…
This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@slave1 ~]#
The token is valid for 24 hours. A new join command can be generated on the master with:
[root@master ~]# kubeadm token create --print-join-command

Now check the cluster state on the master again:
[root@master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-57d95cb479-5zppz 1/1 Running 0 2m35s
kube-system calico-node-2m8xb 1/1 Running 0 37s
kube-system calico-node-jnll4 1/1 Running 0 35s
kube-system calico-node-v6zcv 1/1 Running 0 2m35s
kube-system coredns-7f74c56694-snzmv 1/1 Running 0 3m32s
kube-system coredns-7f74c56694-whh84 1/1 Running 0 3m32s
kube-system etcd-master 1/1 Running 0 3m45s
kube-system kube-apiserver-master 1/1 Running 0 3m45s
kube-system kube-controller-manager-master 1/1 Running 0 3m44s
kube-system kube-proxy-9gc7d 1/1 Running 0 35s
kube-system kube-proxy-f9w7h 1/1 Running 0 3m32s
kube-system kube-proxy-s8rwk 1/1 Running 0 37s
kube-system kube-scheduler-master 1/1 Running 0 3m45s
[root@master ~]#
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 4m11s v1.24.0
slave1 Ready <none> 61s v1.24.0
slave2 Ready <none> 59s v1.24.0
[root@master ~]#

4.7.1 Enabling kubectl on worker nodes

On the master node, copy $HOME/.kube to each worker node's $HOME directory:

[root@master ~]#
[root@master ~]# scp -r $HOME/.kube slave1:$HOME
[root@master ~]#

5. Deploying the dashboard (master only)

The official Kubernetes web UI: https://github.com/kubernetes/dashboard

5.1 Deployment

For the dashboard/Kubernetes version compatibility matrix, see: https://github.com/kubernetes/dashboard/blob/v2.5.1/go.mod

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master ~]#

This downloads two images: kubernetesui/dashboard:v2.5.1 and kubernetesui/metrics-scraper:v1.0.7.

On the master, watch the rollout with watch -n 3 kubectl get pods -A.

5.2 Setting the access port

Change type: ClusterIP to type: NodePort:
[root@master ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
service/kubernetes-dashboard edited
[root@master ~]#

Check the assigned port:
[root@master ~]# kubectl get svc -A | grep kubernetes-dashboard
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.96.44.79 8000/TCP 3m39s
kubernetes-dashboard kubernetes-dashboard NodePort 10.96.27.108 443:30256/TCP 3m39s
[root@master ~]#

Open the dashboard page at https://slave1:30256.
A login token is required here; obtain it with the steps below.

5.3 Creating an access account

Create the resource file, then apply it:
[root@master ~]# tee ./dash.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
[root@master ~]#
[root@master ~]# kubectl apply -f dash.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
[root@master ~]#

5.4 Obtaining the access token

[root@master ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
Error executing template: template: output:1:16: executing “output” at : invalid value; expected string. Printing more information for debugging the template:
template was:
{{.data.token | base64decode}}
raw data was:
{“apiVersion”:“v1”,“items”:[{“apiVersion”:“v1”,“kind”:“Secret”,“metadata”:{“annotations”:{“kubectl.kubernetes.io/last-applied-configuration”:“{“apiVersion”:“v1”,“kind”:“Secret”,“metadata”:{“annotations”:{},“labels”:{“k8s-app”:“kubernetes-dashboard”},“name”:“kubernetes-dashboard-certs”,“namespace”:“kubernetes-dashboard”},“type”:“Opaque”}\n”},“creationTimestamp”:“2022-05-12T05:00:35Z”,“labels”:{“k8s-app”:“kubernetes-dashboard”},“managedFields”:[{“apiVersion”:“v1”,“fieldsType”:“FieldsV1”,“fieldsV1”:{“f:metadata”:{“f:annotations”:{“.”:{},“f:kubectl.kubernetes.io/last-applied-configuration”:{}},“f:labels”:{“.”:{},“f:k8s-app”:{}}},“f:type”:{}},“manager”:“kubectl-client-side-apply”,“operation”:“Update”,“time”:“2022-05-12T05:00:35Z”}],“name”:“kubernetes-dashboard-certs”,“namespace”:“kubernetes-dashboard”,“resourceVersion”:“1071”,“uid”:“952ec075-0512-42c0-9845-5fc0feb1e3ed”},“type”:“Opaque”},{“apiVersion”:“v1”,“data”:{“csrf”:“014uxJAOA4YG1zOy+F0sz5LBmbii8hfVMCKREV2h3xcTCGhXQaEq2DInQlyG/ivRDENB0Yn8aRiN7N0zhkypuc+yKztoCKhzoAkTAzaLN0Ddjxk9WQksdchRlkpfF/ydwDUAAwETfH3xtshTEHDz+WXd3/q0dQG/v4RyUeJtoG1mHyCmLceoxP1nFsiIUA0EfKuu0acapiRr7vynZHhGCCUCVamQx6u4sYTN5lJnhr1wxEU8nS12LuyznLR5TmxX9R7SniQ4FivMZpP+cDDn2RgSXq9U8gC2iAHNx/USzJ81ukYJxj37buSZdah9P8wGRO5Zt9Vb8v/nrXB+VGM98g==”},“kind”:“Secret”,“metadata”:{“annotations”:{“kubectl.kubernetes.io/last-applied-configuration”:“{“apiVersion”:“v1”,“data”:{“csrf”:”“},“kind”:“Secret”,“metadata”:{“annotations”:{},“labels”:{“k8s-app”:“kubernetes-dashboard”},“name”:“kubernetes-dashboard-csrf”,“namespace”:“kubernetes-dashboard”},“type”:“Opaque”}\n”},“creationTimestamp”:“2022-05-12T05:00:35Z”,“labels”:{“k8s-app”:“kubernetes-dashboard”},“managedFields”:[{“apiVersion”:“v1”,“fieldsType”:“FieldsV1”,“fieldsV1”:{“f:data”:{},“f:metadata”:{“f:annotations”:{“.”:{},“f:kubectl.kubernetes.io/last-applied-configuration”:{}},“f:labels”:{“.”:{},“f:k8s-app”:{}}},“f:type”:{}},“manager”:“kubectl-client-side-apply”,“operation”:“Update”,“time”:“2022-05-12T05:00:35Z”},{“apiVersion”:“v1”,“fieldsType”:“FieldsV1”,“fieldsV1”:{“f:data”:{“f:csrf”:{}}},“manager”:“dashboard”,“operation”:“Update”,“time”:“2022-05-12T05:02:47Z”}],“name”:“kubernetes-dashboard-csrf”,“namespace”:“kubernetes-dashboard”,“resourceVersion”:“1307”,“uid”:“2c6d3791-323d-4230-a1c7-64b422b0d5ae”},“type”:“Opaque”},{“apiVersion”:“v1”,“data”:{“priv”:“LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBL3VQRjBzY2dSTktsZWhqa0tLV1JBK2Y3NjNYTVRETjNnSzdSK245QVRES0wwV2NWClJOUExpdzJqSmVnSVFjRG16Q0RvMEQzKzdpVjJKalk0YTZGeWxwSzc4NkljNEZ4SWNhNEM1OWd6TGo5eHJ1VU4KbDBBSTM1b0VHOWRkWmJIenEwaU9laTNZMXV1dThRNnhPNTFMYk9qV3IxSzhDaHV0MldQbGdiTTduYjRBeWxsZgpPenJxcXMrSHVuZEN6aTMzZU84WXVJQkJhMlJ6MGJnMkNNU3h0dU1iM0NXcGhaWFl1d01JMnMvaEFLcDJseUtmCkl5NFRZalcyK2dCdDlhNnpHSUt0bk1MZ0haTjVPbU1UeTlzVlJzbkRweHBaOE1DTjlnRG9VaEhjWHh3RExDRkIKUitNMUR5eFBLMHhpbXo3ejJmY1I4YnZNbC9MLzFiWEx4K1Bqd3dJREFRQUJBb0lCQVFDQTdyaS9vU2hxaDk5YQp2c0tTNlFWTTQ0a2tGd2RMdUhFSHIrYlpmb3NJd0R6SHBRdzJMNmh6WTJlV29pT2pGeS9vSy9GNGZSTzZaVXE1Cms0M0FxLzhwdVhuSGlNWndtMTJ0MjJidTNnY3RxcndYeXhldjNaMWZkaW9ENTFJQVFoN1BFcm0zaGY5ODMrVXoKWE1vOExKbmRzbjMrVzZ4d3RJV2hSSTN3cUxoTVZyRGpoNmw3V1VaMWtNQnB5dGt3NnNhcFVJMnVva0VyNnFNUgpvWmNCU1Z4SkZZVEx0Q2VTTWw3MWdKRDFVNWMvSE5XSWcrZkoyZ2g1OUE2NEdBcUpJNEcxcjcrRzlZRndIQWM3CjNBeUlRSW1TK3RMNHBGTEVwckxYVGxPOHQydXBrNzRHYUk2dEljY0llNDUxWG05SWNEZElQQUFlR0s0c0hLRkoKRzZQQkxKMXBBb0dCQVA4VXB2ZWRxSHBVZ2NjZFJvaDFES1FMYmY2ZnNseXlOVTVJVlFSbjdiVVlrR3ZXYkJ2ZAoramQwbGVsMml3amhvZ0lOOWpOMHYyRjRDQjRZdzZ4eHJBaWdQcDJWVThuNDZCUWRQOTFMZzNGdWxsN0pQVnMvCk1Nb2FZL0NBdlg3RW1FSDB1SFhmYk43QVhSRVhTKy9
lMHhmZDFtQ3A2RnpsTDhnbG5qYThJVmtGQW9HQkFQL08KOGNJSEJXYTdYUEUwdE5jVG8xTkp2YTdWWWNLcXZydnMzRHd0aFUvY1czWUMzRGxkSHFEOWlDOStUZGdMMmJFcgpEU3lSd3RkNTBMMWh0czhOSmp4WFozbWtBb1EyR0ZKL0VaNWM5MXBOT0hQVUtXOG5IUmY5M0NpbkhscDE3RzdMCnVkYTlMZkNhbTBPNVFpNitIOFE2VUE3WnAzNmtkbWE5MHFXV29rUW5Bb0dCQUlVMGs3emJhQS81OFl1NWpndlUKbERWV2dxcGxXdzl0UU1rUW5OVWdNTkpSY1puZTc3WGR4YjBQOVBsbUhsVVUvelZ6ZFE2SitTYzlONEFBRHE4Two3WGZUdHQ4MEMvMTlMalRTMFhjTzZDVmtTc0pVOU9XaHFpamdmekFwQ3N3WWZpcHpVYUM4Zkc0V3BvTTJWMEY4CmEyQWJTTWhSOGpZUXVWTWIwZk5qYTBiQkFvR0FIdGYyOG13aVRKYSt5QjZReDNZSXRWd28wTkhOcmNra29rZ1cKN2ZLWEpsL3RiemM5RW5XVjRkZHYram9DYk5CUStUbTFwdkFVVENMVjlsKzN5Uk5PenV2REFEbTBTL2l4eWhDawpNVElJYVF6eWg1VEhRaTIzSmxObm5rYzRNN1FRUS9Pd2ZxSGt6aVAySUo1UHlvOEdDWVQyYmpQMExDTHNXOHI3CmdSZStqUFVDZ1lCcW83OTV6MWhOdmQzVXc0Nzl3cklnQXZsZlJvN3UzMkl6ZEZFTTZqc2RVN05MSDlYdm9pVVcKNnpsK2FMbm11cUJ5eFB2dlVwVGZCd3pPYVRqeS9LcXFnYXo5QThYanJudEFiQWxWclVTa0xEd0h5cjcwTERZVQpMSm9sYTBpWWFrUHovY0hsbVF4ejBINTRSR3FuSXM0L2U5QVBrTzdSUGxNNC9TZ01UbXdPNHc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=”,“pub”:“LS0tLS1CRUdJTiBSU0EgUFVCTElDIEtFWS0tLS0tCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBL3VQRjBzY2dSTktsZWhqa0tLV1IKQStmNzYzWE1URE4zZ0s3UituOUFUREtMMFdjVlJOUExpdzJqSmVnSVFjRG16Q0RvMEQzKzdpVjJKalk0YTZGeQpscEs3ODZJYzRGeEljYTRDNTlnekxqOXhydVVObDBBSTM1b0VHOWRkWmJIenEwaU9laTNZMXV1dThRNnhPNTFMCmJPaldyMUs4Q2h1dDJXUGxnYk03bmI0QXlsbGZPenJxcXMrSHVuZEN6aTMzZU84WXVJQkJhMlJ6MGJnMkNNU3gKdHVNYjNDV3BoWlhZdXdNSTJzL2hBS3AybHlLZkl5NFRZalcyK2dCdDlhNnpHSUt0bk1MZ0haTjVPbU1UeTlzVgpSc25EcHhwWjhNQ045Z0RvVWhIY1h4d0RMQ0ZCUitNMUR5eFBLMHhpbXo3ejJmY1I4YnZNbC9MLzFiWEx4K1BqCnd3SURBUUFCCi0tLS0tRU5EIFJTQSBQVUJMSUMgS0VZLS0tLS0K”},“kind”:“Secret”,“metadata”:{“creationTimestamp”:“2022-05-12T05:00:35Z”,“managedFields”:[{“apiVersion”:“v1”,“fieldsType”:“FieldsV1”,“fieldsV1”:{“f:type”:{}},“manager”:“kubectl-client-side-apply”,“operation”:“Update”,“time”:“2022-05-12T05:00:35Z”},{“apiVersion”:“v1”,“fieldsType”:“FieldsV1”,“fieldsV1”:{“f:data”:{“.”:{},“f:priv”:{},“f:pub”:{}}},“manager”:“dashboard”,“operation”:“Update”,“time”:“2022-05-12T05:02:47Z”}],“name”:“kubernetes-dashboard-key-holder”,“namespace”:“kubernetes-dashboard”,“resourceVersion”:“1312”,“uid”:“75107a0e-b809-491d-84cf-c7626dca3aea”},“type”:“Opaque”}],“kind”:“List”,“metadata”:{“resourceVersion”:“”}}
object given to template engine was:
map[apiVersion:v1 items:[map[apiVersion:v1 kind:Secret metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{“apiVersion”:“v1”,“kind”:“Secret”,“metadata”:{“annotations”:{},“labels”:{“k8s-app”:“kubernetes-dashboard”},“name”:“kubernetes-dashboard-certs”,“namespace”:“kubernetes-dashboard”},“type”:“Opaque”}
] creationTimestamp:2022-05-12T05:00:35Z labels:map[k8s-app:kubernetes-dashboard] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:annotations:map[.:map[] f:kubectl.kubernetes.io/last-applied-configuration:map[]] f:labels:map[.:map[] f:k8s-app:map[]]] f:type:map[]] manager:kubectl-client-side-apply operation:Update time:2022-05-12T05:00:35Z]] name:kubernetes-dashboard-certs namespace:kubernetes-dashboard resourceVersion:1071 uid:952ec075-0512-42c0-9845-5fc0feb1e3ed] type:Opaque] map[apiVersion:v1 data:map[csrf:014uxJAOA4YG1zOy+F0sz5LBmbii8hfVMCKREV2h3xcTCGhXQaEq2DInQlyG/ivRDENB0Yn8aRiN7N0zhkypuc+yKztoCKhzoAkTAzaLN0Ddjxk9WQksdchRlkpfF/ydwDUAAwETfH3xtshTEHDz+WXd3/q0dQG/v4RyUeJtoG1mHyCmLceoxP1nFsiIUA0EfKuu0acapiRr7vynZHhGCCUCVamQx6u4sYTN5lJnhr1wxEU8nS12LuyznLR5TmxX9R7SniQ4FivMZpP+cDDn2RgSXq9U8gC2iAHNx/USzJ81ukYJxj37buSZdah9P8wGRO5Zt9Vb8v/nrXB+VGM98g==] kind:Secret metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{“apiVersion”:“v1”,“data”:{“csrf”:“”},“kind”:“Secret”,“metadata”:{“annotations”:{},“labels”:{“k8s-app”:“kubernetes-dashboard”},“name”:“kubernetes-dashboard-csrf”,“namespace”:“kubernetes-dashboard”},“type”:“Opaque”}
] creationTimestamp:2022-05-12T05:00:35Z labels:map[k8s-app:kubernetes-dashboard] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:data:map[] f:metadata:map[f:annotations:map[.:map[] f:kubectl.kubernetes.io/last-applied-configuration:map[]] f:labels:map[.:map[] f:k8s-app:map[]]] f:type:map[]] manager:kubectl-client-side-apply operation:Update time:2022-05-12T05:00:35Z] map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:data:map[f:csrf:map[]]] manager:dashboard operation:Update time:2022-05-12T05:02:47Z]] name:kubernetes-dashboard-csrf namespace:kubernetes-dashboard resourceVersion:1307 uid:2c6d3791-323d-4230-a1c7-64b422b0d5ae] type:Opaque] map[apiVersion:v1 data:map[priv:LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBL3VQRjBzY2dSTktsZWhqa0tLV1JBK2Y3NjNYTVRETjNnSzdSK245QVRES0wwV2NWClJOUExpdzJqSmVnSVFjRG16Q0RvMEQzKzdpVjJKalk0YTZGeWxwSzc4NkljNEZ4SWNhNEM1OWd6TGo5eHJ1VU4KbDBBSTM1b0VHOWRkWmJIenEwaU9laTNZMXV1dThRNnhPNTFMYk9qV3IxSzhDaHV0MldQbGdiTTduYjRBeWxsZgpPenJxcXMrSHVuZEN6aTMzZU84WXVJQkJhMlJ6MGJnMkNNU3h0dU1iM0NXcGhaWFl1d01JMnMvaEFLcDJseUtmCkl5NFRZalcyK2dCdDlhNnpHSUt0bk1MZ0haTjVPbU1UeTlzVlJzbkRweHBaOE1DTjlnRG9VaEhjWHh3RExDRkIKUitNMUR5eFBLMHhpbXo3ejJmY1I4YnZNbC9MLzFiWEx4K1Bqd3dJREFRQUJBb0lCQVFDQTdyaS9vU2hxaDk5YQp2c0tTNlFWTTQ0a2tGd2RMdUhFSHIrYlpmb3NJd0R6SHBRdzJMNmh6WTJlV29pT2pGeS9vSy9GNGZSTzZaVXE1Cms0M0FxLzhwdVhuSGlNWndtMTJ0MjJidTNnY3RxcndYeXhldjNaMWZkaW9ENTFJQVFoN1BFcm0zaGY5ODMrVXoKWE1vOExKbmRzbjMrVzZ4d3RJV2hSSTN3cUxoTVZyRGpoNmw3V1VaMWtNQnB5dGt3NnNhcFVJMnVva0VyNnFNUgpvWmNCU1Z4SkZZVEx0Q2VTTWw3MWdKRDFVNWMvSE5XSWcrZkoyZ2g1OUE2NEdBcUpJNEcxcjcrRzlZRndIQWM3CjNBeUlRSW1TK3RMNHBGTEVwckxYVGxPOHQydXBrNzRHYUk2dEljY0llNDUxWG05SWNEZElQQUFlR0s0c0hLRkoKRzZQQkxKMXBBb0dCQVA4VXB2ZWRxSHBVZ2NjZFJvaDFES1FMYmY2ZnNseXlOVTVJVlFSbjdiVVlrR3ZXYkJ2ZAoramQwbGVsMml3amhvZ0lOOWpOMHYyRjRDQjRZdzZ4eHJBaWdQcDJWVThuNDZCUWRQOTFMZzNGdWxsN0pQVnMvCk1Nb2FZL0NBdlg3RW1FSDB1SFhmYk43QVhSRVhTKy9lMHhmZDFtQ3A2RnpsTDhnbG5qYThJVmtGQW9HQkFQL08KOGNJSEJXYTdYUEUwdE5jVG8xTkp2YTdWWWNLcXZydnMzRHd0aFUvY1czWUMzRGxkSHFEOWlDOStUZGdMMmJFcgpEU3lSd3RkNTBMMWh0czhOSmp4WFozbWtBb1EyR0ZKL0VaNWM5MXBOT0hQVUtXOG5IUmY5M0NpbkhscDE3RzdMCnVkYTlMZkNhbTBPNVFpNitIOFE2VUE3WnAzNmtkbWE5MHFXV29rUW5Bb0dCQUlVMGs3emJhQS81OFl1NWpndlUKbERWV2dxcGxXdzl0UU1rUW5OVWdNTkpSY1puZTc3WGR4YjBQOVBsbUhsVVUvelZ6ZFE2SitTYzlONEFBRHE4Two3WGZUdHQ4MEMvMTlMalRTMFhjTzZDVmtTc0pVOU9XaHFpamdmekFwQ3N3WWZpcHpVYUM4Zkc0V3BvTTJWMEY4CmEyQWJTTWhSOGpZUXVWTWIwZk5qYTBiQkFvR0FIdGYyOG13aVRKYSt5QjZReDNZSXRWd28wTkhOcmNra29rZ1cKN2ZLWEpsL3RiemM5RW5XVjRkZHYram9DYk5CUStUbTFwdkFVVENMVjlsKzN5Uk5PenV2REFEbTBTL2l4eWhDawpNVElJYVF6eWg1VEhRaTIzSmxObm5rYzRNN1FRUS9Pd2ZxSGt6aVAySUo1UHlvOEdDWVQyYmpQMExDTHNXOHI3CmdSZStqUFVDZ1lCcW83OTV6MWhOdmQzVXc0Nzl3cklnQXZsZlJvN3UzMkl6ZEZFTTZqc2RVN05MSDlYdm9pVVcKNnpsK2FMbm11cUJ5eFB2dlVwVGZCd3pPYVRqeS9LcXFnYXo5QThYanJudEFiQWxWclVTa0xEd0h5cjcwTERZVQpMSm9sYTBpWWFrUHovY0hsbVF4ejBINTRSR3FuSXM0L2U5QVBrTzdSUGxNNC9TZ01UbXdPNHc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo= 
pub:LS0tLS1CRUdJTiBSU0EgUFVCTElDIEtFWS0tLS0tCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBL3VQRjBzY2dSTktsZWhqa0tLV1IKQStmNzYzWE1URE4zZ0s3UituOUFUREtMMFdjVlJOUExpdzJqSmVnSVFjRG16Q0RvMEQzKzdpVjJKalk0YTZGeQpscEs3ODZJYzRGeEljYTRDNTlnekxqOXhydVVObDBBSTM1b0VHOWRkWmJIenEwaU9laTNZMXV1dThRNnhPNTFMCmJPaldyMUs4Q2h1dDJXUGxnYk03bmI0QXlsbGZPenJxcXMrSHVuZEN6aTMzZU84WXVJQkJhMlJ6MGJnMkNNU3gKdHVNYjNDV3BoWlhZdXdNSTJzL2hBS3AybHlLZkl5NFRZalcyK2dCdDlhNnpHSUt0bk1MZ0haTjVPbU1UeTlzVgpSc25EcHhwWjhNQ045Z0RvVWhIY1h4d0RMQ0ZCUitNMUR5eFBLMHhpbXo3ejJmY1I4YnZNbC9MLzFiWEx4K1BqCnd3SURBUUFCCi0tLS0tRU5EIFJTQSBQVUJMSUMgS0VZLS0tLS0K] kind:Secret metadata:map[creationTimestamp:2022-05-12T05:00:35Z managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:type:map[]] manager:kubectl-client-side-apply operation:Update time:2022-05-12T05:00:35Z] map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:data:map[.:map[] f:priv:map[] f:pub:map[]]] manager:dashboard operation:Update time:2022-05-12T05:02:47Z]] name:kubernetes-dashboard-key-holder namespace:kubernetes-dashboard resourceVersion:1312 uid:75107a0e-b809-491d-84cf-c7626dca3aea] type:Opaque]] kind:List metadata:map[resourceVersion:]]

error: error executing template “{{.data.token | base64decode}}”: template: output:1:16: executing “output” at : invalid value; expected string
[root@master ~]#

The command fails here, most likely because of a compatibility issue between the dashboard and Kubernetes. dashboard v2.5.1 runs normally on Kubernetes 1.23.6, where the command returns a long token string of the form eyJhbGci…gMZ0RqeQ; copy that token into the login page to log in.
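
Note: starting with Kubernetes 1.24, ServiceAccount token Secrets are no longer generated automatically, which is why .secrets[0] is empty here. kubectl 1.24 added a create token subcommand that requests a token directly; the following should print a login token for the admin-user account created above:

[root@master ~]# kubectl -n kubernetes-dashboard create token admin-user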

6. Installing nginx as a test

Deploy it:

[root@master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master ~]#
On the master, watch the rollout with watch -n 3 kubectl get pods -A.

Expose the port:

[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master ~]#

Check the port:

[root@master ~]# kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-8f458dc5b-d4wnm 1/1 Running 0 87s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 443/TCP 18m
service/nginx NodePort 10.96.145.55 80:31219/TCP 8s
[root@master ~]#

Open the nginx page at http://slave1:31219 (the NodePort shown in the output above).
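
The page can also be checked from the command line (the NodePort value will differ on your cluster):

[root@master ~]# curl http://slave1:31219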

7. Other Optional Components

7.1 Installing metrics-server

For an introduction to kubernetes-sigs/metrics-server and how to install it, see the relevant section of this post:
https://blog.csdn.net/yy8623977/article/details/124872606

7.2 Enabling IPVS

To enable ipvs, see the ipvs section of this post:
https://blog.csdn.net/yy8623977/article/details/124885428

7.3 Installing ingress-nginx

To install the ingress-nginx Controller, see the ingress-nginx Controller section of this post:
https://blog.csdn.net/yy8623977/article/details/124899742

7.4 Setting up an NFS server

To set up an NFS server, see the NFS server section of this post:
https://blog.csdn.net/yy8623977/article/details/124928461
