Kubernetes Cluster Installation and Configuration

I. Environment Preparation

1. Server Resources

Role / Hostname     IP Address        CPU    Memory    OS
master-vip          192.168.10.100    -      -         -
master01            192.168.10.101    2      2         CentOS 7.9
master02            192.168.10.102    2      2         CentOS 7.9
master03            192.168.10.103    2      2         CentOS 7.9
node01              192.168.10.104    2      2         CentOS 7.9
node02              192.168.10.105    2      2         CentOS 7.9

2. Server Environment Preparation

  • Disable the firewall, SELinux, and swap
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Disable swap temporarily
swapoff -a
# Disable swap permanently: comment out the swap line, then reboot
vim /etc/fstab
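If you would rather not edit /etc/fstab by hand, a minimal sed sketch (my addition, not in the original steps) comments out any swap entry in place; review the file afterwards before rebooting.

# Comment out every line in /etc/fstab that mentions swap, then show the result
sed -ri 's/.*swap.*/#&/' /etc/fstab
grep swap /etc/fstab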
  • Configure hostnames

Set the hostname on each server.

hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname node01
hostnamectl set-hostname node02

Configure /etc/hosts on every server so the machines can reach each other by hostname; a quick connectivity check follows the host list below.

vi /etc/hosts
192.168.10.101  master01
192.168.10.102  master02
192.168.10.103  master03
192.168.10.104  node01
192.168.10.105  node02
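As a quick sanity check (my own addition, assuming the hostnames above), confirm that every name resolves and responds before moving on:

# Ping each node once by hostname; a failure means /etc/hosts is wrong on this machine
for h in master01 master02 master03 node01 node02; do
    ping -c 1 -W 1 $h > /dev/null && echo "$h ok" || echo "$h FAILED"
done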
  • Enable bridged traffic filtering on every node
cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.ip_forward = 1
EOF
sysctl --system
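One assumption worth spelling out: the net.bridge.* keys above only take effect once the br_netfilter kernel module is loaded, and the steps shown do not load it explicitly. A minimal sketch to load it now and at every boot:

# Load the bridge netfilter module so the net.bridge.* sysctls can be applied
modprobe br_netfilter
# Have it loaded automatically at boot
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
sysctl --system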

II. Install Docker

1. Configure the yum repository

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all

2. Remove any previously installed Docker (if present)

yum remove docker docker-common docker-selinux docker-engine

3. Install Docker's dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

4. Install Docker

List the Docker versions that are available:

yum list docker-ce --showduplicates | sort -r

Install the latest version of Docker:

yum install -y docker-ce docker-ce-cli containerd.io

Or install a specific version, for example 18.09.9:

yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y

5. Registry mirror acceleration

Create the configuration directory:

mkdir -p /etc/docker

Open http://cr.console.aliyun.com in a browser, register or log in with an Alibaba Cloud account, and click the registry mirror (镜像加速器) entry in the left-hand menu to get a dedicated mirror address together with configuration instructions:

vi /etc/docker/daemon.json
{
	"registry-mirrors": ["https://rs0djmo6.mirror.aliyuncs.com"]
}
# NetEase mirror: http://hub-mirror.c.163.com/
# Tencent Cloud mirror: https://mirror.ccs.tencentyun.com

To change the default data storage path, add a data-root entry to daemon.json (from Docker 19.xx onward, data-root replaces the older graph option). For example, to store data under /docker/data:

vi /etc/docker/daemon.json
{
	"registry-mirrors": ["https://rs0djmo6.mirror.aliyuncs.com"],
	"data-root":"/docker/data"
}

Reload the daemon, enable it at boot, restart it, and check the version information.

systemctl daemon-reload
systemctl enable docker
systemctl restart docker
docker info
docker -v
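To confirm that the daemon.json settings actually took effect after the restart, a quick look at the docker info output (my addition) helps:

# The configured mirror and data root should both show up here
docker info | grep -A1 'Registry Mirrors'
docker info | grep 'Docker Root Dir'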

III. Install docker-compose

curl -L https://get.daocloud.io/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# Alternatively, download the desired release from https://github.com/docker/compose/releases, place it in /usr/local/bin, rename it to docker-compose, and make it executable.
chmod +x /usr/local/bin/docker-compose
docker-compose version

docker-compose version 1.29.1, build c34c88b2
docker-py version: 5.0.0
CPython version: 3.7.10
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019
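As an optional smoke test (not part of the original article; the file name and host port are made up for illustration), a throwaway compose project confirms that docker-compose works end to end:

# Write a minimal compose file that runs nginx on host port 8080
cat > /tmp/compose-test.yml << EOF
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
EOF
docker-compose -f /tmp/compose-test.yml up -d
sleep 2
curl -I http://127.0.0.1:8080        # expect HTTP/1.1 200 OK
docker-compose -f /tmp/compose-test.yml down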

IV. Install Kubernetes

1. Tooling overview

kubeadm: the command used to bootstrap the cluster.

kubelet: the agent that must run on every server in the cluster; it manages the lifecycle of Pods and containers.

kubectl: the cluster management CLI.

2. Configure the yum repository

Configure the following yum repository on every server:

cat >> /etc/yum.repos.d/kubernetes.repo <<EOF 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
EOF
yum clean all

3. Install the tools (on every node)

Install the three tools above on every node. As with Docker, the latest version is installed unless a version is pinned.

# Install the latest version
yum install -y kubelet kubectl kubeadm --disableexcludes=kubernetes
# Or pin a specific version, for example:
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0 --disableexcludes=kubernetes

After installation, enable the kubelet to start at boot.

systemctl enable kubelet
systemctl start kubelet
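A quick version check (my addition) confirms that all three tools were installed and that the versions match across nodes:

kubeadm version -o short
kubectl version --client --short
kubelet --version
# Note: kubelet will keep restarting until kubeadm init/join has run on this node; that is expected.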

4. Set up high availability for the master nodes

For high availability we use the officially recommended combination of HAProxy and Keepalived, both deployed as daemons on all master nodes.

  • Enable IPVS for kube-proxy
# IPVS stands for IP Virtual Server
# 1. Run the following commands on all master nodes
cat >> /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
# 2. Check that the IPVS modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# You should see ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and nf_conntrack_ipv4 loaded successfully
  • Install Keepalived and HAProxy
# 1. Install haproxy and keepalived on all master nodes
yum install -y keepalived haproxy
systemctl start keepalived
systemctl enable keepalived
systemctl start haproxy
systemctl enable haproxy
  • Configure the HAProxy service

The HAProxy configuration is identical on all master nodes; the configuration file is /etc/haproxy/haproxy.cfg. Configure master01 first, then distribute the file to master02 and master03 (see the sketch after the configuration block below).

Back up the default HAProxy configuration file:

cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
true > /etc/haproxy/haproxy.cfg

Write in the following content (make sure the IP addresses of the three master nodes match your environment):

cat > /etc/haproxy/haproxy.cfg <<EOF
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server master01 192.168.10.101:6443  check inter 2000 fall 2 rise 2 weight 100
  server master02 192.168.10.102:6443  check inter 2000 fall 2 rise 2 weight 100
  server master03 192.168.10.103:6443  check inter 2000 fall 2 rise 2 weight 100
EOF
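The prose above says to distribute this file to master02 and master03. A minimal sketch for that, assuming root SSH access between the masters (my addition), plus a syntax check before shipping it:

# Validate the configuration file
haproxy -c -f /etc/haproxy/haproxy.cfg
# Copy the same file to the other two masters
for h in master02 master03; do
    scp /etc/haproxy/haproxy.cfg $h:/etc/haproxy/haproxy.cfg
done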
  • Configure the Keepalived service

Keepalived uses its track_script mechanism to run a script that detects whether the Kubernetes master node is down, and fails the VIP over to another node accordingly, which is what provides the high availability.

The Keepalived configuration for master01 is shown below; the file lives at /etc/keepalived/keepalived.conf.

A few points to note (remember to adjust the first two per node):

interface: the name of the node's network interface.

mcast_src_ip: the multicast source address, which is the IP address of the current host.

priority: Keepalived elects the MASTER based on this value. Here master01 actively serves the VIP while the other two are backups, so set master01 to 100, master02 to 99, and master03 to 98.

state: set the state field to MASTER on master01 and to BACKUP on the other two nodes. A sed sketch for adapting the copied file on the backup nodes follows the configuration below.

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    enable_script_security
}
vrrp_script chk_kubernetes {
    script "/etc/keepalived/check_kubernetes.sh"
    interval 2
    weight -5
    fall 3  
    rise 2
}
vrrp_instance VI_1 {
    state MASTER                  #BACKUP
    interface ens33
    mcast_src_ip 192.168.10.101    #192.168.10.102/103
    virtual_router_id 51
    priority 100                  #99/98
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.10.100   
    }
    track_script {
       chk_kubernetes
    }
}
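On the two backup masters only state, priority and mcast_src_ip change, as noted above. A sed sketch for master02, assuming the file was first copied over from master01 (adapt the values for master03 accordingly):

# On master02: turn the copied MASTER configuration into a BACKUP with priority 99
sed -i -e 's/state MASTER/state BACKUP/' \
       -e 's/priority 100/priority 99/' \
       -e 's/mcast_src_ip 192.168.10.101/mcast_src_ip 192.168.10.102/' \
       /etc/keepalived/keepalived.conf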
  • Configure the health-check script

I place the health-check script in the /etc/keepalived directory. The check_kubernetes.sh script is shown below; after creating it, make it executable (see the commands after the script).

#!/bin/bash
#****************************************************************#
# ScriptName: check_kubernetes.sh
# Author: winter liu
# Create Date: 2022-09-09 11:30
#****************************************************************#

function check_kubernetes() {
 for ((i=0;i<5;i++));do
  apiserver_pid_id=$(pgrep kube-apiserver)
  if [[ ! -z $apiserver_pid_id ]];then
   return
  else
   sleep 2
  fi
  apiserver_pid_id=0
 done
}

# 1:running  0:stopped
check_kubernetes
if [[ $apiserver_pid_id -eq 0 ]];then
 /usr/bin/systemctl stop keepalived
 exit 1
else
 exit 0
fi
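After saving the script, grant it execute permission as mentioned above; a bash syntax check is a cheap extra safeguard:

chmod +x /etc/keepalived/check_kubernetes.sh
# bash -n parses the script without running it, catching typos early
bash -n /etc/keepalived/check_kubernetes.sh && echo "syntax ok"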
  • Start the Keepalived and HAProxy services

After Keepalived and HAProxy have been started, check whether the VIP is alive.

systemctl enable --now keepalived haproxy
systemctl status keepalived haproxy
ping 192.168.10.100                      # check that the VIP responds
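Besides the ping, you can check which master currently holds the VIP (my addition, assuming the ens33 interface from the Keepalived configuration):

# Run on each master; the VIP should appear on exactly one of them
ip addr show ens33 | grep 192.168.10.100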

V. Deploy the Master Nodes

1. Prepare the images

Building Kubernetes with kubeadm requires the base images that Kubernetes runs on, such as kube-proxy, kube-apiserver, kube-controller-manager and so on. How do you find out which images to download? Since kubeadm v1.11 there has been a command for printing the default configuration (today it is kubeadm config print init-defaults), which conveniently dumps kubeadm's defaults to a file containing the base configuration for the corresponding Kubernetes version. Alternatively, kubeadm config images list shows the list of images that need to be pulled.

# See which images are required
[root@master01 keepalived]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.10
k8s.gcr.io/kube-controller-manager:v1.23.10
k8s.gcr.io/kube-scheduler:v1.23.10
k8s.gcr.io/kube-proxy:v1.23.10
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

By default the configuration pulls images from Google's registry at k8s.gcr.io, which is not reachable without a proxy. So switch the repository to a domestic mirror, for example Alibaba Cloud's, as shown below.

kubeadm config print init-defaults > kubeadm-init.yaml

[root@master01 k8s]# cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.101
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master01											# change the node name
  taints: null
---
apiServer:
  certSANs:                                                 # add these two lines
  - "192.168.10.100"                                        # the cluster VIP
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers	# switch to a domestic image repository
controlPlaneEndpoint: "192.168.10.100:8443"					# add the VIP address and the HAProxy port
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16									# add the pod network CIDR
scheduler: {}

Notes:

advertiseAddress: this is the address that this node's API server advertises, i.e. the host's own address (192.168.10.101 for master01); the high-availability VIP belongs in controlPlaneEndpoint and certSANs instead.

controlPlaneEndpoint is set to the VIP address, and the port is HAProxy's 8443, i.e. the frontend we configured in HAProxy:

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
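Loading the IPVS modules earlier does not by itself switch kube-proxy into IPVS mode; without further configuration it stays on iptables. If IPVS is wanted, one option (my addition, not part of the original configuration) is to append a KubeProxyConfiguration document to kubeadm-init.yaml before running kubeadm init:

# Append a kube-proxy section that selects IPVS mode
cat >> kubeadm-init.yaml << EOF
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF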

2. Pull the images

Pull the images ahead of time, based on the configuration file from the previous step.

[root@master01 k8s]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

# Use a script to retag the images
touch change_image.sh
chmod +x change_image.sh
# Script contents (quote the heredoc delimiter so $(...) and $variables are written literally instead of being expanded now)
cat > change_image.sh <<'EOF'
#!/bin/bash
newtag=k8s.gcr.io
# Retag every local image under k8s.gcr.io and drop the original tag
for i in $(docker images | grep -v TAG | awk '{print $1 ":" $2}')
do
   image=$(echo $i | awk -F '/' '{print $3}')
   docker tag $i $newtag/$image
   docker rmi $i
done
EOF
# Run the script: retag the images and remove the now-redundant original tags
./change_image.sh
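To confirm the retagging worked, list the local images again; the Kubernetes components should now all carry the k8s.gcr.io prefix:

docker images | grep k8s.gcr.io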

A pitfall I ran into here, recorded for reference:

# If you hit the error below, then after switching the image repository you also need to delete /etc/containerd/config.toml (the stock file ships with the CRI plugin disabled), restart containerd, and run the pull again.

[root@master01 k8s]# kubeadm config images pull --config kubeadm-init.yaml
failed to pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0": output: E0909 14:07:32.358328   14272 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService" image="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0"
time="2022-09-09T14:07:32+08:00" level=fatal msg="pulling image: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService"
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

[root@master01:~] rm -rf /etc/containerd/config.toml
[root@master01:~] systemctl restart containerd

3. Initialize the first master

# Initialization command
kubeadm init --config kubeadm-init.yaml --upload-certs
# Partial output
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.10.100:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ff37af64155bc7ece6a05d71eef86a9a34ab054a7fb84b9f693642feacfa1af5 \
        --control-plane --certificate-key ef7419033277a6edde8b4b7f2b2220e0f3b8bf72f84ac5e6fd59722c2583c0fd

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.100:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ff37af64155bc7ece6a05d71eef86a9a34ab054a7fb84b9f693642feacfa1af5 

The whole process finishes in roughly 30 seconds, and it is only this fast because the images were pulled in advance. If, as above, there are no error messages and you see output similar to the last ten lines, master01 has been initialized successfully.

The output contains two kubeadm join 192.168.10.100:8443 commands; these are the authenticated join commands for additional master nodes and for worker nodes respectively. The discovery hash is computed with sha256, and a node cannot join the cluster without it. The variant with --control-plane --certificate-key xxxx joins a node as a control-plane member; the variant without it joins a worker node. If the token or certificate key has expired by the time you join a node, both can be regenerated (see the commands below).
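As the output above notes, the bootstrap token is valid for 24 hours and the uploaded certificates for two hours. They can be regenerated on an existing master when needed:

# Print a fresh worker join command with a new token
kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print the new certificate key
kubeadm init phase upload-certs --upload-certs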

4. Join the remaining master nodes

# Run the control-plane join command printed by the first initialized master
kubeadm join 192.168.10.100:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ff37af64155bc7ece6a05d71eef86a9a34ab054a7fb84b9f693642feacfa1af5 \
        --control-plane --certificate-key ef7419033277a6edde8b4b7f2b2220e0f3b8bf72f84ac5e6fd59722c2583c0fd
# Partial output:
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

5. Configure the master nodes

After the master nodes have all joined, configure kubectl access and the environment variable as prompted.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Add the environment variable
cat >> /etc/profile <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
# Reload the environment
source /etc/profile

6. Join the worker nodes

kubeadm join 192.168.10.100:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ff37af64155bc7ece6a05d71eef86a9a34ab054a7fb84b9f693642feacfa1af5 

# Join output
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

7. Confirm the cluster state

Run the following on any master node to query the cluster state.

[root@master01 k8s]# kubectl get nodes
NAME       STATUS     ROLES                  AGE    VERSION
master01   NotReady   control-plane,master   34m    v1.23.10
master02   NotReady   control-plane,master   15m    v1.23.10
master03   NotReady   control-plane,master   13m    v1.23.10
node01     NotReady   <none>                 2m6s   v1.23.10
node02     NotReady   <none>                 109s   v1.23.10

All five nodes of the cluster are present, but they cannot be used yet: the second column shows every node in the NotReady state, because no network plugin has been installed. Options include Calico and Flannel; here we choose Flannel.

VI. Install the Flannel Plugin

1. Default method

[root@master01 k8s]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

If that does not succeed, try the method below.

2. Alternative method

On master01, add the following entry to the local hosts file so that the domain resolves:

# append to /etc/hosts on master01
199.232.28.133  raw.githubusercontent.com

Then download the flannel manifest:

[root@master01 ~]# curl -o kube-flannel.yml   https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Edit the image source: change every occurrence of quay.io in the yaml file to quay-mirror.qiniu.com.

[root@master01 ~]# sed -i 's/quay.io/quay-mirror.qiniu.com/g' kube-flannel.yml

Save and exit, then apply the manifest on the master node:

[root@master01 ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
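According to the apply output above, the flannel DaemonSet lives in the kube-flannel namespace, so its pods can be watched there while the nodes transition to Ready:

kubectl get pods -n kube-flannel -o wide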

3. Check the resources

# Check that the cluster system pods are running normally (the flannel pods themselves sit in the kube-flannel namespace, as shown above)
[root@master01 k8s]# kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS      AGE
coredns-6d8c4cb4d-6ckx6            1/1     Running   0             42m
coredns-6d8c4cb4d-jz4gc            1/1     Running   0             42m
etcd-master01                      1/1     Running   0             42m
etcd-master02                      1/1     Running   0             24m
etcd-master03                      1/1     Running   0             22m
kube-apiserver-master01            1/1     Running   0             42m
kube-apiserver-master02            1/1     Running   0             24m
kube-apiserver-master03            1/1     Running   0             22m
kube-controller-manager-master01   1/1     Running   1 (24m ago)   42m
kube-controller-manager-master02   1/1     Running   0             24m
kube-controller-manager-master03   1/1     Running   0             22m
kube-proxy-5g6c9                   1/1     Running   0             10m
kube-proxy-6kd9g                   1/1     Running   1             10m
kube-proxy-bjcrn                   1/1     Running   0             42m
kube-proxy-kvnj2                   1/1     Running   0             24m
kube-proxy-wjs84                   1/1     Running   0             22m
kube-scheduler-master01            1/1     Running   1 (24m ago)   42m
kube-scheduler-master02            1/1     Running   0             24m
kube-scheduler-master03            1/1     Running   0             22m

# Check that the nodes are now usable
[root@master01 k8s]# kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   42m   v1.23.10
master02   Ready    control-plane,master   24m   v1.23.10
master03   Ready    control-plane,master   21m   v1.23.10
node01     Ready    <none>                 10m   v1.23.10
node02     Ready    <none>                 10m   v1.23.10

VII. Cluster Verification

1. Create an nginx pod

Now create an nginx pod in the Kubernetes cluster to verify that workloads run correctly. Execute the following steps on a master node:

[root@master01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

[root@master01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

# Now look at the pod and the service
[root@master01 ~]# kubectl get pod,svc -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
pod/nginx-85b98978db-25l2w   1/1     Running   0          2m4s   10.244.4.2   node02   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        6h36m   <none>
service/nginx        NodePort    10.96.172.114   <none>        80:30281/TCP   50s     app=nginx

In the output, the first half is pod information and the second half is service information. The service/nginx line shows that the port the service exposes outside the cluster is 30281; remember this port.

The pod details also show that the pod is currently running on node02, whose IP address is 192.168.10.105.

2. Access nginx

Open a browser and go to http://192.168.10.105:30281; the same nginx is also reachable on port 30281 of the VIP address.
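The same check can be done with curl instead of a browser (my addition, using the node address and the VIP from above):

curl -I http://192.168.10.105:30281     # via node02's address
curl -I http://192.168.10.100:30281     # via the VIP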

3. Install the dashboard

# Download the dashboard manifest
# See https://github.com/kubernetes/dashboard/releases for the dashboard / Kubernetes version compatibility matrix
cd /root/k8s
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
# If the domain cannot be reached, add this entry to /etc/hosts: 199.232.28.133  raw.githubusercontent.com

# By default the Dashboard is only reachable from inside the cluster; change its Service to NodePort to expose it externally
vi recommended.yaml  # edit the following section
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort       # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     # add this line; 30001 can be any free NodePort
  selector:
    k8s-app: kubernetes-dashboard
# Apply the yaml file
[root@master01 k8s]# kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

# Check that the dashboard is running properly
[root@master01 k8s]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-799d786dbf-bt5sq   1/1     Running   0          3m11s
kubernetes-dashboard-fb8648fd9-kjz4t         1/1     Running   0          3m11s

[root@master01 k8s]# kubectl get pod,svc -n kubernetes-dashboard -o wide


Mainly look at the STATUS column: if it shows Running and RESTARTS is 0 (or at least not steadily climbing), everything is fine, and so far it looks good, so we can continue.
The kubernetes-dashboard-fb8648fd9-kjz4t pod is running on node01 and the exposed NodePort is 30001, so the access URL is https://192.168.10.104:30001

When the page loads, it asks for a token.

[root@master01 k8s]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
# Paste the displayed token into the browser to log in


Although we can now log in, we still cannot view cluster information because the account has not yet been bound to a cluster role. Feel free to try the steps above first, then continue with the step below.

4. Bind the cluster-admin role

[root@master01 k8s]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@master01 k8s]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
# Log in with the token again and everything displays normally
[root@master01 k8s]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
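If you only want the raw token (for example to paste into the login form), a jsonpath one-liner works on this 1.23 cluster, where service-account token secrets are still created automatically (a sketch, my addition):

kubectl -n kube-system get secret \
    $(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}') \
    -o jsonpath='{.data.token}' | base64 -d; echo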

VIII. Miscellaneous

# List all cluster nodes
kubectl get nodes

# Create resources in the cluster from a manifest file
kubectl apply -f xxxx.yaml

# See which applications are deployed in the cluster
# docker ps is roughly the Docker-world equivalent of kubectl get pods -A
# A running application is called a container in Docker and a Pod in Kubernetes
kubectl get pods -A