Building and Deploying a Kubernetes Cluster from Scratch (with Pitfalls and Fixes)


Introduction to Kubernetes

Kubernetes overview

Kubernetes, abbreviated as K8s (the 8 stands in for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts on a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.

Kubernetes is a container orchestration engine open-sourced by Google that supports automated deployment, large-scale scaling, and containerized application management. When an application is deployed in a production environment, multiple instances are usually run so that requests can be load-balanced across them.

In Kubernetes we can create multiple containers, each running one application instance, and then rely on the built-in load-balancing policies to manage, discover, and access this group of instances, without operators having to do any complex manual configuration.

Kubernetes features

  • Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud
  • Extensible: modular, pluggable, hookable, composable
  • Automated: automatic deployment, automatic restart, automatic replication, automatic scaling

Kubernetes components

  • Master components

    1.1 kube-apiserver
    1.2 etcd
    1.3 kube-controller-manager
    1.4 cloud-controller-manager
    1.5 kube-scheduler
    1.6 Addons
    1.6.1 DNS
    1.6.2 Web UI (Dashboard)
    1.6.3 Container resource monitoring
    1.6.4 Cluster-level logging

  • Node components
    2.1 kubelet
    2.2 kube-proxy
    2.3 docker
    2.4 rkt
    2.5 supervisord
    2.6 fluentd

Environment

3 internet-connected Linux virtual machines
server1 is the master, server2 is node1, and server3 is node2

master 192.168.56.132
node1 192.168.56.133
node2 192.168.56.134
Firewall disabled
SELinux disabled

Installing Kubernetes

Edit the hosts file

Make the same change on all three servers:
[root@server1 ~]# vim /etc/hosts
192.168.56.132 server1.example.com master
192.168.56.133 server2.example.com node1
192.168.56.134 server3.example.com node2

Configure the yum repositories

Go to the /etc/yum.repos.d directory and download the Docker yum repo file:

[root@server1 yum.repos.d]# pwd
/etc/yum.repos.d
[root@server1 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@server2 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@server3 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Configure the Kubernetes yum repo (same on all three servers):

[root@server1 yum.repos.d]# vim k8s.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

Install Docker and Kubernetes

Install Docker and the Kubernetes packages on the master:

[root@server1 yum.repos.d]# yum install docker-ce kubelet kubeadm kubectl -y

Install Docker and the Kubernetes packages on the nodes:

[root@server2 yum.repos.d]# yum install -y docker-ce kubelet kubeadm
[root@server3 yum.repos.d]# yum install -y docker-ce kubelet kubeadm

Disable the firewall (same on nodes and master)

[root@server1 ~]# systemctl stop firewalld
[root@server1 ~]# systemctl disable firewalld

Disable SELinux (same on nodes and master)

[root@server1 ~]# setenforce 0
[root@server1 ~]# vim /etc/selinux/config 
SELINUX=disabled

Disable swap (same on nodes)

[root@server1 ~]# swapoff -a

Load the br_netfilter module

[root@server1 ~]# modprobe br_netfilter

Set the iptables forwarding rules (same on nodes)

[root@server1 ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
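
These sysctl settings only take effect once they are loaded. A minimal sketch (assuming the /etc/sysctl.d/k8s.conf file written above) to apply them without a reboot:

[root@server1 ~]# sysctl --system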

Start Docker and Kubernetes on the master

[root@server1 ~]# systemctl start docker
[root@server1 ~]# ps -ef |grep docker
root      13347      1  1 20:53 ?        00:00:00 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json --selinux-enabled --log-driver=journald --signature-verification=false --storage-driver overlay
root      13352  13347  0 20:53 ?        00:00:00 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc --runtime-args --systemd-cgroup=true

Start kubelet

[root@server1 ~]# systemctl start kubelet
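
The kubeadm preflight checks later warn that neither docker nor kubelet is enabled. Optionally enable both now (same on all nodes) so they come back after a reboot; a small sketch:

[root@server1 ~]# systemctl enable docker kubelet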

Joining the nodes to the cluster

Download the images

Images required by the master node:

[root@server1 ~]# kubeadm config images list
W0601 22:34:47.738683   20090 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Because the k8s.gcr.io registry cannot be reached from inside China, first pull the required images from the daocloud.io mirror and then retag them. Download the images needed by the Kubernetes cluster on all nodes (master, node1, node2).
Pull the images:

[root@server2 yum.repos.d]# docker pull daocloud.io/daocloud/kube-apiserver:v1.18.3
Error response from daemon: Get https://daocloud.io/v2/: x509: certificate signed by unknown authority

This failed. Docker's default registry is the official one hosted abroad, which is slow and unreliable to reach, so switch it to a domestic mirror:

[root@server2 yum.repos.d]# vi /etc/docker/daemon.json
{
"registry-mirrors": ["http://hub-mirror.c.163.com"]
}
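
Changes to daemon.json only take effect after Docker reloads its configuration; a short sketch (run on every node that edits the file):

[root@server2 yum.repos.d]# systemctl daemon-reload
[root@server2 yum.repos.d]# systemctl restart docker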

Pull the images again

docker pull daocloud.io/daocloud/kube-apiserver:v1.18.3
docker pull daocloud.io/daocloud/kube-controller-manager:v1.18.3
docker pull daocloud.io/daocloud/kube-scheduler:v1.18.3
docker pull daocloud.io/daocloud/kube-proxy:v1.18.3
docker pull daocloud.io/daocloud/pause:3.2
docker pull daocloud.io/daocloud/etcd:3.4.3-0
docker pull daocloud.io/daocloud/coredns:1.6.7

Retag the images

docker tag daocloud.io/daocloud/kube-apiserver:v1.18.3 k8s.gcr.io/kube-apiserver:v1.18.3
docker tag daocloud.io/daocloud/kube-controller-manager:v1.18.3 k8s.gcr.io/kube-controller-manager:v1.18.3
docker tag daocloud.io/daocloud/kube-scheduler:v1.18.3 k8s.gcr.io/kube-scheduler:v1.18.3
docker tag daocloud.io/daocloud/kube-proxy:v1.18.3 k8s.gcr.io/kube-proxy:v1.18.3
docker tag daocloud.io/daocloud/pause:3.2 k8s.gcr.io/pause:3.2
docker tag daocloud.io/daocloud/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag daocloud.io/daocloud/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7


Remove the original images

docker rmi daocloud.io/daocloud/kube-apiserver:v1.18.3
docker rmi daocloud.io/daocloud/kube-controller-manager:v1.18.3
docker rmi daocloud.io/daocloud/kube-scheduler:v1.18.3
docker rmi daocloud.io/daocloud/kube-proxy:v1.18.3
docker rmi daocloud.io/daocloud/pause:3.2
docker rmi daocloud.io/daocloud/etcd:3.4.3-0
docker rmi daocloud.io/daocloud/coredns:1.6.7
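
The pull/tag/rmi steps above can also be wrapped in a small script; a minimal sketch assuming the same daocloud.io mirror prefix and image list as above:

#!/bin/bash
# Pull each image from the daocloud.io mirror, retag it as k8s.gcr.io, then drop the mirror tag
images="kube-apiserver:v1.18.3 kube-controller-manager:v1.18.3 kube-scheduler:v1.18.3 kube-proxy:v1.18.3 pause:3.2 etcd:3.4.3-0 coredns:1.6.7"
for img in $images; do
    docker pull daocloud.io/daocloud/$img
    docker tag daocloud.io/daocloud/$img k8s.gcr.io/$img
    docker rmi daocloud.io/daocloud/$img
done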

Initialize the Kubernetes master

[root@server1 ~]# kubeadm init  --kubernetes-version=v1.18.3  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.132
#--kubernetes-version specifies the version of the images to use
#--pod-network-cidr specifies the pod network CIDR
#--apiserver-advertise-address specifies the IP address bound to the master node
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.132:6443 --token i0koyd.saeq47izyvrjy9nf \
    --discovery-token-ca-cert-hash sha256:91a672ec090a3419f46b7c69c101bd0d9cb2e462d68d0df2bfe95b55b49c8230

Following the prompt, run the following on the master:

[root@server1 ~]# mkdir -p $HOME/.kube
[root@server1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@server1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
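
Optionally, a quick sanity check (a sketch) that kubectl can now reach the API server:

[root@server1 ~]# kubectl cluster-info
[root@server1 ~]# kubectl get nodes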

Install the network plugin

Install the flannel network plugin on the master node.
Download the kube-flannel.yml file:

[root@server1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Install the flannel plugin:

[root@server1 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Confirm that the pods reach the Running state:

[root@server1 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                          READY   STATUS     RESTARTS   AGE
kube-system   coredns-66bff467f8-7796j                      0/1     Pending    0          22h
kube-system   coredns-66bff467f8-xn8zd                      0/1     Pending    0          22h
kube-system   etcd-server1.example.com                      1/1     Running    0          22h
kube-system   kube-apiserver-server1.example.com            1/1     Running    0          22h
kube-system   kube-controller-manager-server1.example.com   1/1     Running    2          22h
kube-system   kube-flannel-ds-amd64-xxkwl                   0/1     Init:0/1   0          2m48s
kube-system   kube-proxy-6gtmz                              1/1     Running    0          22h
kube-system   kube-scheduler-server1.example.com            1/1     Running    2          22h

Join the nodes

Run the join command from the master's init output on each node:

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.132:6443 --token i0koyd.saeq47izyvrjy9nf \
    --discovery-token-ca-cert-hash sha256:91a672ec090a3419f46b7c69c101bd0d9cb2e462d68d0df2bfe95b55b49c8230

[root@server2 ~]# kubeadm join 192.168.56.132:6443 --token i0koyd.saeq47izyvrjy9nf --discovery-token-ca-cert-hash sha256:91a672ec090a3419f46b7c69c101bd0d9cb2e462d68d0df2bfe95b55b49c8230
W0604 22:42:03.866286    5927 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
 [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
 [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
 [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

This failed. According to the error message, /proc/sys/net/ipv4/ip_forward must be set to 1:

[root@server2 ~]# echo "1"> /proc/sys/net/ipv4/ip_forward
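
Writing to /proc only lasts until the next reboot. A small sketch (assuming the /etc/sysctl.d/k8s.conf file created earlier) to make the setting persistent:

[root@server2 ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf
[root@server2 ~]# sysctl --system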


Turn off swap on the node

[root@server2 ~]# swapoff -a
[root@server2 ~]# vim /etc/fstab 
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0

Try joining the cluster again

[root@server2 ~]# kubeadm join 192.168.56.132:6443 --token i0koyd.saeq47izyvrjy9nf --discovery-token-ca-cert-hash sha256:91a672ec090a3419f46b7c69c101bd0d9cb2e462d68d0df2bfe95b55b49c8230
W0608 20:49:55.508112    7408 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
 [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "i0koyd"

Another error. The master's token is apparently no longer valid (bootstrap tokens expire after 24 hours by default), so generate a new one:

[root@server1 ~]# kubeadm token create
W0608 21:15:49.538646   70263 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
0x79mr.uxkclfoukkm0v6om
[root@server1 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
91a672ec090a3419f46b7c69c101bd0d9cb2e462d68d0df2bfe95b55b49c8230
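
Alternatively, kubeadm can emit the complete join command (new token plus CA cert hash) in one step; a sketch:

[root@server1 ~]# kubeadm token create --print-join-command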

Join the node again (node2 is done the same way as node1).
You can see that node1 has joined the cluster successfully.

[root@server2 ~]# kubeadm join 192.168.56.132:6443 --token 0x79mr.uxkclfoukkm0v6om --discovery-token-ca-cert-hash sha256:91a672ec090a3419f46b7c69c101bd0d9cb2e462d68d0df2bfe95b55b49c8230
W0608 21:25:34.389417    8109 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
 [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster status

[root@server2 mnt]# kubectl get nodes
NAME                  STATUS     ROLES    AGE     VERSION
server1.example.com   Ready      master   7d22h   v1.18.3
server2.example.com   NotReady   <none>   3d      v1.18.3
server3.example.com   Ready      <none>   3d      v1.18.3

You can see that server2 is in the NotReady state.
Check the kubelet status on server2:

[root@server2 ~]# journalctl -f -u kubelet
Jun 15 21:57:04 server2.example.com kubelet[64808]: I0615 21:57:04.119342   64808 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jun 15 21:57:04 server2.example.com kubelet[64808]: W0615 21:57:04.119472   64808 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jun 15 21:57:04 server2.example.com kubelet[64808]: W0615 21:57:04.122984   64808 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jun 15 21:57:04 server2.example.com kubelet[64808]: I0615 21:57:04.123018   64808 docker_service.go:253] Docker cri networking managed by cni
Jun 15 21:57:04 server2.example.com kubelet[64808]: W0615 21:57:04.123086   64808 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

The error shows that /etc/cni does not exist on this node, while the other nodes have it.
Use scp to copy the cni directory from the master to server2:

[root@server1 redhat.io]# scp -r /etc/cni/ root@192.168.56.133:/etc/

Restart kubelet on server2:

[root@server2 ~]# systemctl restart kubelet.service 

Check the kubelet status again: kubelet now fails to start...

[root@server2 etc]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sun 2020-06-28 21:53:56 CST; 6s ago
     Docs: https://kubernetes.io/docs/
  Process: 97952 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 97952 (code=exited, status=255)

Jun 28 21:53:56 server2.example.com systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jun 28 21:53:56 server2.example.com systemd[1]: Unit kubelet.service entered failed state.
Jun 28 21:53:56 server2.example.com systemd[1]: kubelet.service failed.

Check the kubelet service logs:

[root@server2 etc]# journalctl -xefu kubelet
Jun 28 21:50:19 server2.example.com kubelet[96182]: F0628 21:50:19.312158   96182 server.go:274] failed to run Kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

The root cause is a cgroup driver mismatch: kubelet is configured with the "systemd" cgroup driver while Docker is running with "cgroupfs", and the mismatch prevents kubelet from starting.

Change the kubelet cgroup driver to match Docker.
Check which driver Docker is using:

[root@server2 etc]# docker info | grep Driver
WARNING: IPv4 forwarding is disabled
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
 Storage Driver: devicemapper
 Logging Driver: json-file
 Cgroup Driver: cgroupfs

Docker is using cgroupfs, so change kubelet to cgroupfs as well:

[root@server2 etc]# vim /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"
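
The opposite fix (not used here) also works and is what the kubeadm preflight warning recommends: switch Docker to the systemd cgroup driver instead of changing kubelet. A sketch, assuming the /etc/docker/daemon.json edited earlier, followed by a restart of Docker and kubelet:

[root@server2 etc]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@server2 etc]# systemctl restart docker kubelet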

Restart the kubelet service again:

[root@server2 etc]# systemctl restart kubelet.service 
[root@server2 etc]# systemctl status  kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2020-06-28 22:26:56 CST; 21s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 113543 (kubelet)
   Memory: 19.4M
   CGroup: /system.slice/kubelet.service
           └─113543 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.c...

Started successfully!

Check the node status on the master again; this time both worker nodes are in the Ready state.

[root@server1 redhat.io]# kubectl get nodes
NAME                  STATUS   ROLES    AGE    VERSION
server1.example.com   Ready    master   11d    v1.18.3
server2.example.com   Ready    <none>   7d1h   v1.18.3
server3.example.com   Ready    <none>   7d     v1.18.3

Install the dashboard

Download the dashboard YAML file:

[root@server1 ~]# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Install the dashboard:

[root@server1 ~]# kubectl apply -f kubernetes-dashboard.yaml 
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

Check the dashboard status:

[root@server1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                          READY   STATUS             RESTARTS   AGE     IP                NODE                  NOMINATED NODE   READINESS GATES
default       nginx-f89759699-5vj87                         1/1     Running            0          50m     10.244.1.7        server2.example.com   <none>           <none>
kube-system   coredns-66bff467f8-8mdms                      1/1     Running            2          4d22h   10.244.0.6        server1.example.com   <none>           <none>
kube-system   coredns-66bff467f8-jd2xf                      1/1     Running            2          4d22h   10.244.0.7        server1.example.com   <none>           <none>
kube-system   etcd-server1.example.com                      1/1     Running            3          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kube-apiserver-server1.example.com            1/1     Running            5          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kube-controller-manager-server1.example.com   1/1     Running            5          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kube-flannel-ds-amd64-h9vd9                   1/1     Running            2          4d21h   192.168.174.130   server3.example.com   <none>           <none>
kube-system   kube-flannel-ds-amd64-pchlr                   1/1     Running            2          4d21h   192.168.174.129   server2.example.com   <none>           <none>
kube-system   kube-flannel-ds-amd64-r6l59                   1/1     Running            2          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kube-proxy-bdnvh                              1/1     Running            2          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kube-proxy-fnb2z                              1/1     Running            10         4d21h   192.168.174.129   server2.example.com   <none>           <none>
kube-system   kube-proxy-vjmhz                              1/1     Running            5          4d21h   192.168.174.130   server3.example.com   <none>           <none>
kube-system   kube-scheduler-server1.example.com            1/1     Running            5          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kubernetes-dashboard-975499656-mh9ct          0/1     ImagePullBackOff   0          3m44s   10.244.2.3        server3.example.com   <none>           <none>

The dashboard pod is stuck in ImagePullBackOff. After some searching, it turns out that server3 cannot pull the dashboard image, so pull it manually from a mirror:

[root@server3 docker.service.d]# docker pull loveone/kubernetes-dashboard-amd64:v1.10.1
v1.10.1: Pulling from loveone/kubernetes-dashboard-amd64
9518d8afb433: Pulling fs layer 
v1.10.1: Pulling from loveone/kubernetes-dashboard-amd64
9518d8afb433: Pull complete 
Digest: sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
Status: Downloaded newer image for loveone/kubernetes-dashboard-amd64:v1.10.1
docker.io/loveone/kubernetes-dashboard-amd64:v1.10.1

Check the Docker images:

[root@server3 docker.service.d]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.6             c3d62d6fe412        2 weeks ago         117MB
k8s.gcr.io/kube-apiserver            v1.18.6             56acd67ea15a        2 weeks ago         173MB
k8s.gcr.io/kube-controller-manager   v1.18.6             ffce5e64d915        2 weeks ago         162MB
k8s.gcr.io/kube-scheduler            v1.18.6             0e0972b2b5d1        2 weeks ago         95.3MB
quay.io/coreos/flannel               v0.12.0-amd64       4e9f801d2217        4 months ago        52.7MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        5 months ago        683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        6 months ago        43.8MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        9 months ago        288MB
loveone/kubernetes-dashboard-amd64   v1.10.1             f9aed6605b81        19 months ago       122MB

Retag the image

[root@server3 docker.service.d]# docker tag loveone/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
[root@server3 docker.service.d]# docker images
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                   v1.18.6             c3d62d6fe412        2 weeks ago         117MB
k8s.gcr.io/kube-controller-manager      v1.18.6             ffce5e64d915        2 weeks ago         162MB
k8s.gcr.io/kube-apiserver               v1.18.6             56acd67ea15a        2 weeks ago         173MB
k8s.gcr.io/kube-scheduler               v1.18.6             0e0972b2b5d1        2 weeks ago         95.3MB
quay.io/coreos/flannel                  v0.12.0-amd64       4e9f801d2217        4 months ago        52.7MB
k8s.gcr.io/pause                        3.2                 80d28bedfe5d        5 months ago        683kB
k8s.gcr.io/coredns                      1.6.7               67da37a9a360        6 months ago        43.8MB
k8s.gcr.io/etcd                         3.4.3-0             303ce5db0e90        9 months ago        288MB
loveone/kubernetes-dashboard-amd64      v1.10.1             f9aed6605b81        19 months ago       122MB
k8s.gcr.io/kubernetes-dashboard-amd64   v1.10.1             f9aed6605b81        19 months ago       122MB

Remove the original image tag

[root@server3 docker.service.d]# docker image rm loveone/kubernetes-dashboard-amd64:v1.10.1
Untagged: loveone/kubernetes-dashboard-amd64:v1.10.1
Untagged: loveone/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747

Check the dashboard status on the master:

[root@server1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE     IP                NODE                  NOMINATED NODE   READINESS GATES
default       nginx-f89759699-5vj87                         1/1     Running   0          79m     10.244.1.7        server2.example.com   <none>           <none>
kube-system   coredns-66bff467f8-8mdms                      1/1     Running   2          4d22h   10.244.0.6        server1.example.com   <none>           <none>
kube-system   coredns-66bff467f8-jd2xf                      1/1     Running   2          4d22h   10.244.0.7        server1.example.com   <none>           <none>
kube-system   etcd-server1.example.com                      1/1     Running   3          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kube-apiserver-server1.example.com            1/1     Running   5          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kube-controller-manager-server1.example.com   1/1     Running   5          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kube-flannel-ds-amd64-h9vd9                   1/1     Running   2          4d22h   192.168.174.130   server3.example.com   <none>           <none>
kube-system   kube-flannel-ds-amd64-pchlr                   1/1     Running   2          4d22h   192.168.174.129   server2.example.com   <none>           <none>
kube-system   kube-flannel-ds-amd64-r6l59                   1/1     Running   2          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kube-proxy-bdnvh                              1/1     Running   2          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kube-proxy-fnb2z                              1/1     Running   10         4d22h   192.168.174.129   server2.example.com   <none>           <none>
kube-system   kube-proxy-vjmhz                              1/1     Running   5          4d22h   192.168.174.130   server3.example.com   <none>           <none>
kube-system   kube-scheduler-server1.example.com            1/1     Running   5          4d22h   192.168.174.128   server1.example.com   <none>           <none>
kube-system   kubernetes-dashboard-975499656-mh9ct          1/1     Running   0          32m     10.244.2.3        server3.example.com   <none>           <none>

Delete the existing dashboard Service. It lives in the kube-system namespace, but its type is ClusterIP, which is inconvenient to reach from a browser, so it needs to be recreated as a NodePort Service.

[root@server1 ~]# kubectl delete service kubernetes-dashboard --namespace=kube-system
service "kubernetes-dashboard" deleted

Create the configuration file for the dashboard Service:

[root@server1 ~]# vim dashboard-svc.yml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Create the dashboard Service:

[root@server1 ~]# kubectl apply -f dashboard-svc.yml 
service/kubernetes-dashboard created
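
As an alternative to deleting and recreating the Service, the existing one could also have been switched to NodePort in place with kubectl patch; a sketch:

[root@server1 ~]# kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'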

Create a ServiceAccount for accessing the dashboard and bind it to the cluster-admin role:

[root@server1 ~]# vim dashboard-svc-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

Create the ServiceAccount and binding:

[root@server1 ~]# kubectl apply -f dashboard-svc-account.yml 
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created

Find the Secret; it contains the token used to log in to the dashboard:

[root@server1 ~]# kubectl -n kube-system get secret|egrep  kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-kth6g           kubernetes.io/service-account-token   3      2m51s

View the token:

[root@server1 ~]# kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-kth6g|egrep token:
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InJuNldQR2tLWXowOE1pMGpwSWtzVGFRNjQwRmk5dk1uVk12WUFNa2pmUlkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1rdGg2ZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBmMzk5YTlhLWI1NzItNDk4Mi1hZTNjLWMxMjY2NDY3M2MzOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.UFKdOQrzgHQvxCiujbeLRoKAXoA-ewx5VBCKc-n1EHHRJ2QEIJ5EUXDvEplcZ5ZAne-3PmmXpd65atbYYrgtHs0npPBKNEqTN5k4CKeSn7rxAx_V5zmRq0qrP-5vddB1NQYq-C9Ea92gNAvTuGyRqbcejt67XFfaMeKbe5MLMx-nWF_nJ2FnvybNBQ_Rpv8K623FvMfpWYNFZHlrcLndIgvQDcicdGXDcAlTw8c26mvlJ78yhthi5rVmrt4lBJOWuRzZjkAo5PpJT35AvkXp6BU2ZxaqLbgGrivLoJhVe7D67HzB7i9THRSIpNK0hiFnktaqTs4YxJrd32JOo0KAjQ

Check the node IP and port of the dashboard Service (the dashboard pod runs on the master, and the NodePort is 31691):

[root@server1 ~]# kubectl describe -n kube-system pod/kubernetes-dashboard-975499656-n6k5t
Name:         kubernetes-dashboard-975499656-n6k5t
Namespace:    kube-system
Priority:     0
Node:         server1.example.com/192.168.174.128
Start Time:   Wed, 05 Aug 2020 01:34:40 -0700
Labels:       k8s-app=kubernetes-dashboard
              pod-template-hash=975499656
Annotations:  <none>
[root@server1 ~]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   4d23h
kubernetes-dashboard   NodePort    10.98.15.225   <none>        443:31691/TCP            16m

Open the dashboard in a browser at https://ip:31691
(screenshot: dashboard login page)
Paste the token into the token field and click Sign in.
(screenshot: token entry)
After logging in, the page looks like this:
(screenshot: dashboard overview)

Configure a private image registry

Configure the master as the image registry (edit on all nodes):

[root@server1 ~]# vim /etc/docker/daemon.json 
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"insecure-registries": ["192.168.56.132:5000"]
}

Create the private image registry on the master node:

[root@server1 ~]# docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry  registry
Unable to find image 'registry:latest' locally
Trying to pull repository docker.io/library/registry ... 
latest: Pulling from docker.io/library/registry
486039affc0a: Pull complete 
ba51a3b098e6: Pull complete 
8bb4c43d6c8e: Pull complete 
6f5f453e5f2d: Pull complete 
42bc10b72f42: Pull complete 
Digest: sha256:7d081088e4bfd632a88e3f3bcd9e007ef44a796fddfe3261407a3f9f04abe1e7
Status: Downloaded newer image for docker.io/registry:latest
7eee64ce4722e1372f53492dfdac0f460a022b05dfc8d9458513583da0c17917

Test: tag an nginx image for the private registry:

[root@server1 ~]# docker tag docker.io/pseudecoder/centos-nginx:latest 192.168.56.132:5000/nginx:latest

Test: push the nginx image to the private registry:

[root@server1 ~]# docker push 192.168.56.132:5000/nginx:latest
The push refers to a repository [192.168.56.132:5000/nginx]
5f70bf18a086: Pushed 
385103b6bdaf: Pushed 
772fd97d8860: Pushed 
aafdd9551301: Pushed 
62c84ccc943b: Pushed 
d4f57322e1a5: Pushed 
ee1dd2cb6df2: Pushed 
latest: digest: sha256:3dfe5891f891b3c0af86290e33db9591959acf80cf8067dcfc99e962b2a8d2f6 size: 2604

Verify that the image was pushed successfully:

[root@server1 ~]# ls /opt/myregistry/docker/registry/v2/repositories/
nginx
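
As a further check (a sketch, assuming the insecure-registries entry in /etc/docker/daemon.json above is in place on the node), any node should now be able to pull the image back from the private registry:

[root@server2 ~]# docker pull 192.168.56.132:5000/nginx:latest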

At this point the Kubernetes cluster is fully deployed. The next post will cover deploying applications on it.
