Deploying a Kubernetes 1.22.1 Multi-Master High-Availability Cluster (with kubeadm)

1. Overview:

This guide uses kubeadm to deploy a Kubernetes 1.22.1 multi-master high-availability cluster.

Node       Hostname   Role     IP            Software
master01   master01   master   192.168.7.2   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
master02   master02   master   192.168.7.3   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
master03   master03   master   192.168.7.4   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
VIP        vip        LB VIP   192.168.7.1   (virtual IP served by haproxy + keepalived)
node       node01     node     192.168.7.5   kubeadm, kubelet, kubectl, docker
node       node02     node     192.168.7.6   kubeadm, kubelet, kubectl, docker
node       node03     node     192.168.7.7   kubeadm, kubelet, kubectl, docker

2. Base Environment Setup:

(Run on all hosts)

Install basic packages and upgrade the kernel:

yum -y install vim git lrzsz wget net-tools bash-completion 

sudo yum -y update performs a full update and downloads roughly 1 GB of packages.
sudo yum update -y kernel upgrades only the kernel and downloads only about 100 MB.
After the kernel upgrade completes, reboot the host, then restart the Docker service (once Docker is installed).
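
The same steps as a single block (the reboot is required for the new kernel to take effect):

# full update (~1 GB); alternatively update only the kernel (~100 MB)
sudo yum -y update
# sudo yum update -y kernel
reboot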

Disable SELinux and the firewall on all nodes:

setenforce 0 \
&& sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config \
&& getenforce

 
systemctl stop firewalld \
&& systemctl daemon-reload \
&& systemctl disable firewalld \
&& systemctl daemon-reload \
&& systemctl status firewalld

Add host entries:

cat >>/etc/hosts<<EOF
192.168.7.2 master01
192.168.7.3 master02
192.168.7.4 master03
192.168.7.5 node01
192.168.7.6 node02
192.168.7.7 node03
EOF

Synchronize system time across the nodes:

yum install ntp
ntpdate cn.pool.ntp.org
timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 1
timedatectl set-ntp 1

Configure the bridge settings:

Packets forwarded by an L2 bridge must pass through the iptables FORWARD chain; the CNI plugin requires this setting. Create /etc/sysctl.d/k8s.conf:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

 
# Apply the changes
modprobe br_netfilter \
&& sysctl -p /etc/sysctl.d/k8s.conf

Disable swap:

swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab
rm -rf /etc/fstab_bak


echo vm.swappiness = 0 >> /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf

Install and configure IPVS:

yum -y install ipvsadm ipset


# Create the ipvs module-loading script

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Run the script and verify the modules are loaded

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Set up SSH key-based login:

[root@master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
b7:ac:c3:65:06:97:80:2a:f6:88:13:9a:dd:8a:a1:d6 root@node1
The key's randomart image is:
+--[ RSA 2048]----+
|       .         |
|      . .        |
|     .   . .     |
|. o .   . o      |
|.* =    So.      |
|* o o    o+.     |
|.+..   . +o      |
|o..E    o.       |
|.       ..       |
+-----------------+

Distribute the public key:

for host in master01 master02 master03; do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; done

Install Docker:

# Remove any previously installed Docker packages

sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

# Install dependencies and add the Docker CE repository

 sudo yum install -y yum-utils
 sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker

sudo yum install docker-ce docker-ce-cli containerd.io

# Start and enable Docker

systemctl start docker
systemctl enable docker
systemctl status docker

Configure the Docker registry mirror and cgroup driver (kubeadm recommends the systemd cgroup driver):

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://aedvu1x8.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF


Restart Docker and verify the cgroup driver:
systemctl restart docker
docker info | grep Cgroup

3. Load Balancer Deployment

Install HAProxy and Keepalived on all master nodes:

yum -y install haproxy keepalived

Configure HAProxy identically on all master nodes:

cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout check           10s
    maxconn                 3000

frontend  k8s_https *:8443
    mode      tcp
    maxconn      2000
    default_backend     https_sri
    
backend https_sri
    balance      roundrobin
    server master1-api 192.168.7.2:6443  check inter 10000 fall 2 rise 2 weight 1
    server master2-api 192.168.7.3:6443  check inter 10000 fall 2 rise 2 weight 1
    server master3-api 192.168.7.4:6443  check inter 10000 fall 2 rise 2 weight 1
EOF

Configure Keepalived on all master nodes. Adjust the priority value on each node according to the plan (the node with the highest priority holds the VIP):

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 80
    priority 100      # adjust this value on each of the three masters
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    virtual_ipaddress {
        192.168.7.1/24  # the planned VIP
    }
    track_script {
        check_haproxy
    }
}
EOF

Deploy the HAProxy health-check script on all master nodes (the path must match the one referenced in the Keepalived configuration):

cat > /etc/keepalived/check_haproxy.sh << 'EOF'
#!/bin/bash
if [ `ps -C haproxy --no-header | wc -l` == 0 ]; then
        systemctl start haproxy
        sleep 3
        if [ `ps -C haproxy --no-header | wc -l` == 0 ]; then
                systemctl stop keepalived
        fi
fi
EOF


chmod +x /etc/keepalived/check_haproxy.sh

4. Kubernetes Cluster Deployment

(Install kubelet, kubeadm, and kubectl on all hosts)

4.1 Install kubelet, kubeadm, and kubectl

# Add the Aliyun Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm, and kubectl:

yum -y install kubelet-1.22.1-0 kubeadm-1.22.1-0 kubectl-1.22.1-0

# Start kubelet and enable it at boot:

systemctl start kubelet
systemctl enable kubelet

At this point kubelet cannot start with its default configuration; its failed status can be ignored until kubeadm generates the configuration during init/join.
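
If you want to confirm why kubelet is failing rather than just ignore it, the logs make it obvious (purely a diagnostic step, not required):

systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20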

Pre-pull the images (on the three master hosts):

# List the required images
[root@master01 tools]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
Pull the images from the Aliyun mirror and re-tag them as k8s.gcr.io (coredns is handled separately below because its repository path differs):

#!/bin/bash
images=(
	kube-apiserver:v1.22.1
	kube-controller-manager:v1.22.1
	kube-scheduler:v1.22.1
	kube-proxy:v1.22.1
	pause:3.5
	etcd:3.5.0-0
#	coredns/coredns:v1.8.4

        )
for imageName in ${images[@]};
do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName       k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
 
docker pull coredns/coredns:1.8.4
docker tag coredns/coredns:1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
docker rmi coredns/coredns:1.8.4   

Save the needed images on a master node:

docker save -o kube-proxy.tar k8s.gcr.io/kube-proxy:v1.22.1
docker save -o coredns.tar k8s.gcr.io/coredns/coredns:v1.8.4
docker save -o pause.tar k8s.gcr.io/pause:3.5

Load the images on the node hosts:

docker load -i kube-proxy.tar
docker load -i coredns.tar
docker load -i pause.tar

4.2 Initialize the Cluster

Run kubeadm config print init-defaults > kubeadm-init.yaml to dump the default configuration, then adapt it to your environment; advertiseAddress, controlPlaneEndpoint, imageRepository, and serviceSubnet need to be modified.
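
A convenient sanity check before the real run: regenerate the defaults and validate the edited file with --dry-run, which renders everything without changing the host (optional; assumes the file name kubeadm-init.yaml used below):

kubeadm config print init-defaults > kubeadm-init.yaml
# after editing the file:
kubeadm init --config kubeadm-init.yaml --dry-run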

[root@master01 tools]# cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.7.2  # this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master01               #hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.7.1:8443"  #vip
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io     # image repository
kind: ClusterConfiguration
kubernetesVersion: 1.22.1       # Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12        # service CIDR
scheduler: {}

Initialize the first master:

[root@master01 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [192.168.4.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [192.168.4.129 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.209.0.1 192.168.4.129 192.168.4.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.506253 seconds
..........
 
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.7.1:8443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7bcdc310d7f92752ba1675a2b83460b891b817f4209c452a61697a36d042436a \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.7.1:8443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7bcdc310d7f92752ba1675a2b83460b891b817f4209c452a61697a36d042436a 

kubeadm init performs the following main steps (each phase can also be re-run on its own; see the sketch after the list):

[init]: initialize with the specified version
[preflight]: run pre-flight checks and pull the required Docker images
[kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; without it kubelet cannot start, which is why kubelet failed before initialization
[certificates]: generate the certificates Kubernetes uses and store them in /etc/kubernetes/pki
[kubeconfig]: generate the kubeconfig files in /etc/kubernetes; the components need them to communicate with each other
[control-plane]: install the master components from the YAML files under /etc/kubernetes/manifests
[etcd]: install etcd using /etc/kubernetes/manifests/etcd.yaml
[wait-control-plane]: wait for the control-plane components started as static Pods to come up
[apiclient]: check the health of the master components
[uploadconfig]: upload the configuration
[kubelet]: configure kubelet via a ConfigMap
[patchnode]: record CNI information on the Node object via annotations
[mark-control-plane]: label the current node with the master role and taint it as unschedulable, so Pods are not scheduled onto masters by default
[bootstrap-token]: generate the bootstrap token; record it, since it is needed later when adding nodes with kubeadm join
[addons]: install the CoreDNS and kube-proxy add-ons
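
Each phase can also be invoked individually; for example, if certificates or kubeconfig files ever need to be regenerated, the corresponding phase can be re-run against the same config file (shown only as an illustration, not as part of the normal flow):

kubeadm init phase certs all --config kubeadm-init.yaml
kubeadm init phase kubeconfig all --config kubeadm-init.yaml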

Prepare the kubeconfig file for kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf

Copy the certificates to the other master nodes (an alternative using kubeadm's built-in certificate upload is sketched after the loop):

for node in master02 master03; do
  ssh $node "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
  scp /etc/kubernetes/pki/ca.crt $node:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key $node:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key $node:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub $node:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt $node:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key $node:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt $node:/etc/kubernetes/pki/etcd/ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key $node:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf $node:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf $node:~/.kube/config
done
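
As an alternative to copying certificates by hand, kubeadm can distribute them itself; this is a sketch of that approach (the <hash> and <certificate-key> values are placeholders, and the uploaded key expires after two hours):

# on master01: upload the control-plane certificates as an encrypted Secret and print the certificate key
kubeadm init phase upload-certs --upload-certs
# on master02/master03: join with that key instead of copying files manually
kubeadm join 192.168.7.1:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>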

Join the other master nodes (run on master02 and master03):

  kubeadm join 192.168.7.1:8443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7bcdc310d7f92752ba1675a2b83460b891b817f4209c452a61697a36d042436a \
	--control-plane 

Join the worker nodes (run on each worker):

kubeadm join 192.168.7.1:8443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7bcdc310d7f92752ba1675a2b83460b891b817f4209c452a61697a36d042436a 

Check the node status:

Before the network add-on is installed, nodes show NotReady; after Calico is installed they turn Ready, which means the cluster is up and you can move on to verifying it.

[root@master01 tools]# kubectl get node
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   21h   v1.22.1
master02   Ready    control-plane,master   20h   v1.22.1
master03   Ready    control-plane,master   21h   v1.22.1
node1      Ready    <none>                 19h   v1.22.1
node2      Ready    <none>                 19h   v1.22.1
node3      Ready    <none>                 19h   v1.22.1

4.3 Enable IPVS mode for kube-proxy:

On any master node, edit the kube-proxy ConfigMap and set mode: "ipvs":

kubectl edit configmap kube-proxy -n kube-system
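
If you prefer a non-interactive change, something like the following also works, assuming the embedded configuration still contains the default mode: "" line (a sketch, not the only way):

kubectl -n kube-system get configmap kube-proxy -o yaml \
  | sed 's/mode: ""/mode: "ipvs"/' \
  | kubectl apply -f -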

On any master node, restart the kube-proxy pods on every node so they pick up the change:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

Verify the change:

[root@master01 tools]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-2kt8z                           1/1     Running   0             3h31m
kube-proxy-dh7wv                           1/1     Running   0             3h31m
kube-proxy-hnkxl                           1/1     Running   0             3h31m
kube-proxy-pmc29                           1/1     Running   0             3h31m
kube-proxy-sf279                           1/1     Running   0             3h31m
kube-proxy-zzmvt                           1/1     Running   0             3h31m

[root@master01 tools]# kubectl logs kube-proxy-2kt8z -n kube-system
I0908 02:04:37.897963       1 node.go:172] Successfully retrieved node IP: 192.168.7.4
I0908 02:04:37.898135       1 server_others.go:140] Detected node IP 192.168.7.4
I0908 02:04:37.963430       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0908 02:04:37.963478       1 server_others.go:274] Using ipvs Proxier.
I0908 02:04:37.963501       1 server_others.go:276] creating dualStackProxier for ipvs.
W0908 02:04:37.963730       1 server_others.go:479] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
W0908 02:04:37.963746       1 server_others.go:528] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
E0908 02:04:37.964537       1 proxier.go:381] "can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1"
I0908 02:04:37.964791       1 proxier.go:440] "IPVS scheduler not specified, use rr by default"
E0908 02:04:37.964991       1 proxier.go:381] "can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1"
I0908 02:04:37.965137       1 proxier.go:440] "IPVS scheduler not specified, use rr by default"
W0908 02:04:37.965181       1 ipset.go:113] ipset name truncated; [KUBE-6-LOAD-BALANCER-SOURCE-CIDR] -> [KUBE-6-LOAD-BALANCER-SOURCE-CID]
W0908 02:04:37.965200       1 ipset.go:113] ipset name truncated; [KUBE-6-NODE-PORT-LOCAL-SCTP-HASH] -> [KUBE-6-NODE-PORT-LOCAL-SCTP-HAS]
I0908 02:04:37.965439       1 server.go:649] Version: v1.22.1
I0908 02:04:37.976468       1 conntrack.go:52] Setting nf_conntrack_max to 393216
I0908 02:04:37.978248       1 config.go:224] Starting endpoint slice config controller
I0908 02:04:37.978270       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0908 02:04:37.978518       1 config.go:315] Starting service config controller
I0908 02:04:37.978538       1 shared_informer.go:240] Waiting for caches to sync for service config
E0908 02:04:37.985384       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"node1.16a2b68079dd3e4a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc046246d7a4d46be, ext:250009788, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-node1", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node1", UID:"node1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "node1.16a2b68079dd3e4a" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I0908 02:04:38.078698       1 shared_informer.go:247] Caches are synced for service config 
I0908 02:04:38.078748       1 shared_informer.go:247] Caches are synced for endpoint slice config 

The log line "Using ipvs Proxier" confirms that IPVS mode is enabled.

5. Install the CNI Network

wget https://docs.projectcalico.org/manifests/calico.yaml  # download calico.yaml

Pre-pull the images on all master nodes (a pull loop is sketched after the list):

[root@master01 tools]# cat calico.yaml |grep image
          image: docker.io/calico/cni:v3.20.0
          image: docker.io/calico/cni:v3.20.0
          image: docker.io/calico/pod2daemon-flexvol:v3.20.0
          image: docker.io/calico/node:v3.20.0
          image: docker.io/calico/kube-controllers:v3.20.0
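
One way to pre-pull them, assuming direct access to Docker Hub (otherwise substitute a registry mirror):

for img in cni:v3.20.0 pod2daemon-flexvol:v3.20.0 node:v3.20.0 kube-controllers:v3.20.0; do
    docker pull docker.io/calico/$img
done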

Deploy the CNI network:

kubectl apply -f calico.yaml

Note: the pod CIDR in calico.yaml must match the value used when initializing the cluster.
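
For reference, the relevant setting in calico.yaml is the CALICO_IPV4POOL_CIDR environment variable of the calico-node container, which ships commented out; uncomment it and set it to the planned pod CIDR (the value below is only an example, not necessarily this cluster's CIDR):

            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/16"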

6. Test the Cluster

Check the pod status once everything is deployed:

[root@master01 tools]# kubectl get pod -o wide -n kube-system
NAME                                       READY   STATUS    RESTARTS      AGE     IP              NODE       NOMINATED NODE   READINESS GATES
calico-kube-controllers-58497c65d5-wjh5c   1/1     Running   0             20h     172.18.59.193   master02   <none>           <none>
calico-node-b7r44                          1/1     Running   0             20h     192.168.7.2     master01   <none>           <none>
calico-node-dm245                          1/1     Running   0             20h     192.168.7.10    master03   <none>           <none>
calico-node-j5hvc                          1/1     Running   0             20h     192.168.7.6     node3      <none>           <none>
calico-node-khxnb                          1/1     Running   0             20h     192.168.7.5     node2      <none>           <none>
calico-node-qw4g6                          1/1     Running   0             20h     192.168.7.3     master02   <none>           <none>
calico-node-zt2sz                          1/1     Running   0             20h     192.168.7.4     node1      <none>           <none>
coredns-78fcd69978-ktnxg                   1/1     Running   0             21h     172.16.241.66   master01   <none>           <none>
coredns-78fcd69978-wfglv                   1/1     Running   0             21h     172.16.241.65   master01   <none>           <none>
etcd-master01                              1/1     Running   1             21h     192.168.7.2     master01   <none>           <none>
etcd-master02                              1/1     Running   0             21h     192.168.7.3     master02   <none>           <none>
etcd-master03                              1/1     Running   0             21h     192.168.7.10    master03   <none>           <none>
kube-apiserver-master01                    1/1     Running   1             21h     192.168.7.2     master01   <none>           <none>
kube-apiserver-master02                    1/1     Running   0             21h     192.168.7.3     master02   <none>           <none>
kube-apiserver-master03                    1/1     Running   0             21h     192.168.7.10    master03   <none>           <none>
kube-controller-manager-master01           1/1     Running   2 (21h ago)   21h     192.168.7.2     master01   <none>           <none>
kube-controller-manager-master02           1/1     Running   0             21h     192.168.7.3     master02   <none>           <none>
kube-controller-manager-master03           1/1     Running   0             21h     192.168.7.10    master03   <none>           <none>
kube-proxy-2kt8z                           1/1     Running   0             3h39m   192.168.7.4     node1      <none>           <none>
kube-proxy-dh7wv                           1/1     Running   0             3h39m   192.168.7.5     node2      <none>           <none>
kube-proxy-hnkxl                           1/1     Running   0             3h39m   192.168.7.2     master01   <none>           <none>
kube-proxy-pmc29                           1/1     Running   0             3h39m   192.168.7.6     node3      <none>           <none>
kube-proxy-sf279                           1/1     Running   0             3h39m   192.168.7.3     master02   <none>           <none>
kube-proxy-zzmvt                           1/1     Running   0             3h39m   192.168.7.10    master03   <none>           <none>
kube-scheduler-master01                    1/1     Running   2 (21h ago)   21h     192.168.7.2     master01   <none>           <none>
kube-scheduler-master02                    1/1     Running   0             21h     192.168.7.3     master02   <none>           <none>
kube-scheduler-master03                    1/1     Running   0             21h     192.168.7.10    master03   <none>           <none>

Check the IPVS status:

[root@master01 tools]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:30009 rr
  -> 172.18.241.65:8443           Masq    1      0          0         
TCP  172.18.241.64:30009 rr
  -> 172.18.241.65:8443           Masq    1      0          0         
TCP  192.168.7.2:30009 rr
  -> 172.18.241.65:8443           Masq    1      0          0         
TCP  10.96.0.0:30009 rr
  -> 172.18.241.65:8443           Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 192.168.7.2:6443             Masq    1      0          0         
  -> 192.168.7.3:6443             Masq    1      0          0         
  -> 192.168.7.10:6443            Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 172.16.241.65:53             Masq    1      0          0         
  -> 172.16.241.66:53             Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 172.16.241.65:9153           Masq    1      0          0         
  -> 172.16.241.66:9153           Masq    1      0          0         
TCP  10.103.28.216:443 rr
  -> 172.18.241.65:8443           Masq    1      0          0         
TCP  10.107.116.120:8000 rr
  -> 172.18.135.1:8000            Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 172.16.241.65:53             Masq    1      0          0         
  -> 172.16.241.66:53             Masq    1      0          0       

Test DNS:

[root@master01 tools]#  nslookup kubernetes.default
Server:		8.8.8.8
Address:	8.8.8.8#53

[root@master01 ~]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-66bdcf564-njcqk:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
 
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
# Output like this means DNS is working correctly

Deploy an Nginx pod as a quick test:
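
The creation command is not shown in the original capture; the pod below was presumably started with something along these lines (the name and image are assumptions):

kubectl run nginx --image=nginx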

[root@master01 tools]# kubectl get pods -o wide|grep nginx
nginx   1/1     Running   0               2m24s   172.18.104.1     node2   <none>           <none>
[root@master01 tools]# curl http://172.18.104.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Seeing the "Welcome to nginx!" page means the pod is running normally, which in turn indicates the cluster is usable.

7. Deploy the Kubernetes Dashboard:

Reference: Deploy and Access the Kubernetes Dashboard | Kubernetes

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml   # download the yaml file

Check which images are required and pre-pull them:

[root@master01 tools]# cat recommended.yaml |grep image
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          image: kubernetesui/metrics-scraper:v1.0.6

kubectl apply -f recommended.yaml  # deploy the Dashboard on a master node

[root@master01 tools]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-856586f554-s42cr   1/1     Running   0          4h42m
kubernetes-dashboard-67484c44f6-bx8rd        1/1     Running   0          4h42m

Configure RBAC:

Reference: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

cat > serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF
cat > clusterrolebinding.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f serviceaccount.yaml
kubectl apply -f clusterrolebinding.yaml

# Change the Service type to NodePort:
kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kubernetes-dashboard

Or edit the Service directly:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Check the Service:

[root@master01 tools]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.107.116.120   <none>        8000/TCP        4h46m
kubernetes-dashboard        NodePort    10.103.28.216    <none>        443:30009/TCP   4h46m

Retrieve the login token for the admin-user account:

[root@master01 tools]# kubectl describe secret $(kubectl get secret -n kubernetes-dashboard | grep admin-user | awk '{print $1}') -n kubernetes-dashboard
Name:         admin-user-token-wfnnk
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 95013c20-7d3e-4582-be87-92307c515038

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Il83TkxmMHJHVV9vTkRYOEJKZHJJYkxhS3AzTzNwUGZsU2ZTeHN2azFWX1kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXdmbm5rIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5NTAxM2MyMC03ZDNlLTQ1ODItYmU4Ny05MjMwN2M1MTUwMzgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.COpfTVuqC42N1B-KlrTVOu_my4cvOforIeukMqK4hacRx1ZBn7HxN10fAezZUHKi_gBEPdIn8ouKgz0QSaipc9zlhJIqlQMz61jne2dYTnr929wisnV1LxOQ92Ig5aCvk0vSHk_SStVzXkAWeFgSmvPT2JpQGqVVicwLJ5_1JT-pt81BxbY-xj40XeQ7nmpiN-Sj4wVuVGZvBniqeS1vgptD4YpTgTZeVWqYl8wStEqLjg4WS7dVqHJvuEZFkLkGbB8v0_FWet1Hcqzo3Lfq3AcdQhJ4lgB8ljON3kRT-44S99tIN6WWg11himU8xmgQzDfkNXIEWspDf210ppeLzw

8. Deploy Metrics Server

Metrics Server collects resource usage metrics for nodes, pods, and other objects.

Reference: https://github.com/kubernetes-sigs/metrics-server

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml  # download the yaml file
[root@master01 tools]# cat components.yaml |grep image
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.0
        imagePullPolicy: IfNotPresent

Note: the image cannot be pulled directly from k8s.gcr.io; obtain it elsewhere and import it with docker load.
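
One possible way to obtain it, assuming the Aliyun google_containers mirror carries this tag (adjust the source registry if it does not), or load a tarball exported from another machine:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.5.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.5.0 k8s.gcr.io/metrics-server/metrics-server:v0.5.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.5.0
# or, from a tarball copied over: docker load -i metrics-server-v0.5.0.tar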

[root@master01 tools]# docker images|grep metrics
k8s.gcr.io/metrics-server/metrics-server   v0.5.0    1c655933b9c5   3 months ago    63.5MB
# Install metrics-server v0.5.0
[root@master01 tools]#  kubectl apply -f ./components-v0.5.0.yaml
 
# Check the metrics-server pod
[root@master01 tools]# kubectl get pods -n kube-system|grep metrics
metrics-server-6dfddc5fb8-m5sf6            1/1     Running   0             15m

Error encountered:

kubectl -n kube-system logs metrics-server-6dfddc5fb8-m5sf6

https://master03:10250/stats/summary?only_cpu_and_memory=true: x509: certificate signed by unknown authority, unable to fully scrape

The problem is "x509: certificate signed by unknown authority": the kubelet serving certificates are not signed by the cluster CA.
Metrics Server supports the --kubelet-insecure-tls flag to skip this check (shown below for reference), but upstream explicitly does not recommend it for production use.
So instead, enable TLS bootstrap signing of the kubelet serving certificates:
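
For reference only, the discouraged workaround would be added to the metrics-server container args in components.yaml roughly like this (the surrounding lines are illustrative, not the complete manifest):

      containers:
      - name: metrics-server
        args:
        - --kubelet-insecure-tls    # skip verification of the kubelet serving certificate
        # ...keep the existing args from the manifest...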

Modify the kubelet configuration on all nodes:

[root@master01 tools]# vim /var/lib/kubelet/config.yaml 

serverTLSBootstrap: true    # add this parameter as the last line

[root@master01 tools]# systemctl restart kubelet  # restart kubelet
[root@master01 tools]# kubectl get csr
NAME        AGE     SIGNERNAME                      REQUESTOR              REQUESTEDDURATION   CONDITION
csr-558ms   5m31s   kubernetes.io/kubelet-serving   system:node:master02   <none>              Pending
csr-cnwmb   6m40s   kubernetes.io/kubelet-serving   system:node:node1      <none>              Pending
csr-pmd55   10m     kubernetes.io/kubelet-serving   system:node:node3      <none>              Pending
csr-q9lfl   5m56s   kubernetes.io/kubelet-serving   system:node:master03   <none>              Pending
csr-rcgqq   5m9s    kubernetes.io/kubelet-serving   system:node:master01   <none>              Pending
csr-smk4v   7m19s   kubernetes.io/kubelet-serving   system:node:node2      <none>              Pending


# All kubelet-serving CSRs are now Pending; approve them (a one-line batch alternative follows the individual commands):

[root@master01 tools]# kubectl certificate approve  csr-cnwmb
certificatesigningrequest.certificates.k8s.io/csr-cnwmb approved
[root@master01 tools]# kubectl certificate approve  csr-pmd55
certificatesigningrequest.certificates.k8s.io/csr-pmd55 approved
[root@master01 tools]# kubectl certificate approve  csr-q9lfl
certificatesigningrequest.certificates.k8s.io/csr-q9lfl approved
[root@master01 tools]# kubectl certificate approve  csr-rcgqq
certificatesigningrequest.certificates.k8s.io/csr-rcgqq approved
[root@master01 tools]# kubectl certificate approve  csr-smk4v
certificatesigningrequest.certificates.k8s.io/csr-smk4v approved
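
Approving them one by one works; all of them can also be approved in a single pipeline (a convenience only, equivalent to the commands above):

kubectl get csr -o name | xargs kubectl certificate approve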

# Confirm the CSRs are now approved and issued:


[root@master01 tools]# kubectl get csr
NAME        AGE   SIGNERNAME                      REQUESTOR              REQUESTEDDURATION   CONDITION
csr-558ms   27m   kubernetes.io/kubelet-serving   system:node:master02   <none>              Approved,Issued
csr-cnwmb   28m   kubernetes.io/kubelet-serving   system:node:node1      <none>              Approved,Issued
csr-pmd55   32m   kubernetes.io/kubelet-serving   system:node:node3      <none>              Approved,Issued
csr-q9lfl   28m   kubernetes.io/kubelet-serving   system:node:master03   <none>              Approved,Issued
csr-rcgqq   27m   kubernetes.io/kubelet-serving   system:node:master01   <none>              Approved,Issued
csr-smk4v   29m   kubernetes.io/kubelet-serving   system:node:node2      <none>              Approved,Issued

[root@master01 tools]# kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master01   455m         3%     2491Mi          15%       
master02   424m         3%     2776Mi          17%       
master03   390m         6%     1557Mi          20%       
node1      199m         1%     1954Mi          12%       
node2      190m         1%     1087Mi          6%        
node3      210m         1%     1151Mi          7%     

The Dashboard web UI will now also display the resource metrics.
