Kubernetes: Initializing a k8s Cluster with kubeadm (1.16.9)

1. System Configuration

1.1 Virtual Machine Preparation

Create the following virtual machines in VirtualBox, using the CentOS-7-x86_64-Minimal-2003 image:

Hostname      IP              Memory  CPU      Disk
k8s-master1   192.168.10.250  2G      2 cores  40G
k8s-master2   192.168.10.251  2G      2 cores  40G
k8s-master3   192.168.10.252  2G      2 cores  40G
k8s-node1     192.168.10.243  2G      2 cores  40G
k8s-node2     192.168.10.244  2G      2 cores  40G

1.2 Passwordless SSH Login Configuration

Install sshpass and set up SSH key-based login:

yum install -y sshpass

The script is as follows:

#!/bin/bash
# Generate an SSH key pair, then push the public key to every node with sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
NET=192.168.10
export SSHPASS=123456
for IP in 250 251 252 243 244
do
	sshpass -e ssh-copy-id -f root@$NET.$IP -o StrictHostKeyChecking=no
done

Configure name resolution in /etc/hosts (do this on all nodes as well; see the sketch after this block):

cat >> /etc/hosts <<EOF
192.168.10.250 k8s-master1
192.168.10.251 k8s-master2
192.168.10.252 k8s-master3
192.168.10.243 k8s-node1
192.168.10.244 k8s-node2
EOF
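
One way to get the same entries onto every node (a small sketch, not part of the original steps) is to copy the master's /etc/hosts over the SSH keys configured above:

# Push the local /etc/hosts (which now contains the entries above) to the other nodes
NET=192.168.10
for IP in 251 252 243 244
do
    scp /etc/hosts root@$NET.$IP:/etc/hosts
done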

Test that SSH login works:

[root@k8s-master1 ~]# ssh k8s-node1
The authenticity of host 'k8s-node1 (192.168.10.243)' can't be established.
ECDSA key fingerprint is SHA256:DyhSW+CT9jOd+Q3T3GfZbeo5ibKk4pVZugz9YRoeHW0.
ECDSA key fingerprint is MD5:a3:35:70:7a:00:90:d1:ca:c1:08:a0:3b:4a:18:8a:ed.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8s-node1' (ECDSA) to the list of known hosts.
Last login: Tue Aug 18 15:04:29 2020 from 192.168.10.38

1.3 System Time Configuration

# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

Install the NTP time service:

yum install -y ntp ntpdate
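
After installation the clock can be synchronized once and ntpd enabled for ongoing sync. A minimal sketch, assuming ntp1.aliyun.com as the upstream server (any reachable NTP server works):

# One-off synchronization, then keep ntpd running so all nodes stay aligned
ntpdate ntp1.aliyun.com
systemctl start ntpd && systemctl enable ntpd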

1.4 System Configuration (1.4.1, 1.4.2 and 1.4.8 are optional)

1.4.1 Stop Unneeded System Services

systemctl stop postfix && systemctl disable postfix

1.4.2 Configure rsyslogd and systemd-journald

# Directory for persistent log storage
mkdir /var/log/journal 
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent

# Compress historical logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Maximum disk space used by the journal
SystemMaxUse=10G

# Maximum size of a single log file: 200M
SystemMaxFileSize=200M

# Keep logs for 2 weeks
MaxRetentionSec=2week

# Do not forward logs to syslog
ForwardToSyslog=no
EOF

Restart the systemd-journald service:

systemctl restart systemd-journald

1.4.3 Update Packages

yum update -y

1.4.4 Stop the Firewall and Disable It at Boot

systemctl stop firewalld.service && systemctl disable firewalld.service

1.4.5 Configure Kernel Parameters

Without these kernel parameters, kubeadm init fails with errors such as:

	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
	[ERROR Swap]: running with swap on is not supported. Please disable swap

Configure the following parameters to avoid those errors:

cat > /etc/sysctl.d/k8s.conf <<EOF
# Required
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
# Avoid using swap unless the system is out of memory
vm.swappiness=0
# Optional
# Disable IPv6
net.ipv6.conf.all.disable_ipv6=1
# Do not check whether enough physical memory is available
vm.overcommit_memory=1
# Do not panic on OOM (let the OOM killer handle it)
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
EOF

sysctl -p /etc/sysctl.d/k8s.conf

If this reports the errors below, the br_netfilter kernel module is not loaded; load it with modprobe br_netfilter:

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
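
To make the module load survive a reboot, it can also be registered with systemd's modules-load mechanism (a sketch; not required by the original steps):

# Load br_netfilter now and on every boot, then re-apply the sysctl parameters
modprobe br_netfilter
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF
sysctl -p /etc/sysctl.d/k8s.conf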

1.4.6 Disable Swap

In addition to setting vm.swappiness=0 as above, turn swap off and comment out the swap line in /etc/fstab, as shown below:

swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab
[root@k8s-master2 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Mon Aug 17 11:38:53 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=f08e8c55-51f5-4e10-9cba-784d8b98af0b /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
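
A quick optional check that swap really is off:

free -m      # the Swap line should show 0 total and 0 used
swapon -s    # should list no swap devices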

1.4.7 Disable SELinux

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

1.4.8 Prerequisites for Enabling IPVS in kube-proxy

Note: if kube-proxy is to forward traffic in ipvs mode, the ipvs kernel modules must be loaded:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install ipset and ipvsadm:

yum install -y ipset ipvsadm

2. Installing and Configuring Required Components

The following components are required:

kubelet, kubeadm, kubectl, docker-ce, docker-ce-cli

2.1 Configure Package Repositories

yum install -y wget vim
# Configure the Docker repository
wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Configure the Kubernetes repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg \
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.2 Install Components

Install as follows on all master nodes:

yum install -y kubelet kubeadm kubectl docker-ce docker-ce-cli

Install as follows on all worker nodes:

yum install -y kubelet kubeadm docker-ce docker-ce-cli

Note: specific versions can be installed as follows:

yum install -y kubelet-1.16.9 kubeadm-1.16.9 kubectl-1.16.9 docker-ce-19.03.5 docker-ce-cli-19.03.5 
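
If you are unsure which versions the repository provides, list them first (standard yum usage):

yum list kubelet kubeadm kubectl --showduplicates | sort -r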

If installation fails with a dependency error like the one below, work around it by adding the --setopt=obsoletes=0 option: install kubelet first, then the remaining components.

Error: Package: kubelet-1.14.2-0.x86_64 (kubernetes)
          Requires: kubernetes-cni = 0.7.5
          Available: kubernetes-cni-0.3.0.1-0.07a8a2.x86_64 (kubernetes)
              kubernetes-cni = 0.3.0.1-0.07a8a2
          Available: kubernetes-cni-0.5.1-0.x86_64 (kubernetes)
              kubernetes-cni = 0.5.1-0
          Available: kubernetes-cni-0.5.1-1.x86_64 (kubernetes)
              kubernetes-cni = 0.5.1-1
          Available: kubernetes-cni-0.6.0-0.x86_64 (kubernetes)
              kubernetes-cni = 0.6.0-0
          Available: kubernetes-cni-0.7.5-0.x86_64 (kubernetes)
              kubernetes-cni = 0.7.5-0
          Installing: kubernetes-cni-0.8.6-0.x86_64 (kubernetes)
              kubernetes-cni = 0.8.6-0
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

yum install -y --setopt=obsoletes=0 kubelet-1.16.9
yum install -y kubeadm-1.16.9 kubectl-1.16.9 docker-ce-19.03.5 docker-ce-cli-19.03.5

2.3 Component Configuration

Start docker and enable it at boot:

systemctl start docker && systemctl enable docker

Configure the Docker Cgroup Driver to be systemd:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF

Restart docker so the configuration takes effect:

systemctl daemon-reload && systemctl restart docker

Check that the change took effect:

[root@k8s-master1 ~]# docker info | grep Cgroup
Cgroup Driver: systemd

Enable kubelet to start at boot:

systemctl enable kubelet

Note: configure kubelet to ignore the swap error shown below:

	[ERROR Swap]: running with swap on is not supported. Please disable swap
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF

3. Cluster Initialization

3.1 Single-Master Initialization

3.1.1 Init Configuration File

Initialize on k8s-master1 first: export the default init configuration and use ipvs.

kubeadm config print init-defaults > kubeadm-config.yaml

Modify it as follows:

cat > /root/kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.250
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.9
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF
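
Optionally, the required images can be pulled ahead of time with the same configuration file so the init step itself runs faster:

# Show and pre-pull the images kubeadm will use (from registry.aliyuncs.com/google_containers)
kubeadm config images list --config kubeadm-config.yaml
kubeadm config images pull --config kubeadm-config.yaml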

3.1.2 Initialize the Master Node

kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

Note for version 1.14.2:

  • The init command is kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
  • The apiVersion is kubeadm.k8s.io/v1beta1

[root@k8s-master1 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.16.9
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.250]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.10.250 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.10.250 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 35.505861 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
9423183598abcbfeff4b9151d17d8cb4a6420de926b0b7a27c125b438efff78a
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.250:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8586729bb076ed2eef2099e63b16994165bb7b47d4d4d02dd6828e54472da75d

Set up kubeconfig as instructed in the output above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the node status; it is NotReady because the Flannel network plugin still needs to be deployed.

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
k8s-master1   NotReady   master   5m31s   v1.16.9

Deploy Flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Watch the Flannel pod in the kube-system namespace until it is Running, then check the node status again; it is now Ready.

[root@k8s-master1 ~]# kubectl get pods -n kube-system | grep flannel
kube-flannel-ds-amd64-cmx7r           1/1     Running   0          2m23s
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   8m14s   v1.16.9

3.1.3 Join the Worker Nodes

Run the following command on k8s-node1 and k8s-node2:

kubeadm join 192.168.10.250:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8586729bb076ed2eef2099e63b16994165bb7b47d4d4d02dd6828e54472da75d

If the join command has been lost, it can be regenerated with:

kubeadm token create --print-join-command
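
The --discovery-token-ca-cert-hash value can also be recomputed by hand from the cluster CA on a master node (the standard openssl pipeline documented for kubeadm):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'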

Check the node and pod status: the nodes are Ready and every pod is Running, so the single-master k8s cluster is up.

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    master   21m   v1.16.9
k8s-node1     Ready    <none>   93s   v1.16.9
k8s-node2     Ready    <none>   83s   v1.16.9
[root@k8s-master1 ~]# kubectl get pods -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-58cc8c89f4-8ntwv              1/1     Running   0          20m
kube-system   coredns-58cc8c89f4-wl6dl              1/1     Running   0          20m
kube-system   etcd-k8s-master1                      1/1     Running   0          19m
kube-system   kube-apiserver-k8s-master1            1/1     Running   0          20m
kube-system   kube-controller-manager-k8s-master1   1/1     Running   0          20m
kube-system   kube-flannel-ds-amd64-cmx7r           1/1     Running   0          13m
kube-system   kube-flannel-ds-amd64-ff9b6           1/1     Running   2          86s
kube-system   kube-flannel-ds-amd64-g69lw           1/1     Running   2          96s
kube-system   kube-proxy-dz5rv                      1/1     Running   0          96s
kube-system   kube-proxy-l5wpb                      1/1     Running   0          86s
kube-system   kube-proxy-srm5h                      1/1     Running   0          20m
kube-system   kube-scheduler-k8s-master1            1/1     Running   0          19m

The <none> role shown for the worker nodes can be changed by labeling them, as follows:

[root@k8s-master1 ~]# kubectl label node k8s-node1 node-role.kubernetes.io/worker=worker
node/k8s-node1 labeled
[root@k8s-master1 ~]# kubectl label node k8s-node2 node-role.kubernetes.io/worker=worker
node/k8s-node2 labeled
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   26m     v1.16.9
k8s-node1     Ready    worker   6m48s   v1.16.9
k8s-node2     Ready    worker   6m38s   v1.16.9

For common kubectl commands, see: https://blog.csdn.net/cyfblog/article/details/97639312

3.2 Highly Available Master Deployment

3.2.1 Reset the k8s Cluster

Reset the cluster built above: run kubeadm reset on every master and worker node, clear the ipvs rules with ipvsadm --clear, and delete the .kube directory on the master node.

[root@k8s-master1 ~]# kubeadm reset 
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "k8s-master1" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
W0825 17:26:47.451606   20594 removeetcdmember.go:61] [reset] failed to remove etcd member: error syncing endpoints with etc: etcdclient: no available endpoints
.Please manually remove this etcd member using etcdctl
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s-master1 ~]# ipvsadm --clear
[root@k8s-master1 ~]# rm -rf .kube/
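
Optionally, a more thorough cleanup on every node removes leftover iptables rules and CNI interfaces before re-initializing (a sketch; adapt to your environment):

# Flush rules left behind by kube-proxy and remove leftover Flannel/CNI interfaces
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ip link delete cni0 2>/dev/null
ip link delete flannel.1 2>/dev/null
rm -rf /etc/cni/net.d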

3.2.2 Deploy nginx and keepalived

Deploying and configuring keepalived

Install keepalived on all three nodes k8s-master1, k8s-master2 and k8s-master3:

yum install -y keepalived

Configure k8s-master1 as the MASTER node:

cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
   router_id LVS_DEVEL
}

vrrp_script chk_http_port {
    script "/root/nginx_check.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface enp0s3
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.44
    }
    track_script {
        chk_http_port
    }
}
EOF

Configure k8s-master2 as a BACKUP node:

cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
   router_id LVS_DEVEL
}

vrrp_script chk_http_port {
    script "/root/nginx_check.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface enp0s3
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.44
    }
    track_script {
        chk_http_port
    }
}
EOF

Configure k8s-master3 as a BACKUP node:

cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
   router_id LVS_DEVEL
}

vrrp_script chk_http_port {
    script "/root/nginx_check.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface enp0s3
    virtual_router_id 51
    priority 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.44
    }
    track_script {
        chk_http_port
    }
}
EOF

The nginx_check.sh health-check script is as follows:

#!/bin/bash
# If nginx is not running, try to restart its container; if it still is not
# running after 2 seconds, stop keepalived so the VIP fails over to a backup.
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
	docker restart nginx
	sleep 2
	if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
		killall keepalived
	fi
fi
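
The script must exist at /root/nginx_check.sh (the path referenced in the keepalived configuration) on all three master nodes and be executable:

chmod +x /root/nginx_check.sh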

Once configured, start the keepalived service on each master node:

systemctl start keepalived && systemctl enable keepalived
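
The VIP 192.168.10.44 should now be bound to the enp0s3 interface of the node currently acting as MASTER (k8s-master1 while it is healthy); a quick check:

ip addr show enp0s3 | grep 192.168.10.44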

Deploying and configuring nginx

nginx.conf mainly configures the stream module, as follows:

cat > /root/nginx.conf <<EOF
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

stream {
    upstream k8s {                      
        server 192.168.10.250:6443;             
        server 192.168.10.251:6443;             
        server 192.168.10.252:6443;
    }
    server {
        listen 7443;
        proxy_pass k8s;                   
    }
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}
EOF

nginx is deployed with docker; run the following command on each master node:

docker run -d --name nginx -p 80:80 -p 7443:7443 -v /root/nginx.conf:/etc/nginx/nginx.conf nginx
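
Before initializing the cluster, it is worth confirming that the load balancer is listening on port 7443 on each master (the backends will only respond once the apiservers are up):

docker ps | grep nginx    # the container should be Up
ss -lntp | grep 7443      # port 7443 should be in LISTEN state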

3.2.3 Deploy the Highly Available Master Nodes

kubeadm-config.yaml is configured as follows (the main change is controlPlaneEndpoint: "192.168.10.44:7443", i.e. the VIP address plus the nginx load-balancer port):

cat > /root/kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.250
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.10.44:7443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.9
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF

Initialize the k8s-master1 node with kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log:

[root@k8s-master1 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.16.9
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.250 192.168.10.44]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.10.250 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.10.250 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 35.034155 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
b26e8133d9d4ef98f87bf7c9478a7b0bef3639086d9eba79539af3b73b65c0cf
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.10.44:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4c2b5c4dbd509069bb99fe5b554a7994b487ca4bbddb6b0ecd33081d26ab3821 \
    --control-plane --certificate-key b26e8133d9d4ef98f87bf7c9478a7b0bef3639086d9eba79539af3b73b65c0cf

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use 
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.44:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4c2b5c4dbd509069bb99fe5b554a7994b487ca4bbddb6b0ecd33081d26ab3821

As the log shows, there is now an additional join command; this is the command for joining nodes with the master role:

  kubeadm join 192.168.10.44:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4c2b5c4dbd509069bb99fe5b554a7994b487ca4bbddb6b0ecd33081d26ab3821 \
    --control-plane --certificate-key b26e8133d9d4ef98f87bf7c9478a7b0bef3639086d9eba79539af3b73b65c0cf

Next, join k8s-master2 and k8s-master3 to the cluster, as shown below:

[root@k8s-master2 kubernetes]#   kubeadm join 192.168.10.44:7443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:4c2b5c4dbd509069bb99fe5b554a7994b487ca4bbddb6b0ecd33081d26ab3821 \
>     --control-plane --certificate-key b26e8133d9d4ef98f87bf7c9478a7b0bef3639086d9eba79539af3b73b65c0cf
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master2 localhost] and IPs [192.168.10.251 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master2 localhost] and IPs [192.168.10.251 127.0.0.1 ::1]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.251 192.168.10.44]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-08-26T08:36:23.206+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.10.251:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Set up kubeconfig as above; note that the server address in the config is now the VIP address:

[root@k8s-master1 ~]# cat .kube/config | grep server
    server: https://192.168.10.44:7443

Deploy Flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check node and pod status: all nodes have the master role and are Ready, and every pod is Running. The highly available master setup is complete.

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   7m3s    v1.16.9
k8s-master2   Ready    master   5m8s    v1.16.9
k8s-master3   Ready    master   2m53s   v1.16.9
[root@k8s-master1 ~]# kubectl get pods -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-58cc8c89f4-24sp8              1/1     Running   0          6m47s
kube-system   coredns-58cc8c89f4-zfv9h              1/1     Running   0          6m47s
kube-system   etcd-k8s-master1                      1/1     Running   0          5m59s
kube-system   etcd-k8s-master2                      1/1     Running   0          5m9s
kube-system   etcd-k8s-master3                      1/1     Running   0          2m54s
kube-system   kube-apiserver-k8s-master1            1/1     Running   0          5m54s
kube-system   kube-apiserver-k8s-master2            1/1     Running   0          5m10s
kube-system   kube-apiserver-k8s-master3            1/1     Running   0          2m55s
kube-system   kube-controller-manager-k8s-master1   1/1     Running   1          5m50s
kube-system   kube-controller-manager-k8s-master2   1/1     Running   0          5m10s
kube-system   kube-controller-manager-k8s-master3   1/1     Running   0          2m55s
kube-system   kube-flannel-ds-amd64-4nnxw           1/1     Running   0          50s
kube-system   kube-flannel-ds-amd64-qj2fn           1/1     Running   0          50s
kube-system   kube-flannel-ds-amd64-xmgw5           1/1     Running   0          50s
kube-system   kube-proxy-55wgv                      1/1     Running   0          2m55s
kube-system   kube-proxy-sf4h5                      1/1     Running   0          5m10s
kube-system   kube-proxy-sh2w5                      1/1     Running   0          6m47s
kube-system   kube-scheduler-k8s-master1            1/1     Running   1          5m52s
kube-system   kube-scheduler-k8s-master2            1/1     Running   0          5m10s
kube-system   kube-scheduler-k8s-master3            1/1     Running   0          2m55s

Finally, join the worker nodes to the cluster; the highly available k8s cluster is now complete.

[root@k8s-node1 ~]# kubeadm join 192.168.10.44:7443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:4c2b5c4dbd509069bb99fe5b554a7994b487ca4bbddb6b0ecd33081d26ab3821
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
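
As a final check, kube-proxy should be running in ipvs mode: the IPVS virtual server table on any node should list the service entries created by kube-proxy, and its logs should mention the ipvs proxier (the k8s-app=kube-proxy label is the one kubeadm's DaemonSet uses):

ipvsadm -Ln
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs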