I. Deployment environment
Host list:
Kubernetes version:
II. High-availability architecture
This article builds a highly available Kubernetes cluster with kubeadm. The cluster's high availability is really the high availability of its core components; we use an active/standby model here, with the following architecture:
Notes on the active/standby HA architecture:
apiserver is made highly available with keepalived; when a node fails, keepalived moves the VIP to a healthy node;
controller-manager elects a leader internally (controlled by the --leader-elect flag, default true); only one controller-manager instance is active in the cluster at any moment;
scheduler elects a leader internally (controlled by the --leader-elect flag, default true); only one scheduler instance is active in the cluster at any moment;
etcd becomes highly available through the cluster that kubeadm creates automatically; deploy an odd number of members, and a 3-member cluster tolerates at most one machine down.
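The odd-member sizing above comes from etcd's quorum rule: a cluster of n members needs floor(n/2)+1 members alive, so it tolerates floor((n-1)/2) failures. A quick sketch of the arithmetic:

```shell
# etcd fault tolerance: quorum = n/2 + 1 (integer division),
# tolerated failures = n - quorum.
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```

A 3-member cluster therefore survives one node down, and adding a fourth member would add no extra fault tolerance.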
III. Installation preparation
Perform this part on all control plane and worker nodes.
The firewall and SELinux were disabled and the Aliyun yum mirrors configured when installing CentOS.
1. Configure hostnames
1.1 Set the hostname (run the matching command on each host)
hostnamectl set-hostname master01
[root@master01 ~]# cat >> /etc/hosts << EOF
172.27.34.3 master01
172.27.34.4 master02
172.27.34.5 master03
172.27.34.93 work01
172.27.34.94 work02
172.27.34.95 work03
EOF
2. Disable swap
2.1 Temporarily disable
swapoff -a
2.2 Permanently disable
To keep swap disabled across reboots, also edit /etc/fstab after disabling swap and comment out the swap line.
3. Kernel parameter changes
This article uses flannel for the cluster network, which requires the kernel parameter bridge-nf-call-iptables=1; setting it requires the br_netfilter module.
3.1 Load the br_netfilter module
Check for the br_netfilter module:
[root@master01 ~]# lsmod |grep br_netfilter
If the module is not present, run the commands below to add it; otherwise skip ahead.
Temporarily load br_netfilter:
modprobe br_netfilter
Persistently load br_netfilter:
[root@master01 ~]# cat > /etc/rc.sysinit << "EOF"
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
[root@master01 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules
3.2 Temporarily set the kernel parameters
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1
3.3 Permanently set the kernel parameters
[root@master01 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
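As a convenience, the two settings can be verified with a small helper (an illustrative addition, not part of the original steps; the file path is an argument so it can be pointed at any copy of the conf file):

```shell
# check_bridge_sysctl FILE: succeed iff both bridge-nf-call keys
# are set to 1 in FILE (a plain text check of the conf file).
check_bridge_sysctl() {
  conf="$1"
  for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
    grep -q "^${key}[[:space:]]*=[[:space:]]*1" "$conf" \
      || { echo "$key not set in $conf" >&2; return 1; }
  done
  echo "bridge sysctls OK"
}
```

For example, `check_bridge_sysctl /etc/sysctl.d/k8s.conf` should print `bridge sysctls OK` after the step above.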
4. Configure the Kubernetes repository
4.1 Add the Kubernetes repo
[root@master01 ~]# cat < /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master01 ~]# yum clean all
[root@master01 ~]# yum -y makecache
5. Passwordless SSH login
5.1 Generate a key pair
[root@master01 ~]# ssh-keygen -t rsa
5.2 Copy the public key to master02 and master03
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.4
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.5
IV. Docker installation
Perform this part on all control plane and worker nodes.
1. Install dependencies
[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
2. Configure the Docker repository
[root@master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. Install Docker CE
3.1 Check available Docker versions
[root@master01 ~]# yum list docker-ce --showduplicates | sort -r
[root@master01 ~]# yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y
[root@master01 ~]# systemctl start docker
[root@master01 ~]# systemctl enable docker
4. Command completion
4.1 Install bash-completion
[root@master01 ~]# yum -y install bash-completion
[root@master01 ~]# source /etc/profile.d/bash_completion.sh
5. Registry mirror
Log in at https://cr.console.aliyun.com to obtain a mirror accelerator address (register an Aliyun account first if you don't have one).
6. Configure the registry mirror
Create the daemon.json file:
[root@master01 ~]# mkdir -p /etc/docker
[root@master01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
7. Change the cgroup driver
Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:
[root@master01 ~]# more /etc/docker/daemon.json
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
Changing the cgroup driver clears this kubeadm warning:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
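A quick way to confirm the driver setting is in place is a small helper (hypothetical, added for illustration; it checks the file only, not the running daemon):

```shell
# daemon_json_uses_systemd FILE: succeed iff FILE requests the systemd
# cgroup driver via exec-opts (simple text check, not a full JSON parse).
daemon_json_uses_systemd() {
  grep -q '"native.cgroupdriver=systemd"' "$1"
}
```

For example, `daemon_json_uses_systemd /etc/docker/daemon.json && echo "systemd driver configured"`.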
V. keepalived installation
Perform this part on all control plane nodes.
1. Install keepalived
[root@master01 ~]# yum -y install keepalived
2. Configure keepalived
keepalived configuration on master01:
[root@master01 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id master01
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id 50
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.27.34.130
}
}
keepalived configuration on master02:
[root@master02 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id master02
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 50
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.27.34.130
}
}
keepalived configuration on master03:
[root@master03 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id master03
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 50
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.27.34.130
}
}
3. Start the keepalived service on all control plane nodes and enable it at boot
[root@master01 ~]# service keepalived start
[root@master01 ~]# systemctl enable keepalived
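With the priorities configured above (master01=100, master02=90, master03=80), keepalived holds the VIP on the highest-priority node that is still alive. The election logic can be sketched as:

```shell
# elect_master NAME:PRIORITY ...: print the name of the highest-priority
# live node, i.e. the node that would hold the VIP.
elect_master() {
  printf '%s\n' "$@" | sort -t: -k2 -rn | head -n1 | cut -d: -f1
}
```

`elect_master master01:100 master02:90 master03:80` prints master01; drop master01 from the arguments (simulating its failure) and the VIP moves to master02.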
VI. Kubernetes installation
Perform this part on all control plane and worker nodes.
[root@master01 ~]# yum list kubelet --showduplicates | sort -r
The kubelet version installed is 1.16.4, which supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09.
[root@master01 ~]# yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4
kubelet runs on every node in the cluster and starts Pods and containers.
kubeadm initializes and bootstraps the cluster.
kubectl is the command-line tool for talking to the cluster; it can deploy and manage applications, inspect resources, and create, delete, and update components.
[root@master01 ~]# systemctl enable kubelet && systemctl start kubelet
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
Image download
The script below pulls the component images from an Aliyun mirror repository and retags them as k8s.gcr.io images:
[root@master01 ~]# more image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.16.4
images=($(kubeadm config images list --kubernetes-version=$version | awk -F '/' '{print $2}'))
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done
[root@master01 ~]# ./image.sh
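The awk step in image.sh just strips the registry prefix from each image name. Its effect can be seen on a few sample lines (the image list below is illustrative, not captured from a real run):

```shell
# Strip the "k8s.gcr.io/" prefix, keeping "name:tag", as image.sh does.
printf '%s\n' \
  'k8s.gcr.io/kube-apiserver:v1.16.4' \
  'k8s.gcr.io/etcd:3.3.15-0' \
  'k8s.gcr.io/coredns:1.6.2' \
  | awk -F '/' '{print $2}'
```

This prints kube-apiserver:v1.16.4, etcd:3.3.15-0, and coredns:1.6.2; image.sh then pulls each of those names from $url and retags them under k8s.gcr.io.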
VII. Initialize the master (run on master01)
[root@master01 ~]# more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
  certSANs:    # list the hostname and IP of every kube-apiserver node, plus the VIP
  - master01
  - master02
  - master03
  - work01
  - work02
  - work03
  - 172.27.34.3
  - 172.27.34.4
  - 172.27.34.5
  - 172.27.34.93
  - 172.27.34.94
  - 172.27.34.95
  - 172.27.34.130
controlPlaneEndpoint: "172.27.34.130:6443"
networking:
  podSubnet: "10.244.0.0/16"
[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml
Record the kubeadm join commands from the output; they are needed later to join the worker nodes and the other control plane nodes to the cluster.
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
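If the join command is lost, a new token can be created with `kubeadm token create`, and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA with the standard openssl pipeline (a sketch; it assumes the default /etc/kubernetes/pki/ca.crt path):

```shell
# ca_cert_hash CERT: print the sha256 of the DER-encoded public key of CERT,
# i.e. the hex value expected after "sha256:" in --discovery-token-ca-cert-hash.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# e.g.: echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
```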
Initialization failure:
If initialization fails, run kubeadm reset and then initialize again:
[root@master01 ~]# kubeadm reset
[root@master01 ~]# rm -rf $HOME/.kube/config
Load environment variables
[root@master01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Install the flannel network
[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
VIII. Join the other control plane nodes to the cluster
Certificate distribution
master01 distributes the certificates:
Run the script cert-main-master.sh on master01 to copy the certificates to master02 and master03.
[root@master01 ~]# ll|grep cert-main-master.sh
-rwxr--r-- 1 root root 638 Jan 2 15:23 cert-main-master.sh
[root@master01 ~]# more cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="172.27.34.4 172.27.34.5"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
master02 moves the certificates into the expected directories:
Run the script cert-other-master.sh on master02.
[root@master02 ~]# more cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@master02 ~]# ./cert-other-master.sh
master03 moves the certificates into place:
Run the script cert-other-master.sh on master03 as well.
Join master02 to the cluster:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane
Join master03 to the cluster:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane
Load environment variables
[root@master02 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master02 ~]# source .bash_profile
[root@master03 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master03 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master03 ~]# source .bash_profile
This step makes kubectl usable on master02 and master03 as well.
[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system
IX. Join the worker nodes to the cluster
Run on work01, work02, and work03:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
X. Client configuration
[root@client ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@client ~]# yum clean all
[root@client ~]# yum -y makecache
Install a kubectl version that matches the cluster:
[root@client ~]# yum install -y kubectl-1.16.4
Command completion: install bash-completion
[root@client ~]# yum -y install bash-completion
[root@client ~]# source /etc/profile.d/bash_completion.sh
Copy admin.conf
[root@client ~]# mkdir -p /etc/kubernetes
[root@client ~]# scp 172.27.34.3:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@client ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@client ~]# source .bash_profile
[root@client ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@client ~]# source .bash_profile
[root@client ~]# kubectl get nodes
[root@client ~]# kubectl get cs
[root@client ~]# kubectl get po -o wide -n kube-system
XI. Dashboard setup
[root@client ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
[root@client ~]# sed -i 's#kubernetesui#registry.cn-hangzhou.aliyuncs.com/loong576#g' recommended.yaml
External access
[root@client ~]# sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
This configures a NodePort Service: the Dashboard becomes reachable externally at https://NodeIp:NodePort, here on port 30001.
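After the edit, the Service section of recommended.yaml should end up looking roughly like this (a sketch of the expected result, not verbatim file content):

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
```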
Add an administrator account
[root@client ~]# cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin -------------------
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF
[root@client ~]# kubectl apply -f recommended.yaml
[root@client ~]# kubectl get all -n kubernetes-dashboard
View the token
[root@client ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin
https://VIP:30001
Log in using the token.
Scripts: https://github.com/loong576/Centos7.6-install-k8s-v1.16.4-HA-cluster