Virtual machine environment
server1  Harbor registry
server2  haproxy + pcs
server20 haproxy + pcs
server3  k8s master
server4  k8s master
server5  k8s master
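For reference, a minimal /etc/hosts consistent with the addresses used later in this walkthrough; only the master IPs 172.25.52.3-5 and the VIP 172.25.52.100 appear explicitly below, so the other addresses are assumptions:
# /etc/hosts — distribute to every node
172.25.52.1   server1  reg.westos.org   # Harbor registry (address assumed)
172.25.52.2   server2                   # haproxy + pcs (address assumed)
172.25.52.20  server20                  # haproxy + pcs (address assumed)
172.25.52.3   server3                   # k8s master
172.25.52.4   server4                   # k8s master
172.25.52.5   server5                   # k8s master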
Deploying haproxy + pcs
Prepare the yum repositories on server2 and server20
[dvd]
name=rhel7.6
baseurl=http://172.25.52.250/rhel7.6
gpgcheck=0
[high]
name=rhel7.6 high
baseurl=http://172.25.52.250/rhel7.6/addons/HighAvailability
gpgcheck=0
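The listing above is the repo file content; its location is not shown. Assuming it is saved as /etc/yum.repos.d/dvd.repo (the filename is an assumption), the repos can be verified with:
vim /etc/yum.repos.d/dvd.repo   # paste the [dvd] and [high] sections above
yum repolist                    # both repos should be listed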
Install haproxy and edit its configuration file
yum install -y haproxy
vim /etc/haproxy/haproxy.cfg
defaults
...
listen stats *:80
    stats uri /status
frontend main *:6443
    mode tcp
    default_backend static
backend static
    balance roundrobin
    mode tcp
    server server3 172.25.52.3:6443 check
    server server4 172.25.52.4:6443 check
    server server5 172.25.52.5:6443 check
systemctl enable --now haproxy
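Once haproxy is up, the stats page can be spot-checked from the balancer itself (a quick sketch):
curl -s http://localhost/status | grep -oE 'server[345]' | sort -u   # the three backends appear on the stats page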
Install pcs
On server2 and server20:
yum install -y pacemaker pcs psmisc policycoreutils-python
systemctl enable --now pcsd.service
On server2:
echo westos | passwd --stdin hacluster
ssh server20 'echo westos | passwd --stdin hacluster'
pcs cluster auth server2 server20
pcs cluster setup --name mycluster server2 server20
pcs cluster start --all
pcs cluster enable --all
pcs property set stonith-enabled=false
pcs resource create VIP ocf:heartbeat:IPaddr2 ip=172.25.52.100 op monitor interval=30s
pcs resource create haproxy systemd:haproxy op monitor interval=30s
pcs resource group add Group VIP haproxy
Check pcs status; the cluster is working normally.
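A couple of quick checks (the NIC name eth0 is an assumption):
pcs status                               # VIP and haproxy in group "Group" both Started on one node
ip addr show eth0 | grep 172.25.52.100   # the floating IP sits on the active node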
Deploying docker + k8s
docker
Prepare the docker-ce repository on server3, server4, and server5
[dvd]
name=rhel7.6
baseurl=http://172.25.52.250/rhel7.6
gpgcheck=0
[docker]
name=docker
baseurl=http://172.25.52.250/docker-ce
gpgcheck=0
Install docker-ce
yum install -y docker-ce
Configure the bridge netfilter parameters
vim /etc/sysctl.d/docker.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl --system
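These keys only exist once the br_netfilter module is loaded; if sysctl --system reports that the keys are missing, load the module first (a sketch):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # reload on every boot
sysctl --system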
Prepare daemon.json
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://reg.westos.org"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
Copy the registry certificate: on the Harbor host server1, copy the certs directory to servers 3, 4, and 5
scp -r certs.d/ server3:/etc/docker/certs.d/
scp -r certs.d/ server4:/etc/docker/certs.d/
scp -r certs.d/ server5:/etc/docker/certs.d/
Start docker and check its info
systemctl enable --now docker
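It is worth confirming that the daemon actually picked up daemon.json, for example:
docker info | grep -iE 'cgroup driver|storage driver|registry mirror'
# expect: Cgroup Driver: systemd, Storage Driver: overlay2, and the reg.westos.org mirror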
k8s
Prepare the k8s packages and images; on servers 3, 4, and 5, extract and install the k8s packages
Disable the swap partition (kubelet will not start while swap is active)
swapoff -a
vim /etc/fstab   # comment out the swap line so it stays disabled across reboots
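The same fstab edit can be scripted (a sketch; assumes the swap entry contains the field " swap "):
sed -i '/ swap / s/^/#/' /etc/fstab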
systemctl enable --now kubelet
Install ipvsadm
yum install -y ipvsadm
ipvsadm -ln
lsmod | grep ip_vs
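kube-proxy in ipvs mode loads the ip_vs modules on its own, but they can be preloaded so the lsmod check succeeds right away (a sketch; the module names are the usual RHEL7 set):
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4
lsmod | grep -e ip_vs -e nf_conntrack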
Prepare the deployment file
kubeadm config print init-defaults > kubeadm-init.yaml
vim kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.25.52.3
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: server3
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "172.25.52.100:6443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: reg.westos.org/k8s
kind: ClusterConfiguration
kubernetesVersion: 1.21.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
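Before initializing, it helps to confirm that every required image resolves to the private registry and can be pulled (kubeadm provides these subcommands):
kubeadm config images list --config kubeadm-init.yaml   # every entry should start with reg.westos.org/k8s
kubeadm config images pull --config kubeadm-init.yaml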
Initialize the cluster with the file
kubeadm init --config kubeadm-init.yaml --upload-certs
Note: if initialization fails, you must reset before initializing again: kubeadm reset
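Also note that the certificate key uploaded by --upload-certs expires after two hours; if the control-plane join happens later than that, regenerate it on server3:
kubeadm init phase upload-certs --upload-certs   # prints a fresh certificate key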
Join servers 4 and 5 to the cluster using the generated token
kubeadm join 172.25.52.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:460368156b9f8ad73512805a3e8a0c43594b345d1b70271631243e3c6e6791cd \
    --control-plane --certificate-key 2f36587c1b46b5572d2e6917ab146685d5e9bffabfcdf46a9f4357d16226b454
Configure the environment variable on servers 3, 4, and 5
export KUBECONFIG=/etc/kubernetes/admin.conf
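export only lasts for the current shell; to make it permanent, it can be appended to the profile (a sketch):
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile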
Install flannel on server3
kubectl apply -f kube-flannel.yaml
Check the node running status:
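For example:
kubectl get nodes                         # all three masters should become Ready once flannel is running
kubectl -n kube-system get pods -o wide   # flannel, etcd, and apiserver pods on each master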
Testing high availability
The haproxy monitoring page shows servers 3, 4, and 5 working normally.
Now deploy a pod:
kubectl run demo --image=busybox
Since no worker nodes have been prepared, the pod stays in Pending. Shut down server3; the monitoring page shows server3 going offline,
but the pod status can still be queried from server4 and server5.
Put server2 into standby via pcs (run on server2; with no argument, pcs standbies the local node), and the services migrate to server20:
pcs node standby
The monitoring page shows no change in status.
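To wrap up the failover test, server2 can be brought back and the apiserver checked through the VIP:
pcs node unstandby server2    # take server2 out of standby
kubectl get nodes             # on server4/server5: API traffic still flows via 172.25.52.100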