Enterprise Operations -- Docker-Kubernetes High-Availability Cluster Deployment (unfinished...)

This article shows how to load-balance the K8s master nodes with haproxy and how to build a highly available Kubernetes cluster with 3 master nodes. The lab environment uses 6 virtual machines; the steps cover deploying haproxy, configuring the K8s hosts, creating the cluster services, and setting up the VIP. The article is currently unfinished.


K8s High Availability + Load Balancing Cluster

How it works:

In this lab, haproxy load-balances the k8s master hosts, and three k8s master nodes make the k8s cluster highly available.

Lab environment:

Six virtual machines are required. At least 3 master nodes are needed for a highly available k8s cluster (an odd number preserves etcd quorum).

Hostname   IP            Role
server1    172.25.12.1   Harbor registry
server5    172.25.12.5   haproxy load balancer
server6    172.25.12.6   haproxy load balancer
server7    172.25.12.7   k8s cluster master node
server8    172.25.12.8   k8s cluster master node
server9    172.25.12.9   k8s cluster master node

Deployment

Yum repositories and host name resolution

[root@server5 ~]# vim /etc/hosts
[root@server5 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.12.250	foundation12.ilt.example.com
172.25.12.1	server1 reg.westos.org
172.25.12.2	server2
172.25.12.3	server3
172.25.12.4	server4
172.25.12.5	server5
172.25.12.6	server6
172.25.12.7	server7
172.25.12.8	server8
172.25.12.9	server9
[root@server5 ~]# scp /etc/hosts server6:/etc/hosts
[root@server5 ~]# scp /etc/hosts server7:/etc/hosts
[root@server5 ~]# scp /etc/hosts server8:/etc/hosts
[root@server5 ~]# scp /etc/hosts server9:/etc/hosts
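
The four scp commands above can also be collapsed into one loop; a minimal sketch, assuming root ssh access to each node:

[root@server5 ~]# for h in server6 server7 server8 server9; do scp /etc/hosts $h:/etc/hosts; done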


[root@server5 yum.repos.d]# vim dvd.repo 
[root@server5 yum.repos.d]# cat dvd.repo 
[dvd]
name=dvd
baseurl=http://172.25.12.250/rhel7.6
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.12.250/rhel7.6/addons/HighAvailability
gpgcheck=0


[root@server5 yum.repos.d]# cat docker.repo 
[docker]
name=docker-ce
baseurl=http://172.25.12.250/docker-ce
gpgcheck=0


[root@server5 yum.repos.d]# scp dvd.repo docker.repo server6:/etc/yum.repos.d/
[root@server5 yum.repos.d]# scp dvd.repo docker.repo server7:/etc/yum.repos.d/
[root@server5 yum.repos.d]# scp dvd.repo docker.repo server8:/etc/yum.repos.d/
[root@server5 yum.repos.d]# scp dvd.repo docker.repo server9:/etc/yum.repos.d/
Haproxy load-balancer deployment

Install pacemaker and its related components on server5/6 and enable pcsd at boot

[root@server5 ~]# yum install -y pacemaker pcs psmisc policycoreutils-python
[root@server6 ~]# yum install -y pacemaker pcs psmisc policycoreutils-python

[root@server5 ~]# systemctl enable --now pcsd.service 
[root@server6 ~]# systemctl enable --now pcsd.service 

Set the hacluster user password on server5/6 to westos

[root@server5 ~]# passwd hacluster 
Changing password for user hacluster.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@server6 ~]# passwd hacluster 
Changing password for user hacluster.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
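
To avoid the interactive prompt, the password can also be set non-interactively; a sketch using passwd --stdin, which is available on RHEL 7:

[root@server5 ~]# echo westos | passwd --stdin hacluster
[root@server6 ~]# echo westos | passwd --stdin hacluster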

Authenticate server5 and server6 with pcs (user hacluster, password westos)

[root@server5 ~]# pcs cluster auth server5 server6
Username: hacluster   
Password: 
server5: Authorized
server6: Authorized

Create a cluster named mycluster with server5 and server6 as members

[root@server5 ~]# pcs cluster setup --name mycluster server5 server6
Destroying cluster on nodes: server5, server6...
server5: Stopping Cluster (pacemaker)...
server6: Stopping Cluster (pacemaker)...
server5: Successfully destroyed cluster
server6: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'server5', 'server6'
server5: successful distribution of the file 'pacemaker_remote authkey'
server6: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
server5: Succeeded
server6: Succeeded

Synchronizing pcsd certificates on nodes server5, server6...
server5: Success
server6: Success
Restarting pcsd on the nodes in order to reload the certificates...
server5: Success
server6: Success

Start the cluster services on all nodes and enable them at boot

[root@server5 ~]# pcs cluster start --all
server5: Starting Cluster (corosync)...
server6: Starting Cluster (corosync)...
server5: Starting Cluster (pacemaker)...
server6: Starting Cluster (pacemaker)...
[root@server5 ~]# pcs cluster enable --all
server5: Cluster Enabled
server6: Cluster Enabled

Disable STONITH to clear the fence warnings (no fence devices are configured in this lab)

[root@server5 ~]# crm_verify -L -V
   error: unpack_resources:	Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources:	Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
[root@server5 ~]# pcs status 
Cluster name: mycluster

WARNINGS:
No stonith devices and stonith-enabled is not false

Stack: corosync
Current DC: server6 (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
Last updated: Fri Aug  6 23:29:46 2021
Last change: Fri Aug  6 23:24:34 2021 by hacluster via crmd on server6

2 nodes configured
0 resources configured

Online: [ server5 server6 ]

No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@server5 ~]# pcs property set stonith-enabled=false
[root@server5 ~]# crm_verify -L -V
[root@server5 ~]# pcs status 
Cluster name: mycluster
Stack: corosync
Current DC: server6 (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
Last updated: Fri Aug  6 23:30:57 2021
Last change: Fri Aug  6 23:30:48 2021 by root via cibadmin on server5

2 nodes configured
0 resources configured

Online: [ server5 server6 ]

No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Create the VIP 172.25.12.100

[root@server5 ~]# pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.12.100 op monitor interval=30s
[root@server5 ~]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:7f:34:da brd ff:ff:ff:ff:ff:ff
    inet 172.25.12.5/24 brd 172.25.12.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 172.25.12.100/24 brd 172.25.12.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe7f:34da/64 scope link 
       valid_lft forever preferred_lft forever

Check pcs status (this output was captured later, after the haproxy resource and group in the following steps had been added):

[root@server5 docker]# pcs status 
Cluster name: mycluster
Stack: corosync
Current DC: server5 (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
Last updated: Sun Aug  8 02:03:32 2021
Last change: Sat Aug  7 03:39:44 2021 by root via cibadmin on server5

2 nodes configured
2 resources configured

Online: [ server5 server6 ]

Full list of resources:

 Resource Group: haproup
     vip	(ocf::heartbeat:IPaddr2):	Started server5
     haproxy	(systemd:haproxy):	Started server5

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Install the haproxy service and edit its configuration file

[root@server5 ~]# yum install -y haproxy
[root@server5 ~]# cd /etc/haproxy/
[root@server5 haproxy]# vim haproxy.cfg 
# Listen on port 80 for the load-balancer status page
listen stats *:80
    stats uri /status

# Load-balancer frontend on port 6443, tcp mode
frontend main *:6443
    mode tcp
    default_backend             app

# Backend nodes k8s1/k8s2/k8s3, health-checking each master's port 6443
backend app
    balance     roundrobin
    mode tcp
    server  k8s1 172.25.12.7:6443 check
    server  k8s2 172.25.12.8:6443 check
    server  k8s3 172.25.12.9:6443 check
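
Before starting the service, the file can be syntax-checked with haproxy's -c flag (a quick sanity check, not part of the original transcript):

[root@server5 haproxy]# haproxy -c -f /etc/haproxy/haproxy.cfg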

Start the haproxy service and check that the listening ports are open

[root@server5 haproxy]# systemctl start haproxy.service 
[root@server5 haproxy]# netstat -antlp |grep :6443
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      8187/haproxy        
[root@server5 haproxy]# netstat -antlp |grep :80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      8599/haproxy       

The k8s backends show as down in the stats page because the master hosts have not been deployed yet.

Perform the same setup on server6; after testing access, stop the haproxy service (pacemaker will manage it from here on)

[root@server6 ~]# yum install -y haproxy.x86_64 
[root@server5 haproxy]# systemctl stop haproxy.service 
[root@server5 haproxy]# ls
haproxy.cfg
[root@server5 haproxy]# scp haproxy.cfg server6:/etc/haproxy/
root@server6's password: 
haproxy.cfg                               100% 2682     4.5MB/s   00:00    
[root@server6 ~]# systemctl start haproxy.service 


Add the haproxy service to the pcs cluster as a resource

[root@server5 haproxy]# pcs resource create haproxy systemd:haproxy op monitor interval=60s
[root@server5 haproxy]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: server6 (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
Last updated: Sat Aug  7 00:51:05 2021
Last change: Sat Aug  7 00:50:28 2021 by root via cibadmin on server5

2 nodes configured
2 resources configured

Online: [ server5 server6 ]

Full list of resources:

 vip    (ocf::heartbeat:IPaddr2):       Started server5
 haproxy        (systemd:haproxy):      Started server6

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Group the vip and haproxy resources so they always run on the same node

[root@server5 haproxy]# pcs resource group add haproup vip haproxy
[root@server5 haproxy]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: server6 (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
Last updated: Sat Aug  7 00:52:33 2021
Last change: Sat Aug  7 00:52:30 2021 by root via cibadmin on server5

2 nodes configured
2 resources configured

Online: [ server5 server6 ]

Full list of resources:

 Resource Group: haproup
     vip        (ocf::heartbeat:IPaddr2):       Started server5
     haproxy    (systemd:haproxy):      Starting server5

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
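
With vip and haproxy in one group they will always fail over together. Failover can be exercised by putting the active node into standby; a sketch using the RHEL 7 pcs syntax (both resources should move to server6, and unstandby makes server5 eligible again):

[root@server5 haproxy]# pcs cluster standby server5
[root@server5 haproxy]# pcs status           # vip and haproxy should now be Started on server6
[root@server5 haproxy]# pcs cluster unstandby server5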

K8s high-availability cluster deployment

Deploy docker on the master nodes; everything should end up running. If the service fails to restart, a likely cause is a mistake in daemon.json.

[root@server7 ~]# yum install -y docker-ce
[root@server7 ~]# systemctl enable --now docker
[root@server8 ~]# yum install -y docker-ce
[root@server8 ~]# systemctl enable --now docker
[root@server9 ~]# yum install -y docker-ce
[root@server9 ~]# systemctl enable --now docker

[root@server7 ~]# cd /etc/docker/
[root@server7 docker]# ls
key.json
[root@server7 docker]# vim daemon.json
[root@server7 docker]# cat daemon.json 
{
  "registry-mirrors":["https://reg.westos.org"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
[root@server7 docker]# systemctl restart docker
[root@server7 docker]# systemctl status docker

   Active: active (running) since Sat 2021-08-07 22:19:17 EDT; 1s ago


[root@server7 docker]# scp daemon.json server8:/etc/docker/
[root@server7 docker]# scp daemon.json server9:/etc/docker/
[root@server8 ~]# systemctl restart docker
[root@server8 ~]# systemctl status docker
[root@server9 ~]# systemctl restart docker
[root@server9 ~]# systemctl status docker
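
Since a malformed daemon.json is the usual cause of a failed restart (as noted above), the file can be validated before restarting; a sketch using Python's built-in JSON checker plus a docker info query:

[root@server7 docker]# python -m json.tool /etc/docker/daemon.json   # prints the file if valid, errors out otherwise
[root@server7 docker]# docker info | grep -i 'cgroup driver'         # should show systemd once the daemon.json option takes effect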

On server7, docker info shows two WARNINGs; write /etc/sysctl.d/docker.conf and reload the kernel parameters, after which the WARNINGs disappear.

[root@server7 docker]# docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.15
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-957.el7.x86_64
 Operating System: Red Hat Enterprise Linux Server 7.6 (Maipo)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 1.952GiB
 Name: server7
 ID: TXAW:B7U3:XPGX:PBPE:E56R:CDQ2:OMOD:DENZ:ZRKW:JR4W:NNIA:S3NK
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
[root@server7 sysctl.d]# pwd 
/etc/sysctl.d/
[root@server7 sysctl.d]# ls
99-sysctl.conf  docker.conf
[root@server7 sysctl.d]# cat docker.conf 
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

[root@server7 docker]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/docker.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
* Applying /etc/sysctl.conf ...
[root@server7 docker]# docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.15
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-957.el7.x86_64
 Operating System: Red Hat Enterprise Linux Server 7.6 (Maipo)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 1.952GiB
 Name: server7
 ID: TXAW:B7U3:XPGX:PBPE:E56R:CDQ2:OMOD:DENZ:ZRKW:JR4W:NNIA:S3NK
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
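
One caveat: the net.bridge.* keys only exist while the br_netfilter module is loaded. If sysctl --system cannot find them (for example after a reboot), load the module and make it persistent; a sketch using the standard modules-load mechanism:

[root@server7 ~]# modprobe br_netfilter
[root@server7 ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf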

Repeat on server8 and server9

[root@server7 sysctl.d]# scp docker.conf server8:/etc/sysctl.d/
[root@server7 sysctl.d]# scp docker.conf server9:/etc/sysctl.d/
[root@server8 docker]# sysctl --system
[root@server8 docker]# docker info
[root@server9 docker]# sysctl --system
[root@server9 docker]# docker info

Distribute the registry certificates

[root@server1 ~]# cd /etc/docker/
[root@server1 docker]# ls
certs.d  key.json
[root@server1 docker]# scp -r certs.d/ server7:/etc/docker/
[root@server1 docker]# scp -r certs.d/ server8:/etc/docker/
[root@server1 docker]# scp -r certs.d/ server9:/etc/docker/
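
With the certificates in place, access to the private registry can be verified with a test pull (an optional check; the image path matches the k8s project pulled later in this article):

[root@server7 ~]# docker pull reg.westos.org/k8s/pause:3.4.1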

Install ipvsadm and the ipvs kernel modules so kube-proxy can use IPVS mode, which reduces the pressure on iptables.

[root@server7 ~]# yum install -y ipvsadm.x86_64 
[root@server7 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@server7 ~]# lsmod | grep ip_vs
ip_vs                 145497  0 
nf_conntrack          133095  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack


[root@server8 ~]# yum install -y ipvsadm.x86_64 
[root@server8 ~]# ipvsadm -l
[root@server8 ~]# lsmod | grep ip_vs
[root@server9 ~]# yum install -y ipvsadm.x86_64 
[root@server9 ~]# ipvsadm -l
[root@server9 ~]# lsmod | grep ip_vs
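
The ip_vs modules shown above were loaded on demand. To guarantee they are present after a reboot, they can be listed in modules-load.d; a sketch (module names match the 3.10 RHEL 7 kernel):

[root@server7 ~]# cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF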

Disable the swap partition and comment out the swap entry in /etc/fstab

[root@server7 ~]# swapoff -a 
[root@server7 ~]# vim /etc/fstab 
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@server8 ~]# swapoff -a 
[root@server8 ~]# vim /etc/fstab 
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@server9 ~]# swapoff -a 
[root@server9 ~]# vim /etc/fstab 
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
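
The same edit can be scripted instead of done in vim; a one-liner sketch that disables swap and comments out any uncommented swap entry in /etc/fstab:

[root@server7 ~]# swapoff -a && sed -i '/^[^#].*swap/s/^/#/' /etc/fstab   # skips lines that are already commented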

Enable address masquerading so the VMs can reach the Internet, then install the kubelet components

[root@foundation12 images]# iptables -t nat -I POSTROUTING -s 172.25.12.0/24 -j MASQUERADE 

[root@server7 ~]# cd /etc/yum.repos.d/
[root@server7 yum.repos.d]# ls
docker.repo  dvd.repo  redhat.repo
[root@server7 yum.repos.d]# vim k8s.repo
[root@server7 yum.repos.d]# cat k8s.repo 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
[root@server7 ~]# yum install -y kubeadm kubelet kubectl

[root@server7 yum.repos.d]# scp k8s.repo server8:/etc/yum.repos.d/
[root@server7 yum.repos.d]# scp k8s.repo server9:/etc/yum.repos.d/

[root@server8 ~]# yum install -y kubeadm kubelet kubectl
[root@server9 ~]# yum install -y kubeadm kubelet kubectl

Start the kubelet service and enable it at boot

[root@server7 ~]# systemctl enable --now kubelet.service 
[root@server8 ~]# systemctl enable --now kubelet.service 
[root@server9 ~]# systemctl enable --now kubelet.service 

On server7, generate the kubeadm-init.yaml configuration file and edit it for high availability (controlPlaneEndpoint points at the VIP 172.25.12.100:6443, images come from the private registry, and kube-proxy uses IPVS mode)

[root@server7 ~]# kubeadm config print init-defaults > kubeadm-init.yaml
[root@server7 ~]# ls
kubeadm-1.21.3.tar.gz  kubeadm-init.yaml  packages
[root@server7 ~]# vim kubeadm-init.yaml 
[root@server7 ~]# cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.25.12.7
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: server7
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "172.25.12.100:6443"
controllerManager: {}
dns: 
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: reg.westos.org/k8s
kind: ClusterConfiguration
kubernetesVersion: 1.21.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Pull the images

[root@server7 ~]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled reg.westos.org/k8s/kube-apiserver:v1.21.3
[config/images] Pulled reg.westos.org/k8s/kube-controller-manager:v1.21.3
[config/images] Pulled reg.westos.org/k8s/kube-scheduler:v1.21.3
[config/images] Pulled reg.westos.org/k8s/kube-proxy:v1.21.3
[config/images] Pulled reg.westos.org/k8s/pause:3.4.1
[config/images] Pulled reg.westos.org/k8s/etcd:3.4.13-0
[config/images] Pulled reg.westos.org/k8s/coredns:v1.8.0
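
The article stops here, but the typical next steps would be to initialize the first control plane from this config and join server8/server9 as additional masters; a sketch only (the token, hash, and certificate key are placeholders printed by kubeadm init):

[root@server7 ~]# kubeadm init --config kubeadm-init.yaml --upload-certs
# join the remaining masters with the control-plane join command printed above, e.g.:
[root@server8 ~]# kubeadm join 172.25.12.100:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --certificate-key <key>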

To be continued…
