Installing a Highly Available K8S Cluster with Kubeadm -- Multiple Masters, Single Node

master1: IP 192.168.1.180/24, OS CentOS 7.6
master2: IP 192.168.1.181/24, OS CentOS 7.6
node1:   IP 192.168.1.182/24, OS CentOS 7.6

1. Initialize the installation environment for the k8s cluster

1.1 Configure static IPs

Assign the IP addresses listed above to the three nodes.

1.2 Set the node hostnames

#master1
hostnamectl set-hostname master1 && bash
#master2
hostnamectl set-hostname master2 && bash
#node1
hostnamectl set-hostname node1 && bash

1.3 Configure hostname resolution

#master1
cat > /etc/hosts <<END
> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
> 192.168.1.180 master1
> 192.168.1.181 master2
> 192.168.1.182 node1
> END

 scp /etc/hosts master2:/etc/hosts
 scp /etc/hosts node1:/etc/hosts

1.4 Configure passwordless SSH login

#master1
ssh-keygen
ssh-copy-id master1
ssh-copy-id master2
ssh-copy-id node1
#master2
ssh-keygen
ssh-copy-id master2
ssh-copy-id master1
ssh-copy-id node1
#node1
ssh-keygen
ssh-copy-id node1
ssh-copy-id master1
ssh-copy-id master2

1.5 Disable the swap partition

#master1
swapoff -a  #disable swap temporarily
To disable it permanently, comment out the following line in /etc/fstab:
#/dev/mapper/centos-swap swap swap defaults 0 0
Do the same on master2 and node1.
Why disable swap?
Kubernetes is designed with performance in mind and does not allow swap; if it is left enabled, kubeadm init will report an error.
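If you prefer to comment out the fstab entry non-interactively, here is a small convenience sketch (not part of the original steps; it comments every line containing "swap" and then verifies):

sed -i 's/.*swap.*/#&/' /etc/fstab   #comment out any swap lines in fstab
free -m                              #the Swap row should show 0 after swapoff -a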

1.6 Adjust kernel parameters to enable forwarding

#master1
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<END
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
END
sysctl -p /etc/sysctl.d/k8s.conf

#br_netfilter is not loaded automatically at boot by default; the following makes it load at startup (note the quoted 'END' so $file is written literally instead of being expanded now)
cat > /etc/rc.sysinit <<'END'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
END
cd /etc/sysconfig/modules/
cat > br_netfilter.modules <<END
modprobe br_netfilter
END
chmod 755 br_netfilter.modules

#Do the same on master2 and node1.
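Optional check that the module is loaded and the sysctl values took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward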

1.7 Disable the firewall and SELinux

systemctl disable firewalld --now
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0

1.8 Configure package repositories

#Base yum repository

mkdir /root/repo.bak

mv /etc/yum.repos.d/* /root/repo.bak

cd /etc/yum.repos.d/

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

#Configure the docker-ce repository

yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

#Configure the Aliyun repository needed to install the k8s components

cat > /etc/yum.repos.d/kubernetes.repo <<END
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
END

Do all of the above on all three machines. (gpgcheck is set to 0 because no gpgkey is configured in this repo file; alternatively, add the Aliyun GPG key URLs and keep gpgcheck=1.)

1.9 Configure time synchronization

#Do the following on master1, master2 and node1
yum install -y ntpdate
ntpdate ntp1.aliyun.com

crontab -e
0 */1 * * * /usr/sbin/ntpdate ntp1.aliyun.com   #sync once an hour ("* */1 * * *" would run every minute)
systemctl restart crond

1.10 Enable ipvs

ipvs (IP Virtual Server) works at layer 4.

kube-proxy supports two modes: iptables and ipvs.

The ipvs mode appeared in k8s 1.8 and became stable in 1.11; the iptables mode appeared in k8s 1.1.

Both ipvs and iptables are built on netfilter; ipvs stores its rules in hash tables.

Differences between ipvs and iptables:

1. ipvs provides better scalability and performance for large clusters;

2. ipvs supports more sophisticated load-balancing algorithms than iptables;

3. ipvs supports server health checks, connection retries, and similar features.

#Do the following on master1, master2 and node1
cat > /etc/sysconfig/modules/ipvs.modules <<'END'
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
 /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
 if [ $? -eq 0 ]; then
 /sbin/modprobe ${kernel_module}
 fi
done
END
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
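Optional check that the ip_vs modules are loaded:

lsmod | grep -e ip_vs -e nf_conntrack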

1.11 Install base packages

#Do the following on master1, master2 and node1
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet 

1.12 Install iptables

#Do the following on master1, master2 and node1
yum install -y iptables-services
systemctl disable iptables --now
iptables -F

2. Install the docker service

2.1 Install docker and start the service

#Do the following on master1, master2 and node1
yum install -y docker-ce docker-ce-cli containerd.io
systemctl enable docker --now

2.2 Configure registry mirrors

cat > /etc/docker/daemon.json <<END
{
  "registry-mirrors": [
    "https://5vrctq3v.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://dockerhub.azk8s.cn",
    "http://hub-mirror.c.163.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
END
systemctl daemon-reload
systemctl restart docker
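Optional check that docker picked up the systemd cgroup driver (kubelet 1.23 defaults to systemd, so the two must match) and the mirrors:

docker info | grep -i cgroup
docker info | grep -iA 6 "registry mirrors"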

3. Install the packages needed to initialize k8s

3.1 Install kubeadm, kubelet and kubectl (on master1, master2 and node1)

yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
systemctl enable kubelet --now
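Quick version check (optional):

kubeadm version -o short
kubelet --version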

4. Make the k8s apiserver highly available with keepalived + nginx

4.1 Install nginx and keepalived (active/standby)

#Install nginx and keepalived on master1 and master2
yum install -y nginx keepalived
yum -y install nginx-all-modules    #without this package the nginx service fails to start (the stream block below needs the stream module)

4.2 Edit the nginx configuration file (it is identical on the active and standby nodes)

#Do this on both Master1 and Master2
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
stream {
 log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
 access_log /var/log/nginx/k8s-access.log main;
 upstream k8s-apiserver {
   server 192.168.1.180:6443;    #Master1 APISERVER IP:PORT
   server 192.168.1.181:6443;    #Master2 APISERVER IP:PORT
 }
 server {
  listen 16443;  #since nginx shares these hosts with the apiserver, this listen port must not be 6443 or it would conflict
  proxy_pass k8s-apiserver;
 }
}
http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
   #include /etc/nginx/conf.d/*.conf;
    server {
        listen       80 default_server;
        server_name  _;
        location = / {
        }
    }
}
systemctl enable nginx --now
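Optional sanity check that the configuration parses and the 16443 listener is up:

nginx -t
ss -lntp | grep 16443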

4.3 Configure keepalived

#Make Master1 the MASTER
#vrrp_script: script that checks whether nginx is working (used to decide failover)
#virtual_ipaddress: the virtual IP (VIP)
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat /etc/keepalived/keepalived.conf 
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192    #change to your actual interface name
    virtual_router_id 51  #VRRP virtual router ID; unique per VRRP instance
    priority 100   #priority; set 90 on the backup server
    advert_int 1    #VRRP advertisement (heartbeat) interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    #virtual IP
    virtual_ipaddress {
        192.168.1.199/24
    }
    track_script {
       check_nginx
    }
}


#The check script
#Note: the script counts nginx processes; when the count is 0 (nginx is not running) it stops keepalived, so the VIP fails over to the backup.
cat /etc/keepalived/check_nginx.sh
#!/bin/bash 
count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$") 
if [ "$count" -eq 0 ];then 
 systemctl stop keepalived 
fi

chmod a+x /etc/keepalived/check_nginx.sh

systemctl enable keepalived --now
#Make Master2 the BACKUP
#vrrp_script: script that checks whether nginx is working (used to decide failover)
#virtual_ipaddress: the virtual IP (VIP)
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat /etc/keepalived/keepalived.conf 
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192    #change to your actual interface name
    virtual_router_id 51  #VRRP virtual router ID; unique per VRRP instance
    priority 90   #priority; the master server is set to 100
    advert_int 1    #VRRP advertisement (heartbeat) interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    #virtual IP
    virtual_ipaddress {
        192.168.1.199/24
    }
    track_script {
       check_nginx
    }
}

#The check script
#Note: the script counts nginx processes; when the count is 0 (nginx is not running) it stops keepalived, so the VIP fails over to the backup.
cat /etc/keepalived/check_nginx.sh
#!/bin/bash 
count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$") 
if [ "$count" -eq 0 ];then 
 systemctl stop keepalived 
fi
chmod a+x /etc/keepalived/check_nginx.sh

systemctl enable keepalived --now
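Optional check: the VIP should now sit on master1 (and move to master2 if nginx on master1 goes down); adjust the interface name to yours:

ip addr show ens192 | grep 192.168.1.199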

5. Initialize the k8s cluster with kubeadm

#Create the kubeadm-config.yaml file on Master1
cd /root
cat > kubeadm-config.yaml <<END
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
controlPlaneEndpoint: 192.168.1.199:16443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.1.180
  - 192.168.1.181
  - 192.168.1.182
  - 192.168.1.199
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.10.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
END
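Optionally pre-pull the control-plane images before running init; this makes init faster and surfaces registry problems early:

kubeadm config images pull --config kubeadm-config.yaml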

kubeadm init --config kubeadm-config.yaml 
#If the command fails during preflight checks, append --ignore-preflight-errors=SystemVerification:
kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
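kubectl should now work on master1; the master will show NotReady until the calico network plugin is installed in section 8:

kubectl get nodes
kubectl get pods -n kube-system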

6. Scale out the k8s cluster -- add a master node (master2)

Create the certificate directories on master2

cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/

Copy the certificates from master1

#On master1
cd /etc/kubernetes/pki
scp ca.crt master2:/etc/kubernetes/pki/
scp ca.key master2:/etc/kubernetes/pki
scp sa.key master2:/etc/kubernetes/pki
scp sa.pub master2:/etc/kubernetes/pki/
scp front-proxy-ca.crt master2:/etc/kubernetes/pki/
scp front-proxy-ca.key master2:/etc/kubernetes/pki/
cd /etc/kubernetes/pki/etcd/
scp ca.crt master2:/etc/kubernetes/pki/etcd/
scp ca.key master2:/etc/kubernetes/pki/etcd/

#copy config
cd /root/.kube
scp config master2:/root/.kube/
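The same copies can be done in one loop if you prefer (an equivalent sketch of the scp commands above):

for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do scp /etc/kubernetes/pki/$f master2:/etc/kubernetes/pki/; done
scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} master2:/etc/kubernetes/pki/etcd/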
#On master1
kubeadm token create --print-join-command
#This prints a join command; copy it, append --control-plane (the certificates were already copied above), and run it on master2:
kubeadm join 192.168.1.199:16443 --token hsjuj7.v42f3yop422uy25y --discovery-token-ca-cert-hash sha256:a8019d8788255d8b7a40b3d193f0b77ef48b89bac6a828830472e9c50772da56 --control-plane

Output like the following means master2 has joined successfully.


7. Add the node to the k8s cluster

7.1 Get the join command on master1

#Run the following on Master1, then copy its output and run it on node1
kubeadm token create --print-join-command
#Output
kubeadm join 192.168.1.199:16443 --token u0uchg.7cliezajlcrr9n8t --discovery-token-ca-cert-hash sha256:a8019d8788255d8b7a40b3d193f0b77ef48b89bac6a828830472e9c50772da56

7.2 Run the join command on node1

#On node1
kubeadm join 192.168.1.199:16443 --token u0uchg.7cliezajlcrr9n8t --discovery-token-ca-cert-hash sha256:a8019d8788255d8b7a40b3d193f0b77ef48b89bac6a828830472e9c50772da56

7.3 Check the join status on master1

kubectl get nodes
NAME      STATUS     ROLES                  AGE   VERSION
master1   NotReady   control-plane,master   10h   v1.23.6
master2   NotReady   control-plane,master   9h    v1.23.6
node1     NotReady   <none>                 91s   v1.23.6

#node1 has joined, but its ROLES column shows <none>; add a label so it shows worker (run this on master1)
kubectl label node node1 node-role.kubernetes.io/worker=worker
#Check again; the worker role is now shown
kubectl get nodes
NAME      STATUS     ROLES                  AGE     VERSION
master1   NotReady   control-plane,master   10h     v1.23.6
master2   NotReady   control-plane,master   10h     v1.23.6
node1     NotReady   worker                 6m36s   v1.23.6

8. Install the calico plugin

8.1 Upload calico.yaml

Upload calico.yaml to root's home directory on master1.

8.2 Apply calico.yaml

#Master1
cd
kubectl apply -f calico.yaml
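Optional: wait for the calico rollout to finish and confirm the nodes turn Ready (the daemonset name assumes the stock calico.yaml manifest):

kubectl -n kube-system rollout status ds/calico-node
kubectl get nodes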

8.3 Check the pod status

kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS      AGE     IP              NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-677cd97c8d-4hc2w   1/1     Running   0             3m34s   10.244.180.1    master2   <none>           <none>
calico-node-9gpjt                          1/1     Running   0             3m34s   192.168.1.181   master2   <none>           <none>
calico-node-g5p49                          1/1     Running   0             3m34s   192.168.1.182   node1     <none>           <none>
calico-node-qk6wz                          1/1     Running   0             3m34s   192.168.1.180   master1   <none>           <none>
coredns-6d8c4cb4d-8vlzg                    1/1     Running   0             13h     10.244.137.66   master1   <none>           <none>
coredns-6d8c4cb4d-d88qx                    1/1     Running   0             13h     10.244.137.65   master1   <none>           <none>
etcd-master1                               1/1     Running   2             13h     192.168.1.180   master1   <none>           <none>
etcd-master2                               1/1     Running   0             12h     192.168.1.181   master2   <none>           <none>
kube-apiserver-master1                     1/1     Running   2             13h     192.168.1.180   master1   <none>           <none>
kube-apiserver-master2                     1/1     Running   1             12h     192.168.1.181   master2   <none>           <none>
kube-controller-manager-master1            1/1     Running   3 (12h ago)   13h     192.168.1.180   master1   <none>           <none>
kube-controller-manager-master2            1/1     Running   1             12h     192.168.1.181   master2   <none>           <none>
kube-proxy-5qhmm                           1/1     Running   0             127m    192.168.1.182   node1     <none>           <none>
kube-proxy-qrf2l                           1/1     Running   0             12h     192.168.1.181   master2   <none>           <none>
kube-proxy-zkk74                           1/1     Running   0             13h     192.168.1.180   master1   <none>           <none>
kube-scheduler-master1                     1/1     Running   3 (12h ago)   13h     192.168.1.180   master1   <none>           <none>
kube-scheduler-master2                     1/1     Running   1             12h     192.168.1.181   master2   <none>           <none>

9. Test whether a pod created in k8s can reach the network

#Pull the busybox image on node1
docker pull busybox
#Run the following on master1
kubectl run busybox --image busybox --restart=Never --rm -it busybox -- sh
#The session below shows the pod can reach the network, which means the calico network plugin was installed correctly
kubectl run busybox --image busybox --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (183.232.231.174): 56 data bytes
64 bytes from 183.232.231.174: seq=0 ttl=55 time=100.526 ms
64 bytes from 183.232.231.174: seq=1 ttl=55 time=309.393 ms

10. Test deploying a tomcat service in the k8s cluster

#Pull the tomcat image on node1
docker pull tomcat

#master1
cd
cat > tomcat.yaml <<END
apiVersion: v1  #the Pod resource belongs to the core v1 API group
kind: Pod  #we are creating a Pod
metadata:  #metadata
  name: demo-pod  #pod name
  namespace: default  #namespace the pod belongs to
  labels:
    app: myapp  #labels carried by the pod
    env: dev
spec:
  containers:      #containers is a list of objects; multiple entries are allowed
  - name: tomcat-pod-java  #container name
    ports:
    - containerPort: 8080
    image: tomcat:latest   #image used by the container
    imagePullPolicy: IfNotPresent
END

kubectl apply -f tomcat.yaml 
pod/demo-pod created
[root@master1 ~]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
demo-pod   1/1     Running   0          11s

#Now we can look up the IP of the tomcat pod
kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE    IP               NODE    NOMINATED NODE   READINESS GATES
demo-pod   1/1     Running   0          113m   10.244.166.131   node1   <none>           <none>
#This IP is reachable from nodes inside the k8s cluster, but not from hosts outside the cluster.
#To fix that, create a NodePort Service as follows
cd
cat > tomcat-service.yaml <<END
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30080
  selector:
    app: myapp
    env: dev
END
kubectl apply -f tomcat-service.yaml
kubectl get service
#Now, from a host outside the cluster, tomcat can be reached via either master's IP:30080 or the VIP:30080
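A quick connectivity check from an outside host (the HTTP status may be 404 with a stock tomcat:latest image whose webapps directory is empty, but any HTTP response proves the NodePort path works):

curl -I http://192.168.1.199:30080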


11. Test whether coredns works

kubectl run busybox --image busybox --restart=Never --rm -it busybox -- sh
/ # nslookup tomcat.default.svc.cluster.local
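The lookup should return the ClusterIP of the tomcat Service; you can cross-check it from master1 with:

kubectl get svc tomcat
kubectl get svc -n kube-system kube-dns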
