Deploying a Kubernetes v1.19.1 Cluster on CentOS 7 with kubeadm

0. Environment

Hostname    Host IP            User
Master      192.168.100.100    root
Slave1      192.168.100.101    root
Slave2      192.168.100.102    root

1. Preparation

1.1 Disable the Firewall and SELinux

Run on all three nodes.
#Master, Slave1, Slave2
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
#setenforce 0 takes effect immediately; the sed matches any current value
#(the CentOS 7 default is enforcing, not permissive) and persists it across reboots
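A quick sanity check that both changes took (getenforce will keep reporting Permissive until the node is rebooted, after which it reports Disabled):
[root@localhost ~]# systemctl is-active firewalld
inactive
[root@localhost ~]# getenforce
Permissive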

1.2 Set the Hostnames and the hosts File

Run on all three nodes.
#Master
[root@localhost ~]# hostnamectl set-hostname master
#Slave1
[root@localhost ~]# hostnamectl set-hostname slave1
#Slave2
[root@localhost ~]# hostnamectl set-hostname slave2
#Master
#Append (not overwrite) so the default localhost entries are preserved
[root@master ~]# cat <<EOF >> /etc/hosts
> 192.168.100.100 master
> 192.168.100.101 slave1
> 192.168.100.102 slave2
> EOF
#Once the hosts file is written, copy it to the other nodes
[root@master ~]# scp /etc/hosts slave1:/etc/
[root@master ~]# scp /etc/hosts slave2:/etc/

1.3 Disable Swap

Run on all three nodes.
#Master, Slave1, Slave2
#Turn swap off for the current session
[root@master ~]# swapoff -a

#Comment out the swap entry in fstab so it stays off after a reboot
[root@master ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
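Confirm that swap is really off; the Swap line should be all zeros:
[root@master ~]# free -h | grep -i swap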

1.4 Install Docker

Run on all three nodes.
#Master, Slave1, Slave2
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 git
[root@master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum install docker-ce -y
[root@master ~]# systemctl enable --now docker
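It is worth noting Docker's cgroup driver now, since the kubelet configured in section 2.2 must use the same one:
#The kubelet's --cgroup-driver flag must match this value
[root@master ~]# docker info | grep -i "cgroup driver"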

2. Deploying Kubernetes with kubeadm

2.1 Configure the Kubernetes Yum Repository and Install Packages

#Configure the yum repository
[root@master ~]#  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

#Copy the finished repo file to the other two nodes
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo slave1:/etc/yum.repos.d   
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo slave2:/etc/yum.repos.d 

Run on all three nodes.
#This walkthrough uses v1.19.1, so the versions are pinned explicitly
[root@master ~]# yum install -y kubelet-1.19.1-0.x86_64 kubeadm-1.19.1-0.x86_64 kubectl-1.19.1-0.x86_64 ipvsadm
#Verify that the installed versions are correct
[root@master ~]# rpm -qa | grep -E "kubectl|kubeadm|kubelet"
kubectl-1.19.1-0.x86_64
kubelet-1.19.1-0.x86_64
kubeadm-1.19.1-0.x86_64
#Load the IPVS-related kernel modules
[root@master ~]# modprobe ip_vs && modprobe ip_vs_rr && modprobe ip_vs_wrr && modprobe ip_vs_sh && modprobe nf_conntrack_ipv4

#modprobe only lasts until reboot, so persist the modules via rc.local
[root@master ~]# vim /etc/rc.local
modprobe ip_vs && modprobe ip_vs_rr && modprobe ip_vs_wrr && modprobe ip_vs_sh && modprobe nf_conntrack_ipv4
[root@master ~]# chmod +x /etc/rc.local
#Copy to the other two nodes
[root@master ~]# scp /etc/rc.local slave1:/etc/  
[root@master ~]# scp /etc/rc.local slave2:/etc/
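Alternatively, systemd can load these modules at every boot via /etc/modules-load.d (a sketch equivalent to the rc.local approach; copy the file to the other nodes the same way):
[root@master ~]# cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF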

#Set the bridge/forwarding sysctls Kubernetes needs, to avoid errors later
[root@master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> vm.swappiness=0
> EOF
#Copy to the other nodes
[root@master ~]# scp /etc/sysctl.d/k8s.conf slave1:/etc/sysctl.d/ 
[root@master ~]# scp /etc/sysctl.d/k8s.conf slave2:/etc/sysctl.d/

Run on all three nodes.
#Apply the settings and verify
#Master, Slave1, Slave2
[root@master ~]# sysctl --system
[root@master ~]# lsmod | grep ip_vs
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 140944  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          105745  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  2 xfs,ip_vs
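The sysctl values can be spot-checked the same way (if the two bridge keys are not found, load the br_netfilter module first with modprobe br_netfilter):
[root@master ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables vm.swappiness
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.swappiness = 0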

2.2 Configure and Start the Kubelet (All Nodes)

Configure the kubelet's cgroup driver.
#Capture Docker's cgroup driver in a variable (run on all three nodes)
#Master, Slave1, Slave2
[root@master ~]# DOCKER_CGROUPS=`docker info|grep "Cgroup Driver"|awk '{print $3}'`
#The unquoted EOF lets $DOCKER_CGROUPS expand, so the file records the actual driver
[root@master ~]# cat >/etc/sysconfig/kubelet<<EOF
> KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=k8s.gcr.io/pause:3.2"
> EOF
#Once written, copy it to the other nodes
[root@master ~]# scp /etc/sysconfig/kubelet slave1:/etc/sysconfig/  
[root@master ~]# scp /etc/sysconfig/kubelet slave2:/etc/sysconfig/

Run on all three nodes.
#Reload systemd, enable kubelet at boot, and (re)start it
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl enable kubelet && systemctl restart kubelet

At this point kubelet will show a failed/restarting status. That is expected: it has nothing to do until the cluster is initialized with kubeadm.
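To see why it keeps restarting, check the logs; until kubeadm init runs, kubelet typically exits complaining that /var/lib/kubelet/config.yaml does not exist:
[root@master ~]# systemctl status kubelet
[root@master ~]# journalctl -u kubelet --no-pager | tail -n 20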

[root@master ~]#  kubeadm init --kubernetes-version=v1.19.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.100.100 --ignore-preflight-errors=Swap
This first attempt will likely fail: without direct access to k8s.gcr.io, the image pulls time out. Record the images it needs, pull them from a domestic (Aliyun) mirror, and retag them.
[root@master ~]# vim pull.sh
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2

[root@master ~]# vim tag.sh
#!/bin/bash
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.1 k8s.gcr.io/kube-controller-manager:v1.19.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.1 k8s.gcr.io/kube-proxy:v1.19.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.1 k8s.gcr.io/kube-apiserver:v1.19.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.1 k8s.gcr.io/kube-scheduler:v1.19.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
[root@master ~]# chmod +x pull.sh tag.sh
#Copy to the other nodes
[root@master ~]# scp pull.sh tag.sh slave1:/root/
[root@master ~]# scp pull.sh tag.sh slave2:/root/
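As an aside, the two scripts can be collapsed into a single loop over the image list (a sketch, equivalent to pull.sh plus tag.sh):
#!/bin/bash
#mirror.sh: pull each image from the Aliyun mirror and retag it as k8s.gcr.io
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.19.1 kube-controller-manager:v1.19.1 \
           kube-scheduler:v1.19.1 kube-proxy:v1.19.1 \
           etcd:3.4.13-0 coredns:1.7.0 pause:3.2; do
    docker pull ${MIRROR}/${img}
    docker tag ${MIRROR}/${img} k8s.gcr.io/${img}
done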

Run both scripts on all three nodes.
#Master, Slave1, Slave2
[root@master ~]# ./pull.sh && ./tag.sh
#Verify
[root@master ~]# docker images
REPOSITORY                                                                    TAG        IMAGE ID       CREATED       SIZE
k8s.gcr.io/kube-proxy                                                         v1.19.1    33c60812eab8   3 years ago   118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.19.1    33c60812eab8   3 years ago   118MB
k8s.gcr.io/kube-apiserver                                                     v1.19.1    ce0df89806bb   3 years ago   119MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.19.1    ce0df89806bb   3 years ago   119MB
k8s.gcr.io/kube-controller-manager                                            v1.19.1    538929063f23   3 years ago   111MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.19.1    538929063f23   3 years ago   111MB
k8s.gcr.io/kube-scheduler                                                     v1.19.1    49eb8a235d05   3 years ago   45.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.19.1    49eb8a235d05   3 years ago   45.6MB
k8s.gcr.io/etcd                                                               3.4.13-0   0369cf4303ff   3 years ago   253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   3 years ago   253MB
k8s.gcr.io/coredns                                                            1.7.0      bfe3a36ebd25   3 years ago   45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   3 years ago   45.2MB
k8s.gcr.io/pause                                                              3.2        80d28bedfe5d   4 years ago   683kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   4 years ago   683kB

#If the first init attempt failed, reset before initializing again
#Initialize on the master only; do not run this on the other nodes
[root@master ~]# kubeadm reset
[root@master ~]# kubeadm init --kubernetes-version=v1.19.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.100.100  --ignore-preflight-errors=Swap
#--apiserver-advertise-address=192.168.100.100   the master's IP address
#--pod-network-cidr=10.244.0.0/16                the CIDR range allocated to pods
#--ignore-preflight-errors=Swap                  ignore the swap preflight check
#The following output indicates success
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.100:6443 --token pwhwgj.khxuhf9ssz8rtzac \
    --discovery-token-ca-cert-hash sha256:a56c44f2418cf1373ae384df3acc396d1eb496c516e82832bae4cbde36e74e69 
#Be sure to save the final kubeadm join line above; the worker nodes need it in section 2.4
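If the token is lost or expires (the default TTL is 24 hours), a fresh join command can be printed on the master at any time:
[root@master ~]# kubeadm token create --print-join-command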

#Follow the post-init instructions to set up kubectl access
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

#Check the nodes
[root@master ~]#  kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   4m9s   v1.19.1
The master showing NotReady is expected at this stage: no network plugin has been deployed yet.

#Check the pods in the kube-system namespace
#This can take a few minutes depending on the machine; faster hardware finishes sooner
[root@master ~]#  kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-67625          0/1     Pending   0          10m
coredns-f9fd979d6-9wfcv          0/1     Pending   0          10m
etcd-master                      1/1     Running   1          104s
kube-apiserver-master            1/1     Running   1          78s
kube-controller-manager-master   1/1     Running   1          84s
kube-proxy-wqrck                 1/1     Running   0          10m
kube-scheduler-master            1/1     Running   1          101s
The two coredns pods stuck in Pending are expected: again, the network plugin is still missing.
#If the pods still have not come up after 3-5 minutes, reboot the machine

2.3 Deploy Flannel

#Download the flannel manifest
[root@master ~]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
#Edit the manifest
[root@master ~]# vim kube-flannel.yml
#In vim, replace every namespace reference with kube-system
#(the Namespace object's own name field is handled separately below):
:%s/namespace: kube-flannel/namespace: kube-system/g
#Then change the Namespace object's name to kube-system:
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-system
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eno16777736
#The --iface flag was added; set it to your own NIC name (eno16777736 here)
      tolerations:
      - operator: Exists
        effect: NoSchedule
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoSchedule
#These tolerations let flannel schedule onto nodes that are still NotReady
      initContainers:
      - name: install-cni-plugin
        #image: docker.io/flannel/flannel-cni-plugin:v1.4.1-flannel1
        image: registry.cn-hangzhou.aliyuncs.com/docker-centos7-k8s/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: docker.io/flannel/flannel:v0.25.1
        image: registry.cn-hangzhou.aliyuncs.com/docker-centos7-k8s/mirrored-flannelcni-flannel:v0.18.1
        command:
        - cp
#Point both images at these mirrors; every node needs to pull them
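Optionally pre-pull the two replacement images on every node so the DaemonSet pods start without waiting on downloads:
#Master, Slave1, Slave2
[root@master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/docker-centos7-k8s/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
[root@master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/docker-centos7-k8s/mirrored-flannelcni-flannel:v0.18.1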
[root@master ~]# kubectl apply -f kube-flannel.yml
#Wait a moment, then check the pods
[root@master ~]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-hmxk7          1/1     Running   0          44m
coredns-f9fd979d6-l28vq          1/1     Running   0          44m
etcd-master                      1/1     Running   1          25m
kube-apiserver-master            1/1     Running   3          24m
kube-controller-manager-master   1/1     Running   1          24m
kube-flannel-ds-f6vzn            1/1     Running   0          93s
kube-proxy-fnzhf                 1/1     Running   0          44m
kube-scheduler-master            1/1     Running   1          24m

2.4 Join the Other Nodes to the Cluster

On each worker node, run the join command recorded from the init output.
#Slave1, Slave2
[root@slave1 ~]# kubeadm join 192.168.100.100:6443 --token pwhwgj.khxuhf9ssz8rtzac \
>     --discovery-token-ca-cert-hash sha256:a56c44f2418cf1373ae384df3acc396d1eb496c516e82832bae4cbde36e74e69
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.1. Latest validated version: 19.03
	[WARNING SystemVerification]: missing optional cgroups: pids
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2.5 Verify from the Master

[root@master ~]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   55m     v1.19.1
slave1   Ready    <none>   8m3s    v1.19.1
slave2   Ready    <none>   5m59s   v1.19.1
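As a final smoke test, a throwaway deployment confirms that pods actually schedule onto the workers (assumes the nginx image is pullable from your network):
[root@master ~]# kubectl create deployment nginx --image=nginx
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@master ~]# kubectl get pod,svc -o wide
#Clean up when done
[root@master ~]# kubectl delete deployment,svc nginx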