[K8s Notes Series] Hands-On: Installing and Deploying a Kubernetes Cluster with kubeadm, v1.20.5

First of all, thanks to Wang Yuehui (my buddy Hui-ge) for the technical support, heh!

Tools needed: VMware (with three CentOS 7.x Linux servers already up and running) and a remote terminal tool such as PuTTY or Xshell.

If you haven't set up the virtual machines yet, or don't know how, head over here:

https://blog.csdn.net/ma726518972/article/details/106250012

Note: each server must have at least 2 CPU cores; if not, you can change this by editing the VM settings.

All hosts are configured as follows:

IP Address        Hostname    Role     K8s Version   Install Method
192.168.218.131   k8smaster   master   v1.20.5       kubeadm
192.168.218.132   k8snode1    node1    v1.20.5       kubeadm
192.168.218.133   k8snode2    node2    v1.20.5       kubeadm

It's recommended to finish configuring the master server first, then configure the node servers.

Step 1: Environment Setup

1.1 Set the hostname (run on all servers)

hostnamectl set-hostname k8smaster
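On the two node servers, run the matching command with their own names from the host table above:

hostnamectl set-hostname k8snode1   # on 192.168.218.132
hostnamectl set-hostname k8snode2   # on 192.168.218.133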

1.2 Configure hosts resolution (run on all servers)

cp /etc/hosts /etc/hosts.bak`date +%F`
cat >> /etc/hosts <<'EOF'
192.168.218.131  k8smaster
192.168.218.132  k8snode1
192.168.218.133  k8snode2
EOF
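A quick sanity check that the names now resolve (assuming the three VMs can already reach each other):

ping -c 2 k8smaster
ping -c 2 k8snode1
ping -c 2 k8snode2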

1.3 Disable the firewall (run on all servers)

# Stop and disable the firewalld firewall (to allow network traffic between master and nodes)

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld

1.4 Disable SELinux (run on all servers)

# Disable SELinux (so containers can access the host filesystem)

setenforce 0
sed -i.bak`date +%F` 's|SELINUX=.*|SELINUX=disabled|g' /etc/selinux/config

1.5 Disable the swap partition (run on all servers)

# Disable swap (with swap enabled, kubelet QoS policies may not work correctly)

swapoff -a && sed -i.bak "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
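To confirm swap is really off, both now and in fstab:

free -m                # the Swap line should show 0 total
swapon -s              # should print nothing
grep swap /etc/fstab   # the swap entry should now be commented out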

1.6 Switch to domestic yum mirrors (run on all servers)

# Configure domestic yum mirrors (Tencent for base/EPEL, Aliyun for Kubernetes)
# The following steps must be done in this exact order
# Install the wget command first

yum install -y wget
mkdir -p /etc/yum.repos.d/bak`date +%F` && yes|mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak`date +%F`
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache fast
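Optionally verify that the new repos are in effect:

yum repolist   # should include the base, epel, and kubernetes repos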

1.7 Install Docker (run on all servers)

## Install required packages.

yum install -y yum-utils device-mapper-persistent-data lvm2

## Add Docker repository.

yum-config-manager --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.

yum update -y && yum install -y \
  containerd.io-1.2.10 \
  docker-ce-19.03.4 \
  docker-ce-cli-19.03.4

## Create /etc/docker directory.

 mkdir -p /etc/docker

# Setup daemon.

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker

systemctl daemon-reload
systemctl restart docker
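One thing the steps above skip: Docker isn't enabled at boot yet, and it's worth confirming the systemd cgroup driver took effect:

systemctl enable docker
docker info | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd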

Step 2: Install kubeadm

2.1 Install kubeadm and related tools (run on all servers)

yum list|egrep 'kubelet|kubeadm|kubectl'

# Note: kubectl is just a single binary, i.e. /usr/bin/kubectl

# Pin the versions to match the cluster version used in Step 3 (v1.20.5)
yum install -y kubelet-1.20.5 kubeadm-1.20.5 kubectl-1.20.5

# Start kubelet and enable it at boot

systemctl start kubelet && systemctl enable kubelet

# Check kubelet status

systemctl status kubelet

Running systemctl status kubelet shows that the kubelet service failed to start with exit code 255:
kubelet.service: main process exited, code=exited, status=255/n/a

After some digging, running journalctl -xefu kubelet to inspect the systemd logs revealed the real error:
unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

This error resolves itself once kubeadm init generates the CA certificate, so it can be ignored for now.
Looking back at the official Kubernetes documentation, this is actually spelled out clearly:

The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do. This crashloop is expected and normal, please proceed with the next step and the kubelet will start running normally.

In short, kubelet will keep restarting until kubeadm init is run.

Step 3: Create the Kubernetes Cluster

3.1 Initialize the master node (on the master server)

# Note 1: change the version to match what you actually installed; here it's v1.20.5, hence --kubernetes-version=v1.20.5
# Note 2: I'll be using Flannel for the pod network, so --pod-network-cidr=10.244.0.0/16

kubeadm init --kubernetes-version=v1.20.5 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers

# Takes about a minute
# Output like the following indicates success: .................... middle of the log omitted; the tail looks like this ...........................

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.218.131:6443 --token yptahz.1rm8nbkg1frzbhlh \
    --discovery-token-ca-cert-hash sha256:dc7ebc35051b0ee8c6dcb8f12f2fc8b61766cef8960210c839ea51e722e6a26c

# The command above is what the node servers run to join the cluster; save it. The token is valid
# for 24 hours; if it expires, generate a new one on the master with kubeadm token create.
# The config file generated during this process is at /var/lib/kubelet/config.yaml
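If you lose the join command entirely, the master can print a complete fresh one (new token plus CA cert hash) in one step:

kubeadm token create --print-join-command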

3.2 Copy and run the commands from the output above (on the master server)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
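A quick check that kubectl is now talking to the cluster; note the master will show NotReady until the pod network is installed in section 3.3:

kubectl cluster-info
kubectl get nodes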

# Print the file to take a look at its contents

[root@localhost ~]# cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXlNVEV6TVRFMU1sb1hEVE13TURVeE9URXpNVEUxTWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS0ZNCk05aTVoZERRWTJUR01hd1lYeVpHM1BIOHpHRzFEVmVGQkhNbEx4SW9FemFVZzgyc0U2NGtlN0E3dk1HWXdhY0IKTHJoRTBxb21kTDJ0eXhBME1VMHNSaFVkUHc1Yi90L0I3VElUcFBYVGRlbEFXbDV1N25GWlVmd29vbDMxQk1TUQpkTHNrdmc4RUJiSHVVZ0lJbGdpUnRSY2IvRlZleGl4NjhRZ2JXZjBhWFFRYWYrSkhqSlpTb0ZDc3VoYXpMTTZVCm8zREs2eHloN2REWFY5UFZ0ZEZpYU1iNDBGdnZWS3AvK09zeCtnbmFkSWlWQ1NIbitWTmM4ZjNCd0YvdVJCcXkKdFljNVpneDhQR1FTdzNHUVg5VFVqcGJwdktOUmgwUERYK0ljWTFZdjRvelNLR3hSdW5wODdLQWhUb2pkd0RXbgpvV25hNks0TlBjNU0zSnVDODdNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFFWXZLWm9XZnlqZVNEaHpqTUhBSWRrU3VUbWIKNzZhcUxCVUtPOFJUZ3E2Z0RYSVp4aEx2aWhPR3Z4RUFMbFFhT0pvcWJiYm05cXV1Z053WTR3alVud3Q0TUY0KwpBTXd1T2QrTFNJdU1TUDBmMnZ5aU1JZTVGR2thVW9PYWlQN3ppRHQzM2JSVmltZFMvczUxY1JlM0RNdWF5VjNUCngxa09qTkFPNDQrZDF4Q2tEVUZJMlk1QzNYdUFXYTNMQ3F2Z3IrQWtPV3VKR21DcWVXMjVBVlYvY1hlU1E1d2EKNUs5a25OQjgrd2NpWTl6ZjQ2dWZ1YytVZkZLa3JFSlkwMHRjQ3A3SWdtcDZ3a1VCZU1xWFJ2UG9rR1Z1QjJQbApubU9VeCt3ekdzMXBYSmdaekZ5ZDFKa3ZGMjlmaDhTZEZ1ZkxpckQ2K2w2T2JlSHVwakcrbjZFckNmRT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.218.131:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJSU1HQ2ZUbU50TnN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBMU1qRXhNekV4TlRKYUZ3MHlNVEExTWpFeE16RXhOVFJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXFmTEV6UEpqd29GZFZXZGIKL2lIOVRmcUVzRWNHQjVvYW9TcVhsWGxSUTFNWHBVRkNYbG02K25KTHZSZHhDSVJDY0RCODFJVmp5QUZOT01jTApXb1hLcFVyOTJxN1hzZGhUb2VFSlZTZk92WkFQemVKeFN3ZjZCUnpna3FuamtmTzJkTzNMcngrSm9PSGFWR3FVCnd3eGJ5Mk5OTWx1aFFPZE1VZ25NMmZWb2xUMkxZQldVajgyZDBML1ErcG16VmxzUnozMHB6cXFKRGlKb1pxVWEKTmhHQjljaUNNQmhURUF2bVlyV3d1OUpaWmxLRUlsZTZEL1FIRlkzaC9vSnRLeVB1NTh4ZjRQTTRDNnJYcnRPegpYTE9GclFRZThWY1ZFKzlpT2t6bVliaUhJK3loSHJXNCs1aSt1c0JxekV5YWVjQXRWcDMzWTFnYUVWVzF3TGxaCnJ1b3BiUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIV1R0WHdqdlFtVzNuV2k1NkRoaWY4ZTVpQmkrb0Y0YlQ2Qgpzc3FDWmVlc0V3WVpudXBGeHVMNGVuQldtWG1idi96c3NVLzRDMzdjS01ZWUlCbU5CZFUxdW5uYXVqTTFQQVJSCkVjbXQ1QlgzVjEyVm9BOUlaVXQ4YnFkL2pSYXdvSzU2OHM0eGF4TW5UeGh3cSt0TXpJV3piL0JlZTBHdGRyRE0KS3dKT2V2cFZvcEdXTjlCZW1GZUFXMU5Odlg5L3NtenF0enNoR3ZvYVBzbGgyMzBKNkRKK3B5ZC9HR3k1TzlPcQpocld3c2VRWVU2MUdteHdmdnd4RmxNanVEdlFaUUJ2SFF1Q2RpaXJFUjRzNjZvNkJHNGwzYis2WWt5UWo0WkdiClNWek1ZT0luY1E0MmNPR21IUTZaL0thdjdmc1FQS1VnSXRvdTcrVWxVMTJsYWdlc0hEbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBcWZMRXpQSmp3b0ZkVldkYi9pSDlUZnFFc0VjR0I1b2FvU3FYbFhsUlExTVhwVUZDClhsbTYrbkpMdlJkeENJUkNjREI4MUlWanlBRk5PTWNMV29YS3BVcjkycTdYc2RoVG9lRUpWU2ZPdlpBUHplSngKU3dmNkJSemdrcW5qa2ZPMmRPM0xyeCtKb09IYVZHcVV3d3hieTJOTk1sdWhRT2RNVWduTTJmVm9sVDJMWUJXVQpqODJkMEwvUStwbXpWbHNSejMwcHpxcUpEaUpvWnFVYU5oR0I5Y2lDTUJoVEVBdm1Zcld3dTlKWlpsS0VJbGU2CkQvUUhGWTNoL29KdEt5UHU1OHhmNFBNNEM2clhydE96WExPRnJRUWU4VmNWRSs5aU9rem1ZYmlISSt5aEhyVzQKKzVpK3VzQnF6RXlhZWNBdFZwMzNZMWdhRVZXMXdMbFpydW9wYlFJREFRQUJBb0lCQUFReGNqdWdTMmZVSzBwZApKMzdvdGNoRHd4eGFWRUxCd2FCeVhaVVpqakM4RHh4THRPaUJERVQ3cHZTK2JGS0tlTjB0eFJhMVI5WDZlajVKCll2VlQwY0VzVFlFa3lUdWhHOGNsdDBZN21qVkJKYkt0d0ovYVRZZnN3M202NlZ1RGlOL3ZzaFBiRWxrKzJWVTEKMy8vRUFVdk9ZbXc0cUl6aWFCYXFHVHpUZWtZY1dVMDJPaTN1NjdMcFdDVEkxYVVoMGxrWC9qaVNsUlRocTRrWQoySm1DdmNaMlpxTmNqZFEwZVJML2phR1I1SFg3Q3ZmRXBNVGVOd29xbEVBUTdwQjJZdWlML1Z6L1N6M25IL1QwClUvb20wTXBaZWxlMEZuVWw0cGxEZjY3OXI4VHFpYXdyYUIwQXV1aVM5aXBLY2dkRVZrdnBQL0M1dlZHSTYvM1MKYVJiS3ROMENnWUVBd0RjS3ZHdlhlQi9mUTN3amZvYWE3REtCbnhvQXRRZCtENkp2NWlPY21rNjIyRDlvS1ZtbApCbGxLQzZDWjk3N28xdEVreEtOWm5EaGhRZ0Uzay9oS3RBR2puK25JRGFaWGVOdzBpZTc0TWF1ZWRYeVhNckk0CllRYUErRVhvU0tpbXZWcmRqUUFMbTcybXlSSWM5VXpIUnlESTQ4SUJqWWVkSTJBTlM1Z2hoeU1DZ1lFQTRsZ2oKS2M2Y3dZQ1ZWODNJVGlYTk5oZDNkSE9nT2NBNjEweWVEcytYc0lhV0FHdGFwQ0ZnZjR2dTh2cXVvOW5XSWdjQwpiYW93L2dvSjR5d2dLVDY2N3NJOHRrOW42L3pISDN1by9RWVEvTDhFK254d1lodGVFZlhnL3FlTXJpaVZIUjlXCjhJeHYzS1c2cnF0ZHFrbVI1MGQ2T2NuckMwNmhYOEdvUlA1RzNpOENnWUVBbVhsWmNTa0tXamZZcEtHeUZZeVUKbHBPZE85UWZUR3czRTNTM3RDSXJJR3BKUkZFY2NpZkp4RS8yOTJHOGpqdzQzWTBRdHBGWE00MHcydXJ0M1pBYQoxYStaWGsza0ZrSURCZFdOZmJUNUoyL0lqalowNDEyNTluNmk2NW1sNXA0Q3hKNlExOHg1ZUZqdG13NkRZTGwxClJDM0JPVm5tczRMY3pTb2NjNGQ4L2RFQ2dZRUF1UFpUVGMrMFU0QXpDangwU2tBajBPY2VTOEJORjhSWmtTVGcKS0xSRmZoQ05OYXlFdG9rNzVSN0IxamM2VFZVdTRrR2VIMldyZ1gxTWxTS3k2V0dFdXFWcG5ZV0lJOVUrRnlFagplQmpqK3RaU1NDczJYMFdEK3VOVnlHTzgxM2o4V1g4SnVhclpvcEtmMmlyWmNOV0w4Rlo5c0Ftc0ZHSmVCdlVtCi83SlcwU3NDZ1lFQXBkLzhFWjlCMjh1MDBTOVZQM1BPZndqRWtudlNJcmdMQmFMVVB2aG5BS0JsU1hZdUZIb1IKWmdvV0p6QWVsMk42ZGwyTGdHNU1GYmtVM2ZNNW41TC9RbHYyZ0NSRGlDTDFzS3lWR1dPM29RM2FMSkRmNzFwMQpTZW9iU2NMR09UeWU2MFR1SWVpSmZFTU85b25yc3c2a3B4TUE2U2ZzaTk5NDUwTlBsbWY3ZzhvPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

3.3 Install Flannel (on the master server)

You only need to apply Flannel on the master node; the other nodes get flannel pods automatically, one flannel pod per node.

# If the download keeps failing, retry a couple of times, or download it with a browser and then copy it to the server

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# The images referenced in kube-flannel.yml live on quay.io, which can't be pulled from mainland China,
# so pull them from a domestic mirror such as quay.azk8s.cn or quay-mirror.qiniu.com instead.

grep 'quay.io/coreos' kube-flannel.yml
[root@localhost ~]# grep 'quay.io/coreos' kube-flannel.yml
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-s390x
        image: quay.io/coreos/flannel:v0.12.0-s390x
sed -i.bak`date +%F` 's@quay.io/coreos@quay.azk8s.cn/coreos@g' kube-flannel.yml
kubectl apply -f kube-flannel.yml

Flannel is deployed into the kube-system namespace by default; use the following command to check:

kubectl -n kube-system get pods
[root@localhost ~]# kubectl -n kube-system get pods
NAME                                READY   STATUS                  RESTARTS   AGE
coredns-546565776c-pp6gr            0/1     Pending                 0          29m
coredns-546565776c-r644s            0/1     Pending                 0          29m
etcd-k8smaster                      1/1     Running                 0          29m
kube-apiserver-k8smaster            1/1     Running                 0          29m
kube-controller-manager-k8smaster   1/1     Running                 0          29m
kube-flannel-ds-amd64-hml7j         0/1     Init:ImagePullBackOff   0          15m
kube-proxy-gfzgb                    1/1     Running                 0          29m
kube-scheduler-k8smaster            1/1     Running                 0          29m
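Note the flannel pod above is stuck in Init:ImagePullBackOff, meaning the image pull failed even from the mirror. If it doesn't recover on its own, first inspect the pod, then (one possible workaround, assuming the qiniu mirror mentioned earlier is reachable) pull the image manually on each server and retag it to the name the patched manifest expects:

kubectl -n kube-system describe pod kube-flannel-ds-amd64-hml7j
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64 quay.azk8s.cn/coreos/flannel:v0.12.0-amd64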

3.4 Add the worker nodes (run on the node servers; do NOT run on the master)

Run the following on the node servers so that every node joins the Kubernetes cluster.

# Use the command printed when the master initialized successfully

kubeadm token create  # run this on the master only if the token has expired, to generate a new one; otherwise skip it

# Run the following command on all node servers; it is exactly the join command from the master's
# init log above, so you can copy it verbatim

# 192.168.218.131 here is the master node's IP

kubeadm join 192.168.218.131:6443 --token yptahz.1rm8nbkg1frzbhlh \
    --discovery-token-ca-cert-hash sha256:dc7ebc35051b0ee8c6dcb8f12f2fc8b61766cef8960210c839ea51e722e6a26c
----------------------- Output like the following means the join succeeded --------------------------------

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Step 4: View All Nodes

Run on the master server:

 kubectl get nodes
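Freshly joined nodes may report NotReady for a minute or two while their flannel pods start; you can watch progress with:

kubectl get nodes -o wide
kubectl -n kube-system get pods -o wide   # one flannel pod per node should reach Running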

Congratulations! You've successfully set up a Kubernetes cluster. Give it a like before you go!

(If you have questions, leave a comment; I'll answer from time to time.)
