Deploying Kubernetes 1.21 on CentOS 7 from Scratch (K8S Installation)

Deployment Environment

This deployment uses 4 servers: 1 in the Master role and 3 in the node (worker) role.

Prerequisites

  1. Configure IP addresses so that all nodes can reach each other and access the internet.
  2. Configure DNS, with resolution records for every node.
  3. Configure a time server so that all nodes' clocks stay in sync. Once the first 2 steps are done, you can sync directly against Alibaba Cloud's NTP servers; see the post "CentOS 配置ntp服务同步时间" for reference.
  4. Disable SELinux. Shell command: setenforce 0 && sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  5. Disable the firewall. Shell command: systemctl stop firewalld && systemctl disable firewalld
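A quick sanity check that the prerequisites above took effect can be run on each node (chronyc assumes chrony is your NTP client; use ntpstat instead if you run ntpd):

```shell
# SELinux should no longer be enforcing (expect "Permissive" or "Disabled")
getenforce

# firewalld should be stopped and disabled
systemctl is-active firewalld    # expect "inactive"
systemctl is-enabled firewalld   # expect "disabled"

# Clocks should be synchronized
chronyc tracking
```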

Kubernetes Official Installation Guide

The steps in this post are taken and translated from the official installation docs, trimmed down to the essentials. Following this post is enough to complete the installation; the goal is to record the process for future reference. Readers can also consult the official installation guide directly.

1. Install the Underlying Container Runtime - Docker CE 3:20.10.7 (all nodes)

Kubernetes (K8S for short) is a container orchestration tool that supports several underlying container runtimes. Docker is currently the mainstream one, so this deployment uses Docker as the underlying container runtime.

1.1 Install from Docker's Official YUM Repository

Docker official installation guide

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

1.2 Install Docker CE

 yum install docker-ce docker-ce-cli containerd.io -y

1.3 Start Docker

 systemctl start docker

1.4 Run a Docker Container to Verify the Environment

Run the following command in the shell:

 docker run hello-world

The output below means Docker is installed and running correctly:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete 
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

[root@master-1 ~]# 

1.5 Use systemd (systemctl) to Manage Container cgroups (requires extra Docker daemon configuration)

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Note: overlay2 is the preferred storage driver for RHEL and CentOS when the Linux kernel is 4.0 or newer, or on RHEL/CentOS kernels 3.10.0-514 and later.
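The daemon.json above only takes effect after Docker is restarted. A quick way to apply it and confirm the driver switch:

```shell
# Restart Docker so it picks up /etc/docker/daemon.json
sudo systemctl restart docker

# The cgroup driver in use should now read "systemd"
docker info | grep -i "cgroup driver"
```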

1.6 Enable Docker to Start on Boot

 systemctl enable docker

2. Install Kubernetes with a Deployment Tool

Note: the official docs describe 3 deployment tools for installing K8S:
1. kubeadm, which bootstraps your own cluster (used in this deployment);
2. kops, which installs a K8S cluster on AWS;
3. kubespray, which deploys K8S on GCE (Google Cloud), Azure (Microsoft Cloud), OpenStack (private cloud), AWS (Amazon Cloud), vSphere (VMware vSphere), Packet (bare-metal servers), or Oracle Cloud Infrastructure (experimental).

This deployment uses kubeadm to install the K8S cluster directly in our own data center.

2.1 Install kubeadm (all nodes)

2.1.1 Prerequisites to Confirm Before Installing

  1. A Linux host running a Debian- or RedHat-based operating system
  2. 2 GB of RAM or more; with less, some components may fail to run
  3. 2 or more CPU cores
  4. Network connectivity: all hosts in the cluster must be able to reach each other, ideally with direct internet access
  5. A unique hostname, MAC address, and product UUID on every host in the cluster
  6. Swap disabled. Swap must be turned off for Kubernetes to work properly.

2.1.2 Disable Swap (run on every server)

First turn off any mounted swap devices, then edit /etc/fstab so that swap is not mounted on the next boot:

 swapoff -a
 vim /etc/fstab
......
#UUID=fafe611e-2d60-4bf9-a5bc-aaa7528114b3 swap
## Prefixing the swap line with a "#" is all that is needed
......
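If you prefer not to edit /etc/fstab by hand, the same change can be made with sed. A sketch that demonstrates the substitution on a sample file first:

```shell
# Demonstrate on a sample file; the same sed expression works on /etc/fstab
printf '%s\n' \
  'UUID=1111-2222 /     xfs  defaults 0 0' \
  'UUID=fafe611e  swap  swap defaults 0 0' > /tmp/fstab.sample

# Comment out every active (non-comment) line that mounts a swap device
sed -ri 's@^([^#].*\sswap\s.*)@#\1@' /tmp/fstab.sample
cat /tmp/fstab.sample
```

Once you are happy with the result, run the same sed -ri command against /etc/fstab itself (back the file up first).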

Check that swap is no longer mounted:

[root@master-1 docker]# free -m
              total        used        free      shared  buff/cache   available
Mem:           7982         416        6727           8         838        7308
Swap:             0           0           0

2.1.3 Let iptables See Bridged Traffic

  1. Confirm the "br_netfilter" module is loaded with lsmod | grep br_netfilter. If it is loaded you will see output like the following; if not, load it with "modprobe br_netfilter":
[root@master-2 ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
  2. For the host's iptables to correctly see bridged traffic, make sure "net.bridge.bridge-nf-call-iptables" is set to 1.
    The following script takes care of all of it; paste it directly into the shell:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
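To verify the settings were applied (both sysctls should report 1):

```shell
# Confirm the module is loaded and the bridge sysctls are set
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
```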

2.1.4 Install kubeadm, kubelet, and kubectl

Official guide to installing kubeadm, kubelet, and kubectl

What each of these 3 components does is not covered in this post. One thing worth knowing, though: kubeadm does not install or manage kubelet and kubectl for you, so they must be installed manually on the nodes that need them, ideally at the same version.

2.1.4.1 Configure the YUM Repository

In China, the official Kubernetes package repository is not reachable, so configuring the official yum repo will not get you the packages. This post uses the Tsinghua University (TUNA) mirror as the yum source instead.

Paste the following directly into the shell:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes-TUNA]
name=Kubernetes-TUNA
baseurl=https://mirrors.tuna.tsinghua.edu.cn/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
2.1.4.2 Install kubeadm, kubelet, and kubectl (all nodes, including workers; kubectl is optional on workers)
yum install -y kubelet kubeadm kubectl
2.1.4.3 Enable kubelet to Start on Boot
systemctl enable --now kubelet
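The yum command above installs whatever version the mirror currently serves. To make sure every node ends up on the same release (1.21.2 in this deployment), the packages can be pinned explicitly; a sketch, assuming those package versions exist in the configured mirror:

```shell
# Install a specific, matching version on every node
yum install -y kubelet-1.21.2 kubeadm-1.21.2 kubectl-1.21.2

# Verify that the installed versions agree
kubeadm version -o short
kubelet --version
```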

2.2 Create the Cluster

2.2.1 Initialize the Master (Control-Plane) Node

2.2.1.1 Prerequisites Before Initializing
  1. If you plan to turn the Master into a highly available cluster later, include the "--control-plane-endpoint" option at init time to set a shared endpoint for all Masters. The endpoint can be a DNS name or the virtual IP of a load balancer.
  2. Choose a Pod network add-on for Kubernetes, and before adding it, check whether the plugin requires extra options on the "kubeadm init" command. Also read your chosen add-on's documentation to see whether you need "--pod-network-cidr" to give Pods a specific subnet.
  3. Run "kubeadm config images pull" to confirm you can reach "gcr.io".
    • In China, "gcr.io" is currently unreachable; use a domestic mirror instead. For the specifics, please google it yourself; the goal is to get the matching image versions downloaded locally for use. To save time, see my other post, "kubernetes 曲线救国式下载 kubeadm 1.21 相关镜像".


2.2.1.2 Initialize the Master

The full command:

kubeadm init --kubernetes-version=1.21.2 --apiserver-advertise-address=172.16.133.56 --control-plane-endpoint=cluster-endpoint.microservice.for-best.cn --service-cidr=10.1.0.0/16 --pod-network-cidr=10.2.0.0/16 --service-dns-domain=microservice.for-best.cn

Initialization options explained:

  • --kubernetes-version=1.21.2: the Kubernetes version to initialize; the default is stable-1
  • --apiserver-advertise-address=172.16.133.56: the IP the Master's API server listens on; servers with multiple IPs should specify one explicitly
  • --control-plane-endpoint=cluster-endpoint.microservice.for-best.cn: used for Master high availability; a separate post covers Master clustering in detail
  • --service-cidr=10.1.0.0/16: the IP range Services are allocated from
  • --pod-network-cidr=10.2.0.0/16: the IP range Pods are allocated from
  • --service-dns-domain=microservice.for-best.cn: the Service DNS domain; the default is cluster.local, which enterprises usually change
  • --upload-certs: shares certificates between Masters automatically when building a Master cluster; without it, certificate files must be copied by hand when configuring the cluster later
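If you want to validate the parameters before touching the host, kubeadm supports a dry run; a sketch using the same options as above:

```shell
# Validate the init configuration without making any changes to the host
kubeadm init --dry-run \
  --kubernetes-version=1.21.2 \
  --apiserver-advertise-address=172.16.133.56 \
  --control-plane-endpoint=cluster-endpoint.microservice.for-best.cn \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.2.0.0/16 \
  --service-dns-domain=microservice.for-best.cn
```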

The output follows and can be skipped:

[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [cluster-endpoint.microservice.for-best.cn kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.microservice.for-best.cn master-1.for-best.cn] and IPs [10.1.0.1 172.16.133.56]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.for-best.cn] and IPs [172.16.133.56 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.for-best.cn] and IPs [172.16.133.56 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.009425 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-1.for-best.cn as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-1.for-best.cn as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ii44ya.n4ryb3yka0q09fq3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint.microservice.for-best.cn:6443 --token ii44ya.n4ryb3yka0q09fq3 \
        --discovery-token-ca-cert-hash sha256:67318db78eef549400d515ed239ca3dbf85d5195e4ba6c13b61854f497278b39 \
        --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint.microservice.for-best.cn:6443 --token ii44ya.n4ryb3yka0q09fq3 \
        --discovery-token-ca-cert-hash sha256:67318db78eef549400d515ed239ca3dbf85d5195e4ba6c13b61854f497278b39 

"Your Kubernetes control-plane has initialized successfully!" marks a successful Master initialization.
Note that at the end of the init output, the Master prints how to join additional Masters and how to join worker nodes.
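The bootstrap token embedded in the join command expires after 24 hours by default. If you add a worker later and the token is gone, a fresh join command can be generated on the Master:

```shell
# Create a new bootstrap token and print the full "kubeadm join" command for workers
kubeadm token create --print-join-command

# List the tokens that currently exist
kubeadm token list
```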

2.2.1.3 Join Workers to the Master as Pod-Running Nodes

On each freshly installed Worker, run the join command printed at the end of the Master's initialization.

Enter the following command in the shell:

kubeadm join cluster-endpoint.microservice.for-best.cn:6443 --token ii44ya.n4ryb3yka0q09fq3 \
        --discovery-token-ca-cert-hash sha256:67318db78eef549400d515ed239ca3dbf85d5195e4ba6c13b61854f497278b39 

The output follows and can be skipped:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2.2.1.4 Configure Environment Variables
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
export KUBECONFIG=/etc/kubernetes/admin.conf
2.2.1.5 View the Cluster from the Master
[root@master-1 yum.repos.d]# kubectl get nodes
NAME                   STATUS     ROLES                  AGE     VERSION
master-1.for-best.cn   NotReady   control-plane,master   26m     v1.21.2
node-1.for-best.cn     NotReady   <none>                 3m23s   v1.21.2
node-2.for-best.cn     NotReady   <none>                 3m24s   v1.21.2
node-3.for-best.cn     NotReady   <none>                 3m23s   v1.21.2

The nodes are all in NotReady status. Checking the logs with journalctl -f -u kubelet shows:

Jul 08 17:30:50 master-1.for-best.cn kubelet[9263]: E0708 17:30:50.130903    9263 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

The log points to a networking problem, so next we install a network add-on.

2.2.1.6 Install the Network Add-on (run on the Master)

flannel's installation and usage guide, hosted on GitHub

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Due to network conditions this may fail; retry a few times if necessary.

Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
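Flannel runs as a DaemonSet in the kube-system namespace, one pod per node. Before re-checking node status, you can confirm the flannel pods have come up (app=flannel is the label the upstream manifest applies):

```shell
# One kube-flannel pod should reach Running on every node
kubectl -n kube-system get pods -l app=flannel -o wide
```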

2.2.1.7 Check Cluster Status

1. Check the working status of the cluster nodes:

[root@master-1 ~]# kubectl get nodes
NAME                   STATUS   ROLES                  AGE   VERSION
master-1.for-best.cn   Ready    control-plane,master   16h   v1.21.2
node-1.for-best.cn     Ready    <none>                 15h   v1.21.2
node-2.for-best.cn     Ready    <none>                 15h   v1.21.2
node-3.for-best.cn     Ready    <none>                 15h   v1.21.2

A Ready status means the node is working normally.

  2. Check the workload status in the kube-system namespace:
[root@master-1 ~]# kubectl get pods -n kube-system
NAME                                           READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-qj4t5                       1/1     Running   1          16h
coredns-558bd4d5db-tq9sv                       1/1     Running   1          16h
etcd-master-1.for-best.cn                      1/1     Running   1          16h
kube-apiserver-master-1.for-best.cn            1/1     Running   1          16h
kube-controller-manager-master-1.for-best.cn   1/1     Running   1          16h
kube-flannel-ds-dgn27                          1/1     Running   1          15h
kube-flannel-ds-lqdds                          1/1     Running   1          15h
kube-flannel-ds-njj6b                          1/1     Running   1          15h
kube-flannel-ds-rjx5q                          1/1     Running   0          15h
kube-proxy-cgbt6                               1/1     Running   1          16h
kube-proxy-ffbxs                               1/1     Running   0          15h
kube-proxy-lq2s8                               1/1     Running   0          15h
kube-proxy-z55bl                               1/1     Running   0          15h
kube-scheduler-master-1.for-best.cn            1/1     Running   1          16h
[root@master-1 ~]# 

All pods are running normally.

2.3 Create the First Pod

2.3.1 Create

Run the following command in the shell; no output means the creation succeeded:

kubectl run nginx --image=nginx --port=80

2.3.2 View the Pod

[root@master-1 ~]# kubectl get pods
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          8s

[root@master-1 ~]# kubectl get pods -o wide
NAME    READY   STATUS              RESTARTS   AGE   IP       NODE                 NOMINATED NODE   READINESS GATES
nginx   0/1     ContainerCreating   0          23s   <none>   node-3.for-best.cn   <none>           <none>

[root@master-1 ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP         NODE                 NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          83s   10.2.6.2   node-3.for-best.cn   <none>           <none>

A STATUS of Running means the Pod is up. The Pod now has an IP as well, which can be tested with curl.
It was scheduled to run on the worker node node-3.for-best.cn.

2.3.3 Access Test with curl

[root@node-3 ~]# curl 10.2.6.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

With no further network configuration in place, no additional access tests are performed; the test above is enough to confirm everything works. More on the rest in a later post.
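As a follow-up, the Pod can be reached from outside the cluster by exposing it as a NodePort Service; a minimal sketch (the node port is assigned at random from the 30000-32767 range):

```shell
# Expose the nginx Pod through a NodePort Service
kubectl expose pod nginx --port=80 --type=NodePort

# Look up the assigned node port, then curl any node's IP on that port
kubectl get svc nginx
```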
