Kubernetes Cluster Deployment (Single Master)

Preface

Requirements: three virtual machines.
The following steps must be performed on all three virtual machines.
Any CentOS 7.6 or later release works; the server configuration is as follows:

(Figure: server configuration)

I. Environment Preparation

Configure the following on all nodes.

1. Connect every machine to the Internet and set up the yum repository

cd /etc/yum.repos.d/
wget http://mirrors.aliyun.com/repo/Centos-7.repo

2. Upgrade the kernel (not needed on CentOS 7.6 and later)

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
grub2-set-default 'CentOS Linux (5.4.184-1.el7.elrepo.x86_64) 7 (Core)'

3. Disable the firewall, SELinux, and swap

 systemctl stop firewalld
 systemctl disable firewalld

 setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

 swapoff -a  && sed -i 's/.*swap/#&/' /etc/fstab

4. Set up name resolution

vim /etc/hosts
192.168.4.200 master
192.168.4.201 node1
192.168.4.202 node2
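The join preflight later warns if a node's hostname cannot be resolved (see the warnings in the join output further below), so it also helps to set each machine's hostname to match these entries. A minimal sketch, run on the respective machine:

hostnamectl set-hostname master   # on 192.168.4.200
hostnamectl set-hostname node1    # on 192.168.4.201
hostnamectl set-hostname node2    # on 192.168.4.202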

5. Configure kernel parameters

 vim /etc/sysctl.d/kubernetes.conf 

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
After saving and exiting, reload the configuration so it takes effect immediately:

 sysctl -p /etc/sysctl.d/kubernetes.conf
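If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter module is most likely not loaded yet. Loading it first and re-applying usually resolves this:

modprobe br_netfilter
lsmod | grep br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf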

6. Load the IPVS modules

 vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
 chmod +x /etc/sysconfig/modules/ipvs.modules
 /etc/sysconfig/modules/ipvs.modules
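You can verify that the modules were loaded successfully:

lsmod | grep -e ip_vs -e nf_conntrack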

7. Install dependency packages

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

Reboot the machines:
init 6
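After the reboot, you can confirm the machine came back up on the expected kernel (only relevant if you performed the kernel upgrade in step 2):

uname -r    # should print 5.4.184-1.el7.elrepo.x86_64 if kernel-lt was installed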

II. Deployment Procedure

1. Install and configure Docker      # on all nodes
2. Install the software              # on all nodes
3. Deploy the Kubernetes Master      # on the Master node
4. Deploy the network plugin         # on the Master node
5. Deploy the Kubernetes Workers     # on all Worker nodes

1. Install and configure Docker

Run on all nodes.

1. Configure the yum repository

cd /etc/yum.repos.d/
curl https://gitee.com/leedon21/k8s/raw/master/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo

2. Install Docker

yum install -y docker-ce-19.03.15-3.el7

3. Configure Docker

mkdir /etc/docker
vim /etc/docker/daemon.json
{ 
  "exec-opts":["native.cgroupdriver=systemd"], 
  "log-driver":"json-file", 
  "log-opts":{ "max-size":"100m" } 
}

4. Start Docker

systemctl start docker
systemctl enable docker
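Docker should now be using the systemd cgroup driver set in daemon.json; a quick check:

docker info 2>/dev/null | grep -i 'cgroup driver'    # expect: Cgroup Driver: systemd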

2. Install the software

Run on all nodes.

1. Configure the yum repository

cd /etc/yum.repos.d/
curl https://gitee.com/leedon21/k8s/raw/master/kubernetes.repo -o kubernetes.repo

2. Install the software

yum install -y kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0

You can run yum list --showduplicates | egrep kubeadm to see which versions are available.

3. Enable kubelet at boot

systemctl enable kubelet
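A quick version check confirms that the tools are installed and aligned:

kubeadm version -o short    # v1.17.4
kubelet --version           # Kubernetes v1.17.4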

3. Deploy the Master node

This step is performed only on the Master node.

1. Create the initialization YAML file

[root@master ~] mkdir k8s
[root@master ~] cd k8s/
[root@master k8s] kubeadm config print init-defaults > init.yml   # this command generates the default config file
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.4.200         # this machine's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # use a domestic mirror registry
kind: ClusterConfiguration
kubernetesVersion: v1.17.4    # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16    # must match the subnet configured in flannel
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
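Optionally, as the preflight output below also suggests, you can pull the required images ahead of time so that the init step itself runs faster:

kubeadm config images pull --config=init.yml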

2. Initialize

[root@master k8s]# kubeadm init --config=init.yml  | tee kubeadm-init.log
W1222 11:13:44.974038   20895 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1222 11:13:44.974083   20895 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.4.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.4.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.4.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W1222 11:13:47.487580   20895 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1222 11:13:47.488343   20895 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.502284 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.4.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:56627755bd00042088611a78ee2fa9fe1af08de756da5e2bec8c242e006c23ec

kubeadm init performs the following main steps:
[init]: initializes using the specified version.
[preflight]: runs pre-initialization checks and downloads the required Docker images.
[kubelet-start]: generates the kubelet configuration file "/var/lib/kubelet/config.yaml"; the kubelet cannot start without this file, which is why the kubelet fails to start before initialization.
[certificates]: generates the certificates Kubernetes uses and stores them in /etc/kubernetes/pki.
[kubeconfig]: generates the kubeconfig files and stores them in /etc/kubernetes; components need these files to communicate with each other.
[control-plane]: installs the Master components from the YAML files in the /etc/kubernetes/manifests directory.
[etcd]: installs the etcd service from /etc/kubernetes/manifests/etcd.yaml.
[wait-control-plane]: waits for the Master components deployed by control-plane to start.
[apiclient]: checks the status of the Master components.
[uploadconfig]: uploads the configuration.
[kubelet]: configures the kubelet via a ConfigMap.
[patchnode]: records CNI information on the Node via annotations.
[mark-control-plane]: labels the current node with the Master role and an unschedulable taint, so that the Master node is not used to run Pods by default.
[bootstrap-token]: generates the token; record it, since kubeadm join uses it later to add nodes to the cluster.
[addons]: installs the CoreDNS and kube-proxy add-ons.

3. Configure kubectl

Do this on every node where you want to use kubectl. Whether on the master or on a worker node, kubectl must be configured before its commands will work. There are two ways to configure it.

Option 1: via a config file

[root@master k8s] mkdir -p $HOME/.kube
[root@master k8s] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master k8s] sudo chown $(id -u):$(id -g) $HOME/.kube/config   # the root user can skip this step

Option 2: via an environment variable

 echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
 source ~/.bashrc
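Optionally, shell completion makes kubectl far more convenient to use; a sketch for bash (assumes the bash-completion package, which is not part of the dependency list above):

yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc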

Once kubectl is configured, you can use kubectl commands:

[root@master k8s] kubectl get no
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   5m46s   v1.17.4
[root@master k8s] kubectl get po -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-pxzzm          0/1     Pending   0          7m18s
coredns-9d85f5447-vmb8z          0/1     Pending   0          7m18s
etcd-master                      1/1     Running   0          7m31s
kube-apiserver-master            1/1     Running   0          7m31s
kube-controller-manager-master   1/1     Running   0          7m31s
kube-proxy-dc9t2                 1/1     Running   0          7m18s
kube-scheduler-master            1/1     Running   0          7m31s
[root@master k8s]  kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

Because the network plugin has not been installed yet, the coredns pods are Pending and the node is NotReady.

4. Deploy the network plugin

Master node only.

Kubernetes supports multiple networking solutions; here we use the widely used flannel.

 kubectl apply -f  https://gitee.com/leedon21/k8s/raw/master/kube-flannel.yml

A moment later, check the node and Pod status again: everything is OK and all core components are up.

[root@master k8s] kubectl get po -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-pxzzm          1/1     Running   0          17m
coredns-9d85f5447-vmb8z          1/1     Running   0          17m
etcd-master                      1/1     Running   0          17m
kube-apiserver-master            1/1     Running   0          17m
kube-controller-manager-master   1/1     Running   0          17m
kube-flannel-ds-amd64-6vwbl      1/1     Running   0          11m
kube-proxy-dc9t2                 1/1     Running   0          17m
kube-scheduler-master            1/1     Running   0          17m
[root@master k8s] kubectl get no
NAME     STATUS     ROLES    AGE   VERSION
master   Ready      master   26s   v1.17.4

If the manifest above does not work, apply the upstream one instead:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

5. Add the worker nodes

Run on all worker nodes.

A Kubernetes Worker node is almost identical to the Master node: both run the kubelet component. The only difference is that the Master also runs the kube-apiserver, kube-scheduler, and kube-controller-manager system Pods.

Take the join command from the init output or from the kubeadm-init.log file (it was generated during the master initialization just now) and run it on each worker node to join the cluster.
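Note that the default token is only valid for 24 hours (ttl: 24h0m0s in init.yml). If it has expired by the time you add a node, generate a fresh join command on the master first:

kubeadm token create --print-join-command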

[root@node1 ~] kubeadm join 192.168.4.200:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:56627755bd00042088611a78ee2fa9fe1af08de756da5e2bec8c242e006c23ec
W1222 11:20:29.934496   11108 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "node1" could not be reached
	[WARNING Hostname]: hostname "node1": lookup node1 on 114.114.114.114:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

A short while after joining, go back to the master and check the node status:

[root@master ~] kubectl get no
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   23m   v1.17.4
node1    Ready    <none>   19m   v1.17.4
node2    Ready    <none>   18m   v1.17.4

Once all nodes are Ready, the cluster deployment is complete.

III. Troubleshooting

If anything goes wrong during installation, on either the Master or a Node, you can run kubeadm reset to reset the machine.

[root@node2 ~] kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0321 22:54:01.292739    7918 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean the CNI configuration. To do so, you must remove /etc/cni/net.d.

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was set up to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files; you must remove them manually.
Please check the contents of the $HOME/.kube/config file.
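Following those hints, a sketch of the manual cleanup (adapt to your environment; flushing iptables removes all rules on the host):

rm -rf /etc/cni/net.d        # remove the CNI configuration
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X    # flush iptables rules
ipvsadm --clear              # reset the IPVS tables (if IPVS was used)
rm -rf $HOME/.kube/config    # remove the stale kubeconfig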

IV. Summary

The kubeadm workflow

The init workflow

1. Environment checks

When we run kubeadm init, kubeadm first performs a series of environment checks to determine whether this machine can be used to deploy Kubernetes. These are called Preflight Checks, and they cover many things, for example:

• Is the Linux kernel version 3.10 or higher?
• Are the Linux cgroups modules available?
• Is the machine's hostname valid? In Kubernetes, machine names, like all API objects stored in etcd, must use standard DNS naming (RFC 1123).
• Do the installed kubeadm and kubelet versions match?
• Are Kubernetes binaries already installed on the machine?
• Are the Kubernetes working ports 10250/10251/10252 already in use?
• Do Linux commands such as ip and mount exist?
• Is Docker installed? ...
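If a specific check fails for a reason you have judged acceptable (for example, a test VM with a single CPU), kubeadm can be told to skip it. A hedged example, not something to use blindly:

kubeadm init --config=init.yml --ignore-preflight-errors=NumCPU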

2. Generate certificates

By default, external access to Kubernetes services goes through kube-apiserver over HTTPS. After the Preflight Checks pass, kubeadm generates the various certificate files the cluster needs and places them in the /etc/kubernetes/pki directory.

(Figure: certificate files under /etc/kubernetes/pki)

You can also choose not to let kubeadm generate these certificates and instead copy existing certificates into that directory.
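To see what a generated (or copied) certificate actually covers, you can inspect it with openssl; for example, the apiserver serving certificate's SANs should match the names and IPs shown in the init log above:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'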

3. Create the authorization files

Next, kubeadm generates the files the components use to authorize access to the apiserver. They are stored in /etc/kubernetes and end in .conf.

(Figure: kubeconfig files under /etc/kubernetes)

These files record the apiserver endpoint, client contexts, tokens, and similar information. The corresponding components (scheduler, kubelet, and so on) can load their respective files and use the information inside to establish a secure connection to kube-apiserver.
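You can inspect any of these files with kubectl itself; certificate data is redacted in the output:

kubectl config view --kubeconfig=/etc/kubernetes/scheduler.conf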

4. Start the Master components

Kubernetes' three Master components, kube-apiserver, kube-controller-manager, and kube-scheduler, are all deployed as Pods. etcd is also deployed as a Pod.

(Figure: Master component static Pods)

In Kubernetes there is a special way to start containers called a "Static Pod". It lets you place the YAML files of the Pods you want to deploy in a designated directory; when the kubelet on the machine starts, it automatically scans that directory, loads all of the Pod YAML files, and starts them on the machine.

kubeadm health-checks the Master components via the URL HostIP:6443/healthz. Once the Master components are fully up, kubeadm generates a Bootstrap Token for the cluster. Any node with kubelet and kubeadm installed can use this token to join the cluster via kubeadm join.
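As a quick sanity check (using this guide's master IP), you can list the static Pod manifests and probe the health endpoint; -k is needed because the API server uses the cluster's own CA:

ls /etc/kubernetes/manifests    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
curl -k https://192.168.4.200:6443/healthz    # should return: ok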

kubeadm also saves a lot of important information in etcd as ConfigMaps, for later use when deploying worker nodes.

[root@master ~] kubectl get cm -n kube-system
NAME                                 DATA   AGE
coredns                              1      22h
extension-apiserver-authentication   6      22h
kube-flannel-cfg                     2      22h
kube-proxy                           2      22h
kubeadm-config                       2      22h
kubelet-config-1.17                  1      22h

5. Install the remaining add-ons

The last step of kubeadm init installs the kube-proxy and DNS add-ons, which provide service discovery and DNS for the whole cluster, respectively. kube-proxy is deployed as a DaemonSet on every node, while DNS is managed by a Deployment controller.

(Figure: kube-proxy and CoreDNS Pods)
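You can confirm how the two add-ons are managed:

kubectl -n kube-system get daemonset kube-proxy
kubectl -n kube-system get deployment coredns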

The join workflow

Once kubeadm init has generated the bootstrap token, you can run kubeadm join on any machine that has kubelet and kubeadm installed.

Why does kubeadm join need such a token? Because any machine that wants to become a node in the Kubernetes cluster must register with the cluster's kube-apiserver, and before it can talk to the apiserver, the machine needs the corresponding certificate file (the CA file).

To keep the installation one-command, we do not want to copy these files from the Master node by hand. So kubeadm makes a single "insecure mode" request to kube-apiserver to fetch cluster-info, the ConfigMap that holds the apiserver's authorization information; the bootstrap token acts as the security check for that step. With the kube-apiserver address, port, and certificate from cluster-info, the kubelet can then connect to the apiserver in "secure mode", and the new node is fully joined.
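The cluster-info ConfigMap mentioned here lives in the kube-public namespace (the init log above shows it being created) and can be inspected directly:

kubectl get configmap cluster-info -n kube-public -o yaml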

Source: https://zhuanlan.zhihu.com/p/121098101
