Creating a v1.19.0 Kubernetes Cluster with kubeadm

Machine configuration used in this article:

Hostname     | Public IP       | Private IP      | OS                 | Hardware                        | Role
k8s-master   | 47.242.253.25   | 172.31.117.60   | CentOS 7.7 64-bit  | 2 cores / 2 GB RAM / 20 GB disk | master
k8s-node1    | 47.242.250.255  | 172.31.117.61   | CentOS 7.7 64-bit  | 2 cores / 2 GB RAM / 20 GB disk | node1

Note that the machines used here are Alibaba Cloud Hong Kong instances, so no network problems were encountered during installation. The cluster built in this article is for testing only; when building a production cluster, study the exact meaning of each installation parameter carefully. The official documentation is listed in the References section at the end of this article.

1. Install docker, kubelet, kubectl, and kubeadm on all nodes

Install Docker with the official script:

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh

Start Docker:

$ sudo systemctl start docker
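
The kubeadm preflight checks later warn that the Docker service is not enabled; to have Docker start on boot as well, you can optionally run:

$ sudo systemctl enable docker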

kubeadm, kubelet, and kubectl play different roles on each node:

  • kubeadm: the command-line tool for bootstrapping the cluster; on the master it is used to set up the cluster, and on the nodes it is used to join them to the cluster.
  • kubelet: the core component that runs on every node in the cluster and is responsible for starting pods and containers.
  • kubectl: the command-line tool for operating the cluster.

Run the following on all nodes:

$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Set SELinux to permissive mode, which effectively disables it; this is mainly to ensure that containers can access the host filesystem:

$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
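
To confirm the change took effect, getenforce should now report Permissive:

$ getenforce
Permissive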

Since we are installing a 1.19.0 cluster, the version must be specified explicitly; otherwise the latest version will be installed by default:

$ yum install -y kubelet-1.19.0-0 --disableexcludes=kubernetes
$ yum install -y kubectl-1.19.0-0 --disableexcludes=kubernetes
$ yum install -y kubeadm-1.19.0-0 --disableexcludes=kubernetes # kubeadm-1.19.0-0 must be installed last

To install a different version of kubelet, list all installable versions with the command below; the same works for kubectl and kubeadm:

$ yum list --showduplicates kubelet --disableexcludes=kubernetes

The yum install order matters: kubeadm must be installed last. If kubeadm is installed first, yum will automatically pull in the latest kubelet and kubectl as its dependencies, which will cause the subsequent kubeadm initialization to fail.
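
As an optional sanity check, verify that all three tools report v1.19.0:

$ kubeadm version -o short
$ kubelet --version
$ kubectl version --client --short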

Enable and start the kubelet:

$ sudo systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
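
At this point the kubelet will keep restarting every few seconds in a crash loop, waiting for kubeadm init (or kubeadm join) to give it a configuration; this is expected and can be observed with:

$ sudo systemctl status kubelet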

2. Initialize the master node

First, obtain the master host's private IP address. Cloud providers usually display it in their console; it can also be found with ifconfig:

$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:6b:8e:de:02  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.31.117.60  netmask 255.255.240.0  broadcast 172.31.127.255
        ether 00:16:3e:00:b7:03  txqueuelen 1000  (Ethernet)
        RX packets 208023  bytes 307062112 (292.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 39362  bytes 3275173 (3.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

172.31.117.60 is the private address of eth0 on the master. Run the following on the master:

$ kubeadm init --apiserver-advertise-address 172.31.117.60 --pod-network-cidr=10.244.0.0/16
I0304 22:52:00.565539    2424 version.go:252] remote version is much newer: v1.20.4; falling back to: stable-1.19
W0304 22:52:00.960003    2424 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.8
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.117.60]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.31.117.60 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.31.117.60 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.002210 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3wfvhr.zcstqjk1cr3ehft4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.117.60:6443 --token 3wfvhr.zcstqjk1cr3ehft4 \
    --discovery-token-ca-cert-hash sha256:07e2d554d807a5012a9dba6718b28081be235c2826ae2ffd0ee4c38a344f98f4
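
The IsDockerSystemdCheck warning above can be avoided by switching Docker to the systemd cgroup driver, ideally before running kubeadm init. A minimal sketch (this overwrites /etc/docker/daemon.json, so merge it by hand if the file already has content):

$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
$ sudo systemctl restart docker

It is only a warning, and the cluster in this article was initialized with the default cgroupfs driver without problems.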

3. Configure the cluster on the master node

Create a new user jerry on the master:

$ useradd jerry  # add a new user named jerry
$ passwd jerry   # set a password for the user

Grant the new user jerry sudo privileges by editing the /etc/sudoers file:

$ vi /etc/sudoers

In /etc/sudoers, find the line root ALL=(ALL) ALL and add jerry ALL=(ALL) ALL on the line below it, then save and exit with :wq! (the forced write is needed because the file is read-only).
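
The relevant section should then look like this:

root    ALL=(ALL)       ALL
jerry   ALL=(ALL)       ALL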

Switch to the jerry user:

$ su - jerry
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

If jerry has not been granted sudo privileges, any command requiring sudo will fail with: jerry is not in the sudoers file. This incident will be reported.

From this point on, kubectl commands should be run as this non-root user.
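
A quick way to confirm that the copied kubeconfig works is to query the API server as jerry; it should report the control plane running at https://172.31.117.60:6443:

$ kubectl cluster-info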

4. Install the pod network on the master node

As the jerry user, install the flannel pod network:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
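
The flannel DaemonSet pods should come up on every node; they can be watched using the label from the manifest above (adjust the label or namespace if your copy of kube-flannel.yml differs):

$ kubectl -n kube-system get pods -l app=flannel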

List the nodes; for now only the master is present:

$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   11h   v1.19.0

Next, the worker node needs to be joined to the cluster.

5. Add worker nodes

The output at the end of the earlier master initialization shows the command for joining worker nodes to the cluster. Run it on the node:

$ kubeadm join 172.31.117.60:6443 --token 3wfvhr.zcstqjk1cr3ehft4 --discovery-token-ca-cert-hash sha256:07e2d554d807a5012a9dba6718b28081be235c2826ae2ffd0ee4c38a344f98f4
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If the token was not recorded, list all tokens on the master:

$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
3wfvhr.zcstqjk1cr3ehft4   11h         2021-03-05T22:52:54+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

TTL is the token's remaining lifetime; here the token expires in 11 hours. Newly generated tokens are valid for 24 hours by default. If the token has expired, or no token is listed, generate a new one:

$ kubeadm token create
W0306 15:29:51.837213   30203 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
76mnmd.rips0ooplhrww3ur

Obtain the CA certificate hash, i.e. the value of discovery-token-ca-cert-hash:

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
07e2d554d807a5012a9dba6718b28081be235c2826ae2ffd0ee4c38a344f98f4
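
Alternatively, instead of assembling the token and hash by hand, kubeadm can print a complete, ready-to-run join command:

$ kubeadm token create --print-join-command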

On the master, list the cluster nodes again:

$ kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   12h    v1.19.0
k8s-node1    Ready    <none>   6m6s   v1.19.0

The essential Kubernetes pods are running as well:

$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   coredns-f9fd979d6-9vbst                  1/1     Running   0          30m
kube-system   coredns-f9fd979d6-q7ng4                  1/1     Running   0          30m
kube-system   etcd-vm-0-11-centos                      1/1     Running   0          31m
kube-system   kube-apiserver-vm-0-11-centos            1/1     Running   0          31m
kube-system   kube-controller-manager-vm-0-11-centos   1/1     Running   0          31m
kube-system   kube-flannel-ds-5snmq                    1/1     Running   0          8m24s
kube-system   kube-flannel-ds-mqtq7                    1/1     Running   0          5m9s
kube-system   kube-proxy-54hc5                         1/1     Running   0          5m9s
kube-system   kube-proxy-dc927                         1/1     Running   0          30m
kube-system   kube-scheduler-vm-0-11-centos            1/1     Running   0          31m

At this point a Kubernetes 1.19.0 cluster has been installed successfully, and you can start deploying applications.
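
As a quick smoke test, a minimal example: deploy a throwaway nginx, expose it via a NodePort, and check that the pod gets scheduled onto the worker node (the deployment name here is arbitrary):

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pods,svc -o wide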

6. References
