A Pitfall-Free, Step-by-Step Guide to Deploying a k8s Cluster on Ubuntu (Works on the First Try)


I. Environment Preparation

System version: Ubuntu 20.04.2

k8s version: v1.23.1

Hostname      Address          Description
k8s-master1   192.168.146.200  master node; internet access; at least 2 CPU cores and 2 GB RAM
k8s-noden1    192.168.146.201  worker node; internet access; at least 2 CPU cores and 2 GB RAM
k8s-noden2    192.168.146.202  worker node; internet access; at least 2 CPU cores and 2 GB RAM
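
The kubeadm init log in section IV shows hostname-lookup warnings when node names do not resolve. An optional, hedged step (the hostnames and addresses come from the table above; adjust to your own): map them in /etc/hosts on every machine.

# Optional: make the hostnames resolvable on every machine ("master" is the
# node name used by the kubeadm config in section IV)
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.146.200 k8s-master1 master
192.168.146.201 k8s-noden1
192.168.146.202 k8s-noden2
EOF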

II. Install Docker (on all machines — that is, every node, master and workers alike; steps that run on a single machine will say so explicitly, here and throughout)

# Install Docker and the tools it needs (the latest packaged version is fine)
apt-get update
apt-get install docker.io -y
# Start Docker and enable it at boot
sudo systemctl start docker
sudo systemctl enable docker
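
One pitfall worth checking now: the KubeletConfiguration in section IV sets cgroupDriver: cgroupfs, and kubeadm expects Docker's cgroup driver to match it. A quick check (not part of the original steps):

# Print Docker's cgroup driver; it should read "cgroupfs" to match the
# kubelet config in section IV (if it says "systemd", align one side)
docker info 2>/dev/null | grep -i 'cgroup driver'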

III. Prepare the System for k8s

# Disable the swap partition. (Older kubelets require swap to be off; recent
# versions have alpha swap support, but by default the kubelet still refuses
# to start with swap enabled, so for v1.23 keep this step.)
swapoff -a
# To disable swap permanently, open /etc/fstab and comment out the swap line.
sudo vim /etc/fstab
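# (A hedged alternative to hand-editing, assuming a standard fstab layout:
# comment out every active swap line, then double-check the file.)
sudo sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab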
# Kernel parameters: first make sure the br_netfilter module is loaded (it is
# not by default; install bridge-utils first)
apt-get install -y bridge-utils
modprobe br_netfilter
lsmod | grep br_netfilter
# If apt reports the package cannot be found, run apt-get update -y first
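
Loading br_netfilter is usually paired with the sysctl settings below, which kubeadm's preflight checks look for. This is a standard addition from the upstream kubeadm docs, not part of the original steps:

# Let bridged traffic pass through iptables and enable IP forwarding
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system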

IV. Install and Configure k8s

1. Install kubelet, kubeadm, and kubectl (run on the master node)

# Install base dependencies
apt-get install -y ca-certificates curl software-properties-common apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Configure the Aliyun k8s apt source
echo 'deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main' >>/etc/apt/sources.list.d/kubernetes.list
# Refresh the package index
apt-get update -y
# Install kubeadm, kubectl, and kubelet
apt-get install -y kubelet=1.23.1-00 kubeadm=1.23.1-00 kubectl=1.23.1-00
# Pin the packages so apt upgrade skips them; to upgrade them deliberately,
# unhold first, then hold again afterwards
apt-mark hold kubelet kubeadm kubectl

If the install fails, remove any previously installed kubectl, kubeadm, and kubelet first, then rerun the install, as sketched below.
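
A minimal sketch of that cleanup (hedged; only needed if older versions are present):

# Unpin and remove any previously installed versions, then retry the install
sudo apt-mark unhold kubelet kubeadm kubectl
sudo apt-get remove -y kubelet kubeadm kubectl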

2. Deploy the master (run on the master)

Create a kubeadm-config.yaml configuration file with the following content:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.146.200
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: cgroupfs

Change the advertiseAddress parameter in the file to the master's own LAN address (192.168.146.200 in this setup).

Before running kubeadm init, you can run kubeadm config images pull to test connectivity to gcr.io and verify that the images can be pulled. If your server is in mainland China, "k8s.gcr.io", "gcr.io", and "quay.io" are generally unreachable.

Test it first:

kubeadm config images pull

If the pull fails, continue below; if it succeeds, skip straight to step IV.3.

2.1 List the images kubeadm depends on

# List the images kubeadm needs
kubeadm config images list
# Sample output. Note: without a version flag this may resolve to the latest
# 1.23 patch (v1.23.8 here) rather than the 1.23.1 set in kubeadm-config.yaml;
# pass --kubernetes-version v1.23.1 to list tags that match the config.
k8s.gcr.io/kube-apiserver:v1.23.8
k8s.gcr.io/kube-controller-manager:v1.23.8
k8s.gcr.io/kube-scheduler:v1.23.8
k8s.gcr.io/kube-proxy:v1.23.8
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

2.2 Work around this by pulling the images from a Chinese mirror (some, such as k8s.gcr.io/coredns/coredns:v1.8.6, can be pulled directly from Docker Hub)

# Pull from the Chinese mirror
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
docker pull coredns/coredns:1.8.6

2.3 Re-tag the images to the names listed in 2.1 (careful: some target tags carry a leading v and some do not)

# Re-tag the pulled images to the names kubeadm config expects
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8 k8s.gcr.io/kube-apiserver:v1.23.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8 k8s.gcr.io/kube-controller-manager:v1.23.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8 k8s.gcr.io/kube-scheduler:v1.23.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8 k8s.gcr.io/kube-proxy:v1.23.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0
docker tag coredns/coredns:1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
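
The same pull-and-retag sequence can be scripted. A hedged convenience sketch that mirrors the commands above (adjust the tags if kubeadm config images list printed different ones):

# Pull each image from the mirror and re-tag it under k8s.gcr.io
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.23.8 kube-controller-manager:v1.23.8 \
           kube-scheduler:v1.23.8 kube-proxy:v1.23.8 pause:3.6 etcd:3.5.1-0; do
  docker pull "$MIRROR/$img"
  docker tag "$MIRROR/$img" "k8s.gcr.io/$img"
done
# coredns lives under a nested path upstream, and its target tag carries a "v"
docker pull coredns/coredns:1.8.6
docker tag coredns/coredns:1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6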

3. Run the initialization

kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "master" could not be reached
	[WARNING Hostname]: hostname "master": lookup master on 127.0.0.53:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.146.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.146.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.146.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.502883 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.146.200:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:9d783c63bf9a20ce9634788b96905f369668d9f439e444e271907561455b8779 

Once init succeeds, configure kubectl access as the output instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Get the token for joining the cluster

The bootstrap token created by kubeadm init expires after 24 hours (the ttl in kubeadm-config.yaml above); this command prints a fresh join command at any time:

kubeadm token create --print-join-command
kubeadm join 192.168.146.200:6443 --token ikx47e.4knl5pwe9fenrwds --discovery-token-ca-cert-hash sha256:9d783c63bf9a20ce9634788b96905f369668d9f439e444e271907561455b8779 
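
If you ever need to reconstruct the --discovery-token-ca-cert-hash value yourself, the upstream-documented approach is to hash the cluster CA's public key on the master (shown here as a sketch):

# Recompute the sha256 CA public-key hash used by kubeadm join
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'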

5. Deploy the worker nodes (run on each node)

5.1 Install the environment

# Install base dependencies
apt-get install -y ca-certificates curl software-properties-common apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Configure the Aliyun k8s apt source
vim /etc/apt/sources.list.d/kubernetes.list
# add the following line:
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
# Refresh the package index
apt-get update -y
# Install kubeadm, kubectl, and kubelet
apt-get install -y kubelet=1.23.1-00 kubeadm=1.23.1-00 kubectl=1.23.1-00
# Pin the packages so apt upgrade skips them; unhold before upgrading,
# then hold again afterwards
apt-mark hold kubelet kubeadm kubectl

5.2 Join the cluster

Join the cluster using the join command obtained earlier from kubeadm token create --print-join-command:

kubeadm join 192.168.146.200:6443 --token ikx47e.4knl5pwe9fenrwds --discovery-token-ca-cert-hash sha256:9d783c63bf9a20ce9634788b96905f369668d9f439e444e271907561455b8779

If this step fails (for example, because of leftover state from a previous attempt), reset the node and retry the join:

kubeadm reset
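
kubeadm reset leaves a few things behind and says so in its own output; a hedged cleanup sketch before retrying the join:

# Remove leftover CNI config and any stale kubeconfig that reset does not touch
sudo rm -rf /etc/cni/net.d
rm -f $HOME/.kube/config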

After a successful join, you can see the new nodes from the master with kubectl get nodes.

V. Deploy Calico (on the master)

With the steps above complete, the cluster is up, but the nodes still show NotReady (see the output in section VI) because no pod network is installed yet. Install Calico from the master:

kubectl apply -f https://docs.projectcalico.org/v3.21/manifests/calico.yaml

Once the manifests are applied, wait for k8s to pull up the Calico pods and for the nodes to become Ready.
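
To see when that happens, you can watch the kube-system pods (an optional check, not in the original steps):

# Watch until the calico-node and coredns pods are Running; Ctrl-C to stop
kubectl get pods -n kube-system -w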

VI. Done

Run this on the master and the nodes show Ready (k8s-noden1 below is still NotReady because its Calico pod is still starting; it catches up shortly):

kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
k8s-noden1   NotReady   <none>                 14m   v1.23.1
k8s-noden2   Ready      <none>                 14m   v1.23.1
master       Ready      control-plane,master   31m   v1.23.1
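
As an optional smoke test (a hedged sketch; the nginx deployment here is illustrative, not part of the original guide), schedule a pod and confirm it lands on a worker:

# Create a throwaway nginx deployment and check where it gets scheduled
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
# Clean up when done
kubectl delete deployment nginx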
