Setting Up a Kubernetes Cluster with kubeadm

Table of Contents

K8s Overview

Definition and Features of K8s

Use Cases for K8s

K8s Architecture

K8s Deployment

Node Deployment

Base Environment Configuration

Building the Kubernetes Cluster

Verifying the Cluster Setup with a Small Experiment


K8s Overview

Definition and Features of K8s

K8s, short for Kubernetes, is an open-source container orchestration platform that automates the deployment, scaling, and management of applications. It manages containerized applications across multiple hosts in a cloud environment and makes deploying them simple and efficient. K8s offers a number of compelling features, including:

  • Automation: K8s can automatically deploy, restart, replicate, and scale applications.
  • Portability: it supports public, private, hybrid, and multi-cloud environments.
  • Extensibility: K8s is modular, pluggable, hookable, and composable.
  • Service discovery and load balancing: K8s can expose containers and, under heavy traffic, balance network traffic across them.
  • Self-healing: if a container fails, K8s restarts it automatically; if a node fails, it reschedules the applications onto other nodes.
  • Storage orchestration: K8s can automatically provision storage volumes to persist container data.

Use Cases for K8s

K8s fits a wide range of scenarios, especially microservice architectures, where it provides seamless service deployment from development to production. Common use cases include:

  • Automated operations platforms: small and mid-sized companies build automated operations platforms on K8s to cut costs and improve efficiency.
  • Better server utilization: containerized deployment makes more efficient use of server resources.
  • Seamless service migration: containerized applications move easily between development, test, and production while keeping the environments consistent.
  • Microservice architecture: K8s is closely tied to microservices and lets each microservice in a distributed system scale independently.

K8s Architecture

The K8s architecture is built around the needs of modern software development. It consists of control nodes (Master) and worker nodes (Node). The Master node runs the key components API Server, Scheduler, Controller Manager, and etcd, while each Node runs Kubelet, Kube-proxy, and Docker (Docker Engine).

  • Control plane: made up of the Master nodes; it makes decisions for and controls the whole cluster.
  • Worker nodes: the Nodes that actually execute workloads, i.e. run the containers.
  • API Server: exposes the REST API and is the entry point for all cluster operations.
  • Scheduler: handles resource scheduling and decides which Node each Pod runs on.
  • Controller Manager: maintains cluster state, e.g. failure detection and automatic scaling.
  • etcd: a key-value store that holds the cluster's state.
  • Kubelet: the agent on each Node; it talks to the Master and manages the container lifecycle.
  • Kube-proxy: the network proxy on each Node; it implements Service communication and load balancing.
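
To make this concrete: in a kubeadm cluster the Master components listed above run as static Pods whose manifests live under /etc/kubernetes/manifests, kubelet runs as a systemd service on every machine, and kube-proxy runs as a DaemonSet. Once the cluster built below is up, you can confirm this yourself (standard kubeadm/kubectl behavior, not shown in the original article):

[root@master ~]# ls /etc/kubernetes/manifests          // static Pod manifests for the control plane
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
[root@master ~]# systemctl status kubelet              // kubelet itself is a host-level systemd service
[root@master ~]# kubectl get pods -n kube-system -o wide    // kube-proxy and CoreDNS run as Pods on each node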

K8s Deployment

Node Deployment

Server   IP               Configuration
master   192.168.100.120  2 cores, 4 GB RAM, 100 GB disk
node1    192.168.100.121  4 cores, 8 GB RAM, 100 GB disk
node2    192.168.100.122  2 cores, 4 GB RAM, 100 GB disk

Base Environment Configuration

(1) Set the hostnames

[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# bash
[root@master ~]#
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# bash
[root@node1 ~]#
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# bash
[root@node2 ~]#

(2) Edit the hosts file on all three machines

[root@localhost ~]# vi /etc/hosts
192.168.100.120 master
192.168.100.121 node1
192.168.100.122 node2
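
As a quick sanity check (not part of the original steps), verify that the names resolve from each machine:

[root@master ~]# ping -c 1 node1
[root@master ~]# ping -c 1 node2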

(3) Disable the firewall, SELinux, and swap

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab     // comment out the swap entry in fstab
[root@localhost ~]# swapoff -a
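
A quick way to confirm swap is off (an optional check, not in the original):

[root@localhost ~]# free -m    // the Swap line should now show 0 total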

(4) Pass bridged IPv4 traffic to the iptables chains

[root@localhost ~]# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@localhost ~]# sysctl --system     // apply the settings
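
These bridge sysctls only exist once the br_netfilter kernel module is loaded; the original steps assume it already is. If sysctl --system complains that the keys are missing, load the module explicitly (an added step, not from the original; the file name under /etc/modules-load.d/ is just a convention):

[root@localhost ~]# modprobe br_netfilter
[root@localhost ~]# echo "br_netfilter" > /etc/modules-load.d/k8s.conf    // load it again on boot
[root@localhost ~]# sysctl --system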

Building the Kubernetes Cluster

(1) Install Docker

  • The Docker version has to be compatible with the Kubernetes version.
  • This walkthrough uses Kubernetes 1.23.6, which pairs with Docker 20.x.
[root@localhost ~]# yum install -y wget
[root@localhost ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@localhost ~]# yum -y install docker-ce
[root@localhost ~]# systemctl enable docker && systemctl start docker
[root@localhost ~]# docker --version
Docker version 20.10.0, build 2ae903e

(2) Add the Alibaba Cloud yum repository

[root@localhost ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes 
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

(3) Install kubeadm, kubelet, and kubectl

[root@localhost ~]# yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
[root@localhost ~]# systemctl enable kubelet

(4) Configure /etc/docker/daemon.json

  • Do this on all three servers.
  • Docker does not create daemon.json by default, so you have to create it yourself. Docker reads this file at startup on every platform and regardless of how it is launched, which gives you one place to manage the daemon configuration across systems.
[root@localhost ~]# vi /etc/docker/daemon.json
 {
  "registry-mirrors": ["https://n5jclonh.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.100.122:8858"],  
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
[root@localhost ~]# systemctl restart kubelet
  • Parameter notes:
  • "registry-mirrors": [""] sets the registry mirror (image accelerator) address.
  • "insecure-registries": [""] sets a private registry address, which may be plain HTTP.
  • "exec-opts": ["native.cgroupdriver=systemd"] sets the runtime execution options. If left unset the cgroup driver defaults to cgroupfs, and with that default the kubeadm initialization of the K8s master will fail.
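
To confirm the cgroup driver change took effect after the restart (a quick check, not in the original):

[root@localhost ~]# docker info | grep -i "cgroup driver"
 Cgroup Driver: systemd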

(5) Initialize the Kubernetes master (control plane)

The flags below advertise the API server on the master's IP, pull the control-plane images from the Alibaba Cloud mirror instead of the default registry, pin the version to v1.23.6, and set non-overlapping CIDR ranges for Services (10.96.0.0/12) and Pods (10.244.0.0/16).

[root@master ~]# kubeadm init \
--apiserver-advertise-address=192.168.100.120 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.100.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.100.120 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.100.120 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.505160 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: trcpxa.t504p8y5boqni2od
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.120:6443 --token trcpxa.t504p8y5boqni2od \
        --discovery-token-ca-cert-hash sha256:20d533aa4690d490f78b905ac6fd2ae595f35c456ddad540f939389a914382a9

Initialization produces a kubeadm join command that the worker nodes use to join the cluster. Before using it, set up kubectl access on the master as the output instructs:

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

To add worker nodes to the cluster, run the kubeadm join command printed at the end of the master initialization on each node:

[root@node1 ~]# kubeadm join 192.168.100.120:6443 --token trcpxa.t504p8y5boqni2od \
>--discovery-token-ca-cert-hash sha256:20d533aa4690d490f78b905ac6fd2ae595f35c456ddad540f939389a914382a9 
[root@node2 ~]# kubeadm join 192.168.100.120:6443 --token trcpxa.t504p8y5boqni2od \
>--discovery-token-ca-cert-hash sha256:20d533aa4690d490f78b905ac6fd2ae595f35c456ddad540f939389a914382a9 
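
If the join command has been lost, or the bootstrap token (valid for 24 hours by default) has expired, a fresh one can be generated on the master at any time; this is standard kubeadm usage rather than part of the original walkthrough:

[root@master ~]# kubeadm token create --print-join-command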

You can then check the node list from the master:

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   15m     v1.23.6
node1    NotReady   <none>                 3m10s   v1.23.6
node2    NotReady   <none>                 3m8s    v1.23.6

Because the network plugin has not been deployed yet, all nodes are still NotReady. Next, download the Calico manifest and prepare it:

[root@master ~]# cd /opt/
[root@master opt]# mkdir k8s
[root@master opt]# cd k8s/
[root@master k8s]# vi /etc/hosts
199.232.68.133 raw.githubusercontent.com
[root@master k8s]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/calico.yaml -o calico.yaml
[root@master k8s]# ls 
calico.yaml
[root@master k8s]# grep image calico.yaml
          image: docker.io/calico/cni:v3.26.4
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/cni:v3.26.4
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/node:v3.26.4
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/node:v3.26.4
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/kube-controllers:v3.26.4
          imagePullPolicy: IfNotPresent
  // strip the docker.io/ prefix from the image names so slow pulls don't cause failures
[root@master k8s]# sed -i 's#docker.io/##g' calico.yaml
[root@master k8s]# grep image calico.yaml
          image: calico/cni:v3.26.4
          imagePullPolicy: IfNotPresent
          image: calico/cni:v3.26.4
          imagePullPolicy: IfNotPresent
          image: calico/node:v3.26.4
          imagePullPolicy: IfNotPresent
          image: calico/node:v3.26.4
          imagePullPolicy: IfNotPresent
          image: calico/kube-controllers:v3.26.4
          imagePullPolicy: IfNotPresent
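
One additional thing worth checking (not covered in the original): calico.yaml ships with a commented-out CALICO_IPV4POOL_CIDR environment variable. If Calico does not pick up the Pod CIDR automatically, you may need to uncomment it and set it to the --pod-network-cidr passed to kubeadm init, for example:

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"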

(6) Deploy the CNI network plugin

[root@master k8s]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
[root@master k8s]# kubectl get po -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
calico-kube-controllers-b5d5cbbb-2295h   1/1     Running   0          3m18s
calico-node-hr2f8                        1/1     Running   0          3m18s
calico-node-jgbt2                        1/1     Running   0          3m18s
calico-node-tsv4m                        1/1     Running   0          3m18s
coredns-6d8c4cb4d-8svwc                  1/1     Running   0          27m
coredns-6d8c4cb4d-99sn4                  1/1     Running   0          27m
etcd-master                              1/1     Running   0          27m
kube-apiserver-master                    1/1     Running   0          27m
kube-controller-manager-master           1/1     Running   0          27m
kube-proxy-7nc4z                         1/1     Running   0          27m
kube-proxy-kq2d4                         1/1     Running   0          15m
kube-proxy-vsdm6                         1/1     Running   0          15m
kube-scheduler-master                    1/1     Running   0          27m
[root@master k8s]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   33m   v1.23.6
node1    Ready    <none>                 21m   v1.23.6
node2    Ready    <none>                 21m   v1.23.6

Note: the Pods in the cluster need to pull the Calico images listed above (the ones whose docker.io/ prefix we removed).

The calico-kube-controllers Pod may be held back briefly by node taints; just wait for it to be scheduled. Taints and tolerations will be covered in a later post.

You can inspect a Pod's details with the following command.

[root@master k8s]# kubectl describe po <_NAME_> -n kube-system

Verifying the Cluster Setup with a Small Experiment

[root@master k8s]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master k8s]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master k8s]# kubectl get pod,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-85b98978db-v8k5m   0/1     ContainerCreating   0          23s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        35m
service/nginx        NodePort    10.97.22.88   <none>        80:31053/TCP   19s
[root@master k8s]# curl 192.168.100.120:31053
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
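
As an extra check (not in the original), the NodePort is reachable through any node's IP, and the Deployment can be scaled to confirm that new Pods are placed on the worker nodes:

[root@master k8s]# curl 192.168.100.121:31053         // the same page served via node1
[root@master k8s]# kubectl scale deployment nginx --replicas=3
[root@master k8s]# kubectl get pod -o wide            // the replicas are spread across node1 and node2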

At this point the Kubernetes cluster is up and running, but there is one small wrinkle: kubectl on the worker nodes cannot query node information.

Copy admin.conf from the master to the worker nodes and set the KUBECONFIG environment variable:

[root@master k8s]# scp /etc/kubernetes/admin.conf root@node1:/etc/kubernetes
[root@master k8s]# scp /etc/kubernetes/admin.conf root@node2:/etc/kubernetes
[root@node1 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@node1 ~]# source ~/.bash_profile
[root@node1 ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   45m   v1.23.6
node1    Ready    <none>                 32m   v1.23.6
node2    Ready    <none>                 32m   v1.23.6
[root@node2 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@node2 ~]# source ~/.bash_profile
[root@node2 ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   45m   v1.23.6
node1    Ready    <none>                 32m   v1.23.6
node2    Ready    <none>                 32m   v1.23.6

This is a record of my own learning. Please point out anything I got wrong, and thank you for reading!
