Deploying a Kubernetes 1.29 Cluster with kubeadm

1. Environment planning

192.168.140.10  k8s-master.linux.com  2 CPU / 4 GB RAM
192.168.140.11  k8s-node01.linux.com  2 CPU / 8 GB RAM
192.168.140.12  k8s-node02.linux.com  2 CPU / 8 GB RAM

2. Basic host environment configuration

2.1 Disable the firewall and SELinux, and configure time synchronization

[root@k8s-master ~]# crontab -l
*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com &> /dev/null
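
Only the time-sync cron job is shown above; a minimal sketch of the firewall and SELinux steps this heading implies, to be run on every host (assumes CentOS 7 with firewalld and ntpdate available):

# stop the firewall now and keep it off after reboot
systemctl disable --now firewalld
# switch SELinux to permissive immediately, and disable it permanently
setenforce 0
sed -ri 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# one-off time sync; the cron job above keeps the clock fresh
ntpdate ntp.aliyun.com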

2.2 Configure passwordless SSH between all hosts

[root@k8s-master ~]# ssh-keygen -t rsa 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:cr/9cErpOZuf5odAZntHxChTqREWotP0Xhdd8f/TEoA root@k8s-master.linux.com
The key's randomart image is:
+---[RSA 2048]----+
|          o ++.*=|
|         + =+ o *|
|        o E o=..o|
|         . .=o .o|
|      . S  +.....|
|       o .  o...+|
|          . +oo+o|
|           =.*ooo|
|          . BB=. |
+----[SHA256]-----+
[root@k8s-master ~]# 
[root@k8s-master ~]# mv /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
[root@k8s-master ~]# 
[root@k8s-master ~]# scp -r /root/.ssh/ root@192.168.140.11:/root/
[root@k8s-master ~]# scp -r /root/.ssh/ root@192.168.140.12:/root/
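
As a quick sanity check, every host should now be reachable from the master without a password; a small loop using the IPs from the plan above:

for i in 10 11 12
do
  ssh root@192.168.140.$i hostname
done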

2.3 Add hostname resolution on all hosts

[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.140.10	k8s-master.linux.com
192.168.140.11	k8s-node01.linux.com
192.168.140.12	k8s-node02.linux.com
[root@k8s-master ~]# 
[root@k8s-master ~]# for i in 11 12
> do
> scp /etc/hosts root@192.168.140.$i:/etc/hosts
> done
hosts                                                                                               100%  267    57.0KB/s   00:00    
hosts                                                                                               100%  267    56.4KB/s   00:00    
[root@k8s-master ~]# 

2.4 Disable swap on all hosts

[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# 
[root@k8s-master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3770         228        3368          11         174        3332
Swap:             0           0           0
[root@k8s-master ~]# 
[root@k8s-master ~]# sed -ri '/swap/d' /etc/fstab 

2.5 Tune kernel parameters on all hosts

[root@k8s-master ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2130720
[root@k8s-master ~]# 
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf 

Load the required kernel modules:

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# modprobe ip_conntrack
[root@k8s-master ~]# 
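
The sysctl file and modules so far exist only on the master, and the modprobe calls do not survive a reboot. A hedged sketch that persists br_netfilter via systemd-modules-load and pushes the settings to the worker nodes, reusing this document's loop pattern:

# load br_netfilter automatically at boot
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
# copy the sysctl file to the worker nodes and apply it there
for i in 11 12
do
  scp /etc/sysctl.d/k8s.conf root@192.168.140.$i:/etc/sysctl.d/k8s.conf
  ssh root@192.168.140.$i "modprobe br_netfilter; sysctl -p /etc/sysctl.d/k8s.conf"
done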

2.6 Install basic software dependencies

[root@k8s-master ~]# for i in 10 11 12
> do
> ssh root@192.168.140.$i yum install -y wget jq psmisc net-tools nfs-utils socat telnet device-mapper-persistent-data lvm2 git tar zip curl conntrack ipvsadm ipset iptables sysstat libseccomp
> done

2.7 Load the LVS (IPVS) load-balancing modules

[root@k8s-master ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
#
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
[root@k8s-master ~]# 
[root@k8s-master ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@k8s-master ~]# 
[root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  0 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          139264  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
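
The module script above only ran on the master. A sketch to make it executable (the conventional state for files under /etc/sysconfig/modules/) and run it on the worker nodes as well:

chmod 755 /etc/sysconfig/modules/ipvs.modules
for i in 11 12
do
  scp /etc/sysconfig/modules/ipvs.modules root@192.168.140.$i:/etc/sysconfig/modules/
  ssh root@192.168.140.$i "chmod 755 /etc/sysconfig/modules/ipvs.modules; bash /etc/sysconfig/modules/ipvs.modules"
done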

3. Install the container engine software on all hosts

Kubernetes removed the dockershim in v1.24, so Docker can no longer serve as the container runtime; containerd is used instead.

3.1 Install docker-ce

[root@k8s-master ~]# cat /etc/yum.repos.d/docker-ce.repo 

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[root@k8s-master ~]# for i in 10 11 12 
> do
> scp /etc/yum.repos.d/docker-ce.repo root@192.168.140.$i:/etc/yum.repos.d/docker-ce.repo 
> done
docker-ce.repo                                                                                      100% 2081     4.7MB/s   00:00    
docker-ce.repo                                                                                      100% 2081     2.4MB/s   00:00    
docker-ce.repo                                                                                      100% 2081     1.8MB/s   00:00    

[root@k8s-master ~]# for i in 10 11 12
> do
> ssh root@192.168.140.$i yum install -y docker-ce
> done
[root@k8s-master ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://rywdmoco.mirror.aliyuncs.com"]
}
[root@k8s-master ~]# 
[root@k8s-master ~]# systemctl enable --now docker
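
docker-ce was installed on all three hosts above, but daemon.json only exists on the master; a sketch to distribute the mirror configuration and enable the service everywhere, following the same loop pattern:

for i in 11 12
do
  scp /etc/docker/daemon.json root@192.168.140.$i:/etc/docker/daemon.json
  ssh root@192.168.140.$i "systemctl enable --now docker && systemctl restart docker"
done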

3.2 Install containerd

[root@k8s-master ~]# yum install -y containerd.io 
[root@k8s-master ~]# containerd config default > /etc/containerd/config.toml
[root@k8s-master ~]# vim /etc/containerd/config.toml 
SystemdCgroup = true
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"

[root@k8s-master ~]# systemctl enable --now containerd.service 
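
The two values edited in vim above live in different sections of config.toml: SystemdCgroup = true sits under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options], and sandbox_image under [plugins."io.containerd.grpc.v1.cri"]. A scriptable sketch of the same edits; note that the init output in section 4.2 warns that kubeadm 1.29 expects pause:3.9, so substituting 3.9 here avoids that warning (keeping 3.6 as above merely triggers it):

# flip the runc cgroup driver to systemd
sed -ri 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# point the sandbox image at the Aliyun mirror
sed -ri 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
systemctl restart containerd.service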

3.3 Install kubeadm, kubectl, and kubelet

[root@k8s-master ~]# cat /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/
enabled=1
gpgcheck=0
[root@k8s-master ~]# yum install -y kubeadm-1.29.1 kubectl-1.29.1 kubelet-1.29.1 
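
kubeadm, kubectl, and kubelet are needed on every host, not just the master; a sketch reusing the loop pattern from earlier sections:

for i in 11 12
do
  scp /etc/yum.repos.d/k8s.repo root@192.168.140.$i:/etc/yum.repos.d/k8s.repo
  ssh root@192.168.140.$i yum install -y kubeadm-1.29.1 kubectl-1.29.1 kubelet-1.29.1
done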

3.4 Start kubelet

[root@k8s-master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

[root@k8s-master ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

3.5 Configure the crictl client tool

[root@k8s-master ~]# cat /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
[root@k8s-master ~]# 
[root@k8s-master ~]# systemctl restart containerd.service 
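
With the endpoints configured, crictl should talk to containerd directly. A quick check, plus a sketch that copies the config to the worker nodes:

# show runtime status and config through the CRI endpoint
crictl info
for i in 11 12
do
  scp /etc/crictl.yaml root@192.168.140.$i:/etc/crictl.yaml
done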

4. Deploy the Kubernetes cluster components

4.1 Create the cluster initialization file on the master node

[root@k8s-master ~]# kubeadm config print init-defaults > init.yaml
[root@k8s-master ~]# cat init.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.140.10           # address the kube-apiserver listens on
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master.linux.com                 # name of the master (control-plane) node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # image registry address
kind: ClusterConfiguration
kubernetesVersion: 1.29.1                    # cluster version; must match the kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.88.0.0/16                    # pod subnet: the range pod IPs are allocated from
  serviceSubnet: 10.96.0.0/16                # service subnet: the range Service IPs are allocated from
scheduler: {}

4.2 Initialize the cluster

It is advisable to import the required images ahead of time. Images imported with docker are stored in Docker's own on-disk storage, while images imported with containerd for Kubernetes must live in the k8s.io namespace:

# ctr -n k8s.io image import xxxxx.tar 
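
A pull-based alternative to importing tarballs is built into kubeadm itself (the preflight output below also mentions it):

# kubeadm config images list --config=init.yaml
# kubeadm config images pull --config=init.yaml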

Reference image list on the master node:

[root@k8s-master ~]# crictl image ls
IMAGE                                                                         TAG                 IMAGE ID            SIZE
docker.io/calico/cni                                                          v3.27.0             8e8d96a874c0e       211MB
docker.io/calico/kube-controllers                                             v3.27.0             4e87edec0297d       75.5MB
docker.io/calico/node                                                         v3.27.0             1843802b91be8       342MB
registry.aliyuncs.com/google_containers/etcd                                  3.5.10-0            a0eed15eed449       149MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.5.10-0            a0eed15eed449       149MB
registry.aliyuncs.com/google_containers/kube-apiserver                        v1.29.1             53b148a9d1963       128MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.29.1             53b148a9d1963       128MB
registry.aliyuncs.com/google_containers/pause                                 3.9                 e6f1816883972       747kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       747kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   v1.11.1             cbb01a7bd410d       61.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.29.1             79d451ca186a6       33.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.29.1             43c6c10396b89       28.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.29.1             406945b511542       60.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.6                 6270bb605e12e       302kB

Reference image list on a worker node:

[root@k8s-node01 ~]# crictl image ls
IMAGE                                                            TAG                 IMAGE ID            SIZE
docker.io/calico/cni                                             v3.27.0             8e8d96a874c0e       211MB
docker.io/calico/kube-controllers                                v3.27.0             4e87edec0297d       75.5MB
docker.io/calico/node                                            v3.27.0             1843802b91be8       342MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.29.1             43c6c10396b89       28.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.6                 6270bb605e12e       302kB
[root@k8s-node01 ~]# 

Initialize the cluster:

[root@k8s-master ~]# kubeadm init --config=init.yaml  --ignore-preflight-errors=SystemVerification
[init] Using Kubernetes version: v1.29.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0724 11:30:56.143832   23538 checks.go:835] detected that the sandbox image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master.linux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.140.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master.linux.com localhost] and IPs [192.168.140.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master.linux.com localhost] and IPs [192.168.140.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.001829 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master.linux.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master.linux.com as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.140.10:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:005e565c273b5d4beadbaff11f556f5062030439896b2bc937fc7b81bc8475df 

4.3 Define the KUBECONFIG environment variable

Purpose: allow the kubectl client tool to reach the cluster.

[root@k8s-master ~]# vim /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-master ~]# 
[root@k8s-master ~]# source /etc/profile

4.4 Add the worker nodes

[root@k8s-node01 ~]# kubeadm join 192.168.140.10:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:005e565c273b5d4beadbaff11f556f5062030439896b2bc937fc7b81bc8475df
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
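
The token abcdef.0123456789abcdef comes from init.yaml and expires after 24 hours (ttl: 24h0m0s). If it has expired before a node joins, a fresh join command can be printed on the master:

# run on the master to generate a new token and print the full join command
kubeadm token create --print-join-command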

4.5 Check the status after initialization

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS     ROLES           AGE     VERSION
k8s-master.linux.com   NotReady   control-plane   11m     v1.29.1
k8s-node01.linux.com   NotReady   <none>          2m30s   v1.29.1
k8s-node02.linux.com   NotReady   <none>          118s    v1.29.1
[root@k8s-master ~]# kubectl get pod -A -o wide
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE     IP               NODE                   NOMINATED NODE   READINESS GATES
kube-system   coredns-5f98f8d567-ghrpr                       0/1     Pending   0          13m     <none>           <none>                 <none>           <none>
kube-system   coredns-5f98f8d567-txgzh                       0/1     Pending   0          13m     <none>           <none>                 <none>           <none>
kube-system   etcd-k8s-master.linux.com                      1/1     Running   0          13m     192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-apiserver-k8s-master.linux.com            1/1     Running   0          13m     192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-controller-manager-k8s-master.linux.com   1/1     Running   0          13m     192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-proxy-945mx                               1/1     Running   0          3m48s   192.168.140.12   k8s-node02.linux.com   <none>           <none>
kube-system   kube-proxy-hmnrm                               1/1     Running   0          4m20s   192.168.140.11   k8s-node01.linux.com   <none>           <none>
kube-system   kube-proxy-j5hnw                               1/1     Running   0          13m     192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-scheduler-k8s-master.linux.com            1/1     Running   0          13m     192.168.140.10   k8s-master.linux.com   <none>           <none>
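
Both symptoms above are expected at this point: the nodes report NotReady and the coredns pods remain Pending because no CNI network plugin has been installed yet. Deploying Calico in the next section resolves both.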

5. Deploy Calico to enable container network communication

Calico's design is based on the BGP protocol.

[root@k8s-master ~]# vim calico.yaml 
  - name: CALICO_IPV4POOL_CIDR
    value: "10.88.0.0/16"

Import the Calico images in advance (on every node):

ctr -n k8s.io image import calico_node_v3.27.0.tar 
ctr -n k8s.io image import calico_kube-controllers_v3.27.0.tar 
ctr -n k8s.io image import calico_cni_v3.27.0.tar 
[root@k8s-master ~]# kubectl create -f calico.yaml 

6. Kubernetes cluster deployment complete

6.1 Check the running state of the core components

[root@k8s-master ~]# kubectl get pod -A -o wide 
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE    IP               NODE                   NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-5fc7d6cf67-jg28l       1/1     Running   0          117m   10.88.242.129    k8s-node02.linux.com   <none>           <none>
kube-system   calico-node-2wdsg                              1/1     Running   0          117m   192.168.140.11   k8s-node01.linux.com   <none>           <none>
kube-system   calico-node-dt9n9                              1/1     Running   0          117m   192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   calico-node-nskx9                              1/1     Running   0          117m   192.168.140.12   k8s-node02.linux.com   <none>           <none>
kube-system   coredns-5f98f8d567-ghrpr                       1/1     Running   0          155m   10.88.179.2      k8s-master.linux.com   <none>           <none>
kube-system   coredns-5f98f8d567-txgzh                       1/1     Running   0          155m   10.88.179.1      k8s-master.linux.com   <none>           <none>
kube-system   etcd-k8s-master.linux.com                      1/1     Running   0          156m   192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-apiserver-k8s-master.linux.com            1/1     Running   0          156m   192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-controller-manager-k8s-master.linux.com   1/1     Running   0          156m   192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-proxy-945mx                               1/1     Running   0          146m   192.168.140.12   k8s-node02.linux.com   <none>           <none>
kube-system   kube-proxy-hmnrm                               1/1     Running   0          147m   192.168.140.11   k8s-node01.linux.com   <none>           <none>
kube-system   kube-proxy-j5hnw                               1/1     Running   0          155m   192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-scheduler-k8s-master.linux.com            1/1     Running   0          156m   192.168.140.10   k8s-master.linux.com   <none>           <none>

6.2 Check the running state of the nodes

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS   ROLES           AGE    VERSION
k8s-master.linux.com   Ready    control-plane   159m   v1.29.1
k8s-node01.linux.com   Ready    <none>          150m   v1.29.1
k8s-node02.linux.com   Ready    <none>          149m   v1.29.1