Deploying a Kubernetes cluster with sealos


# The following walks through deploying a three-node k8s cluster in about three minutes

1. Host mapping

# Confirm the three-node layout and the base environment on each host; nodes can be added or removed later

[root@k8s-master ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.110 k8s-master

192.168.10.111 k8s-node1

192.168.10.112 k8s-node2
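Before downloading anything, it can be worth a quick sanity check that the master resolves and reaches both nodes. This is optional and assumes the hostnames and IPs listed above:

# Optional connectivity check from k8s-master
for host in k8s-node1 k8s-node2; do
    ping -c 1 $host > /dev/null && echo "$host reachable" || echo "$host unreachable"
done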

2. Download the sealos tool

[root@k8s-master ~]# wget https://github.com/labring/sealos/releases/download/v4.3.0/sealos_4.3.0_linux_amd64.rpm

# Pull the rpm package from GitHub

--2023-12-03 19:43:12--  https://github.com/labring/sealos/releases/download/v4.3.0/sealos_4.3.0_linux_amd64.rpm

Resolving github.com (github.com)... 20.205.243.166

Connecting to github.com (github.com)|20.205.243.166|:443... connected.

HTTP request sent, awaiting response... 302 Found

Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/144849757/69c10add-a85a-4230-ab86-aa5f8e982d4e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20231204%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231204T004313Z&X-Amz-Expires=300&X-Amz-Signature=4026ce3e9eee386f745ad24879350d87381be43893c12663a3851eeb53c4c506&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=144849757&response-content-disposition=attachment%3B%20filename%3Dsealos_4.3.0_linux_amd64.rpm&response-content-type=application%2Foctet-stream [following]

--2023-12-03 19:43:13--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/144849757/69c10add-a85a-4230-ab86-aa5f8e982d4e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20231204%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231204T004313Z&X-Amz-Expires=300&X-Amz-Signature=4026ce3e9eee386f745ad24879350d87381be43893c12663a3851eeb53c4c506&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=144849757&response-content-disposition=attachment%3B%20filename%3Dsealos_4.3.0_linux_amd64.rpm&response-content-type=application%2Foctet-stream

Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...

Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.

HTTP request sent, awaiting response... 200 OK

Length: 29076085 (28M) [application/octet-stream]

Saving to: ‘sealos_4.3.0_linux_amd64.rpm’

100%[=======================================================================================================>] 29,076,085  3.60MB/s   in 8.6s   

2023-12-03 19:43:23 (3.21 MB/s) - ‘sealos_4.3.0_linux_amd64.rpm’ saved [29076085/29076085]
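Optionally, confirm the package arrived intact before installing (standard rpm queries; output omitted here):

# Optional: check the file size and package metadata
ls -lh sealos_4.3.0_linux_amd64.rpm
rpm -qpi sealos_4.3.0_linux_amd64.rpm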

3. Install the tool

[root@k8s-master ~]# yum install -y sealos_4.3.0_linux_amd64.rpm

.........

Total size: 78 M

Installed size: 78 M

Downloading packages:

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

  Verifying  : sealos-4.3.0-1.x86_64                                                                                                         1/1

Installed:

  sealos.x86_64 0:4.3.0-1                                                                                                                        

Complete!
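After installation, confirm the binary is on the PATH before moving on (the exact version output will vary with the build):

# Verify the install
sealos version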

4. Cluster deployment

[root@k8s-master ~]# sealos run labring/kubernetes:v1.25.0 labring/helm:v3.8.2 labring/calico:v3.24.1  --masters 192.168.10.110 --nodes 192.168.10.111,192.168.10.112 -p 000000

# One master, two worker nodes
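The images passed to sealos run are cluster images: labring/kubernetes:v1.25.0 provides the Kubernetes base, labring/helm the package manager, and labring/calico the CNI. --masters and --nodes take comma-separated IP lists, and -p supplies the root SSH password used on every host. As a hedged sketch, a highly available variant would simply list more control-plane IPs (the extra addresses below are illustrative, not part of this lab):

# Illustrative HA variant: three masters, two workers (extra IPs are hypothetical)
sealos run labring/kubernetes:v1.25.0 labring/helm:v3.8.2 labring/calico:v3.24.1 \
  --masters 192.168.10.110,192.168.10.113,192.168.10.114 \
  --nodes 192.168.10.111,192.168.10.112 -p 000000

The log that follows is from the single-master command above.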

2023-12-03T19:51:21 info Start to create a new cluster: master [192.168.10.110], worker [192.168.10.111 192.168.10.112], registry 192.168.10.110

2023-12-03T19:51:21 info Executing pipeline Check in CreateProcessor.

2023-12-03T19:51:22 info checker:hostname []

2023-12-03T19:51:22 info checker:timeSync []

2023-12-03T19:51:22 info Executing pipeline PreProcess in CreateProcessor.

Resolving "labring/kubernetes" using unqualified-search registries (/etc/containers/registries.conf)

Trying to pull docker.io/labring/kubernetes:v1.25.0...

Getting image source signatures

Copying blob 9e8cd553f9c2 done  

Copying blob 4013845ba3fe done  

Copying blob 88af23a6a8b4 done  

Copying blob 0ad330619635 done  

Copying config 787c59ad33 done  

Writing manifest to image destination

Storing signatures

Resolving "labring/helm" using unqualified-search registries (/etc/containers/registries.conf)

Trying to pull docker.io/labring/helm:v3.8.2...

Getting image source signatures

Copying blob 53a6eade9e7e done  

Copying config 1123e8b4b4 done  

Writing manifest to image destination

Storing signatures

Resolving "labring/calico" using unqualified-search registries (/etc/containers/registries.conf)

Trying to pull docker.io/labring/calico:v3.24.1...

Getting image source signatures

Copying blob 740f1fdd328f done  

Copying config 6bbbb5354a done  

Writing manifest to image destination

Storing signatures

2023-12-03T20:01:13 info Executing pipeline RunConfig in CreateProcessor.

2023-12-03T20:01:13 info Executing pipeline MountRootfs in CreateProcessor.

2023-12-03T20:01:41 info Executing pipeline MirrorRegistry in CreateProcessor.         

2023-12-03T20:01:43 info Executing pipeline Bootstrap in CreateProcessor

 INFO [2023-12-03 20:01:43] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...

192.168.10.111:22        INFO [2023-12-03 20:01:43] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...

192.168.10.112:22        INFO [2023-12-03 20:01:43] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...

which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)

 WARN [2023-12-03 20:01:43] >> Replace disable_apparmor = false to disable_apparmor = true

 INFO [2023-12-03 20:01:43] >> check root,port,cri success

192.168.10.111:22       which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)

192.168.10.112:22       which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)

192.168.10.111:22        WARN [2023-12-03 20:01:43] >> Replace disable_apparmor = false to disable_apparmor = true

192.168.10.111:22        INFO [2023-12-03 20:01:43] >> check root,port,cri success

192.168.10.112:22        WARN [2023-12-03 20:01:43] >> Replace disable_apparmor = false to disable_apparmor = true

192.168.10.112:22        INFO [2023-12-03 20:01:43] >> check root,port,cri success

2023-12-03T20:01:43 info domain sealos.hub:192.168.10.110 append success

192.168.10.111:22       2023-12-03T20:01:44 info domain sealos.hub:192.168.10.110 append success

192.168.10.112:22       2023-12-03T20:01:44 info domain sealos.hub:192.168.10.110 append success

Created symlink from /etc/systemd/system/multi-user.target.wants/registry.service to /etc/systemd/system/registry.service.

 INFO [2023-12-03 20:01:44] >> Health check registry!

 INFO [2023-12-03 20:01:44] >> registry is running

 INFO [2023-12-03 20:01:44] >> init registry success

Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.

192.168.10.112:22       Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.

192.168.10.111:22       Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.

 INFO [2023-12-03 20:01:47] >> Health check containerd!

 INFO [2023-12-03 20:01:47] >> containerd is running

 INFO [2023-12-03 20:01:47] >> init containerd success

Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.

 INFO [2023-12-03 20:01:47] >> Health check image-cri-shim!

 INFO [2023-12-03 20:01:47] >> image-cri-shim is running

 INFO [2023-12-03 20:01:47] >> init shim success

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

modprobe: ERROR: could not insert 'bridge': Unknown symbol in module, or unknown parameter (see dmesg)

modprobe: ERROR: could not insert 'ip_vs_rr': Unknown symbol in module, or unknown parameter (see dmesg)

192.168.10.112:22        INFO [2023-12-03 20:01:47] >> Health check containerd!

* Applying /usr/lib/sysctl.d/00-system.conf ...

net.bridge.bridge-nf-call-ip6tables = 0

net.bridge.bridge-nf-call-iptables = 0

net.bridge.bridge-nf-call-arptables = 0

* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...

kernel.yama.ptrace_scope = 0

* Applying /usr/lib/sysctl.d/50-default.conf ...

kernel.sysrq = 16

kernel.core_uses_pid = 1

kernel.kptr_restrict = 1

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.all.rp_filter = 1

net.ipv4.conf.default.accept_source_route = 0

net.ipv4.conf.all.accept_source_route = 0

net.ipv4.conf.default.promote_secondaries = 1

net.ipv4.conf.all.promote_secondaries = 1

fs.protected_hardlinks = 1

fs.protected_symlinks = 1

* Applying /etc/sysctl.d/99-sysctl.conf ...

fs.file-max = 1048576 # sealos

net.bridge.bridge-nf-call-ip6tables = 1 # sealos

net.bridge.bridge-nf-call-iptables = 1 # sealos

net.core.somaxconn = 65535 # sealos

net.ipv4.conf.all.rp_filter = 0 # sealos

net.ipv4.ip_forward = 1 # sealos

net.ipv4.ip_local_port_range = 1024 65535 # sealos

net.ipv4.tcp_keepalive_intvl = 30 # sealos

net.ipv4.tcp_keepalive_time = 600 # sealos

net.ipv4.vs.conn_reuse_mode = 0 # sealos

net.ipv4.vs.conntrack = 1 # sealos

192.168.10.112:22        INFO [2023-12-03 20:01:47] >> containerd is running

192.168.10.112:22        INFO [2023-12-03 20:01:47] >> init containerd success

net.ipv6.conf.all.forwarding = 1 # sealos

* Applying /etc/sysctl.conf ...

fs.file-max = 1048576 # sealos

net.bridge.bridge-nf-call-ip6tables = 1 # sealos

net.bridge.bridge-nf-call-iptables = 1 # sealos

net.core.somaxconn = 65535 # sealos

net.ipv4.conf.all.rp_filter = 0 # sealos

net.ipv4.ip_forward = 1 # sealos

net.ipv4.ip_local_port_range = 1024 65535 # sealos

net.ipv4.tcp_keepalive_intvl = 30 # sealos

net.ipv4.tcp_keepalive_time = 600 # sealos

net.ipv4.vs.conn_reuse_mode = 0 # sealos

net.ipv4.vs.conntrack = 1 # sealos

net.ipv6.conf.all.forwarding = 1 # sealos

192.168.10.111:22        INFO [2023-12-03 20:01:47] >> Health check containerd!

192.168.10.111:22        INFO [2023-12-03 20:01:47] >> containerd is running

192.168.10.111:22        INFO [2023-12-03 20:01:47] >> init containerd success

192.168.10.112:22       Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.

 INFO [2023-12-03 20:01:47] >> pull pause image sealos.hub:5000/pause:3.8

192.168.10.111:22       Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.

192.168.10.112:22        INFO [2023-12-03 20:01:47] >> Health check image-cri-shim!

192.168.10.111:22        INFO [2023-12-03 20:01:47] >> Health check image-cri-shim!

192.168.10.112:22        INFO [2023-12-03 20:01:47] >> image-cri-shim is running

192.168.10.112:22        INFO [2023-12-03 20:01:47] >> init shim success

192.168.10.112:22       127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

192.168.10.111:22        INFO [2023-12-03 20:01:47] >> image-cri-shim is running

192.168.10.112:22       ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.111:22        INFO [2023-12-03 20:01:47] >> init shim success

192.168.10.111:22       127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

192.168.10.111:22       ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.111:22       modprobe: ERROR: could not insert 'bridge': Unknown symbol in module, or unknown parameter (see dmesg)

192.168.10.111:22       modprobe: ERROR: could not insert 'ip_vs_rr': Unknown symbol in module, or unknown parameter (see dmesg)

192.168.10.111:22       * Applying /usr/lib/sysctl.d/00-system.conf ...

192.168.10.111:22       net.bridge.bridge-nf-call-ip6tables = 0

192.168.10.111:22       net.bridge.bridge-nf-call-iptables = 0

192.168.10.111:22       net.bridge.bridge-nf-call-arptables = 0

192.168.10.111:22       * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...

192.168.10.111:22       kernel.yama.ptrace_scope = 0

192.168.10.111:22       * Applying /usr/lib/sysctl.d/50-default.conf ...

192.168.10.111:22       kernel.sysrq = 16

192.168.10.111:22       kernel.core_uses_pid = 1

192.168.10.111:22       kernel.kptr_restrict = 1

192.168.10.111:22       net.ipv4.conf.default.rp_filter = 1

192.168.10.111:22       net.ipv4.conf.all.rp_filter = 1

192.168.10.111:22       net.ipv4.conf.default.accept_source_route = 0

192.168.10.111:22       net.ipv4.conf.all.accept_source_route = 0

192.168.10.111:22       net.ipv4.conf.default.promote_secondaries = 1

192.168.10.111:22       net.ipv4.conf.all.promote_secondaries = 1

192.168.10.111:22       fs.protected_hardlinks = 1

192.168.10.111:22       fs.protected_symlinks = 1

192.168.10.111:22       * Applying /etc/sysctl.d/99-sysctl.conf ...

192.168.10.111:22       fs.file-max = 1048576 # sealos

192.168.10.111:22       net.bridge.bridge-nf-call-ip6tables = 1 # sealos

192.168.10.111:22       net.bridge.bridge-nf-call-iptables = 1 # sealos

192.168.10.111:22       net.core.somaxconn = 65535 # sealos

192.168.10.111:22       net.ipv4.conf.all.rp_filter = 0 # sealos

192.168.10.111:22       net.ipv4.ip_forward = 1 # sealos

192.168.10.111:22       net.ipv4.ip_local_port_range = 1024 65535 # sealos

192.168.10.111:22       net.ipv4.tcp_keepalive_intvl = 30 # sealos

192.168.10.111:22       net.ipv4.tcp_keepalive_time = 600 # sealos

192.168.10.111:22       net.ipv4.vs.conn_reuse_mode = 0 # sealos

192.168.10.111:22       net.ipv4.vs.conntrack = 1 # sealos

192.168.10.111:22       net.ipv6.conf.all.forwarding = 1 # sealos

192.168.10.111:22       * Applying /etc/sysctl.conf ...

192.168.10.111:22       fs.file-max = 1048576 # sealos

192.168.10.111:22       net.bridge.bridge-nf-call-ip6tables = 1 # sealos

192.168.10.111:22       net.bridge.bridge-nf-call-iptables = 1 # sealos

192.168.10.111:22       net.core.somaxconn = 65535 # sealos

192.168.10.111:22       net.ipv4.conf.all.rp_filter = 0 # sealos

192.168.10.111:22       net.ipv4.ip_forward = 1 # sealos

192.168.10.111:22       net.ipv4.ip_local_port_range = 1024 65535 # sealos

192.168.10.111:22       net.ipv4.tcp_keepalive_intvl = 30 # sealos

192.168.10.111:22       net.ipv4.tcp_keepalive_time = 600 # sealos

192.168.10.111:22       net.ipv4.vs.conn_reuse_mode = 0 # sealos

192.168.10.111:22       net.ipv4.vs.conntrack = 1 # sealos

192.168.10.111:22       net.ipv6.conf.all.forwarding = 1 # sealos

192.168.10.112:22       modprobe: ERROR: could not insert 'bridge': Unknown symbol in module, or unknown parameter (see dmesg)

192.168.10.112:22       modprobe: ERROR: could not insert 'ip_vs_rr': Unknown symbol in module, or unknown parameter (see dmesg)

192.168.10.112:22       * Applying /usr/lib/sysctl.d/00-system.conf ...

192.168.10.112:22       net.bridge.bridge-nf-call-ip6tables = 0

192.168.10.112:22       net.bridge.bridge-nf-call-iptables = 0

192.168.10.112:22       net.bridge.bridge-nf-call-arptables = 0

192.168.10.112:22       * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...

192.168.10.112:22       kernel.yama.ptrace_scope = 0

192.168.10.112:22       * Applying /usr/lib/sysctl.d/50-default.conf ...

192.168.10.112:22       kernel.sysrq = 16

192.168.10.112:22       kernel.core_uses_pid = 1

192.168.10.112:22       kernel.kptr_restrict = 1

192.168.10.112:22       net.ipv4.conf.default.rp_filter = 1

192.168.10.112:22       net.ipv4.conf.all.rp_filter = 1

192.168.10.112:22       net.ipv4.conf.default.accept_source_route = 0

192.168.10.112:22       net.ipv4.conf.all.accept_source_route = 0

192.168.10.112:22       net.ipv4.conf.default.promote_secondaries = 1

192.168.10.112:22       net.ipv4.conf.all.promote_secondaries = 1

192.168.10.112:22       fs.protected_hardlinks = 1

192.168.10.112:22       fs.protected_symlinks = 1

192.168.10.112:22       * Applying /etc/sysctl.d/99-sysctl.conf ...

192.168.10.112:22       fs.file-max = 1048576 # sealos

192.168.10.112:22       net.bridge.bridge-nf-call-ip6tables = 1 # sealos

192.168.10.112:22       net.bridge.bridge-nf-call-iptables = 1 # sealos

192.168.10.112:22       net.core.somaxconn = 65535 # sealos

192.168.10.112:22       net.ipv4.conf.all.rp_filter = 0 # sealos

192.168.10.112:22       net.ipv4.ip_forward = 1 # sealos

192.168.10.112:22       net.ipv4.ip_local_port_range = 1024 65535 # sealos

192.168.10.112:22       net.ipv4.tcp_keepalive_intvl = 30 # sealos

192.168.10.112:22       net.ipv4.tcp_keepalive_time = 600 # sealos

192.168.10.112:22       net.ipv4.vs.conn_reuse_mode = 0 # sealos

192.168.10.112:22       net.ipv4.vs.conntrack = 1 # sealos

192.168.10.112:22       net.ipv6.conf.all.forwarding = 1 # sealos

192.168.10.112:22       * Applying /etc/sysctl.conf ...

192.168.10.112:22       fs.file-max = 1048576 # sealos

192.168.10.112:22       net.bridge.bridge-nf-call-ip6tables = 1 # sealos

192.168.10.112:22       net.bridge.bridge-nf-call-iptables = 1 # sealos

192.168.10.112:22       net.core.somaxconn = 65535 # sealos

192.168.10.112:22       net.ipv4.conf.all.rp_filter = 0 # sealos

192.168.10.112:22       net.ipv4.ip_forward = 1 # sealos

192.168.10.112:22       net.ipv4.ip_local_port_range = 1024 65535 # sealos

192.168.10.112:22       net.ipv4.tcp_keepalive_intvl = 30 # sealos

192.168.10.112:22       net.ipv4.tcp_keepalive_time = 600 # sealos

192.168.10.112:22       net.ipv4.vs.conn_reuse_mode = 0 # sealos

192.168.10.112:22       net.ipv4.vs.conntrack = 1 # sealos

192.168.10.112:22       net.ipv6.conf.all.forwarding = 1 # sealos

Image is up to date for sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.

 INFO [2023-12-03 20:01:48] >> init kubelet success

 INFO [2023-12-03 20:01:48] >> init rootfs success

192.168.10.111:22        INFO [2023-12-03 20:01:49] >> pull pause image sealos.hub:5000/pause:3.8

192.168.10.112:22        INFO [2023-12-03 20:01:49] >> pull pause image sealos.hub:5000/pause:3.8

192.168.10.111:22       Image is up to date for sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517

192.168.10.111:22       Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.

192.168.10.111:22        INFO [2023-12-03 20:01:49] >> init kubelet success

192.168.10.111:22        INFO [2023-12-03 20:01:49] >> init rootfs success

192.168.10.112:22       Image is up to date for sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517

192.168.10.112:22       Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.

192.168.10.112:22        INFO [2023-12-03 20:01:49] >> init kubelet success

192.168.10.112:22        INFO [2023-12-03 20:01:49] >> init rootfs success

2023-12-03T20:01:49 info Executing pipeline Init in CreateProcessor.

2023-12-03T20:01:49 info start to copy kubeadm config to master0

2023-12-03T20:01:49 info start to generate cert and kubeConfig...

2023-12-03T20:01:49 info start to generator cert and copy to masters...

2023-12-03T20:01:49 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local k8s-master:k8s-master kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.10.110:192.168.10.110]}

2023-12-03T20:01:49 info Etcd altnames : {map[k8s-master:k8s-master localhost:localhost] map[127.0.0.1:127.0.0.1 192.168.10.110:192.168.10.110 ::1:::1]}, commonName : k8s-master

2023-12-03T20:01:51 info start to copy etc pki files to masters

2023-12-03T20:01:51 info start to copy etc pki files to masters

2023-12-03T20:01:51 info start to create kubeconfig...

2023-12-03T20:01:51 info start to copy kubeconfig files to masters

2023-12-03T20:01:51 info start to copy static files to masters

2023-12-03T20:01:51 info start to init master0...

2023-12-03T20:01:51 info domain apiserver.cluster.local:192.168.10.110 append success

W1203 20:01:51.882408    2598 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!

W1203 20:01:51.882483    2598 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0

[init] Using Kubernetes version: v1.25.0

[preflight] Running pre-flight checks

        [WARNING FileExisting-socat]: socat not found in system path

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Using existing ca certificate authority

[certs] Using existing apiserver certificate and key on disk

[certs] Using existing apiserver-kubelet-client certificate and key on disk

[certs] Using existing front-proxy-ca certificate authority

[certs] Using existing front-proxy-client certificate and key on disk

[certs] Using existing etcd/ca certificate authority

[certs] Using existing etcd/server certificate and key on disk

[certs] Using existing etcd/peer certificate and key on disk

[certs] Using existing etcd/healthcheck-client certificate and key on disk

[certs] Using existing apiserver-etcd-client certificate and key on disk

[certs] Using the existing "sa" key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"

[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"

W1203 20:02:02.026392    2598 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.10.110:6443, got: https://apiserver.cluster.local:6443

[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"

W1203 20:02:02.170228    2598 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.10.110:6443, got: https://apiserver.cluster.local:6443

[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 6.002745 seconds

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --upload-certs

[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]

[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities

and service account keys on each node and then running the following as root:

  kubeadm join apiserver.cluster.local:6443 --token <value withheld> \

        --discovery-token-ca-cert-hash sha256:ce9f451f1c4913a945f9cb21ecd4301b4d681acb1cb590f14b1e1558e025bc50 \

        --control-plane --certificate-key <value withheld>

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cluster.local:6443 --token <value withheld> \

        --discovery-token-ca-cert-hash sha256:ce9f451f1c4913a945f9cb21ecd4301b4d681acb1cb590f14b1e1558e025bc50

2023-12-03T20:02:09 info Executing pipeline Join in CreateProcessor.

2023-12-03T20:02:09 info [192.168.10.111:22 192.168.10.112:22] will be added as worker

2023-12-03T20:02:09 info start to get kubernetes token...

2023-12-03T20:02:10 info fetch certSANs from kubeadm configmap

2023-12-03T20:02:10 info start to join 192.168.10.112:22 as worker

2023-12-03T20:02:10 info start to copy kubeadm join config to node: 192.168.10.112:22

2023-12-03T20:02:10 info start to join 192.168.10.111:22 as worker

2023-12-03T20:02:10 info start to copy kubeadm join config to node: 192.168.10.111:22

192.168.10.112:22       2023-12-03T20:02:10 info domain apiserver.cluster.local:10.103.97.2 append success

192.168.10.112:22       2023-12-03T20:02:10 info domain lvscare.node.ip:192.168.10.112 append success

2023-12-03T20:02:10 info run ipvs once module: 192.168.10.112:22

192.168.10.112:22       2023-12-03T20:02:10 info Trying to add route

192.168.10.112:22       2023-12-03T20:02:10 info success to set route.(host:10.103.97.2, gateway:192.168.10.112)

2023-12-03T20:02:10 info start join node: 192.168.10.112:22

192.168.10.112:22       W1203 20:02:10.972684    3001 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!

192.168.10.112:22       [preflight] Running pre-flight checks

192.168.10.112:22               [WARNING FileExisting-socat]: socat not found in system path

192.168.10.111:22       2023-12-03T20:02:11 info domain apiserver.cluster.local:10.103.97.2 append success

192.168.10.111:22       2023-12-03T20:02:11 info domain lvscare.node.ip:192.168.10.111 append success

2023-12-03T20:02:11 info run ipvs once module: 192.168.10.111:22

192.168.10.111:22       2023-12-03T20:02:11 info Trying to add route

192.168.10.111:22       2023-12-03T20:02:11 info success to set route.(host:10.103.97.2, gateway:192.168.10.111)

2023-12-03T20:02:11 info start join node: 192.168.10.111:22

192.168.10.111:22       W1203 20:02:11.368514    3021 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!

192.168.10.111:22       [preflight] Running pre-flight checks

192.168.10.111:22               [WARNING FileExisting-socat]: socat not found in system path

192.168.10.112:22       [preflight] Reading configuration from the cluster...

192.168.10.112:22       [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

192.168.10.112:22       W1203 20:02:23.459501    3001 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0

192.168.10.112:22       [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

192.168.10.112:22       [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

192.168.10.112:22       [kubelet-start] Starting the kubelet

192.168.10.112:22       [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

192.168.10.111:22       [preflight] Reading configuration from the cluster...

192.168.10.111:22       [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

192.168.10.111:22       W1203 20:02:23.853094    3021 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0

192.168.10.111:22       [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

192.168.10.111:22       [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

192.168.10.111:22       [kubelet-start] Starting the kubelet

192.168.10.111:22       [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

192.168.10.111:22

192.168.10.111:22       This node has joined the cluster:

192.168.10.111:22       * Certificate signing request was sent to apiserver and a response was received.

192.168.10.111:22       * The Kubelet was informed of the new secure connection details.

192.168.10.111:22

192.168.10.111:22       Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

192.168.10.111:22

2023-12-03T20:02:37 info succeeded in joining 192.168.10.111:22 as worker

192.168.10.112:22

192.168.10.112:22       This node has joined the cluster:

192.168.10.112:22       * Certificate signing request was sent to apiserver and a response was received.

192.168.10.112:22       * The Kubelet was informed of the new secure connection details.

192.168.10.112:22

192.168.10.112:22       Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

192.168.10.112:22

2023-12-03T20:02:37 info succeeded in joining 192.168.10.112:22 as worker

2023-12-03T20:02:37 info start to sync lvscare static pod to node: 192.168.10.111:22 master: [192.168.10.110:6443]

2023-12-03T20:02:37 info start to sync lvscare static pod to node: 192.168.10.112:22 master: [192.168.10.110:6443]

192.168.10.112:22       2023-12-03T20:02:37 info generator lvscare static pod is success

192.168.10.111:22       2023-12-03T20:02:37 info generator lvscare static pod is success

2023-12-03T20:02:37 info Executing pipeline RunGuest in CreateProcessor.

Release "calico" does not exist. Installing it now.

NAME: calico

LAST DEPLOYED: Sun Dec  3 20:02:39 2023

NAMESPACE: tigera-operator

STATUS: deployed

REVISION: 1

TEST SUITE: None

2023-12-03T20:02:40 info succeeded in creating a new cluster, enjoy it!

2023-12-03T20:02:40 info

      ___           ___           ___           ___       ___           ___

     /\  \         /\  \         /\  \         /\__\     /\  \         /\  \

    /::\  \       /::\  \       /::\  \       /:/  /    /::\  \       /::\  \

   /:/\ \  \     /:/\:\  \     /:/\:\  \     /:/  /    /:/\:\  \     /:/\ \  \

  _\:\~\ \  \   /::\~\:\  \   /::\~\:\  \   /:/  /    /:/  \:\  \   _\:\~\ \  \

 /\ \:\ \ \__\ /:/\:\ \:\__\ /:/\:\ \:\__\ /:/__/    /:/__/ \:\__\ /\ \:\ \ \__\

 \:\ \:\ \/__/ \:\~\:\ \/__/ \/__\:\/:/  / \:\  \    \:\  \ /:/  / \:\ \:\ \/__/

  \:\ \:\__\    \:\ \:\__\        \::/  /   \:\  \    \:\  /:/  /   \:\ \:\__\

   \:\/:/  /     \:\ \/__/        /:/  /     \:\  \    \:\/:/  /     \:\/:/  /

    \::/  /       \:\__\         /:/  /       \:\__\    \::/  /       \::/  /

     \/__/         \/__/         \/__/         \/__/     \/__/         \/__/

                  Website: https://www.sealos.io/

                  Address: github.com/labring/sealos

                  Version: 4.3.0-7ee53f1d
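As the note in step 1 says, the node set is not fixed after the initial run. A minimal sketch of growing or shrinking the cluster later, assuming sealos v4's add/delete subcommands (192.168.10.113 is an illustrative address, not part of this lab):

# Add one more worker to the running cluster (illustrative IP)
sealos add --nodes 192.168.10.113

# Remove that worker again
sealos delete --nodes 192.168.10.113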

5. Resource check

[root@k8s-master ~]# kubectl get nodes -o wide # check the nodes

NAME         STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME

k8s-master   Ready    control-plane   33m   v1.25.0   192.168.10.110   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.24

k8s-node1    Ready    <none>          33m   v1.25.0   192.168.10.111   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.24

k8s-node2    Ready    <none>          33m   v1.25.0   192.168.10.112   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.24

[root@k8s-master ~]# kubectl get cs # check cluster component status

Warning: v1 ComponentStatus is deprecated in v1.19+

NAME                 STATUS    MESSAGE                         ERROR

controller-manager   Healthy   ok                              

scheduler            Healthy   ok                              

etcd-0               Healthy   {"health":"true","reason":""}

[root@k8s-master ~]# kubectl get pods -A # check the status of all pods

NAMESPACE          NAME                                       READY   STATUS    RESTARTS      AGE

calico-apiserver   calico-apiserver-756859fb8-4f59j           1/1     Running   0             32m

calico-apiserver   calico-apiserver-756859fb8-m6j4p           1/1     Running   0             32m

calico-system      calico-kube-controllers-85666c5b94-4tglf   1/1     Running   0             32m

calico-system      calico-node-9frf5                          1/1     Running   0             32m

calico-system      calico-node-hhxhz                          1/1     Running   0             32m

calico-system      calico-node-t2vcl                          1/1     Running   0             32m

calico-system      calico-typha-67c8bcd76b-djqmn              1/1     Running   0             32m

calico-system      calico-typha-67c8bcd76b-jvxsn              1/1     Running   0             32m

calico-system      csi-node-driver-ltwbv                      2/2     Running   0             32m

calico-system      csi-node-driver-mlfnd                      2/2     Running   0             32m

calico-system      csi-node-driver-rt42f                      2/2     Running   0             32m

kube-system        coredns-565d847f94-87jv9                   1/1     Running   0             33m

kube-system        coredns-565d847f94-vrtsl                   1/1     Running   0             33m

kube-system        etcd-k8s-master                            1/1     Running   0             34m

kube-system        kube-apiserver-k8s-master                  1/1     Running   0             34m

kube-system        kube-controller-manager-k8s-master         1/1     Running   0             34m

kube-system        kube-proxy-6xdjp                           1/1     Running   0             33m

kube-system        kube-proxy-b7q8k                           1/1     Running   0             33m

kube-system        kube-proxy-fmr2j                           1/1     Running   0             33m

kube-system        kube-scheduler-k8s-master                  1/1     Running   0             34m

kube-system        kube-sealos-lvscare-k8s-node1              1/1     Running   0             33m

kube-system        kube-sealos-lvscare-k8s-node2              1/1     Running   0             33m

tigera-operator    tigera-operator-6675dc47f4-km9ln           1/1     Running   1 (32m ago)   33m
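As a final smoke test, you can schedule a throwaway pod and confirm it reaches Running (the pod name and image here are arbitrary choices):

# Quick smoke test, then clean up
kubectl run test-nginx --image=nginx
kubectl get pod test-nginx -o wide
kubectl delete pod test-nginx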
