sealos 4.3.5 Installation Manual (2): Adding Nodes

Following on from sealos 4.3.5 Installation Manual (1): Deploying the Cluster, where we walked through the initial deployment, this article continues building out the cluster by adding two master nodes and one worker node.

1. Check the sealos help to find the add command and its options

List the available commands:

[root@k8s-master1 ~]# sealos -h
sealos is a Kubernetes distribution, a unified OS to manage cloud native applications.

Cluster Management Commands:
  apply         Run cloud images within a kubernetes cluster with Clusterfile
  cert          update Kubernetes API server's cert
  run           Run cloud native applications with ease, with or without a existing cluster
  reset         Reset all, everything in the cluster
  status        state of sealos

Node Management Commands:
  add           Add nodes into cluster
  delete        Remove nodes from cluster

Remote Operation Commands:
  exec          Execute shell command or script on specified nodes
  scp           Copy file to remote on specified nodes

Experimental Commands:
  registry      registry related

Container and Image Commands:
  build         Build an image using instructions in a Containerfile or Kubefile
  create        Create a cluster without running the CMD, for inspecting image
  diff          Inspect changes to the object's file systems
  inspect       Inspect the configuration of a container or image
  images        List images in local storage
  load          Load image(s) from archive file
  login         Login to a container registry
  logout        Logout of a container registry
  manifest      Manipulate manifest lists and image indexes
  merge         merge multiple images into one
  pull          Pull images from the specified location
  push          Push an image to a specified destination
  rmi           Remove one or more images from local storage
  save          Save image into archive file
  tag           Add an additional name to a local image

Other Commands:
  completion    Generate the autocompletion script for the specified shell
  docs          generate API reference
  env           prints out all the environment information in use by sealos
  gen           generate a Clusterfile with all default settings
  version       Print version info

Use "sealos <command> --help" for more information about a given command.

Check the options of the add command:

[root@k8s-master1 ~]# sealos add -h
Add nodes into cluster

Examples:

add to nodes :
	sealos add --nodes x.x.x.x

add to default cluster:
	sealos add --masters x.x.x.x --nodes x.x.x.x
	sealos add --masters x.x.x.x-x.x.x.y --nodes x.x.x.x-x.x.x.y

add with different ssh setting:
	sealos add --masters x.x.x.x --nodes x.x.x.x --passwd your_diff_passwd
Please note that the masters and nodes added in one command should have the save password.

Options:
    --cluster='default':
	name of cluster to applied join action

    --masters='':
	masters to be joined

    --nodes='':
	nodes to be joined

    -p, --passwd='':
	use given password to authenticate with

    -i, --pk='/root/.ssh/id_rsa':
	selects a file from which the identity (private key) for public key authentication is read

    --pk-passwd='':
	passphrase for decrypting a PEM encoded private key

    --port=22:
	port to connect to on the remote host

    -u, --user='':
	username to authenticate as

Usage:
  sealos add [flags] [options]

Use "sealos options" for a list of global command-line options (applies to all commands).

2. Add the nodes

Following the help output, add 192.168.1.42 and 192.168.1.43 as master nodes and 192.168.1.47 as a worker node.
If the nodes require password authentication, include --passwd.

[root@k8s-master1 ~]# sealos add --masters 192.168.1.42-192.168.1.43 --nodes 192.168.1.47 --passwd 'kgb007'
2023-10-14T10:51:14 info start to scale this cluster
2023-10-14T10:51:14 info Executing pipeline JoinCheck in ScaleProcessor.
2023-10-14T10:51:14 info checker:hostname [192.168.1.41:22 192.168.1.42:22 192.168.1.43:22 192.168.1.47:22]
2023-10-14T10:51:14 info checker:timeSync [192.168.1.41:22 192.168.1.42:22 192.168.1.43:22 192.168.1.47:22]
2023-10-14T10:51:15 info Executing pipeline PreProcess in ScaleProcessor.
2023-10-14T10:51:15 info Executing pipeline PreProcessImage in ScaleProcessor.
2023-10-14T10:51:15 info Executing pipeline RunConfig in ScaleProcessor.
2023-10-14T10:51:15 info Executing pipeline MountRootfs in ScaleProcessor.
2023-10-14T10:51:51 info Executing pipeline Bootstrap in ScaleProcessor.
192.168.1.42:22	 INFO [2023-10-14 10:51:51] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait... 
192.168.1.47:22	 INFO [2023-10-14 10:51:51] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait... 
192.168.1.43:22	 INFO [2023-10-14 10:51:51] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait... 
192.168.1.42:22	which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
192.168.1.42:22	 WARN [2023-10-14 10:51:52] >> Replace disable_apparmor = false to disable_apparmor = true 
192.168.1.42:22	 INFO [2023-10-14 10:51:52] >> check root,port,cri success 
192.168.1.47:22	which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
192.168.1.47:22	 WARN [2023-10-14 10:51:52] >> Replace disable_apparmor = false to disable_apparmor = true 
192.168.1.47:22	 INFO [2023-10-14 10:51:52] >> check root,port,cri success 
192.168.1.43:22	which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
192.168.1.43:22	 WARN [2023-10-14 10:51:52] >> Replace disable_apparmor = false to disable_apparmor = true 
192.168.1.43:22	 INFO [2023-10-14 10:51:52] >> check root,port,cri success 
192.168.1.47:22	2023-10-14T10:51:52 info domain sealos.hub:192.168.1.41 append success
192.168.1.42:22	2023-10-14T10:51:52 info domain sealos.hub:192.168.1.41 append success
192.168.1.43:22	2023-10-14T10:51:52 info domain sealos.hub:192.168.1.41 append success
192.168.1.42:22	Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
192.168.1.43:22	Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
192.168.1.47:22	Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
192.168.1.47:22	 INFO [2023-10-14 10:51:57] >> Health check containerd! 
192.168.1.47:22	 INFO [2023-10-14 10:51:57] >> containerd is running 
192.168.1.47:22	 INFO [2023-10-14 10:51:57] >> init containerd success 
192.168.1.42:22	 INFO [2023-10-14 10:51:57] >> Health check containerd! 
192.168.1.42:22	 INFO [2023-10-14 10:51:57] >> containerd is running 
192.168.1.42:22	 INFO [2023-10-14 10:51:57] >> init containerd success 
192.168.1.43:22	 INFO [2023-10-14 10:51:57] >> Health check containerd! 
192.168.1.43:22	 INFO [2023-10-14 10:51:57] >> containerd is running 
192.168.1.43:22	 INFO [2023-10-14 10:51:57] >> init containerd success 
192.168.1.47:22	Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
192.168.1.42:22	Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
192.168.1.47:22	 INFO [2023-10-14 10:51:58] >> Health check image-cri-shim! 
192.168.1.47:22	 INFO [2023-10-14 10:51:58] >> image-cri-shim is running 
192.168.1.47:22	 INFO [2023-10-14 10:51:58] >> init shim success 
192.168.1.43:22	Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
192.168.1.47:22	127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.47:22	::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.42:22	 INFO [2023-10-14 10:51:58] >> Health check image-cri-shim! 
192.168.1.42:22	 INFO [2023-10-14 10:51:58] >> image-cri-shim is running 
192.168.1.42:22	 INFO [2023-10-14 10:51:58] >> init shim success 
192.168.1.42:22	127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.42:22	::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.43:22	 INFO [2023-10-14 10:51:58] >> Health check image-cri-shim! 
192.168.1.43:22	 INFO [2023-10-14 10:51:58] >> image-cri-shim is running 
192.168.1.43:22	 INFO [2023-10-14 10:51:58] >> init shim success 
192.168.1.43:22	127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.43:22	::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.42:22	Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
192.168.1.42:22	Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
192.168.1.47:22	Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
192.168.1.47:22	Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
192.168.1.42:22	* Applying /usr/lib/sysctl.d/00-system.conf ...
192.168.1.42:22	net.bridge.bridge-nf-call-ip6tables = 0
192.168.1.42:22	net.bridge.bridge-nf-call-iptables = 0
192.168.1.42:22	net.bridge.bridge-nf-call-arptables = 0
192.168.1.42:22	* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
192.168.1.42:22	kernel.yama.ptrace_scope = 0
192.168.1.42:22	* Applying /usr/lib/sysctl.d/50-default.conf ...
192.168.1.42:22	kernel.sysrq = 16
192.168.1.42:22	kernel.core_uses_pid = 1
192.168.1.42:22	kernel.kptr_restrict = 1
192.168.1.42:22	net.ipv4.conf.default.rp_filter = 1
192.168.1.42:22	net.ipv4.conf.all.rp_filter = 1
192.168.1.42:22	net.ipv4.conf.default.accept_source_route = 0
192.168.1.42:22	net.ipv4.conf.all.accept_source_route = 0
192.168.1.42:22	net.ipv4.conf.default.promote_secondaries = 1
192.168.1.42:22	net.ipv4.conf.all.promote_secondaries = 1
192.168.1.42:22	fs.protected_hardlinks = 1
192.168.1.42:22	fs.protected_symlinks = 1
192.168.1.42:22	* Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.42:22	fs.file-max = 1048576 # sealos
192.168.1.42:22	net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.42:22	net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.42:22	net.core.somaxconn = 65535 # sealos
192.168.1.42:22	net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.42:22	net.ipv4.ip_forward = 1 # sealos
192.168.1.42:22	net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.42:22	net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.42:22	net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.42:22	net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.42:22	net.ipv4.vs.conntrack = 1 # sealos
192.168.1.42:22	net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.42:22	* Applying /etc/sysctl.conf ...
192.168.1.42:22	fs.file-max = 1048576 # sealos
192.168.1.42:22	net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.42:22	net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.42:22	net.core.somaxconn = 65535 # sealos
192.168.1.42:22	net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.42:22	net.ipv4.ip_forward = 1 # sealos
192.168.1.42:22	net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.42:22	net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.42:22	net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.42:22	net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.42:22	net.ipv4.vs.conntrack = 1 # sealos
192.168.1.42:22	net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.47:22	* Applying /usr/lib/sysctl.d/00-system.conf ...
192.168.1.47:22	net.bridge.bridge-nf-call-ip6tables = 0
192.168.1.47:22	net.bridge.bridge-nf-call-iptables = 0
192.168.1.47:22	net.bridge.bridge-nf-call-arptables = 0
192.168.1.47:22	* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
192.168.1.47:22	kernel.yama.ptrace_scope = 0
192.168.1.47:22	* Applying /usr/lib/sysctl.d/50-default.conf ...
192.168.1.47:22	kernel.sysrq = 16
192.168.1.47:22	kernel.core_uses_pid = 1
192.168.1.47:22	kernel.kptr_restrict = 1
192.168.1.47:22	net.ipv4.conf.default.rp_filter = 1
192.168.1.47:22	net.ipv4.conf.all.rp_filter = 1
192.168.1.47:22	net.ipv4.conf.default.accept_source_route = 0
192.168.1.47:22	net.ipv4.conf.all.accept_source_route = 0
192.168.1.47:22	net.ipv4.conf.default.promote_secondaries = 1
192.168.1.47:22	net.ipv4.conf.all.promote_secondaries = 1
192.168.1.47:22	fs.protected_hardlinks = 1
192.168.1.47:22	fs.protected_symlinks = 1
192.168.1.47:22	* Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.47:22	fs.file-max = 1048576 # sealos
192.168.1.47:22	net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.47:22	net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.47:22	net.core.somaxconn = 65535 # sealos
192.168.1.47:22	net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.47:22	net.ipv4.ip_forward = 1 # sealos
192.168.1.47:22	net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.47:22	net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.47:22	net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.47:22	net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.47:22	net.ipv4.vs.conntrack = 1 # sealos
192.168.1.47:22	net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.47:22	* Applying /etc/sysctl.conf ...
192.168.1.47:22	fs.file-max = 1048576 # sealos
192.168.1.47:22	net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.47:22	net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.47:22	net.core.somaxconn = 65535 # sealos
192.168.1.47:22	net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.47:22	net.ipv4.ip_forward = 1 # sealos
192.168.1.47:22	net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.47:22	net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.47:22	net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.47:22	net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.47:22	net.ipv4.vs.conntrack = 1 # sealos
192.168.1.47:22	net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.43:22	Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
192.168.1.43:22	Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
192.168.1.43:22	* Applying /usr/lib/sysctl.d/00-system.conf ...
192.168.1.43:22	net.bridge.bridge-nf-call-ip6tables = 0
192.168.1.43:22	net.bridge.bridge-nf-call-iptables = 0
192.168.1.43:22	net.bridge.bridge-nf-call-arptables = 0
192.168.1.43:22	* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
192.168.1.43:22	kernel.yama.ptrace_scope = 0
192.168.1.43:22	* Applying /usr/lib/sysctl.d/50-default.conf ...
192.168.1.43:22	kernel.sysrq = 16
192.168.1.43:22	kernel.core_uses_pid = 1
192.168.1.43:22	kernel.kptr_restrict = 1
192.168.1.43:22	net.ipv4.conf.default.rp_filter = 1
192.168.1.43:22	net.ipv4.conf.all.rp_filter = 1
192.168.1.43:22	net.ipv4.conf.default.accept_source_route = 0
192.168.1.43:22	net.ipv4.conf.all.accept_source_route = 0
192.168.1.43:22	net.ipv4.conf.default.promote_secondaries = 1
192.168.1.43:22	net.ipv4.conf.all.promote_secondaries = 1
192.168.1.43:22	fs.protected_hardlinks = 1
192.168.1.43:22	fs.protected_symlinks = 1
192.168.1.43:22	* Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.43:22	fs.file-max = 1048576 # sealos
192.168.1.43:22	net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.43:22	net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.43:22	net.core.somaxconn = 65535 # sealos
192.168.1.43:22	net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.43:22	net.ipv4.ip_forward = 1 # sealos
192.168.1.43:22	net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.43:22	net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.43:22	net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.43:22	net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.43:22	net.ipv4.vs.conntrack = 1 # sealos
192.168.1.43:22	net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.43:22	* Applying /etc/sysctl.conf ...
192.168.1.43:22	fs.file-max = 1048576 # sealos
192.168.1.43:22	net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.43:22	net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.43:22	net.core.somaxconn = 65535 # sealos
192.168.1.43:22	net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.43:22	net.ipv4.ip_forward = 1 # sealos
192.168.1.43:22	net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.43:22	net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.43:22	net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.43:22	net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.43:22	net.ipv4.vs.conntrack = 1 # sealos
192.168.1.43:22	net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.42:22	 INFO [2023-10-14 10:52:02] >> pull pause image sealos.hub:5000/pause:3.9 
192.168.1.47:22	 INFO [2023-10-14 10:52:02] >> pull pause image sealos.hub:5000/pause:3.9 
192.168.1.43:22	 INFO [2023-10-14 10:52:02] >> pull pause image sealos.hub:5000/pause:3.9 
192.168.1.42:22	Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.42:22	Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
192.168.1.47:22	Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.42:22	 INFO [2023-10-14 10:52:03] >> init kubelet success 
192.168.1.42:22	 INFO [2023-10-14 10:52:03] >> init rootfs success 
192.168.1.47:22	Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
192.168.1.47:22	 INFO [2023-10-14 10:52:03] >> init kubelet success 
192.168.1.47:22	 INFO [2023-10-14 10:52:03] >> init rootfs success 
192.168.1.43:22	Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.43:22	Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
192.168.1.43:22	 INFO [2023-10-14 10:52:03] >> init kubelet success 
192.168.1.43:22	 INFO [2023-10-14 10:52:03] >> init rootfs success 
2023-10-14T10:52:03 info Executing pipeline Join in ScaleProcessor.
2023-10-14T10:52:03 info [192.168.1.42:22 192.168.1.43:22] will be added as master
2023-10-14T10:52:03 info start to init filesystem join masters...
2023-10-14T10:52:03 info start to copy static files to masters
2023-10-14T10:52:03 info start to copy kubeconfig files to masters
2023-10-14T10:52:03 info start to copy etc pki files to masters
2023-10-14T10:52:04 info start to get kubernetes token...                             
2023-10-14T10:52:04 info start to copy kubeadm join config to master: 192.168.1.43:22
2023-10-14T10:52:05 info start to copy kubeadm join config to master: 192.168.1.42:22
2023-10-14T10:52:05 info fetch certSANs from kubeadm configmap
2023-10-14T10:52:05 info start to join 192.168.1.42:22 as master
2023-10-14T10:52:05 info start to generator cert 192.168.1.42:22 as master
192.168.1.42:22	2023-10-14T10:52:06 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local k8s-master2:k8s-master2 kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.1.41:192.168.1.41 192.168.1.42:192.168.1.42]}
192.168.1.42:22	2023-10-14T10:52:06 info Etcd altnames : {map[k8s-master2:k8s-master2 localhost:localhost] map[127.0.0.1:127.0.0.1 192.168.1.42:192.168.1.42 ::1:::1]}, commonName : k8s-master2
192.168.1.42:22	2023-10-14T10:52:06 info sa.key sa.pub already exist
192.168.1.42:22	2023-10-14T10:52:07 info domain apiserver.cluster.local:192.168.1.41 append success
192.168.1.42:22	W1014 10:52:08.052850    2005 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.1.42:22	[preflight] Running pre-flight checks
192.168.1.42:22		[WARNING FileExisting-socat]: socat not found in system path
192.168.1.42:22	[preflight] Reading configuration from the cluster...
192.168.1.42:22	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.42:22	W1014 10:52:08.552033    2005 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.42:22	[preflight] Running pre-flight checks before initializing the new control plane instance
192.168.1.42:22	[preflight] Pulling images required for setting up a Kubernetes cluster
192.168.1.42:22	[preflight] This might take a minute or two, depending on the speed of your internet connection
192.168.1.42:22	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
192.168.1.42:22	W1014 10:52:22.294658    2005 checks.go:835] detected that the sandbox image "sealos.hub:5000/pause:3.9" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
192.168.1.42:22	[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
192.168.1.42:22	[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
192.168.1.42:22	[certs] Using certificateDir folder "/etc/kubernetes/pki"
192.168.1.42:22	[certs] Using the existing "etcd/peer" certificate and key
192.168.1.42:22	[certs] Using the existing "etcd/healthcheck-client" certificate and key
192.168.1.42:22	[certs] Using the existing "apiserver-etcd-client" certificate and key
192.168.1.42:22	[certs] Using the existing "etcd/server" certificate and key
192.168.1.42:22	[certs] Using the existing "apiserver-kubelet-client" certificate and key
192.168.1.42:22	[certs] Using the existing "apiserver" certificate and key
192.168.1.42:22	[certs] Using the existing "front-proxy-client" certificate and key
192.168.1.42:22	[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
192.168.1.42:22	[certs] Using the existing "sa" key
192.168.1.42:22	[kubeconfig] Generating kubeconfig files
192.168.1.42:22	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
192.168.1.42:22	[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
192.168.1.42:22	W1014 10:52:36.935920    2005 kubeconfig.go:264] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.42:6443, got: https://apiserver.cluster.local:6443
192.168.1.42:22	[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
192.168.1.42:22	W1014 10:52:37.167706    2005 kubeconfig.go:264] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.42:6443, got: https://apiserver.cluster.local:6443
192.168.1.42:22	[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
192.168.1.42:22	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
192.168.1.42:22	[control-plane] Creating static Pod manifest for "kube-apiserver"
192.168.1.42:22	[control-plane] Creating static Pod manifest for "kube-controller-manager"
192.168.1.42:22	[control-plane] Creating static Pod manifest for "kube-scheduler"
192.168.1.42:22	[check-etcd] Checking that the etcd cluster is healthy
192.168.1.42:22	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.42:22	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.42:22	[kubelet-start] Starting the kubelet
192.168.1.42:22	[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.42:22	[etcd] Announced new etcd member joining to the existing etcd cluster
192.168.1.42:22	[etcd] Creating static Pod manifest for "etcd"
192.168.1.42:22	[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
192.168.1.42:22	The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
192.168.1.42:22	[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
192.168.1.42:22	[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
192.168.1.42:22	
192.168.1.42:22	This node has joined the cluster and a new control plane instance was created:
192.168.1.42:22	
192.168.1.42:22	* Certificate signing request was sent to apiserver and approval was received.
192.168.1.42:22	* The Kubelet was informed of the new secure connection details.
192.168.1.42:22	* Control plane label and taint were applied to the new node.
192.168.1.42:22	* The Kubernetes control plane instances scaled up.
192.168.1.42:22	* A new etcd member was added to the local/stacked etcd cluster.
192.168.1.42:22	
192.168.1.42:22	To start administering your cluster from this node, you need to run the following as a regular user:
192.168.1.42:22	
192.168.1.42:22		mkdir -p $HOME/.kube
192.168.1.42:22		sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
192.168.1.42:22		sudo chown $(id -u):$(id -g) $HOME/.kube/config
192.168.1.42:22	
192.168.1.42:22	Run 'kubectl get nodes' to see this node join the cluster.
192.168.1.42:22	
192.168.1.42:22	2023-10-14T10:52:59 info domain apiserver.cluster.local delete success
192.168.1.42:22	2023-10-14T10:52:59 info domain apiserver.cluster.local:192.168.1.42 append success
2023-10-14T10:52:59 info succeeded in joining 192.168.1.42:22 as master
2023-10-14T10:52:59 info start to join 192.168.1.43:22 as master
2023-10-14T10:52:59 info start to generator cert 192.168.1.43:22 as master
192.168.1.43:22	2023-10-14T10:53:00 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local k8s-master3:k8s-master3 kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.1.41:192.168.1.41 192.168.1.43:192.168.1.43]}
192.168.1.43:22	2023-10-14T10:53:00 info Etcd altnames : {map[k8s-master3:k8s-master3 localhost:localhost] map[127.0.0.1:127.0.0.1 192.168.1.43:192.168.1.43 ::1:::1]}, commonName : k8s-master3
192.168.1.43:22	2023-10-14T10:53:00 info sa.key sa.pub already exist
192.168.1.43:22	2023-10-14T10:53:02 info domain apiserver.cluster.local:192.168.1.41 append success
192.168.1.43:22	W1014 10:53:02.242175    1986 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.1.43:22	[preflight] Running pre-flight checks
192.168.1.43:22		[WARNING FileExisting-socat]: socat not found in system path
192.168.1.43:22	[preflight] Reading configuration from the cluster...
192.168.1.43:22	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.43:22	W1014 10:53:02.598221    1986 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.43:22	[preflight] Running pre-flight checks before initializing the new control plane instance
192.168.1.43:22	[preflight] Pulling images required for setting up a Kubernetes cluster
192.168.1.43:22	[preflight] This might take a minute or two, depending on the speed of your internet connection
192.168.1.43:22	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
192.168.1.43:22	W1014 10:53:16.884534    1986 checks.go:835] detected that the sandbox image "sealos.hub:5000/pause:3.9" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
192.168.1.43:22	[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
192.168.1.43:22	[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
192.168.1.43:22	[certs] Using certificateDir folder "/etc/kubernetes/pki"
192.168.1.43:22	[certs] Using the existing "apiserver-kubelet-client" certificate and key
192.168.1.43:22	[certs] Using the existing "apiserver" certificate and key
192.168.1.43:22	[certs] Using the existing "etcd/server" certificate and key
192.168.1.43:22	[certs] Using the existing "etcd/peer" certificate and key
192.168.1.43:22	[certs] Using the existing "etcd/healthcheck-client" certificate and key
192.168.1.43:22	[certs] Using the existing "apiserver-etcd-client" certificate and key
192.168.1.43:22	[certs] Using the existing "front-proxy-client" certificate and key
192.168.1.43:22	[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
192.168.1.43:22	[certs] Using the existing "sa" key
192.168.1.43:22	[kubeconfig] Generating kubeconfig files
192.168.1.43:22	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
192.168.1.43:22	[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
192.168.1.43:22	W1014 10:53:31.994728    1986 kubeconfig.go:264] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.43:6443, got: https://apiserver.cluster.local:6443
192.168.1.43:22	[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
192.168.1.43:22	W1014 10:53:32.241906    1986 kubeconfig.go:264] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.43:6443, got: https://apiserver.cluster.local:6443
192.168.1.43:22	[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
192.168.1.43:22	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
192.168.1.43:22	[control-plane] Creating static Pod manifest for "kube-apiserver"
192.168.1.43:22	[control-plane] Creating static Pod manifest for "kube-controller-manager"
192.168.1.43:22	[control-plane] Creating static Pod manifest for "kube-scheduler"
192.168.1.43:22	[check-etcd] Checking that the etcd cluster is healthy
192.168.1.43:22	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.43:22	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.43:22	[kubelet-start] Starting the kubelet
192.168.1.43:22	[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.43:22	[etcd] Announced new etcd member joining to the existing etcd cluster
192.168.1.43:22	[etcd] Creating static Pod manifest for "etcd"
192.168.1.43:22	[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
192.168.1.43:22	The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
192.168.1.43:22	[mark-control-plane] Marking the node k8s-master3 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
192.168.1.43:22	[mark-control-plane] Marking the node k8s-master3 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
192.168.1.43:22	
192.168.1.43:22	This node has joined the cluster and a new control plane instance was created:
192.168.1.43:22	
192.168.1.43:22	* Certificate signing request was sent to apiserver and approval was received.
192.168.1.43:22	* The Kubelet was informed of the new secure connection details.
192.168.1.43:22	* Control plane label and taint were applied to the new node.
192.168.1.43:22	* The Kubernetes control plane instances scaled up.
192.168.1.43:22	* A new etcd member was added to the local/stacked etcd cluster.
192.168.1.43:22	
192.168.1.43:22	To start administering your cluster from this node, you need to run the following as a regular user:
192.168.1.43:22	
192.168.1.43:22		mkdir -p $HOME/.kube
192.168.1.43:22		sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
192.168.1.43:22		sudo chown $(id -u):$(id -g) $HOME/.kube/config
192.168.1.43:22	
192.168.1.43:22	Run 'kubectl get nodes' to see this node join the cluster.
192.168.1.43:22	
192.168.1.43:22	2023-10-14T10:53:49 info domain apiserver.cluster.local delete success
192.168.1.43:22	2023-10-14T10:53:49 info domain apiserver.cluster.local:192.168.1.43 append success
2023-10-14T10:53:49 info succeeded in joining 192.168.1.43:22 as master
2023-10-14T10:53:49 info [192.168.1.47:22] will be added as worker
2023-10-14T10:53:49 info fetch certSANs from kubeadm configmap
2023-10-14T10:53:49 info start to join 192.168.1.47:22 as worker
2023-10-14T10:53:49 info start to copy kubeadm join config to node: 192.168.1.47:22
192.168.1.47:22	2023-10-14T10:53:50 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.1.47:22	2023-10-14T10:53:50 info domain lvscare.node.ip:192.168.1.47 append success
2023-10-14T10:53:50 info run ipvs once module: 192.168.1.47:22
192.168.1.47:22	2023-10-14T10:53:50 info Trying to add route
192.168.1.47:22	2023-10-14T10:53:50 info success to set route.(host:10.103.97.2, gateway:192.168.1.47)
2023-10-14T10:53:50 info start join node: 192.168.1.47:22
192.168.1.47:22	W1014 10:53:51.106974    1990 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.1.47:22	[preflight] Running pre-flight checks
192.168.1.47:22		[WARNING FileExisting-socat]: socat not found in system path
192.168.1.47:22	[preflight] Reading configuration from the cluster...
192.168.1.47:22	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.47:22	W1014 10:54:01.476623    1990 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.47:22	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.47:22	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.47:22	[kubelet-start] Starting the kubelet
192.168.1.47:22	[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.47:22	
192.168.1.47:22	This node has joined the cluster:
192.168.1.47:22	* Certificate signing request was sent to apiserver and a response was received.
192.168.1.47:22	* The Kubelet was informed of the new secure connection details.
192.168.1.47:22	
192.168.1.47:22	Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.1.47:22	
2023-10-14T10:54:04 info succeeded in joining 192.168.1.47:22 as worker
2023-10-14T10:54:04 info start to sync lvscare static pod to node: 192.168.1.47:22 master: [192.168.1.41:6443 192.168.1.42:6443 192.168.1.43:6443]
2023-10-14T10:54:04 info start to sync lvscare static pod to node: 192.168.1.46:22 master: [192.168.1.41:6443 192.168.1.42:6443 192.168.1.43:6443]
192.168.1.47:22	2023-10-14T10:54:04 info generator lvscare static pod is success
192.168.1.46:22	2023-10-14T10:54:05 info generator lvscare static pod is success
2023-10-14T10:54:05 info Executing pipeline RunGuest in ScaleProcessor.
2023-10-14T10:54:05 info succeeded in scaling this cluster
2023-10-14T10:54:05 info 
      ___           ___           ___           ___       ___           ___
     /\  \         /\  \         /\  \         /\__\     /\  \         /\  \
    /::\  \       /::\  \       /::\  \       /:/  /    /::\  \       /::\  \
   /:/\ \  \     /:/\:\  \     /:/\:\  \     /:/  /    /:/\:\  \     /:/\ \  \
  _\:\~\ \  \   /::\~\:\  \   /::\~\:\  \   /:/  /    /:/  \:\  \   _\:\~\ \  \
 /\ \:\ \ \__\ /:/\:\ \:\__\ /:/\:\ \:\__\ /:/__/    /:/__/ \:\__\ /\ \:\ \ \__\
 \:\ \:\ \/__/ \:\~\:\ \/__/ \/__\:\/:/  / \:\  \    \:\  \ /:/  / \:\ \:\ \/__/
  \:\ \:\__\    \:\ \:\__\        \::/  /   \:\  \    \:\  /:/  /   \:\ \:\__\
   \:\/:/  /     \:\ \/__/        /:/  /     \:\  \    \:\/:/  /     \:\/:/  /
    \::/  /       \:\__\         /:/  /       \:\__\    \::/  /       \::/  /
     \/__/         \/__/         \/__/         \/__/     \/__/         \/__/

                  Website: https://www.sealos.io/
                  Address: github.com/labring/sealos
                  Version: 4.3.5-881c10cb
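
From the log above, sealos also syncs an lvscare static pod to each worker node (192.168.1.47 and the existing 192.168.1.46) so that workers reach the API servers through the virtual IP 10.103.97.2, load-balanced across the three masters. A quick, hedged way to confirm this on a worker (assuming ipvsadm is installed there; the exact manifest file name can vary between sealos versions):

# the static pod manifests should include the lvscare pod
ls /etc/kubernetes/manifests/

# the IPVS virtual server 10.103.97.2:6443 should list the three masters' :6443 as real servers
ipvsadm -Ln | grep -A 3 10.103.97.2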


3. Verify the new nodes

Once the add completes, check the node status:

[root@k8s-master1 ~]# kubectl get node
NAME          STATUS   ROLES           AGE   VERSION
k8s-master1   Ready    control-plane   17h   v1.27.6
k8s-master2   Ready    control-plane   28m   v1.27.6
k8s-master3   Ready    control-plane   27m   v1.27.6
k8s-node1     Ready    <none>          17h   v1.27.6
k8s-node2     Ready    <none>          27m   v1.27.6
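
Beyond the node list, it is worth confirming that the control-plane components and etcd members are actually running on the new masters. A minimal check from k8s-master1 (standard kubectl commands; output omitted here):

# etcd and kube-apiserver pods should now exist for k8s-master2 and k8s-master3
kubectl get pods -n kube-system -o wide | grep -E 'etcd|kube-apiserver'

# all five nodes, including the new worker, should report Ready with matching versions
kubectl get nodes -o wide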

The next article, sealos 4.3.5 Installation Manual (3): Management UI, will cover how to add a dashboard to Kubernetes for easier UI-based operations.
