Deploying a k8s cluster with sealos

I. Installing a k8s cluster online

1. Prepare the servers

  • Use reasonably generous server specs, e.g. 8C32G, so you do not have to scale up the configuration and reboot the servers later, which causes brief service unavailability
  • In production, use at least three master nodes
  • Master nodes can be specced somewhat lower than the worker nodes, e.g. 4C16G, but all master nodes should have identical specs

2. Configure the servers

  1. Initial configuration
    Prepare the installation environment: the following commands must be run on every node.
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# Set the hostname according to your plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.150.138 k8s-master01
192.168.150.136 k8s-node01
192.168.150.137 k8s-node02
EOF

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply the settings

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
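
These settings can be spot-checked on each node before moving on; a quick verification sketch (informal, not part of the original procedure):

# Swap usage should show 0, and SELinux should report Permissive (or Disabled after a reboot)
free -m
getenforce

# Both bridge sysctls set above should print 1; if the keys are missing, the br_netfilter module is not loaded yet
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables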

  2. Set up passwordless SSH from the master node to the other nodes
# Generate a key pair
[root@k8s-master01 ~] ssh-keygen  # Press Enter through all prompts; the key files are created in the current user's .ssh directory

# Copy the public key to the master and node machines
[root@k8s-master01 ~] ssh-copy-id -i .ssh/id_rsa.pub root@192.168.150.136
[root@k8s-master01 ~] ssh-copy-id -i .ssh/id_rsa.pub root@192.168.150.137
[root@k8s-master01 ~] ssh-copy-id -i .ssh/id_rsa.pub root@192.168.150.138
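
Before running sealos it is worth confirming that passwordless login actually works; a small check, using the same node IPs as above:

# Each command should print the remote hostname without prompting for a password
ssh root@192.168.150.136 hostname
ssh root@192.168.150.137 hostname
ssh root@192.168.150.138 hostname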

3. Install the sealos binary

wget https://github.com/labring/sealos/releases/download/v4.1.3/sealos_4.1.3_linux_amd64.tar.gz \
   && tar zxvf sealos_4.1.3_linux_amd64.tar.gz sealos && chmod +x sealos && mv sealos /usr/bin
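
A quick sanity check that the binary is on the PATH and runnable; sealos version prints the client version:

sealos version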

4. Customize the installation configuration

  1. Run sealos gen to generate a Clusterfile, for example:
    Docker-based
    To make it easier to configure a private Harbor registry, I use the Docker-based image here.
sealos gen labring/kubernetes-docker:v1.22.15 labring/helm:v3.8.2 labring/calico:v3.24.1 \
--masters 192.168.150.138 \
--nodes 192.168.150.136,192.168.150.137 \
--passwd 123456 > Clusterfile

By default, containerd is used:

$ sealos gen labring/kubernetes:v1.22.15 labring/helm:v3.8.2 labring/calico:v3.24.1 \
   --masters 192.168.150.138 \
   --nodes 192.168.150.136,192.168.150.137 \
   --passwd 12345678 > Clusterfile

To use Docker instead (running the installation directly):

sealos run labring/kubernetes-docker:v1.22.15 labring/helm:v3.8.2 labring/calico:v3.24.1 \
	--masters 10.140.19.201 \
	--nodes 10.140.19.202,10.140.19.205,10.140.19.206 \
	--passwd 12345678
  2. The generated Clusterfile looks like this:
apiVersion: apps.sealos.io/v1beta1
kind: Cluster
metadata:
  creationTimestamp: null
  name: default
spec:
  hosts:
  - ips:
    - 192.168.150.138:22
    roles:
    - master
    - amd64
  - ips:
    - 192.168.150.136:22
    - 192.168.150.137:22
    roles:
    - node
    - amd64
  image:
  - labring/kubernetes-docker:v1.22.15
  - labring/helm:v3.8.2
  - labring/calico:v3.24.1
  ssh:
    passwd: 123456
    pk: /root/.ssh/id_rsa
    port: 22
    user: root
status: {}
  3. Append the calico Clusterfile to the generated Clusterfile, then update the cluster configuration. For example, to change the pod CIDR range, modify the networking.podSubnet and spec.data.spec.calicoNetwork.ipPools.cidr fields. The final Clusterfile looks like this:
apiVersion: apps.sealos.io/v1beta1
kind: Cluster
metadata:
  creationTimestamp: null
  name: default
spec:
  hosts:
  - ips:
    - 192.168.150.138:22
    roles:
    - master
    - amd64
  - ips:
    - 192.168.150.136:22
    - 192.168.150.137:22
    roles:
    - node
    - amd64
  image:
  - labring/kubernetes:v1.22.15
  - labring/helm:v3.8.2
  - labring/calico:v3.24.1
  ssh:
    passwd: 123456
    pk: /root/.ssh/id_rsa
    port: 22
    user: root
status: {}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.160.0.0/12
---
apiVersion: apps.sealos.io/v1beta1
kind: Config
metadata:
  name: calico
spec:
  path: manifests/calico.yaml
  data: |
    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      # Configures Calico networking.
      calicoNetwork:
        # Note: The ipPools section cannot be modified post-install.
        ipPools:
        - blockSize: 26
          # Note: Must be the same as podCIDR
          cidr: 10.160.0.0/12
          encapsulation: IPIP
          natOutgoing: Enabled
          nodeSelector: all()
        nodeAddressAutodetectionV4:
          interface: "eth.*|en.*"
  4. Run sealos apply -f Clusterfile to launch the cluster. Once the cluster is up, the Clusterfile is saved to .sealos/default/Clusterfile; you can edit its fields and apply it again to change the cluster (a short example of this re-apply workflow follows the notes below).

Notes:

  • You can consult the official documentation or run kubeadm config print init-defaults to print the default kubeadm configuration.
  • For how to use experimental features, see the CLI documentation.
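
A minimal sketch of the re-apply workflow mentioned in step 4, assuming the default cluster name (so the saved file lives at ~/.sealos/default/Clusterfile):

# Edit the saved cluster definition, e.g. to adjust hosts or the image list
vi ~/.sealos/default/Clusterfile

# Re-apply it; sealos reconciles the running cluster toward the edited definition
sealos apply -f ~/.sealos/default/Clusterfile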

5. Install the cluster

[root@k8s-master01 ~]# sealos apply -f Clusterfile

The whole process takes about 10 minutes. The output is as follows:

2022-12-03T13:21:11 info Start to create a new cluster: master [192.168.150.138], worker [192.168.150.136 192.168.150.137]
2022-12-03T13:21:11 info Executing pipeline Check in CreateProcessor.
2022-12-03T13:21:11 info checker:hostname [192.168.150.138:22 192.168.150.136:22 192.168.150.137:22]
2022-12-03T13:21:12 info checker:timeSync [192.168.150.138:22 192.168.150.136:22 192.168.150.137:22]
2022-12-03T13:21:12 info Executing pipeline PreProcess in CreateProcessor.
Resolving "labring/kubernetes" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/labring/kubernetes:v1.24...
Getting image source signatures
Copying blob 492bb2eddf21 done
Copying config 160c217560 done
Writing manifest to image destination
Storing signatures
160c217560b2bbd5601116f97b80af3a0b816333371ceeb24c8c99b70bd704cd
Resolving "labring/helm" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/labring/helm:v3.8.2...
Getting image source signatures
Copying blob 53a6eade9e7e done
Copying config 1123e8b4b4 done
Writing manifest to image destination
Storing signatures
1123e8b4b455ed291f3ec7273af62e49458fe3dd141f5e7cb2a4243d6284deec
Resolving "labring/calico" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/labring/calico:v3.24.1...
Getting image source signatures
Copying blob f9de59270f64 done
Copying config e2122fc58f done
Writing manifest to image destination
Storing signatures
e2122fc58fd32f1c93ac75da5c473aed746f1ad9b31a73d1f81a0579b96e775b
default-ov02hfcf
default-gavmcjsy
default-krjxdzgj
2022-12-03T13:23:22 info Executing pipeline RunConfig in CreateProcessor.
2022-12-03T13:23:22 info Executing pipeline MountRootfs in CreateProcessor.
[1/1]copying files to 192.168.150.137:22  25% [==>            ] (1/4, 1 it/s) [0s:2s]which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
INFO [2022-12-03 13:23:46] >> check root,port,cri success
[1/1]copying files to 192.168.150.137:22  50% [======>        ] (2/4, 39 it/min) [12s:3s]Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
[1/1]copying files to 192.168.150.136:22  50% [======>        ] (2/4, 38 it/min) [13s:3s] INFO [2022-12-03 13:23:59] >> Health check containerd!
INFO [2022-12-03 13:23:59] >> containerd is running
INFO [2022-12-03 13:23:59] >> init containerd success
Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
INFO [2022-12-03 13:24:00] >> Health check image-cri-shim!
INFO [2022-12-03 13:24:01] >> image-cri-shim is running
INFO [2022-12-03 13:24:01] >> init shim success

* Applying /usr/lib/sysctl.d/00-system.conf ...
  net.bridge.bridge-nf-call-ip6tables = 0
  net.bridge.bridge-nf-call-iptables = 0
  net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
  kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
  kernel.sysrq = 16
  kernel.core_uses_pid = 1
  kernel.kptr_restrict = 1
  net.ipv4.conf.default.rp_filter = 1
  net.ipv4.conf.all.rp_filter = 1
  net.ipv4.conf.default.accept_source_route = 0
  net.ipv4.conf.all.accept_source_route = 0
  net.ipv4.conf.default.promote_secondaries = 1
  net.ipv4.conf.all.promote_secondaries = 1
  fs.protected_hardlinks = 1
  fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.conf.all.rp_filter = 0
* Applying /etc/sysctl.conf ...
  net.ipv4.ip_forward = 1
  Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
  Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
  Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
   INFO [2022-12-03 13:24:06] >> init kube success
   INFO [2022-12-03 13:24:06] >> init containerd rootfs success
  192.168.150.137:22: which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
  192.168.150.137:22:  INFO [2022-12-03 13:25:17] >> check root,port,cri success
  192.168.150.136:22: which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
  192.168.150.136:22:  INFO [2022-12-03 13:25:31] >> check root,port,cri success
  192.168.150.137:22: Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
  192.168.150.137:22:  INFO [2022-12-03 13:25:37] >> Health check containerd!
  192.168.150.137:22:  INFO [2022-12-03 13:25:37] >> containerd is running
  192.168.150.137:22:  INFO [2022-12-03 13:25:37] >> init containerd success
  192.168.150.137:22: Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
  192.168.150.137:22:  INFO [2022-12-03 13:25:38] >> Health check image-cri-shim!
  192.168.150.137:22:  INFO [2022-12-03 13:25:38] >> image-cri-shim is running
  192.168.150.137:22:  INFO [2022-12-03 13:25:38] >> init shim success
  192.168.150.137:22: * Applying /usr/lib/sysctl.d/00-system.conf ...
  192.168.150.137:22: net.bridge.bridge-nf-call-ip6tables = 0
  192.168.150.137:22: net.bridge.bridge-nf-call-iptables = 0
  192.168.150.137:22: net.bridge.bridge-nf-call-arptables = 0
  192.168.150.137:22: * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
  192.168.150.137:22: kernel.yama.ptrace_scope = 0
  192.168.150.137:22: * Applying /usr/lib/sysctl.d/50-default.conf ...
  192.168.150.137:22: kernel.sysrq = 16
  192.168.150.137:22: kernel.core_uses_pid = 1
  192.168.150.137:22: kernel.kptr_restrict = 1
  192.168.150.137:22: net.ipv4.conf.default.rp_filter = 1
  192.168.150.137:22: net.ipv4.conf.all.rp_filter = 1
  192.168.150.137:22: net.ipv4.conf.default.accept_source_route = 0
  192.168.150.137:22: net.ipv4.conf.all.accept_source_route = 0
  192.168.150.137:22: net.ipv4.conf.default.promote_secondaries = 1
  192.168.150.137:22: net.ipv4.conf.all.promote_secondaries = 1
  192.168.150.137:22: fs.protected_hardlinks = 1
  192.168.150.137:22: fs.protected_symlinks = 1
  192.168.150.137:22: * Applying /etc/sysctl.d/99-sysctl.conf ...
  192.168.150.137:22: * Applying /etc/sysctl.d/k8s.conf ...
  192.168.150.137:22: net.bridge.bridge-nf-call-ip6tables = 1
  192.168.150.137:22: net.bridge.bridge-nf-call-iptables = 1
  192.168.150.137:22: net.ipv4.conf.all.rp_filter = 0
  192.168.150.137:22: * Applying /etc/sysctl.conf ...
  192.168.150.137:22: net.ipv4.ip_forward = 1
  192.168.150.136:22: Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
  192.168.150.137:22: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
  192.168.150.137:22: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
  192.168.150.136:22:  INFO [2022-12-03 13:25:44] >> Health check containerd!
  192.168.150.136:22:  INFO [2022-12-03 13:25:44] >> containerd is running
  192.168.150.136:22:  INFO [2022-12-03 13:25:44] >> init containerd success
  192.168.150.136:22: Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
  192.168.150.136:22:  INFO [2022-12-03 13:25:47] >> Health check image-cri-shim!
  192.168.150.136:22:  INFO [2022-12-03 13:25:48] >> image-cri-shim is running
  192.168.150.136:22:  INFO [2022-12-03 13:25:48] >> init shim success
  192.168.150.136:22: * Applying /usr/lib/sysctl.d/00-system.conf ...
  192.168.150.136:22: net.bridge.bridge-nf-call-ip6tables = 0
  192.168.150.136:22: net.bridge.bridge-nf-call-iptables = 0
  192.168.150.136:22: net.bridge.bridge-nf-call-arptables = 0
  192.168.150.136:22: * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
  192.168.150.136:22: kernel.yama.ptrace_scope = 0
  192.168.150.136:22: * Applying /usr/lib/sysctl.d/50-default.conf ...
  192.168.150.136:22: kernel.sysrq = 16
  192.168.150.136:22: kernel.core_uses_pid = 1
  192.168.150.136:22: kernel.kptr_restrict = 1
  192.168.150.136:22: net.ipv4.conf.default.rp_filter = 1
  192.168.150.136:22: net.ipv4.conf.all.rp_filter = 1
  192.168.150.136:22: net.ipv4.conf.default.accept_source_route = 0
  192.168.150.136:22: net.ipv4.conf.all.accept_source_route = 0
  192.168.150.136:22: net.ipv4.conf.default.promote_secondaries = 1
  192.168.150.136:22: net.ipv4.conf.all.promote_secondaries = 1
  192.168.150.136:22: fs.protected_hardlinks = 1
  192.168.150.136:22: fs.protected_symlinks = 1
  192.168.150.136:22: * Applying /etc/sysctl.d/99-sysctl.conf ...
  192.168.150.136:22: * Applying /etc/sysctl.d/k8s.conf ...
  192.168.150.136:22: net.bridge.bridge-nf-call-ip6tables = 1
  192.168.150.136:22: net.bridge.bridge-nf-call-iptables = 1
  192.168.150.136:22: net.ipv4.conf.all.rp_filter = 0
  192.168.150.136:22: * Applying /etc/sysctl.conf ...
  192.168.150.136:22: net.ipv4.ip_forward = 1
  192.168.150.136:22: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
  192.168.150.136:22: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
  192.168.150.137:22: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
  192.168.150.137:22:  INFO [2022-12-03 13:26:02] >> init kube success
  192.168.150.137:22:  INFO [2022-12-03 13:26:03] >> init containerd rootfs success
  192.168.150.136:22: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
  192.168.150.136:22:  INFO [2022-12-03 13:26:20] >> init kube success
  192.168.150.136:22:  INFO [2022-12-03 13:26:20] >> init containerd rootfs success
  2022-12-03T13:26:29 info Executing pipeline Init in CreateProcessor.
  2022-12-03T13:26:29 info start to copy kubeadm config to master0
  2022-12-03T13:26:36 info start to generate cert and kubeConfig...
  2022-12-03T13:26:36 info start to generator cert and copy to masters...
  2022-12-03T13:26:37 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local k8s-master01:k8s-master01 kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.150.138:192.168.150.138]}
  2022-12-03T13:26:37 info Etcd altnames : {map[k8s-master01:k8s-master01 localhost:localhost] map[127.0.0.1:127.0.0.1 192.168.150.138:192.168.150.138 ::1:::1]}, commonName : k8s-master01
  2022-12-03T13:26:40 info start to copy etc pki files to masters
  2022-12-03T13:26:40 info start to create kubeconfig...
  2022-12-03T13:26:42 info start to copy kubeconfig files to masters
  2022-12-03T13:26:42 info start to copy static files to masters
  2022-12-03T13:26:42 info start to apply registry
  Created symlink from /etc/systemd/system/multi-user.target.wants/registry.service to /etc/systemd/system/registry.service.
   INFO [2022-12-03 13:26:43] >> Health check registry!
   INFO [2022-12-03 13:26:43] >> registry is running
   INFO [2022-12-03 13:26:43] >> init registry success
  2022-12-03T13:26:43 info start to init master0...
  2022-12-03T13:26:43 info registry auth in node 192.168.150.138:22
  2022-12-03T13:26:44 info domain sealos.hub:192.168.150.138 append success
  2022-12-03T13:26:45 info domain apiserver.cluster.local:192.168.150.138 append success
  W1203 13:26:47.278211    1853 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
  [init] Using Kubernetes version: v1.24.4
  [preflight] Running pre-flight checks
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING Hostname]: hostname "k8s-master01" could not be reached
        [WARNING Hostname]: hostname "k8s-master01": lookup k8s-master01 on 114.114.114.114:53: no such host
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Using existing ca certificate authority
  [certs] Using existing apiserver certificate and key on disk
  [certs] Using existing apiserver-kubelet-client certificate and key on disk
  [certs] Using existing front-proxy-ca certificate authority
  [certs] Using existing front-proxy-client certificate and key on disk
  [certs] Using existing etcd/ca certificate authority
  [certs] Using existing etcd/server certificate and key on disk
  [certs] Using existing etcd/peer certificate and key on disk
  [certs] Using existing etcd/healthcheck-client certificate and key on disk
  [certs] Using existing apiserver-etcd-client certificate and key on disk
  [certs] Using the existing "sa" key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
  [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
  W1203 13:27:41.625270    1853 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.150.138:6443, got: https://apiserver.cluster.local:6443
  [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
  W1203 13:27:41.976983    1853 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.150.138:6443, got: https://apiserver.cluster.local:6443
  [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Starting the kubelet
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [kubelet-check] Initial timeout of 40s passed.
  [apiclient] All control plane components are healthy after 68.520377 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Skipping phase. Please see --upload-certs
  [mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
  [mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
  [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join apiserver.cluster.local:6443 --token <value withheld> \
        --discovery-token-ca-cert-hash sha256:547bbf832d660cfbf35191db348a78d5b22b6762fb89cb8cc5dd2a98c158aac5 \
        --control-plane --certificate-key <value withheld>

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cluster.local:6443 --token <value withheld> \
        --discovery-token-ca-cert-hash sha256:547bbf832d660cfbf35191db348a78d5b22b6762fb89cb8cc5dd2a98c158aac5
2022-12-03T13:29:00 info Executing pipeline Join in CreateProcessor.
2022-12-03T13:29:00 info [192.168.150.136:22 192.168.150.137:22] will be added as worker
2022-12-03T13:29:00 info start to get kubernetes token...
2022-12-03T13:29:04 info start to join 192.168.150.137:22 as worker
2022-12-03T13:29:04 info start to copy kubeadm join config to node: 192.168.150.137:22
2022-12-03T13:29:04 info start to join 192.168.150.136:22 as worker
2022-12-03T13:29:04 info start to copy kubeadm join config to node: 192.168.150.136:22
192.168.150.137:22: 2022-12-03T13:29:10 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.150.137:22: 2022-12-03T13:29:12 info domain lvscare.node.ip:192.168.150.137 append success
2022-12-03T13:29:12 info registry auth in node 192.168.150.137:22
192.168.150.137:22: 2022-12-03T13:29:14 info domain sealos.hub:192.168.150.138 append success
2022-12-03T13:29:14 info run ipvs once module: 192.168.150.137:22
192.168.150.136:22: 2022-12-03T13:29:15 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.150.137:22: 2022-12-03T13:29:15 info Trying to add route
192.168.150.137:22: 2022-12-03T13:29:15 info success to set route.(host:10.103.97.2, gateway:192.168.150.137)
2022-12-03T13:29:15 info start join node: 192.168.150.137:22
192.168.150.137:22: W1203 13:29:16.148407    3998 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.150.137:22: [preflight] Running pre-flight checks
192.168.150.137:22:     [WARNING FileExisting-socat]: socat not found in system path
192.168.150.136:22: 2022-12-03T13:29:16 info domain lvscare.node.ip:192.168.150.136 append success
2022-12-03T13:29:16 info registry auth in node 192.168.150.136:22
192.168.150.136:22: 2022-12-03T13:29:17 info domain sealos.hub:192.168.150.138 append success
2022-12-03T13:29:17 info run ipvs once module: 192.168.150.136:22
192.168.150.136:22: 2022-12-03T13:29:18 info Trying to add route
192.168.150.136:22: 2022-12-03T13:29:18 info success to set route.(host:10.103.97.2, gateway:192.168.150.136)
2022-12-03T13:29:18 info start join node: 192.168.150.136:22
192.168.150.136:22: W1203 13:29:18.916599    3968 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
192.168.150.136:22: [preflight] Running pre-flight checks
192.168.150.136:22:     [WARNING FileExisting-socat]: socat not found in system path
192.168.150.137:22:     [WARNING Hostname]: hostname "k8s-node01" could not be reached
192.168.150.137:22:     [WARNING Hostname]: hostname "k8s-node01": lookup k8s-node01 on 114.114.114.114:53: no such host
192.168.150.137:22: [preflight] Reading configuration from the cluster...
192.168.150.137:22: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.150.137:22: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.150.137:22: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.150.137:22: [kubelet-start] Starting the kubelet
192.168.150.137:22: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.150.136:22:     [WARNING Hostname]: hostname "k8s-node02" could not be reached
192.168.150.136:22:     [WARNING Hostname]: hostname "k8s-node02": lookup k8s-node02 on 114.114.114.114:53: no such host
192.168.150.136:22: [preflight] Reading configuration from the cluster...
192.168.150.136:22: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.150.136:22: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.150.136:22: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.150.136:22: [kubelet-start] Starting the kubelet
192.168.150.136:22: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.150.137:22:
192.168.150.137:22: This node has joined the cluster:
192.168.150.137:22: * Certificate signing request was sent to apiserver and a response was received.
192.168.150.137:22: * The Kubelet was informed of the new secure connection details.
192.168.150.137:22:
192.168.150.137:22: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.150.137:22:
2022-12-03T13:29:39 info succeeded in joining 192.168.150.137:22 as worker
192.168.150.136:22:
192.168.150.136:22: This node has joined the cluster:
192.168.150.136:22: * Certificate signing request was sent to apiserver and a response was received.
192.168.150.136:22: * The Kubelet was informed of the new secure connection details.
192.168.150.136:22:
192.168.150.136:22: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.150.136:22:
2022-12-03T13:29:41 info succeeded in joining 192.168.150.136:22 as worker
2022-12-03T13:29:42 info start to sync lvscare static pod to node: 192.168.150.137:22 master: [192.168.150.138:6443]
2022-12-03T13:29:42 info start to sync lvscare static pod to node: 192.168.150.136:22 master: [192.168.150.138:6443]
192.168.150.137:22: 2022-12-03T13:29:43 info generator lvscare static pod is success
192.168.150.136:22: 2022-12-03T13:29:44 info generator lvscare static pod is success
2022-12-03T13:29:44 info Executing pipeline RunGuest in CreateProcessor.
2022-12-03T13:29:44 info guest cmd is cp opt/helm /usr/bin/
2022-12-03T13:29:44 info guest cmd is kubectl create namespace tigera-operator
namespace/tigera-operator created
2022-12-03T13:29:45 info guest cmd is helm install calico charts/calico --namespace tigera-operator
NAME: calico
LAST DEPLOYED: Sat Dec  3 13:29:56 2022
NAMESPACE: tigera-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
2022-12-03T13:30:03 info succeeded in creating a new cluster, enjoy it!
2022-12-03T13:30:03 info
      ___           ___           ___           ___       ___           ___
     /\  \         /\  \         /\  \         /\__\     /\  \         /\  \
    /::\  \       /::\  \       /::\  \       /:/  /    /::\  \       /::\  \
   /:/\ \  \     /:/\:\  \     /:/\:\  \     /:/  /    /:/\:\  \     /:/\ \  \
  _\:\~\ \  \   /::\~\:\  \   /::\~\:\  \   /:/  /    /:/  \:\  \   _\:\~\ \  \
 /\ \:\ \ \__\ /:/\:\ \:\__\ /:/\:\ \:\__\ /:/__/    /:/__/ \:\__\ /\ \:\ \ \__\
 \:\ \:\ \/__/ \:\~\:\ \/__/ \/__\:\/:/  / \:\  \    \:\  \ /:/  / \:\ \:\ \/__/
  \:\ \:\__\    \:\ \:\__\        \::/  /   \:\  \    \:\  /:/  /   \:\ \:\__\
   \:\/:/  /     \:\ \/__/        /:/  /     \:\  \    \:\/:/  /     \:\/:/  /
    \::/  /       \:\__\         /:/  /       \:\__\    \::/  /       \::/  /
     \/__/         \/__/         \/__/         \/__/     \/__/         \/__/

                  Website :https://www.sealos.io/
                  Address :github.com/labring/sealos

6. Inspect the cluster

  • After the cluster is created, it takes a few minutes before everything shows as ready
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   2m21s   v1.24.4
k8s-node01     NotReady   <none>          97s     v1.24.4
k8s-node02     NotReady   <none>          94s     v1.24.4
[root@k8s-master01 ~]# kubectl get pods
No resources found in default namespace.
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   Ready      control-plane   3m50s   v1.24.4
k8s-node01     NotReady   <none>          3m6s    v1.24.4
k8s-node02     NotReady   <none>          3m3s    v1.24.4
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
k8s-master01   Ready    control-plane   9m5s    v1.24.4
k8s-node01     Ready    <none>          8m21s   v1.24.4
k8s-node02     Ready    <none>          8m18s   v1.24.4
[root@k8s-master01 ~]# kubectl get pods
No resources found in default namespace.
[root@k8s-master01 ~]# kubectl get pods -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS       AGE
calico-apiserver   calico-apiserver-6df8578b5f-4sd8b          1/1     Running   0              4m7s
calico-apiserver   calico-apiserver-6df8578b5f-g62wv          1/1     Running   0              4m7s
calico-system      calico-kube-controllers-5b8957ccd7-jsmzm   1/1     Running   0              9m12s
calico-system      calico-node-4wvsg                          1/1     Running   0              9m13s
calico-system      calico-node-q7t5v                          1/1     Running   0              9m13s
calico-system      calico-node-s668p                          1/1     Running   0              9m13s
calico-system      calico-typha-5cd6db69-jt9ss                1/1     Running   0              9m14s
calico-system      calico-typha-5cd6db69-pdlb5                1/1     Running   0              9m6s
calico-system      csi-node-driver-dqrv9                      2/2     Running   0              7m35s
calico-system      csi-node-driver-m86lh                      2/2     Running   0              6m21s
calico-system      csi-node-driver-w9p77                      2/2     Running   0              5m46s
kube-system        coredns-6d4b75cb6d-vvx5g                   1/1     Running   0              10m
kube-system        coredns-6d4b75cb6d-zzfnp                   1/1     Running   0              10m
kube-system        etcd-k8s-master01                          1/1     Running   0              10m
kube-system        kube-apiserver-k8s-master01                1/1     Running   0              10m
kube-system        kube-controller-manager-k8s-master01       1/1     Running   0              10m
kube-system        kube-proxy-9jk96                           1/1     Running   0              9m59s
kube-system        kube-proxy-sv4nj                           1/1     Running   0              9m56s
kube-system        kube-proxy-x2cfj                           1/1     Running   0              10m
kube-system        kube-scheduler-k8s-master01                1/1     Running   1 (2m6s ago)   10m
kube-system        kube-sealos-lvscare-k8s-node01             1/1     Running   0              9m34s
kube-system        kube-sealos-lvscare-k8s-node02             1/1     Running   0              9m51s
tigera-operator    tigera-operator-6bb888d6fc-5ftwh           1/1     Running   0              9m35s
[root@k8s-master01 ~]# kubectl get pods -A -owide
NAMESPACE          NAME                                       READY   STATUS    RESTARTS        AGE     IP                NODE           NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-6df8578b5f-4sd8b          1/1     Running   0               4m17s   100.125.152.2     k8s-node02     <none>           <none>
calico-apiserver   calico-apiserver-6df8578b5f-g62wv          1/1     Running   0               4m17s   100.97.125.2      k8s-node01     <none>           <none>
calico-system      calico-kube-controllers-5b8957ccd7-jsmzm   1/1     Running   0               9m22s   100.124.32.129    k8s-master01   <none>           <none>
calico-system      calico-node-4wvsg                          1/1     Running   0               9m23s   192.168.150.137   k8s-node01     <none>           <none>
calico-system      calico-node-q7t5v                          1/1     Running   0               9m23s   192.168.150.138   k8s-master01   <none>           <none>
calico-system      calico-node-s668p                          1/1     Running   0               9m23s   192.168.150.136   k8s-node02     <none>           <none>
calico-system      calico-typha-5cd6db69-jt9ss                1/1     Running   0               9m24s   192.168.150.136   k8s-node02     <none>           <none>
calico-system      calico-typha-5cd6db69-pdlb5                1/1     Running   0               9m16s   192.168.150.137   k8s-node01     <none>           <none>
calico-system      csi-node-driver-dqrv9                      2/2     Running   0               7m45s   100.124.32.130    k8s-master01   <none>           <none>
calico-system      csi-node-driver-m86lh                      2/2     Running   0               6m31s   100.125.152.1     k8s-node02     <none>           <none>
calico-system      csi-node-driver-w9p77                      2/2     Running   0               5m56s   100.97.125.1      k8s-node01     <none>           <none>
kube-system        coredns-6d4b75cb6d-vvx5g                   1/1     Running   0               10m     100.124.32.132    k8s-master01   <none>           <none>
kube-system        coredns-6d4b75cb6d-zzfnp                   1/1     Running   0               10m     100.124.32.131    k8s-master01   <none>           <none>
kube-system        etcd-k8s-master01                          1/1     Running   0               10m     192.168.150.138   k8s-master01   <none>           <none>
kube-system        kube-apiserver-k8s-master01                1/1     Running   0               10m     192.168.150.138   k8s-master01   <none>           <none>
kube-system        kube-controller-manager-k8s-master01       1/1     Running   0               10m     192.168.150.138   k8s-master01   <none>           <none>
kube-system        kube-proxy-9jk96                           1/1     Running   0               10m     192.168.150.137   k8s-node01     <none>           <none>
kube-system        kube-proxy-sv4nj                           1/1     Running   0               10m     192.168.150.136   k8s-node02     <none>           <none>
kube-system        kube-proxy-x2cfj                           1/1     Running   0               10m     192.168.150.138   k8s-master01   <none>           <none>
kube-system        kube-scheduler-k8s-master01                1/1     Running   1 (2m16s ago)   10m     192.168.150.138   k8s-master01   <none>           <none>
kube-system        kube-sealos-lvscare-k8s-node01             1/1     Running   0               9m44s   192.168.150.137   k8s-node01     <none>           <none>
kube-system        kube-sealos-lvscare-k8s-node02             1/1     Running   0               10m     192.168.150.136   k8s-node02     <none>           <none>
tigera-operator    tigera-operator-6bb888d6fc-5ftwh           1/1     Running   0               9m45s   192.168.150.137   k8s-node01     <none>           <none>
[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes control plane is running at https://apiserver.cluster.local:6443
CoreDNS is running at https://apiserver.cluster.local:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Troubleshooting

Issue 1: function "semverCompare" not defined
[EROR] Applied to cluster error: render env to rootfs failed: failed to create template: /var/lib/containers/storage/overlay/bbfc1569f981a2d0389cf78e288b7392298a39524b7e5a9ebd94611d23a2dcee/merged/etc/image-cri-shim.yaml.tmpl template: image-cri-shim.yaml.tmpl:21: function "semverCompare" not defined
Issue 2: exec auth.sh failed
error Applied to cluster error: failed to init exec auth.sh failed exit status 127
Solution

Upgrade sealos to the latest version.
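
A minimal upgrade sketch, reusing the release URL pattern from the installation step above (substitute the latest tag from the sealos releases page for v4.1.3):

# Download the newer release, replace the existing binary, then confirm the version
wget https://github.com/labring/sealos/releases/download/v4.1.3/sealos_4.1.3_linux_amd64.tar.gz
tar zxvf sealos_4.1.3_linux_amd64.tar.gz sealos
chmod +x sealos && mv sealos /usr/bin
sealos version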

II. Installing a k8s cluster offline

Download the offline package

https://pan.baidu.com/s/1fu_l8yL_K6BLpSIugKhvAg?pwd=47f5#list/path=%2Fsharelink33820949-51949982255598%2Fsealos%E7%A6%BB%E7%BA%BF%E5%8C%85%2Farm64&parentPath=%2Fsharelink33820949-51949982255598

Download the sealos tool

wget https://github.com/labring/sealos/releases/download/v4.0.0/sealos_4.0.0_linux_amd64.tar.gz

tar xf sealos_4.0.0_linux_amd64.tar.gz
mv sealos  /usr/local/bin/
chmod +x /usr/local/bin/sealos

Install the k8s cluster

Old version:
wget 
sealos init --passwd '12345678' \
    --master 10.11.1.2-10.11.1.4 \
    --node 10.11.1.5 \
    --pkg-url /root/kube1.23.6.tar.gz \
    --version v1.23.6
Parameter descriptions (a fuller example combining several of these flags follows the list):
  • passwd: server password, e.g. 123456
  • master: k8s master node IP addresses, e.g. 10.11.1.2-10.11.1.4
  • node: k8s node IP addresses, e.g. 10.11.1.5
  • pkg-url: location of the offline resource package, either a local path or a remote URL, e.g. /root/kube1.23.6.tar.gz
  • version: version matching the resource package, e.g. v1.23.6
  • apiserver: domain name mapped to the apiserver virtual IP, written to /etc/hosts; default "apiserver.cluster.local"
  • cert-sans: kubernetes apiServerCertSANs, e.g. 47.0.0.22 sealyun.com
  • interface: network interface match pattern; default "eth.* en.* em.*"
  • ipip: whether to enable IPIP mode; default true
  • kubeadm-config: path to a kubeadm-config.yaml template file
  • lvscare-image: lvscare image name; default "fanux/lvscare"
  • lvscare-tag: lvscare image tag; default "latest"
  • mtu: MTU for IPIP mode; default "1440"
  • network: which CNI component to install; default "calico"
  • pk: SSH private key file path; default "/root/.ssh/id_rsa"
  • pk-passwd: SSH private key passphrase
  • podcidr: pod IP network range; default "100.64.0.0/10"
  • repo: registry that images are pulled from during installation; default "k8s.gcr.io"
  • svccidr: k8s service IP network range; default "10.96.0.0/12"
  • user: SSH username; default "root"
  • vip: virtual IP address; default "10.103.97.2"
  • vlog: kubeadm log verbosity level
  • without-cni: if true, do not install a CNI component
  • config: cluster configuration file path (global option); default $HOME/.sealos/config.yaml
  • info: log output level during installation; true for Info level, false for Debug level
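
A fuller sealos init sketch combining several of the flags listed above; the values are illustrative placeholders taken from the examples and defaults in this list, not from a real deployment:

sealos init --passwd '12345678' \
    --master 10.11.1.2-10.11.1.4 \
    --node 10.11.1.5 \
    --pkg-url /root/kube1.23.6.tar.gz \
    --version v1.23.6 \
    --podcidr 100.64.0.0/10 \
    --svccidr 10.96.0.0/12 \
    --network calico \
    --vip 10.103.97.2
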
New version:
# Run a single node kubernetes
$ sealos run labring/kubernetes:v1.24.0 labring/calico:v3.22.1

# Run a HA kubernetes cluster
$ sealos run labring/kubernetes:v1.24.0 labring/calico:v3.22.1 \
     --masters 192.168.64.2,192.168.64.22,192.168.64.20 \
     --nodes 192.168.64.21,192.168.64.19 -p [your-ssh-passwd]

# Add masters or nodes
$ sealos add --masters 192.168.64.20 --nodes 192.168.64.21,192.168.64.22

# Delete your cluster
$ sealos reset