01. Provisioning Production-Ready Kubernetes and KubeSphere Clusters on Linux with KubeKey

Introduction

KubeSphere is a distributed operating system for cloud-native applications built on top of Kubernetes. It is fully open source, supports multi-cloud and multi-cluster management, provides full-stack IT automation and operations capabilities, and simplifies enterprise DevOps workflows. Its architecture makes it easy to integrate third-party applications and cloud-native ecosystem components in a plug-and-play fashion.

KubeSphere has also open-sourced KubeKey, which helps enterprises quickly stand up Kubernetes clusters on public clouds or in data centers with a single command. It supports single-node and multi-node installation, cluster add-on installation, and cluster upgrades and operations.
Link: KubeSphere official website

This chapter demonstrates how to use KubeKey to provision production-ready Kubernetes and KubeSphere clusters on Linux in different environments. You can also use KubeKey to easily scale a cluster out and in, and to set up various storage classes as needed.

Minimum System Requirements (per node)

System                          Minimum Requirements (per node)
Ubuntu 16.04, 18.04, 20.04      CPU: 2 cores, Memory: 4 GB, Disk: 40 GB
CentOS 7.x, 8.x                 CPU: 2 cores, Memory: 4 GB, Disk: 40 GB

Versions Used

Name          Version
KubeKey       3.0.7
Kubernetes    1.23.8
KubeSphere    3.3.1

Host Allocation

Hosts with a clean environment (no other services running) are recommended.

Hostname    IP             Roles                          Container Runtime   Runtime Version
master01    192.168.0.5    control plane, etcd, worker    docker              19.3.8+
node01      192.168.0.7    worker                         docker              19.3.8+
node02      192.168.0.8    worker                         docker              19.3.8+
node03      192.168.0.3    worker                         docker              19.3.8+

1. Environment Configuration

Run the following steps on every host.
For cloud hosts, configure security group rules in the provider's web console so that all hosts can reach each other on the required network ports (see the connectivity check below).
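A minimal connectivity check you can run from one node against another before installing (a sketch only; the port list assumes the standard Kubernetes/KubeSphere ports and should be adjusted to your environment, and it requires nc/netcat to be installed):

# Hypothetical quick check against a peer node (here master01 at 192.168.0.5):
# 6443 kube-apiserver, 2379/2380 etcd, 10250 kubelet, 30880 KubeSphere console
for port in 6443 2379 2380 10250 30880; do
    nc -zv -w 2 192.168.0.5 $port
done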

1.1 Basic System Configuration for Kubernetes

1.1.1 Set the Hostname

  # For cloud hosts you can also change the hostname in the provider console. The example is for master01; set the other hosts accordingly.
  hostnamectl set-hostname master01

1.1.2 Configure the Package Mirrors

# For Ubuntu
sed -i 's/cn.archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list

# For CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For a private mirror
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak  /etc/yum.repos.d/CentOS-*.repo

1.1.3 Install Required Tools

# For Ubuntu
apt update && apt upgrade -y && apt install -y wget psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl socat

# For CentOS 7
yum update -y && yum -y install  wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl socat

# For CentOS 8
yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl socat

1.1.4 Disable the Firewall

# Skip on Ubuntu; run on CentOS
systemctl disable --now firewalld

1.1.5 Disable SELinux

# Skip on Ubuntu; run on CentOS
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.1.6 Disable Swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0

1.1.7 Network Configuration (choose one of the two options)

# Skip on Ubuntu; run on CentOS

#--------------------- Option 1 --------------------------
systemctl disable --now NetworkManager
systemctl start network && systemctl enable network

#--------------------- Option 2 --------------------------
cat > /etc/NetworkManager/conf.d/calico.conf << EOF 
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

1.1.8 Time Synchronization (choose one of the two options)

#--------------------- Option 1 --------------------------
# Server side. Server IP (master01): 192.168.0.5   Host subnet: 192.168.0.0/24
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.0.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd ; systemctl enable chronyd

# Client side
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool 192.168.0.5 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v

#--------------------- Option 2 --------------------------
#1. Download and install ntpdate
   yum install ntpdate -y
#2. Sync time from the Aliyun NTP server
   ntpdate  time2.aliyun.com
#3. Write the system time to the hardware clock
   hwclock --systohc
   timedatectl
#4. Force the system time into CMOS so it persists across reboots
   hwclock -w  # or clock -w

1.1.9 Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

1.1.10 Configure Passwordless SSH Login (choose one of the three options)

This can be skipped when installing with KubeKey or similar tooling.

# Generate an SSH key pair
# Generate an SSH key pair on every host in the cluster; just press Enter at each prompt
ssh-keygen -t rsa

#--------------------- Option 1 --------------------------
# Copy the public key file id_rsa.pub generated on the control node to every other node and add it to their authorized key lists.
# Run on the control node:
cd ~/.ssh/
ssh-copy-id -i ~/.ssh/id_rsa.pub root@{target_host}
# {target_host} covers every host in the cluster; it can be a hostname or an IP address.
# After the command completes, an authorized_keys file is created under ~/.ssh/ on {target_host}.

#--------------------- Option 2 --------------------------
# Run on the control node:
cd ~/.ssh
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Copy the public key file id_rsa.pub generated on the control node to the other nodes (target_hosts):
scp ~/.ssh/id_rsa.pub root@{target_hosts}:~/
# Run on each of the other nodes:
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
# Set directory and file permissions on all hosts.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# From the control node, verify passwordless login to each host.
ssh {target_host}

#--------------------- Option 3 --------------------------
# apt install -y sshpass
# SSHPASS holds the host password
# Run on the control node only
 yum install -y sshpass
 ssh-keygen -f /root/.ssh/id_rsa -P ''
 export IP="192.168.0.5 192.168.0.7 192.168.0.8 192.168.0.3"
 export SSHPASS=xxxxx
 for HOST in $IP;do
      sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
 done

1.1.11 Add the ELRepo Repository

# Skip on Ubuntu; run on CentOS

# Configure the repository for RHEL-8 or CentOS-8
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y 
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo 
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo 

# Install ELRepo for RHEL-7, SL-7, or CentOS-7
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y 
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo 
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo 

# List the available packages
yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available

1.1.12 Upgrade the Kernel to 4.18 or Later

# Skip on Ubuntu; run on CentOS

# Install the latest kernel
# kernel-ml (mainline) is used here; install kernel-lt instead if you want the long-term maintenance branch
yum -y --enablerepo=elrepo-kernel  install  kernel-ml

# List the installed kernels
rpm -qa | grep kernel

# Show the default kernel
grubby --default-kernel

# If the default is not the new kernel, set it with:
ls /boot/vmlinuz-* | grep elrepo | xargs grubby --set-default

# Reboot to take effect
reboot

# Combined command for CentOS/RHEL 8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-ml -y ; grubby --default-kernel ; reboot 

# Combined command for CentOS/RHEL 7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot 
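After the host comes back up, a quick sanity check that the new kernel is actually in use:

# Confirm the running kernel is the newly installed one (should be >= 4.18)
uname -r
grubby --default-kernel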

1.1.13 Install ipvsadm

# For Ubuntu
# apt install ipvsadm ipset sysstat conntrack -y

# For CentOS
yum install ipvsadm ipset sysstat conntrack libseccomp -y
cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack

1.1.14 Tune Kernel Parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF

sysctl --system
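Note that the net.bridge.* keys only exist once the br_netfilter kernel module is loaded. If sysctl --system complains about them, load the module first; a minimal sketch (the file name below is illustrative):

# Load br_netfilter now and on every boot so the net.bridge.* sysctls are available
modprobe br_netfilter
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF
sysctl --system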

1.1.15 Configure /etc/hosts Resolution on All Nodes

On some cloud providers this can be skipped (if you can already ping the other nodes by hostname).

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6


192.168.0.5 master01
192.168.0.7 node01
192.168.0.8 node02
192.168.0.3 node03
EOF

1.2 Install and Configure Docker (Kubernetes < 1.24)

Kubernetes 1.24 and later no longer support dockershim (the shim layer).
To use Docker as the container runtime on 1.24 and later,
consider cri-dockerd (not covered in this article).

1.2.1 Installation

# For Ubuntu
# sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - && sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" && sudo apt-get update && sudo apt-get install -y docker-ce
# To avoid typing sudo for every command, add the current user to the docker group; note that you must log out and back in for this to take effect
# sudo usermod -a -G docker $USER
# (Optional) Configure a registry mirror by editing the daemon configuration file /etc/docker/daemon.json
# (Optional) Change Docker's default data directory by adding "data-root":"/data/app/docker" to daemon.json

sudo mkdir -p /etc/docker && sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "data-root":"/data/app/docker",
  "registry-mirrors": ["https://epsax6ut.mirror.aliyuncs.com"],
  "log-driver":"json-file",
  "log-opts": {"max-size":"10m", "max-file":"3"}
}
EOF
# sudo systemctl daemon-reload
# sudo systemctl restart docker

# For CentOS
# Add the Docker repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# List the available Docker versions
yum list docker-ce --showduplicates | sort -r
# Install Docker
yum -y install docker-ce-20.10.10-3.el7
# (Optional) Configure a registry mirror by editing the daemon configuration file /etc/docker/daemon.json
# Note: redirecting with > overwrites the existing contents of the file
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://epsax6ut.mirror.aliyuncs.com"],
  "log-driver":"json-file",
  "log-opts": {"max-size":"10m", "max-file":"3"}
}
EOF
# (Optional) Change Docker's default data directory by adding "data-root":"/data/app/docker" to daemon.json
cat > /etc/docker/daemon.json <<EOF
{  
  "data-root":"/data/app/docker"
}
EOF

# Reload the systemd daemon configuration
systemctl daemon-reload
# Restart Docker
systemctl restart docker
# Enable Docker at boot
systemctl enable --now docker
# Inspect the installation
docker info

1.2.2 Uninstallation

# For Ubuntu
# Stop Docker
systemctl stop docker
# Remove the packages together with everything installed as their dependencies
sudo apt-get autoremove docker docker-ce docker-engine docker.io containerd runc
# List leftover Docker packages
dpkg -l | grep docker
# Purge the leftover configuration of removed packages
dpkg -l |grep ^rc|awk '{print $2}' |sudo xargs dpkg -P 
# Remove any remaining Docker-related packages
sudo apt-get autoremove docker-ce-*
# Remove the remaining Docker configuration and data
sudo rm -rf /etc/systemd/system/docker.service.d
sudo rm -rf /var/lib/docker
# Verify the uninstallation
docker --version

# For CentOS
# Stop Docker
systemctl stop docker
# Uninstall
yum remove docker-ce docker-ce-cli containerd.io docker-compose-plugin
rm -rf /var/lib/docker
rm -rf /var/lib/containerd

2. Install Kubernetes and KubeSphere with KubeKey

Run the following steps on the master01 control node only.
After downloading KubeKey, if you transfer it to a new machine whose access to Googleapis is also restricted, be sure to run export KKZONE=cn again before carrying out the steps below.

2.1 Prepare KubeKey

KubeKey (developed in Go) is a new installation tool that replaces the previous ansible-based installer. KubeKey gives you flexible installation options: you can install Kubernetes only, or install Kubernetes and KubeSphere together.

Typical KubeKey use cases (see the command sketch below):
Install Kubernetes only;
Install Kubernetes and KubeSphere together with a single command;
Scale a cluster out or in;
Upgrade a cluster;
Install Kubernetes-related add-ons (Chart or YAML).
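A sketch of the scale and upgrade subcommands, based on KubeKey's documented usage (the target version below is illustrative; verify the exact flags against the KubeKey release you use):

# Scale out: add the new hosts to config-sample.yaml, then run
./kk add nodes -f config-sample.yaml
# Scale in: remove a node by name
./kk delete node node03 -f config-sample.yaml
# Upgrade the cluster to a newer supported version
./kk upgrade --with-kubernetes v1.23.10 --with-kubesphere v3.3.1 -f config-sample.yaml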

2.1.1 Download KubeKey

# If downloads are slow or fail, be sure to run:
export KKZONE=cn
# Download
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
# Add execute permission
chmod +x kk
# Check with ls
# anaconda-ks.cfg  kk  kubekey  kubekey-v3.0.7-linux-amd64.tar.gz
# List all Kubernetes versions that this KubeKey release can install
./kk version --show-supported-k8s

2.1.2 Create the Cluster Configuration File

# Create the configuration file
./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.23.8 -f config-sample.yaml
# Run ls to confirm that config-sample.yaml has been generated
# anaconda-ks.cfg  config-sample.yaml  kk  kubekey  kubekey-v3.0.7-linux-amd64.tar.gz

2.1.3 Modify the Cluster Configuration File

# Edit the configuration file
vim config-sample.yaml
# Field reference
# name: the hostname of the instance.
# address: the IP address the task machine and the other instances use to connect to each other over SSH. Depending on your environment, this can be a public or a private IP address. For example, some cloud platforms assign each instance a public IP address for SSH access; in that case, enter that public IP address in this field.
# internalAddress: the private IP address of the instance.
# Be sure to use the root account so that the scripts have sufficient privileges.
# In this tutorial port 22, the default SSH port, is used, so you do not need to add it to the YAML file. Otherwise, append the port number after the IP address, as shown below.
# hosts:
#  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 8022, user: root, password: Testing123}
# To enable the built-in high-availability configuration:
# In config-sample.yaml, the address and port fields should be indented by two spaces.
# In most cases you should provide a private IP address in the load balancer's address field. However, different cloud providers may configure load balancers differently. For example, if you configure a Server Load Balancer (SLB) on Alibaba Cloud, the platform assigns the SLB a public IP address, so you should specify that public IP address in the address field.
# The load balancer's default internal access domain is lb.kubesphere.local.
# To use the built-in load balancer, uncomment the internalLoadbalancer field.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master01, address: 192.168.0.5, internalAddress: 192.168.0.5, user: root, password: "Y369_Hx@Aisino"}
  - {name: node01, address: 192.168.0.7, internalAddress: 192.168.0.7, user: root, password: "Y369_Hx@Aisino"}
  - {name: node02, address: 192.168.0.8, internalAddress: 192.168.0.8, user: root, password: "Y369_Hx@Aisino"}
  - {name: node03, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: "Y369_Hx@Aisino"}
  roleGroups:
    etcd:
    - master01
    control-plane: 
    - master01
    worker:
    - node01
    - node02
    - node03
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.8
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600


2.2 Install Kubernetes and KubeSphere

2.2.1 Create the Cluster from the Configuration File

If downloads are slow or fail, be sure to run the export command below first.

export KKZONE=cn
# Start the installation. If you used a different file name, replace config-sample.yaml above with your own file
./kk create cluster -f config-sample.yaml
# Example output; enter yes to continue

[root@master01 ~]# ./kk create cluster -f config-sample.yaml


 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

10:50:07 CST [GreetingsModule] Greetings
10:50:08 CST message: [node03]
Greetings, KubeKey!
10:50:08 CST message: [node01]
Greetings, KubeKey!
10:50:09 CST message: [master01]
Greetings, KubeKey!
10:50:09 CST message: [node02]
Greetings, KubeKey!
10:50:09 CST success: [node03]
10:50:09 CST success: [node01]
10:50:09 CST success: [master01]
10:50:09 CST success: [node02]
10:50:09 CST [NodePreCheckModule] A pre-check on nodes
10:50:11 CST success: [node02]
10:50:11 CST success: [node03]
10:50:11 CST success: [node01]
10:50:11 CST success: [master01]
10:50:11 CST [ConfirmModule] Display confirmation form
+----------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+
| name     | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker   | containerd | nfs client | ceph client | glusterfs client | time         |
+----------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+
| master01 | y    | y    | y       | y        | y     | y     | y       | y         |        | 20.10.10 | 1.6.18     | y          |             |                  | CST 10:50:11 |
| node01   | y    | y    | y       | y        | y     | y     | y       | y         |        | 20.10.10 | 1.6.18     | y          |             |                  | CST 10:50:11 |
| node02   | y    | y    | y       | y        | y     | y     | y       | y         |        | 20.10.10 | 1.6.18     | y          |             |                  | CST 10:50:10 |
| node03   | y    | y    | y       | y        | y     | y     | y       | y         |        | 20.10.10 | 1.6.18     | y          |             |                  | CST 10:50:11 |
+----------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: ^C
# The installation takes some time; please be patient
# Run the following kubectl command to watch the installation progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# The log below indicates that the installation has finished
# The initial KubeSphere account and password are printed in the log
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://221.178.114.111:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2023-03-08 17:48:29
#####################################################
17:48:33 CST success: [master01]
17:48:33 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

        kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

2.2.2 Verify the Installation

# Check the cluster nodes with kubectl get nodes
# A STATUS of Ready means the Kubernetes cluster is up
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   18h   v1.23.8
node01     Ready    worker                 18h   v1.23.8
node02     Ready    worker                 18h   v1.23.8
node03     Ready    worker                 18h   v1.23.8
# Check that all pods are Running with kubectl get pod -A
# Once everything is Running you can log in to the KubeSphere console
[root@master01 ~]# kubectl get pod -A
NAMESPACE                      NAME                                             READY   STATUS    RESTARTS   AGE
kube-system                    calico-kube-controllers-676c86494f-nd94j         1/1     Running   0          18h
kube-system                    calico-node-5ssxs                                1/1     Running   0          18h
kube-system                    calico-node-qmsjq                                1/1     Running   0          18h
kube-system                    calico-node-tq8rl                                1/1     Running   0          18h
kube-system                    calico-node-vw5dt                                1/1     Running   0          18h
kube-system                    coredns-757cd945b-nsgf9                          1/1     Running   0          18h
kube-system                    coredns-757cd945b-x7x2g                          1/1     Running   0          18h
kube-system                    haproxy-node01                                   1/1     Running   0          18h
kube-system                    haproxy-node02                                   1/1     Running   0          18h
kube-system                    haproxy-node03                                   1/1     Running   0          18h
kube-system                    kube-apiserver-master01                          1/1     Running   0          18h
kube-system                    kube-controller-manager-master01                 1/1     Running   0          18h
kube-system                    kube-proxy-5tjp9                                 1/1     Running   0          18h
kube-system                    kube-proxy-b5fwl                                 1/1     Running   0          18h
kube-system                    kube-proxy-qjzjl                                 1/1     Running   0          18h
kube-system                    kube-proxy-tbl6r                                 1/1     Running   0          18h
kube-system                    kube-scheduler-master01                          1/1     Running   0          18h
kube-system                    nodelocaldns-77lzr                               1/1     Running   0          18h
kube-system                    nodelocaldns-sgxmc                               1/1     Running   0          18h
kube-system                    nodelocaldns-wlpqn                               1/1     Running   0          18h
kube-system                    nodelocaldns-xfhkc                               1/1     Running   0          18h
kube-system                    openebs-localpv-provisioner-7974b86588-dkqh2     1/1     Running   0          18h
kube-system                    snapshot-controller-0                            1/1     Running   0          18h
kubesphere-controls-system     default-http-backend-659cc67b6b-f5qtr            1/1     Running   0          18h
kubesphere-controls-system     kubectl-admin-7966644f4b-48lh9                   1/1     Running   0          18h
kubesphere-monitoring-system   alertmanager-main-0                              2/2     Running   0          18h
kubesphere-monitoring-system   alertmanager-main-1                              2/2     Running   0          18h
kubesphere-monitoring-system   alertmanager-main-2                              2/2     Running   0          18h
kubesphere-monitoring-system   kube-state-metrics-69f4fbb5d6-6s9c8              3/3     Running   0          18h
kubesphere-monitoring-system   node-exporter-55l8q                              2/2     Running   0          18h
kubesphere-monitoring-system   node-exporter-dmjwd                              2/2     Running   0          18h
kubesphere-monitoring-system   node-exporter-pfbbp                              2/2     Running   0          18h
kubesphere-monitoring-system   node-exporter-v6vvt                              2/2     Running   0          18h
kubesphere-monitoring-system   notification-manager-deployment-cdd656fd-jws5d   2/2     Running   0          18h
kubesphere-monitoring-system   notification-manager-deployment-cdd656fd-nlgpw   2/2     Running   0          18h
kubesphere-monitoring-system   notification-manager-operator-7f7c564948-848xm   2/2     Running   0          18h
kubesphere-monitoring-system   prometheus-k8s-0                                 2/2     Running   0          18h
kubesphere-monitoring-system   prometheus-k8s-1                                 2/2     Running   0          18h
kubesphere-monitoring-system   prometheus-operator-684988fc5c-kjxjd             2/2     Running   0          18h
kubesphere-system              ks-apiserver-54694f8797-vkcrf                    1/1     Running   0          18h
kubesphere-system              ks-console-64ff6fd5bc-vmvp2                      1/1     Running   0          18h
kubesphere-system              ks-controller-manager-cb4b86fc4-288vv            1/1     Running   0          18h
kubesphere-system              ks-installer-87bbff65c-vj662                     1/1     Running   0          18h

2.2.3 Log in to the KubeSphere Console

Console: http://<any cluster node IP>:30880
Account: admin
Password: P@88w0rd
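If you need to confirm which NodePort the console is exposed on, you can inspect the ks-console service (the service and namespace names below match the default KubeSphere deployment; adjust them if your install differs):

kubectl get svc ks-console -n kubesphere-system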


# At this point the cluster is ready to use
# The following chapters describe how to configure the cluster conveniently through KubeSphere