Building and Deploying a KubeSphere 3.4.1 Offline Installation Package on Linux

1. Overview

KubeSphere is an open-source project on GitHub and a gathering place for thousands of community users, many of whom run their workloads on KubeSphere. For installation on Linux, KubeSphere can be deployed both in the cloud and on premises, for example on AWS EC2, Azure VMs, or bare metal.

KubeSphere provides users with KubeKey, a lightweight installer that supports installing Kubernetes, KubeSphere, and related add-ons, with a simple and friendly installation process. KubeKey not only helps users create clusters online but also serves as an offline installation solution.

The available installation options are:

  • All-in-One: install KubeSphere on a single node (only for getting familiar with KubeSphere quickly).
  • Multi-node installation: install KubeSphere on multiple nodes (for testing or development).
  • Offline installation on Linux: package all KubeSphere images (for offline installation on Linux).
  • High-availability installation: install a highly available KubeSphere cluster with multiple nodes, intended for production environments.
  • Minimal installation: install only the minimum system components KubeSphere requires. The minimum resource requirements are:
    • 2 CPUs
    • 4 GB RAM
    • 40 GB storage
  • Full installation: install all available KubeSphere system components, such as DevOps, the service mesh, alerting, and so on.

Note: these options are not all mutually exclusive. For example, you can deploy a minimal KubeSphere installation on multiple nodes in an offline environment.

This article focuses on installing KubeSphere offline on a Linux system.

Prerequisites:

To begin a multi-node installation, prepare at least three hosts, following the example below.

Host IP        Hostname   Role                                                        OS Version
172.31.10.2    node1      Internet-connected host for building the offline package   Ubuntu 20.04 LTS
172.31.10.43   node2      Offline-environment control-plane node                      Ubuntu 20.04 LTS
172.31.10.44   node3      Offline-environment image registry node                     Ubuntu 20.04 LTS

Deployment preparation

Create three cloud hosts on a private cloud as listed above; physical machines work just as well.

2. Building the Offline Installation Package

KubeKey is an open-source, lightweight tool for deploying Kubernetes clusters. It provides a flexible, fast, and convenient way to install Kubernetes/K3s alone, or Kubernetes/K3s together with KubeSphere and other cloud-native add-ons. It is also an effective tool for scaling and upgrading clusters.

KubeKey v2.1.0 introduced the concepts of the manifest and the artifact, providing a solution for deploying Kubernetes clusters offline. A manifest is a text file that describes the current Kubernetes cluster and defines what the artifact should contain. Previously, users had to prepare the deployment tooling, image tar packages, and other binaries themselves, and every user needed a different Kubernetes version and different images. Now, with KubeKey, a user only defines what the offline cluster environment needs in a manifest file and exports an artifact from that manifest to complete the preparation. For the offline deployment itself, only KubeKey and the artifact are needed to quickly and easily set up the image registry and the Kubernetes cluster.

2.1 Log in to node1 and run the following command to download and extract KubeKey:

curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
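If node1 cannot reach GitHub directly, the same script can download from the CN mirror by setting KKZONE first; afterwards, confirm the binary runs. A minimal sketch (the KKZONE variable and the version subcommand follow standard KubeKey usage):

export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
chmod +x kk
./kk version    # should print the KubeKey version, e.g. v3.0.13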

2.2 On node1, run the following command and paste in the sample manifest content below:

vim manifest.yaml
---
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    repository:
      iso:
        localPath:
        url: https://github.com/kubesphere/kubekey/releases/download/v3.0.10/centos7-rpms-amd64.iso
  - arch: amd64
    type: linux
    id: ubuntu
    version: "20.04"
    repository:
      iso:
        localPath:
        url: https://github.com/kubesphere/kubekey/releases/download/v3.0.10/ubuntu-20.04-debs-amd64.iso
  kubernetesDistributions:
  - type: kubernetes
    version: v1.23.15
  components:
    helm:
      version: v3.9.0
    cni:
      version: v1.2.0
    etcd:
      version: v3.4.13
    calicoctl:
      version: v3.23.2
    ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
    ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    - type: containerd
      version: 1.6.4
    crictl:
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.5.3
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.15
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.15
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.15
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.15
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:ks-v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:ks-v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:ks-v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.4.0-2.319.3-1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-dashboards:2.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-curator:v0.0.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.14.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.9.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:v1.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.14.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.14.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.50.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.50
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0

Note:

  • If the exported artifact should include operating-system dependency packages (e.g. conntrack, chrony), set the download URL of the corresponding ISO dependency file in .repository.iso.url of the operatingSystems element, or download the ISO in advance, fill in its local path in localPath, and delete the url entry.
  • Enable the harbor and docker-compose components; they are used later when KubeKey deploys its own Harbor registry and images are pushed to it.
  • By default, the image list in a generated manifest pulls from docker.io.
  • Adjust the contents of manifest-sample.yaml to your actual situation so that the exported artifact contains exactly what you expect.
  • The ISO files can be downloaded from the releases page of kubesphere/kubekey on GitHub.

2.3 (Optional) If you already have a Kubernetes cluster, you can run the following KubeKey command against it to generate a manifest file, then edit the file following the sample in step 2.2:

./kk create manifest

2.4 Export the artifact

If GitHub can be accessed normally, export with the following command:

./kk artifact export -m manifest.yaml -o kubesphere.tar.gz

If GitHub cannot be accessed, export through the CN mirror instead:

export KKZONE=cn
./kk artifact export -m manifest.yaml -o kubesphere.tar.gz
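The export takes a while and produces a multi-gigabyte file, so it is worth recording a checksum before moving it to the offline environment. A small sketch using standard coreutils:

ls -lh kubesphere.tar.gz
sha256sum kubesphere.tar.gz > kubesphere.tar.gz.sha256

# after copying, verify on the offline node:
sha256sum -c kubesphere.tar.gz.sha256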

Note:

An artifact is a tgz package, exported according to the specified manifest file, that contains the image tar packages and related binaries. An artifact can be passed to the KubeKey commands for initializing the image registry, creating a cluster, adding nodes, and upgrading a cluster; KubeKey automatically unpacks it and uses the unpacked files directly while executing the command.

  • Make sure the network connection stays up during the export.

  • KubeKey parses the image names in the image list; if a registry in an image name requires authentication, configure it in the .registry.auths field of the manifest file.

The files that need to be uploaded to the other, offline nodes are roughly: kubesphere.tar.gz (about 13 GB, containing the required images and components), kk, and the kubekey directory.

3. Offline Installation

3.1 Copy the kk binary and kubesphere.tar.gz (about 13 GB, containing the required images and components) to the installation nodes in the offline environment via a USB drive or similar medium.
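If the build host can still reach the offline nodes over an internal network, scp works as well as a USB drive. A sketch, assuming root SSH access and the node IPs from the host table above:

scp kk kubesphere.tar.gz root@172.31.10.43:/root/
scp kk kubesphere.tar.gz root@172.31.10.44:/root/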

3.2 Run the following command to create the offline cluster configuration file:

./kk create config --with-kubesphere v3.4.1 --with-kubernetes v1.23.15 -f config-sample.yaml

3.3 Run the following command to edit the offline cluster configuration file:

vim config-sample.yaml

Note:

  • Modify the node information according to your actual offline environment.
  • You must specify a registry deployment node (used by KubeKey to deploy its self-hosted Harbor registry).
  • The type in the registry section must be set to harbor; otherwise a docker registry is installed by default.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: "<REPLACE_WITH_YOUR_ACTUAL_PASSWORD>"}
  - {name: node1, address: 192.168.0.4, internalAddress: 192.168.0.4, user: root, password: "<REPLACE_WITH_YOUR_ACTUAL_PASSWORD>"}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    # To have kk deploy the image registry automatically, set this host group (deploying the registry separately from the cluster is recommended to reduce mutual interference)
    registry:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.15
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    # To deploy Harbor with kk, set this to harbor; if it is unset and kk creates a registry, docker registry is used by default.
    type: harbor
    # For a kk-deployed Harbor or any other registry that requires login, set the auths for that registry; a kk-created docker registry does not need this.
    # Note: when deploying Harbor with kk, set this parameter only after Harbor has started.
    #auths:
    #  "dockerhub.kubekey.local":
    #    username: admin
    #    password: Harbor12345
    # Private registry used during cluster deployment
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

Note: in this experiment the file was modified to match the actual environment. Pay special attention that the node names, IPs, usernames, and passwords correspond to your real hosts.

3.4 Run the following command to install the image registry:

./kk init registry -f config-sample.yaml -a kubesphere.tar.gz

Note:

The parameters in the command are explained as follows:

  • config-sample.yaml: the configuration file for the offline cluster.

  • kubesphere.tar.gz: the image tar package exported from the source cluster.
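Once the registry is up, it is worth confirming that nodes can authenticate against it before any images are pushed. A sketch, assuming the default domain and credentials of the kk-deployed Harbor (dockerhub.kubekey.local, admin/Harbor12345, both mentioned later in this article):

docker login dockerhub.kubekey.local -u admin -p Harbor12345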

Errors 1 and 2 at this step were both caused by incorrect content in config-sample.yaml: the required lines in config-sample.yaml had to be uncommented, and the /etc/hosts files on node2 and node3 may also need editing so that the hostnames and the registry domain resolve to the correct addresses, as sketched below.
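Every node must be able to resolve the other nodes and the registry domain. A sketch of the /etc/hosts entries for this environment (dockerhub.kubekey.local is the default domain of the kk-deployed registry and is assumed here to point at the registry node, node3):

172.31.10.43 node2
172.31.10.44 node3
172.31.10.44 dockerhub.kubekey.local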

Run the registry installation command again; it now reports success. Continue on.

3.5 Create the Harbor projects

Note:

Harbor projects enforce access control (RBAC): only users holding the appropriate roles in a project can perform certain operations. If you have not created a project, images cannot be pushed to Harbor. There are two types of projects in Harbor:

  • Public: any user can pull images from the project.
  • Private: only users who are members of the project can pull images.

The Harbor administrator account is admin, with password Harbor12345. The Harbor installation files are under /opt/harbor; go to that directory for Harbor operations and maintenance.
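Since the kk-deployed Harbor is managed with docker-compose out of /opt/harbor, routine operations follow the usual compose workflow. A sketch, assuming the default installation path above:

cd /opt/harbor
docker-compose ps      # all Harbor containers should be healthy
docker-compose stop    # stop Harbor
docker-compose up -d   # start Harbor again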

Method 1: create the Harbor projects with a script.

a. Run the following command to download the script that initializes the Harbor registry:

curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh

b. Run the following command to edit the script:

vim create_project_harbor.sh
#!/usr/bin/env bash

# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

url="http://192.168.6.2"   #这里替换成node3节点的IP 172.31.10.44,因为前面安装镜像库的时候在config-sample.yaml文件中定义了镜像库节点为node3
user="admin"
passwd="Harbor12345"

harbor_projects=(library
    kubesphere
    calico
    coredns
    openebs
    csiplugin
    minio
    mirrorgooglecontainers
    osixia
    prom
    thanosio
    jimmidyson
    grafana
    elastic
    istio
    jaegertracing
    jenkins
    weaveworks
    openpitrix
    joosthofman
    nginxdemos
    fluent
    kubeedge
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}"
done

Note:

  • Set url to the registry node's address; here that means replacing it with node3's IP, 172.31.10.44, because config-sample.yaml designated node3 as the registry node.

  • The project names created in the registry must match the project names used in the image list.

  • Append -k to the curl command at the end of the script so that curl accepts Harbor's self-signed certificate.

c. Run the following commands to create the Harbor projects:

chmod +x create_project_harbor.sh
./create_project_harbor.sh
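To confirm the projects exist, the same Harbor API the script posts to can be queried with the admin credentials. A sketch (-k again accepts the self-signed certificate; replace the IP with your registry node):

curl -sk -u admin:Harbor12345 "https://172.31.10.44/api/v2.0/projects?page_size=100" | grep '"name"'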

3.6 Run the following command to install the KubeSphere cluster:

./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages

Enter yes to continue the installation.

The parameters are explained as follows:

  • config-sample.yaml: the configuration file for the offline cluster.
  • kubesphere.tar.gz: the image tar package exported from the source cluster.
  • --with-packages: required if operating-system dependencies should be installed.

Error 1:

The connection to node3 (the registry node) at 172.31.10.43:443 failed.

Cause:

Inspecting /etc/hosts showed that node2 and node3 were mapped to the same IP: node3's IP address had been written incorrectly in config-sample.yaml, which made the installation fail. A reminder to be careful. After correcting the address, start again from step 3.2.

Error 2:

get manifest list failed by module cache
08:10:53 UTC failed: [LocalHost]
error: Pipeline[CreateClusterPipeline] execute failed: Module[CopyImagesToRegistryModule] exec failed:
failed: [LocalHost] [PushManifest] exec failed after 1 retries: get manifest list failed by module cache

Cause, as found in a GitHub issue:

The problem is that the official create_project_harbor.sh does not include the kubesphereio project by default. When KubeKey creates a cluster it first runs CopyImagesToRegistry; without the kubesphereio project the pushes fail, the line c.ModuleCache.Set("manifestList", manifestList) in the Execute method of CopyImagesToRegistry never runs, and PushManifest's Execute subsequently cannot find manifestList.

The fix is to add kubesphereio to the create_project_harbor.sh script, or to create the kubesphereio project manually in Harbor, or to change namespaceOverride in config-sample.yaml to "kubesphere" (the last option I have not tried).

Although KubeKey has a retry mechanism when calling Copy() on CopyImageOptions, it never prints the error from the failed push, so the later failure looks very confusing. Hope this helps!

Create the kubesphereio project manually in Harbor, then re-run the cluster installation command from step 3.6.
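Instead of clicking through the Harbor UI, the missing project can also be created with the same API call the script uses. A sketch with the default credentials (-k for the self-signed certificate; replace the IP with your registry node):

curl -sk -u admin:Harbor12345 -X POST -H "Content-Type: application/json" "https://172.31.10.44/api/v2.0/projects" -d '{ "project_name": "kubesphereio", "public": true}'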

Error 3:

pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH docker pull dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.26.1 --platform amd64"
Error response from daemon: unknown: artifact kubesphereio/kube-controllers:v3.26.1 not found: Process exited with status 1
08:24:12 UTC failed: [node3]
08:24:12 UTC failed: [node2]
error: Pipeline[CreateClusterPipeline] execute failed: Module[PullModule] exec failed:
failed: [node3] [PullImages] exec failed after 3 retries: pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH docker pull dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.26.1 --platform amd64"
Error response from daemon: unknown: artifact kubesphereio/kube-controllers:v3.26.1 not found: Process exited with status 1
failed: [node2] [PullImages] exec failed after 3 retries: pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH docker pull dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.26.1 --platform amd64"
Error response from daemon: unknown: artifact kubesphereio/kube-controllers:v3.26.1 not found: Process exited with status 1

The same problem has been reported on GitHub; crictl needs to be installed before installing the cluster.

Solution:

Install crictl.

Pick a version to download from:

https://github.com/kubernetes-sigs/cri-tools/releases

Or download it directly on Linux:

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.30.0/crictl-v1.30.0-linux-amd64.tar.gz

Extract it:

sudo tar zxvf crictl-v1.30.0-linux-amd64.tar.gz -C /usr/local/bin

Then check the version:

crictl --version
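The pull failures above occurred on both node2 and node3, so the binary has to be present on every cluster node; copying it over SSH is one way to do that. A sketch, assuming the commands are run from a host with root SSH access to both nodes:

for host in node2 node3; do
    scp crictl-v1.30.0-linux-amd64.tar.gz root@${host}:/tmp/
    ssh root@${host} "tar zxvf /tmp/crictl-v1.30.0-linux-amd64.tar.gz -C /usr/local/bin && crictl --version"
done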

Continue with step 3.6 and run the KubeSphere cluster creation command again. The installation now completes with the following output:

**************************************************
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://172.31.10.43:30880
Account: admin
Password: P@88w0rd

NOTES:
1. After you log into the console, please check the
   monitoring status of service components in
   the "Cluster Management". If any service is not
   ready, please wait patiently until all components
   are up and running.
2. Please change the default password after login.

#####################################################
https://kubesphere.io             2024-07-16 17:30:06
#####################################################

Access the KubeSphere web console at http://{IP}:30880 with the default account and password admin/P@88w0rd.

To access the console, make sure port 30880 is open in your security group.
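After logging in, cluster health can also be checked from the command line on the control-plane node. A sketch (the ks-installer log command follows the pattern commonly used with KubeSphere; the label selector is assumed from the standard ks-installer deployment):

kubectl get pod -A

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f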
