KubeSphere v3.1.1 Offline Installation: From Giving Up to Getting Started

Preface

Back to square one: today I am trying the offline installation described in the KubeSphere v3.1.1 documentation.
Why didn't I use this guide from the start? Honestly, the official docs never make it clear which guide to follow or which ones are deprecated, which is frustrating. Yesterday's attempt clearly fell into some pitfalls, so let's hope everything goes smoothly this time.


Machine List

Three machines: two for the deployment itself and one with internet access for downloading the installation packages.

role              ip            hostname  desc
master            192.168.3.65  node1     control-plane node
worker, registry  192.168.3.66  node2     worker node, image registry
packer            192.168.3.64  packer    internet-connected, downloads packages

1. Install docker-ce offline on all machines

I copied the docker-ce part directly from an existing document:

wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.2.tgz
tar xvf docker-20.10.2.tgz   # extracts into ./docker
sudo cp docker/* /usr/bin/
sudo dockerd &               # quick smoke test only; the systemd unit below is the proper way to run it
docker info

Register Docker as a systemd service:

sudo vi /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
 
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
 
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start docker     # stop with: systemctl stop docker
systemctl enable docker    # disable with: systemctl disable docker
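The steps above (unpack the static bundle, copy the binaries, write the unit file) can be bundled into one script. This is my own sketch, not from the original guide; the DEST parameter is an assumption added so the script can be dry-run in a sandbox (use DEST=/ on a real host, as root):

```shell
#!/bin/sh
set -e
DEST="${DEST:-/}"   # install root; override for a dry run

install_docker_offline() {
    tgz="$1"
    tar xf "$tgz"   # the static bundle extracts into ./docker
    mkdir -p "${DEST}usr/bin" "${DEST}usr/lib/systemd/system"
    cp docker/* "${DEST}usr/bin/"
    # minimal unit file matching the one shown above
    cat > "${DEST}usr/lib/systemd/system/docker.service" <<'EOF'
[Unit]
Description=Docker Application Container Engine
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}

# on a real host:
# install_docker_offline docker-20.10.2.tgz && systemctl daemon-reload && systemctl enable --now docker
```

After running it, `systemctl daemon-reload` picks up the new unit as usual.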

2. Install docker-compose and Harbor offline on node2

$ curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
$ docker-compose --version
docker-compose version 1.24.1, build 1110ad01

$ wget -c https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.2-rc1.tgz
$ tar zxvf harbor-offline-installer-v1.8.2-rc1.tgz
$ cd harbor
$ mkdir -p certs
$ openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 36500 -out certs/domain.crt
# When generating your own certificate, make sure to set a domain name in the Common Name field. In this example it is dockerhub.kubekey.local.
Generating a 4096 bit RSA private key
..............++
......................++
writing new private key to 'certs/domain.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:dockerhub.kubekey.local
Email Address []:
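The interactive prompts above can be skipped entirely: openssl accepts the subject on the command line with -subj, and only the Common Name matters here. This is an equivalent one-liner of my own, not from the original post:

```shell
mkdir -p certs
# non-interactive equivalent of the interactive session above; only the CN is set
openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout certs/domain.key -x509 -days 36500 \
    -subj "/CN=dockerhub.kubekey.local" \
    -out certs/domain.crt
# confirm the Common Name landed in the certificate
openssl x509 -in certs/domain.crt -noout -subject
```

The final command should print a subject line containing dockerhub.kubekey.local.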

Edit the configuration (vim harbor.yml); make sure to adjust the hostname, the certificate paths, and the admin password.

# Configuration file of Harbor
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: dockerhub.kubekey.local

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /root/harbor/certs/domain.crt
  private_key: /root/harbor/certs/domain.key

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: 123456
...
# Apply the configuration and load the images; the offline images can only be loaded after these steps
./prepare                        # re-run this whenever harbor.yml is modified
./install.sh --help              # list the startup options
./install.sh --with-chartmuseum  # also deploy ChartMuseum for Helm charts

Running in the background:

docker-compose up -d   # start Harbor
docker-compose down    # stop Harbor
# then browse to https://dockerhub.kubekey.local/ to verify (accept the self-signed certificate)

Add dockerhub.kubekey.local to /etc/hosts on every machine:

cat /etc/hosts
# append the following line...
192.168.3.66  dockerhub.kubekey.local
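Appending the line blindly adds duplicates when the setup is re-run; a small idempotent helper (my own sketch, not from the official docs) avoids that:

```shell
# ensure_hosts_entry FILE IP HOST: appends "IP  HOST" only when HOST is absent
ensure_hosts_entry() {
    file="$1"; ip="$2"; host="$3"
    grep -qw "$host" "$file" || printf '%s  %s\n' "$ip" "$host" >> "$file"
}

# run on every machine:
# ensure_hosts_entry /etc/hosts 192.168.3.66 dockerhub.kubekey.local
```

Calling it twice leaves a single entry, so it is safe to put in any provisioning script.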

Configure Docker to trust the private registry:

# allows docker to pull/push against the self-signed private registry
vim /etc/docker/daemon.json
{
  "insecure-registries": ["dockerhub.kubekey.local"]
}
# restart the service
service docker restart
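Hand-editing daemon.json is easy to get wrong, and Docker will refuse to start on invalid JSON. A safer pattern is to write the file with a heredoc and validate it before restarting; this helper is my own sketch and assumes daemon.json has no other keys to preserve:

```shell
# write_daemon_json FILE: writes the registry-trust config and validates it
write_daemon_json() {
    cat > "$1" <<'EOF'
{
  "insecure-registries": ["dockerhub.kubekey.local"]
}
EOF
    python3 -m json.tool "$1" > /dev/null   # fail fast on invalid JSON
}

# on each node (as root):
# write_daemon_json /etc/docker/daemon.json && service docker restart
```

If your daemon.json already carries other settings (mirrors, log options), merge the key instead of overwriting the file.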

Download the scripts and images on packer

# download kk; note: this must be v1.1.1, not v2.x
wget https://github.com/kubesphere/kubekey/releases/download/v1.1.1/kubekey-v1.1.1-linux-amd64.tar.gz
tar xvf kubekey-v1.1.1-linux-amd64.tar.gz
# download the image list
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/images-list.txt
# download offline-installation-tool.sh
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/offline-installation-tool.sh
chmod +x offline-installation-tool.sh
# download the Kubernetes binaries
export KKZONE=cn; ./offline-installation-tool.sh -b -v v1.17.9
# pull the images listed in images-list.txt
./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images
# create the shared Harbor projects
vim create_project_harbor.sh
#!/usr/bin/env bash
set -x
set -e
url="https://dockerhub.kubekey.local"  # set url to your registry address
user="admin"
passwd="123456"

harbor_projects=(library
    kubesphereio
    kubesphere
    calico
    coredns
    openebs
    csiplugin
    minio
    mirrorgooglecontainers
    osixia
    prom
    thanosio
    jimmidyson
    grafana
    elastic
    istio
    jaegertracing
    jenkins
    weaveworks
    openpitrix
    joosthofman
    nginxdemos
    fluent
    kubeedge
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    # Harbor v2.0 API variant, if your Harbor is newer (trailing -k: self-signed certificate):
    #curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/projects" -d "{\"project_name\":\"${project}\",\"metadata\":{\"public\":\"true\",\"enable_content_trust\":\"false\",\"prevent_vul\":\"false\",\"auto_scan\":\"false\"}}" -k  # trailing -k: self-signed certificate
done
# create the Harbor projects
sh create_project_harbor.sh
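To confirm the projects were actually created, the same API can be queried back. The parsing helper below is my own and assumes Harbor's GET /api/projects response is a JSON array with a name field per project, which matches the v1.x API used above:

```shell
# list_project_names: reads Harbor's GET /api/projects JSON from stdin
# and prints one project name per line
list_project_names() {
    python3 -c 'import json, sys
for p in json.load(sys.stdin):
    print(p["name"])'
}

# against the registry (self-signed cert, hence -k):
# curl -sk -u admin:123456 "https://dockerhub.kubekey.local/api/projects?page_size=50" | list_project_names
```

Every project from the harbor_projects array should appear in the output.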

The created projects look like this: (screenshot)

Remaining setup:

# authorize the docker client
docker login dockerhub.kubekey.local    # user: admin, password: 123456
# this writes /root/.docker/config.json, so no further logins are needed
# push the images to the private registry
./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r dockerhub.kubekey.local
# output like the following means the push succeeded:
Loaded image: kubesphere/kube-events-exporter:v0.1.0
kubesphere/kube-apiserver:v1.20.6
dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.20.6
The push refers to repository [dockerhub.kubekey.local/kubesphere/kube-apiserver]
d88bc16e0414: Pushed
a06ec64d2560: Pushed
28699c71935f: Pushed
v1.20.6: digest: sha256:d21627934fb7546255475a7ab4472ebc1ae7952cc7ee31509ee630376c3eea03 size: 949
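The -r flag retags each image so it points at the private registry, as in the kube-apiserver lines above. The renaming rule can be sketched roughly as follows; this is my own approximation of what offline-installation-tool.sh does, not its actual code:

```shell
# retag_for_registry IMAGE REGISTRY: prints the name the image gets in the
# private registry. Any original registry host (a first path component
# containing a dot or a port colon) is dropped, then REGISTRY is prefixed.
retag_for_registry() {
    img="$1"; reg="$2"
    first="${img%%/*}"
    case "$first" in
        *.*|*:*) img="${img#*/}" ;;   # first component is a registry host
    esac
    printf '%s/%s\n' "$reg" "$img"
}

# retag_for_registry kubesphere/kube-apiserver:v1.20.6 dockerhub.kubekey.local
# prints dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.20.6
```

This is why the Harbor projects created earlier must match the first path component of each image (kubesphere, calico, and so on).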

Pre-installation checks

To be safe, set up Docker, the registry trust settings, and the required packages on all machines:

# docker-ce must already be installed (see step 1)
# install the other required components
yum install -y socat conntrack ebtables ipset
# allow docker to pull/push against the self-signed private registry
vim /etc/docker/daemon.json
{
  "insecure-registries": ["dockerhub.kubekey.local"]
}
# restart the service
service docker restart
# authorize the docker client
docker login dockerhub.kubekey.local    # user: admin, password: 123456

Run the installation

# generate the configuration file
./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.1.1 -f config.yaml
# change 1: set privateRegistry to the Harbor address, dockerhub.kubekey.local
# change 2: set hosts to the machines you want to deploy to
cat config.yaml
# the final config.yaml:
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.3.65, internalAddress: 192.168.3.65, user: root, password: "123456" }
  - {name: node2, address: 192.168.3.66, internalAddress: 192.168.3.66, user: root, password: "123456" }
  roleGroups:
    etcd:
    - node1
    master:
    - node1
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: dockerhub.kubekey.local  # Add the private image registry address here.
  addons: []


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  notification:
    enabled: false
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
          - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
# run the installation
./kk create cluster -f config.yaml
## pre-flight confirmation
+-------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker   | nfs client | ceph client | glusterfs client | time         |
+-------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| node2 | y    | y    | y       | y        | y     | y     | y         | 20.10.16 | y          |             | y                | PDT 22:50:07 |
| node1 | y    | y    | y       | y        | y     | y     | y         | 20.10.16 | y          |             | y                | PDT 22:50:07 |
+-------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
# output like the following means the installation finished:
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.3.65:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2022-06-03 22:58:43
#####################################################
INFO[22:58:52 PDT] Installation is complete.

Please check the result using the command:

       kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
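Beyond tailing the installer log, a quick way to watch the cluster converge is to count pods that are not yet up. The helper below parses `kubectl get pods -A --no-headers` output, where the status is the fourth column; it is my own sketch, not part of KubeSphere:

```shell
# count_not_ready: reads `kubectl get pods -A --no-headers` on stdin and
# prints how many pods are neither Running nor Completed
count_not_ready() {
    awk '$4 != "Running" && $4 != "Completed" { n++ } END { print n + 0 }'
}

# kubectl get pods -A --no-headers | count_not_ready   # 0 means everything is up
```

Re-run it until it prints 0, then the console at port 30880 should be fully functional.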

Results

Open http://192.168.3.65:30880/clusters/default/overview
(screenshot)

Summary

1. kk must be v1.1.1.
2. Follow the latest documentation: https://v3-1.docs.kubesphere.io/zh/docs/installing-on-linux/introduction/air-gapped-installation/
3. Compared with other Kubernetes installers, KubeSphere's advantage is that it ships rich image-migration scripts and a built-in app store, so integrating other applications is very convenient.
4. Always run the image registry independently, so images are stored and managed centrally; if the k8s cluster breaks, the images are unaffected.
5. For offline k8s deployments that need a self-hosted image set, KubeSphere is the best solution I have found so far.

References

Harbor installation and usage: https://blog.51cto.com/u_15127502/2655378
Fixing Docker "Error response from daemon": https://blog.csdn.net/wto882dim/article/details/84260863
KubeSphere v3.1.1 air-gapped installation: https://v3-1.docs.kubesphere.io/zh/docs/installing-on-linux/introduction/air-gapped-installation/
Harbor installation: https://blog.csdn.net/shawn210/article/details/98068165
Offline installation of a KubeSphere 3.0 cluster on CentOS 7: https://cloud.tencent.com/developer/article/1802614
