Swapped CentOS out for openEuler 22.09.
I was a bit worried at first about installing KubeSphere on it.
After actually going through the install, it turned out to be simple. The only problem I hit was that I had written a duplicate host in the hosts section of config-sample.yaml, which made the installation fail because a script could not be found:
scp file /home/kubekey/node1/initOS.sh to remote /usr/local/bin/kube-scripts/initOS.sh failed: Failed to exec command: sudo -E /bin/bash -c "mv -f /tmp/kubekey/usr/local/bin/kube-scripts/initOS.sh /usr/local/bin/kube-scripts/initOS.sh"
mv: cannot create regular file '/usr/local/bin/kube-scripts/initOS.sh': File exists: Process exited with status 1
08:25:36 CST retry: [node1]
08:25:36 CST message: [master3]
scp file /home/kubekey/master3/initOS.sh to remote /usr/local/bin/kube-scripts/initOS.sh failed: Failed to exec command: sudo -E /bin/bash -c "mv -f /tmp/kubekey/usr/local/bin/kube-scripts/initOS.sh /usr/local/bin/kube-scripts/initOS.sh"
mv: cannot stat '/tmp/kubekey/usr/local/bin/kube-scripts/initOS.sh': No such file or directory: Process exited with status 1
08:25:36 CST retry: [master3]
08:25:36 CST message: [node2]
scp file /home/kubekey/node2/initOS.sh to remote /usr/local/bin/kube-scripts/initOS.sh failed: Failed to exec command: sudo -E /bin/bash -c "mv -f /tmp/kubekey/usr/local/bin/kube-scripts/initOS.sh /usr/local/bin/kube-scripts/initOS.sh"
mv: cannot create regular file '/usr/local/bin/kube-scripts/initOS.sh': File exists: Process exited with status 1
The cause was that I had listed the same host twice: once the script had been moved into place for that machine, the second move against the duplicated entry naturally could not find the source file anymore.
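To illustrate the mistake (the names and address below are made up, not my actual entries): the same machine ended up appearing twice under hosts, so KubeKey ran the init-script move on it twice, and the second run failed:

  hosts:
  - {name: master3, address: 192.168.0.16, internalAddress: 192.168.0.16, user: root, password: "***"}
  - {name: node2, address: 192.168.0.16, internalAddress: 192.168.0.16, user: root, password: "***"}   # same machine again under a different name

The fix is simply to make sure each machine appears exactly once under hosts.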
The install command itself is also simple:
./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.1 -f config-sample.yaml
The only remaining problem was that although I had already enabled the optional components in the configuration file, none of them were installed.
Probably the command I used was wrong, so I decided to uninstall and try again:
./kk delete cluster -f config-sample-old.yaml
Generate the configuration first, then create the cluster from the configuration file.
Step 1:
./kk create config --with-kubernetes v1.22.12 --with-kubesphere v3.3.1
Then modify config-sample.yaml as in the full file pasted below.
All three nodes act as masters, so if the load balancer is not enabled the install fails with:
You must set the value of the LB address or enable the internal loadbalancer.
The fix is to uncomment the line under ## Internal loadbalancer for apiservers:
internalLoadbalancer: haproxy
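After uncommenting it, the controlPlaneEndpoint section of the generated file looks like this (the same block appears in the full file below):

  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443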
Step 2:
./kk create cluster -f config-sample.yaml
After these two steps, all the components were installed successfully.
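While the installer runs, the usual way to follow component installation progress (the command KubeSphere's docs suggest, run wherever kubectl is configured) is:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f

My complete config-sample.yaml: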
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: njoffice04, address: 192.168.0.14, internalAddress: 192.168.0.14, user: root, password: "**************"}
  - {name: njoffice05, address: 192.168.0.15, internalAddress: 192.168.0.15, user: root, password: "**************"}
  - {name: njoffice06, address: 192.168.0.16, internalAddress: 192.168.0.16, user: root, password: "**************"}
  roleGroups:
    etcd:
    - njoffice04
    - njoffice05
    - njoffice06
    control-plane:
    - njoffice04
    - njoffice05
    - njoffice06
    worker:
    - njoffice04
    - njoffice05
    - njoffice06
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.12
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: true
      volumeSize: 2Gi
    openldap:
      enabled: true
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: true
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: true
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: true
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: true
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: true
    ippool:
      type: calico
    topology:
      type: weave-scope
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: true
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
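A side note on the earlier problem of components not getting installed: as far as I know, the pluggable components can also be turned on after installation, without redoing the cluster, by editing the ks-installer ClusterConfiguration that is already in the cluster:

kubectl edit clusterconfiguration ks-installer -n kubesphere-system

Setting the corresponding enabled fields to true there and waiting for ks-installer to reconcile should have a similar effect to reinstalling with the modified config file.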