Preface
On August 31, 2020, the KubeSphere open source community officially announced the GA release of KubeSphere 3.0.0. KubeSphere 3.0.0 is positioned as an "application-centric container hybrid cloud" and is designed for multi-cloud, multi-cluster, multi-team, and multi-tenant scenarios. It substantially enhances cluster management, observability, storage management, network management, multi-tenant security, the app store, and installation and deployment, and further polishes the interaction design and user experience, making 3.0.0 the most significant KubeSphere release to date. As a unified control plane for multi-cloud and multi-cluster environments, the new features in KubeSphere 3.0.0 help enterprises accelerate their multi-cloud and hybrid-cloud strategies, lower the barrier to operating Kubernetes clusters on any infrastructure, enable rapid delivery of modern applications in containers, and provide a complete platform-level solution for building a cloud-native stack in production.
Setup Recommendations
1. If you are just getting started, begin with an all-in-one (single-node) deployment.
Official guide (online installation): https://kubesphere.com.cn/en/docs/quick-start/all-in-one-on-linux/
2. Once you are familiar with it, move on to either a multi-node or a high-availability installation.
3. What follows is the process I went through myself based on the tutorials, for reference only; corrections are welcome.
I. Environment Preparation

System | Configuration (per node) | IP |
---|---|---|
CentOS 7.x | CPU: 2 Cores, Memory: 16 G, Disk: 100 G | 192.168.149.175 (master) |
CentOS 7.x | CPU: 2 Cores, Memory: 16 G, Disk: 100 G | 192.168.149.166 (node01) |
CentOS 7.x | CPU: 2 Cores, Memory: 16 G, Disk: 100 G | 192.168.149.181 (node02) |
II. Node Requirements
All nodes must be reachable over SSH.
Time must be synchronized across all nodes.
All nodes must be able to use sudo, curl, and openssl.
Docker can either be pre-installed by yourself or installed uniformly by KubeKey.
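A quick preflight sketch for the tool requirements above (it only checks that the binaries are on PATH; SSH reachability between nodes and clock sync still have to be verified per environment):

```shell
#!/bin/sh
# Preflight sketch: verify each tool required above is available.
# Prints OK/MISSING per tool; extend the list as needed.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "MISSING: $1"
    return 1
  fi
}

for t in sudo curl openssl ssh; do
  check_tool "$t" || missing=1
done
```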
Software Dependencies
Different Kubernetes versions have different system software requirements; pre-install the dependencies below according to your environment.

Dependency | Kubernetes ≥ 1.18 | Kubernetes < 1.18 |
---|---|---|
socat | Required | Optional but recommended |
conntrack | Required | Optional but recommended |
ebtables | Optional but recommended | Optional but recommended |
ipset | Optional but recommended | Optional but recommended |
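Each of these packages ships a binary of the same name, so a rough presence check can be scripted (a sketch; on CentOS, `rpm -q <pkg>` is the authoritative check):

```shell
#!/bin/sh
# Rough check for the dependency table above: each package installs a
# binary of the same name, so look for it on PATH.
check_pkg() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: installed"
  else
    echo "$1: not found (yum install -y $1)"
  fi
}

for pkg in socat conntrack ebtables ipset; do
  check_pkg "$pkg"
done
```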
Start the Installation
I. Change the hostname (example)
[root@localhost ~]# hostnamectl set-hostname kube-master
II. Switch the yum repository to the Aliyun mirror
1. Back up the local repo file
[root@localhost ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
2. Download the new CentOS-Base.repo into /etc/yum.repos.d/
[root@localhost ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
3. Run yum makecache to build the cache
[root@localhost ~]# yum makecache
III. Pre-installation preparation
1. Disable the firewall (systemctl disable only takes effect on the next boot, so stop it now as well)
[root@kube-master ~]# systemctl stop firewalld && systemctl disable firewalld
2. Disable SELinux (setenforce 0 is only temporary; edit /etc/selinux/config to make it permanent)
[root@kube-master ~]# setenforce 0
[root@kube-master ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
3. Turn off swap (temporarily; step 4 makes it permanent)
[root@kube-master ~]# swapoff -a
4. Update the configuration (keeps swap disabled after reboot)
[root@kube-master ~]# echo "vm.swappiness=0" >> /etc/sysctl.conf
[root@kube-master ~]# sysctl -p /etc/sysctl.conf
[root@kube-master ~]# sed -i 's$/dev/mapper/centos-swap$#/dev/mapper/centos-swap$g' /etc/fstab
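To see what the sed in step 4 actually does, here it is run against a scratch copy of a minimal fstab (paths and contents are illustrative): it comments out the swap entry, which keeps swap from being re-enabled on reboot.

```shell
#!/bin/sh
# Demonstrate the fstab edit on a scratch copy, so the real /etc/fstab
# is untouched. The sed uses $ as its delimiter because the pattern
# contains slashes.
demo=/tmp/fstab.demo
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > "$demo"
sed -i 's$/dev/mapper/centos-swap$#/dev/mapper/centos-swap$g' "$demo"
grep centos-swap "$demo"   # the swap line now starts with '#'
```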
5. Time synchronization
[root@kube-master ~]# yum -y install chrony
6. Start chronyd and enable it at boot
[root@kube-master ~]# systemctl start chronyd && systemctl enable chronyd
7. Configure /etc/hosts entries
[root@kube-master ~]# cat >>/etc/hosts<<EOF
> 192.168.149.175 kube-master
> 192.168.149.166 kube-node01
> 192.168.149.181 kube-node02
> EOF
8. Kernel settings
[root@kube-master ~]# cat >/etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
9. Run the following to apply the settings
modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf
10. Check DNS
[root@kube-master ~]# cat /etc/resolv.conf
11. Enable the IPVS kernel modules
[root@kube-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
12. Make the script executable, run it, and verify that the required kernel modules are loaded
[root@kube-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
13. Install ipvsadm
yum -y install ipset ipvsadm
14. Install dependency packages
yum install -y bash-completion lrzsz wget socat conntrack ebtables ipset
15. Install Docker
[root@kube-master ~]# yum install -y yum-utils
[root@kube-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@kube-master ~]# yum makecache fast
[root@kube-master ~]# yum -y install docker-ce
[root@kube-master ~]# systemctl start docker && systemctl enable docker
[root@kube-master ~]# systemctl status docker
16. Configure a registry mirror (accelerator)
[root@kube-master ~]# sudo tee /etc/docker/daemon.json <<-'EOF'
> {
> "registry-mirrors": ["https://uafxazog.mirror.aliyuncs.com"]
> }
> EOF
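Before restarting Docker in the next step, it is worth sanity-checking that daemon.json is valid JSON, because a malformed file prevents the daemon from starting at all. A small sketch (python3 is an assumption here; any JSON validator works):

```shell
#!/bin/sh
# Report whether a file parses as JSON. Prints "valid" or "invalid";
# a missing file also reports "invalid".
validate_json() {
  if python3 -m json.tool "$1" >/dev/null 2>&1; then
    echo "valid"
  else
    echo "invalid"
  fi
}

validate_json /etc/docker/daemon.json
```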
17. Change the Docker cgroup driver to systemd
[root@kube-master ~]# sed -i.bak "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service
[root@kube-master ~]# sudo systemctl daemon-reload
[root@kube-master ~]# sudo systemctl restart docker
[root@kube-master ~]# docker info -f '{{.CgroupDriver}}'   # should print: systemd
18. Configure the Kubernetes yum repository
[root@kube-master ~]# cat >/etc/yum.repos.d/kubernetes.repo <<EOF
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
>        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@kube-master ~]# yum makecache fast
19. Download KubeKey (kk)
[root@kube-master ~]# wget -c https://kubesphere.io/download/kubekey-v1.0.0-linux-amd64.tar.gz -O - | tar -xz
[root@kube-master ~]# chmod +x kk
20. Generate the config template file (writes config-sample.yaml, whose contents are shown below)
[root@kube-master ~]# ./kk create config
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: kube-master, address: 192.168.149.175, internalAddress: 192.168.149.175, user: root, password: 123456}
  - {name: kube-node01, address: 192.168.149.166, internalAddress: 192.168.149.166, user: root, password: 123456}
  - {name: kube-node02, address: 192.168.149.181, internalAddress: 192.168.149.181, user: root, password: 123456}
  roleGroups:
    etcd:
    - kube-master
    master:
    - kube-master
    worker:
    - kube-node01
    - kube-node02
  controlPlaneEndpoint:
    domain: lb.kubeshere.supaur.com
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true
    endpointIps: 192.168.149.175
    port: 2379
    tlsEnable: true
  common:
    es:
      elasticsearchDataVolumeSize: 20Gi
      elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      logMaxAge: 7
    mysqlVolumeSize: 20Gi
    minioVolumeSize: 20Gi
    etcdVolumeSize: 20Gi
    openldapVolumeSize: 2Gi
    redisVolumSize: 2Gi
  console:
    enableMultiLogin: false # enable/disable multi login
    port: 30880
  alerting:
    enabled: false
  auditing:
    enabled: false
  devops:
    enabled: true
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: true
    logsidecarReplicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none # host | member | none
  networkpolicy:
    enabled: true
  notification:
    enabled: true
  openpitrix:
    enabled: true
  servicemesh:
    enabled: false
20.1 Adjust the YAML file to your actual environment. Notes on the fields:
hosts
List all machines under hosts and add their details as shown above. Port 22 is the default SSH port; if a host listens on a different port, add a port field after the address, for example:
- {name: kube-master, address: 192.168.149.175, internalAddress: 192.168.149.175, port: 8022, user: root, password: 123456}
roleGroups
etcd: names of the etcd nodes
master: names of the master nodes
worker: names of the worker nodes
If user is omitted, root is used by default:
hosts:
- {name: master, address: 192.168.149.175, internalAddress: 192.168.149.175, password: 123456}
21. Install the cluster
[root@kube-master ~]# ./kk create cluster -f config-sample.yaml
22. Check component status (the ks-installer log also shows installation progress)
[root@kube-master ~]# kubectl get pods -n kube-system -o wide
[root@kube-master ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
23. Log in to the console
![在这里插入图片描述](https://i-blog.csdnimg.cn/blog_migrate/fd5a4be32014de27338c107212580dd1.png)
Official documentation: https://v3-0.docs.kubesphere.io/zh/