Hands-on MinIO Deployment on K8s: A Complete Guide to Helm + MinIO Operator v6.0.3
Introduction
In MinIO's Kubernetes implementation, the operator and the tenant play distinct but complementary roles.
MinIO Operator
- Definition: the MinIO Operator is the Kubernetes control-plane component that manages and automates operational tasks for MinIO instances.
- Responsibilities:
• Lifecycle management: automates the full lifecycle of MinIO instances, including creation, upgrade, configuration, and deletion.
• High availability: keeps MinIO instances highly available through automatic replication and failover.
• Scalability: lets you scale a MinIO cluster easily as storage needs change.
• Custom resources: uses Custom Resource Definitions (CRDs) to manage MinIO tenant configuration.
- Components:
• MinIO Tenant CRD: a Kubernetes custom resource that defines and manages the configuration of a MinIO tenant.
• MinIO Pool CRD: defines the configuration of a MinIO pool; each pool can contain multiple MinIO instances.
MinIO Tenant
- Definition: a MinIO Tenant is the data-plane component that actually provides the storage service.
- Responsibilities:
• Storage service: provides object storage for storing and retrieving data.
• Data management: manages data storage, access, and backup.
• Users and permissions: manages user access and permission control.
- Components:
• MinIO Pods: the Kubernetes Pods that actually run the MinIO service.
• PVCs: PersistentVolumeClaims that provide MinIO with persistent storage.
• Services: Kubernetes Services used to reach the MinIO Pods.
Node role assignment
10.2.64.91 minio1 rke2 + k8s 1.31.1
10.2.64.53 minio2 rke2 + k8s 1.31.1
10.2.64.59 minio3 rke2 + k8s 1.31.1
Each host has four 10 GiB disks formatted with the xfs filesystem, mounted at
/data/disk0, /data/disk1, /data/disk2, /data/disk3 (matching the paths used in pv.yaml below)
Important notes
1. Not everything is configured in Helm's values.yaml.
The StorageClass and PVs must be created in advance. When local disks back the storage class, PVs are not provisioned automatically as PVCs are created, so they must exist beforehand; each PV simply declares a directory where a local disk is mounted. Also, format the disks with xfs, and note that erasure coding requires at least 4 disks.
2. If you repeat the experiment multiple times, delete and recreate the host directories first, or startup will fail.
Deploying a tenant automatically creates a .minio.sys directory inside each mount directory; removing just that directory also works.
3. When deploying a tenant, servers × volumesPerServer must not exceed the number of PVs on the host.
pools:
- servers: 1
name: minio1
volumesPerServer: 4
size: 10Gi
storageClassName: local-storage
nodeSelector:
"kubernetes.io/hostname": minio1
For example, in this setup minio1 has 4 disks mounted, so the options are:
servers: 1, volumesPerServer: 4 — one MinIO instance on minio1, each mounting 4 volumes
servers: 4, volumesPerServer: 1 — four MinIO instances on minio1, each mounting 1 volume
servers: 2, volumesPerServer: 2 — two MinIO instances on minio1, each mounting 2 volumes
Of course, if you don't want to use all 4 disks on the host, servers: 1, volumesPerServer: 2 also works.
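Note 3 above can be checked with quick shell arithmetic before deploying. A minimal sketch (the variable names are hypothetical; fill in your own pool's values):

```shell
# servers * volumesPerServer for a pool must not exceed the number of
# local PVs prepared on the target node.
SERVERS=1
VOLUMES_PER_SERVER=4
PVS_ON_NODE=4   # e.g. /data/disk0 .. /data/disk3

NEEDED=$((SERVERS * VOLUMES_PER_SERVER))
if [ "$NEEDED" -le "$PVS_ON_NODE" ]; then
    echo "ok: pool needs $NEEDED volumes, $PVS_ON_NODE PVs available"
else
    echo "error: pool needs $NEEDED volumes but only $PVS_ON_NODE PVs exist"
fi
```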
4. The tenant creates its PVCs automatically; you do not need to create them in advance.
The generated PVC names follow the pattern data{volumeNum}-{statefulSetName}-{podOrdinal}, e.g. 'data0-minio-minio1-0'.
With servers: 1 and volumesPerServer: 4,
there is a single instance on host minio1: minio-minio1-0,
with the following PVCs:
minio-minio1-0:
• data0-minio-minio1-0
• data1-minio-minio1-0
• data2-minio-minio1-0
• data3-minio-minio1-0
With servers: 2 and volumesPerServer: 2,
there are two instances on host minio1: minio-minio1-0 and minio-minio1-1,
with the following PVCs:
minio-minio1-0:
• data0-minio-minio1-0
• data1-minio-minio1-0
minio-minio1-1:
• data0-minio-minio1-1
• data1-minio-minio1-1
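The naming rule can be reproduced with a small shell loop. This sketch assumes the tenant name minio and pool name minio1 used in this setup:

```shell
# Print the PVC names the operator will generate for one pool:
# data{volumeNum}-{tenantName}-{poolName}-{podOrdinal}
pvc_names() {
    tenant=$1; pool=$2; servers=$3; vols=$4
    pod=0
    while [ "$pod" -lt "$servers" ]; do
        vol=0
        while [ "$vol" -lt "$vols" ]; do
            echo "data${vol}-${tenant}-${pool}-${pod}"
            vol=$((vol + 1))
        done
        pod=$((pod + 1))
    done
}

pvc_names minio minio1 2 2
# data0-minio-minio1-0
# data1-minio-minio1-0
# data0-minio-minio1-1
# data1-minio-minio1-1
```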
5. mountPath is the path mounted inside each MinIO instance; for each of the volumesPerServer volumes, a number is appended automatically, e.g. /export0, /export1, /export2, /export3.
subPath is the subdirectory within each volume where MinIO stores its data, e.g. /export0/data.
Note that the number after mountPath is just the volume's ordinal; it does not map one-to-one to the host's disk numbers. It does not mean container /export0 => host /data/disk0, container /export1 => host /data/disk1, container /export2 => host /data/disk2, container /export3 => host /data/disk3.
mountPath: /export
subPath: /data
6. Configure the account used to log in to the MinIO Console.
# Username/password for the login page. For production, do not put these
# directly in values.yaml; reference a pre-created secret instead.
configSecret:
  name: minio-env-configuration
  accessKey: minio
  secretKey: minio123
# Recommended for production: configure only this `configuration` block in
# values.yaml and omit configSecret.
configuration:
  name: minio-env-configuration
# The referenced secret must be created in advance (the example below creates
# minio-env-configuration-prod; make sure configuration.name matches the
# secret name you actually use):
cat > config.env << EOF
export MINIO_ROOT_USER="minio"
export MINIO_ROOT_PASSWORD="minio123"
EOF
base64_content=$( cat config.env | base64 | tr -d '\n' )
cat > minio-env-configuration.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: minio-env-configuration-prod
  namespace: minio-tenant
  annotations:
    meta.helm.sh/release-name: minio
    meta.helm.sh/release-namespace: minio-tenant
  labels:
    app.kubernetes.io/managed-by: Helm
type: Opaque
data:
  config.env: $base64_content
EOF
kubectl apply -f minio-env-configuration.yaml
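Before applying, you can sanity-check that the base64 payload decodes back to the original config.env. A self-contained sketch of the same encode step as above:

```shell
# Recreate config.env, encode it as above, then verify the round trip.
cat > config.env << 'EOF'
export MINIO_ROOT_USER="minio"
export MINIO_ROOT_PASSWORD="minio123"
EOF

base64_content=$(base64 < config.env | tr -d '\n')
echo "$base64_content" | base64 -d > config.env.decoded

if cmp -s config.env config.env.decoded; then
    echo "round-trip ok"
fi
```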
7. Kubernetes / MinIO Operator version compatibility

| Kubernetes | MinIO Operator |
| --- | --- |
| 1.16–1.21 | v4.x |
| 1.18–1.22 | v5.x |
| 1.21+ | v6.x |

8. A cloud provider's object storage may be more cost-effective than self-hosting.
Deployment
MinIO Operator and Tenant
● Reference: https://cloud.tencent.com/developer/article/2260306
● Reference: https://www.cnblogs.com/itzgr/p/18399049
See the attachment operator-6.0.3.tgz in the DingTalk document.
See the attachment tenant-6.0.3.tgz in the DingTalk document.
MinIO storage persistence
- Create the StorageClass
Create a StorageClass (SC) definition. The storageclass.yaml file:
cat > storageclass.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
Apply the configuration
[root@master01 ~]# kubectl apply -f storageclass.yaml
- Create the PersistentVolumes (PVs)
Create a PersistentVolume (PV) definition for each disk on each machine. The pv.yaml file:
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio1-disk0
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio1
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio1-disk1
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio1
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio1-disk2
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk2
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio1
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio1-disk3
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk3
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio1
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio2-disk0
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio2
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio2-disk1
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio2
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio2-disk2
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk2
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio2
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio2-disk3
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk3
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio2
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio3-disk0
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio3
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio3-disk1
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio3
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio3-disk2
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk2
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio3
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio3-disk3
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk3
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio3
Apply the configuration
[root@master01 ~]# kubectl apply -f pv.yaml
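Maintaining twelve near-identical PV entries by hand is error-prone; the same pv.yaml can also be generated with a loop. A sketch assuming the node names (minio1..minio3) and disk paths (/data/disk0../data/disk3) used above:

```shell
# Generate pv.yaml: one local PV per node per disk.
{
  echo "apiVersion: v1"
  echo "kind: List"
  echo "items:"
  for node in minio1 minio2 minio3; do
    for disk in 0 1 2 3; do
      cat << EOF
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-${node}-disk${disk}
  spec:
    capacity:
      storage: 10Gi
    volumeMode: Filesystem
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    storageClassName: local-storage
    local:
      path: /data/disk${disk}
    nodeAffinity:
      required:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - ${node}
EOF
    done
  done
} > pv.yaml
```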
Deploy the MinIO Operator
The MinIO Operator extends the Kubernetes API; it lets you deploy MinIO Tenants on public and private clouds.
- Add the MinIO Helm repository
[root@master01 ~]# helm repo add minio-operator https://operator.min.io
[root@master01 ~]# helm repo update
[root@master01 ~]# helm search repo minio-operator
NAME CHART VERSION APP VERSION DESCRIPTION
minio-operator/minio-operator 4.3.7 v4.3.7 A Helm chart for MinIO Operator
minio-operator/operator 6.0.3 v6.0.3 A Helm chart for MinIO Operator
minio-operator/tenant 6.0.3 v6.0.3 A Helm chart for MinIO Operator
**Tip:** in mainland China you can add the mirror instead:
helm repo add minio-operator https://operator.minio.org.cn
minio-operator/minio-operator is a legacy operator chart and does not need to be installed.
You can also export the chart's default values.yaml with helm:
# Optional: export the default chart values
[root@master01 ~]# helm show values minio-operator/operator > operator-values.yaml
# Optional: fetch the default values to customize
[root@master01 ~]# curl -sLo operator-values.yaml https://raw.githubusercontent.com/minio/operator/master/helm/operator/values.yaml
Configure the Operator Helm chart. The operator-values.yaml file:
# Default values for minio-operator.
operator:
## Setup environment variables for the Operator
# env:
# - name: MINIO_OPERATOR_TLS_ENABLE
# value: "off"
# - name: CLUSTER_DOMAIN
# value: "cluster.domain"
# - name: WATCHED_NAMESPACE
# value: ""
image:
repository: minio/operator
    tag: v6.0.3
pullPolicy: IfNotPresent
imagePullSecrets: [ ]
initcontainers: [ ]
replicaCount: 2
securityContext:
runAsUser: 1000
runAsGroup: 1000
runAsNonRoot: true
fsGroup: 1000
nodeSelector: { }
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: name
operator: In
values:
- minio-operator
topologyKey: kubernetes.io/hostname
tolerations: [ ]
topologySpreadConstraints: [ ]
resources:
requests:
cpu: 200m
memory: 256Mi
ephemeral-storage: 500Mi
- Import the images
Check which image the MinIO Operator uses:
[root@master01 ~]# tar -xzf operator-6.0.3.tgz
[root@master01 ~]# cat operator/values.yaml | grep image:
Sample output:
image: minio/operator:v6.0.3
Pull the image and export it to a tar file:
[root@master01 ~]# docker pull minio/operator:v6.0.3
[root@master01 ~]# docker save -o minio-operator-v6.0.3.tar minio/operator:v6.0.3
Copy the tar file to the target machines and import it into the container runtime:
[root@master01 ~]# ctr -n k8s.io images import minio-operator-v6.0.3.tar
- Deploy the MinIO Operator
# 1. Online install
helm install \
--namespace minio-operator \
--create-namespace \
minio-operator minio-operator/operator -f operator-values.yaml
# 2. Local install
# Fetch the minio-operator chart
# Option 1: download operator-6.0.3.tgz
wget https://ghp.ci/https://raw.githubusercontent.com/minio/operator/master/helm-releases/operator-6.0.3.tgz
tar xf operator-6.0.3.tgz
# Option 2: pull the chart from the Helm repository (downloads and untars it)
helm pull minio-operator/operator --untar
# Install from the extracted directory
helm install \
--namespace minio-operator \
--create-namespace \
minio-operator ./operator
# Install from the offline package
helm install \
--namespace minio-operator \
--create-namespace \
minio-operator operator-6.0.3.tgz
# Install with custom values
helm install \
--namespace minio-operator \
--create-namespace \
minio-operator operator-6.0.3.tgz -f operator-values.yaml
Verify
# Check
[root@master01 ~]# helm list -n minio-operator
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
minio-operator minio-operator 1 2024-09-21 23:20:10.557744863 +0800 CST deployed operator-6.0.3 v6.0.3
[root@master01 ~]# kubectl get all -n minio-operator
NAME READY STATUS RESTARTS AGE
pod/minio-operator-66cbc6c865-9clxj 1/1 Running 0 12h
pod/minio-operator-66cbc6c865-kvvkh 1/1 Running 0 12h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/operator ClusterIP 10.43.124.211 <none> 4221/TCP 12h
service/sts ClusterIP 10.43.62.198 <none> 4223/TCP 12h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/minio-operator 2/2 2 2 12h
NAME DESIRED CURRENT READY AGE
replicaset.apps/minio-operator-66cbc6c865 2 2 2 12h
Deploy the MinIO Tenant
The tenant chart is officially maintained and supported by MinIO, which strongly recommends using the official Operator and Tenant Helm charts in production.
- Add the MinIO Helm repository
[root@master01 ~]# helm repo add minio-operator https://operator.min.io
[root@master01 ~]# helm repo update
[root@master01 ~]# helm search repo minio-operator
NAME CHART VERSION APP VERSION DESCRIPTION
minio-operator/minio-operator 4.3.7 v4.3.7 A Helm chart for MinIO Operator
minio-operator/operator 6.0.3 v6.0.3 A Helm chart for MinIO Operator
minio-operator/tenant 6.0.3 v6.0.3 A Helm chart for MinIO Operator
**Tip:** if you deploy a MinIO tenant with Helm, you must also manage and upgrade that deployment with Helm; do not use kubectl krew, Kustomize, or similar tools to manage or upgrade the tenant.
You can also export the chart's default values.yaml with helm:
# Optional: export the default chart values
[root@master01 ~]# helm show values minio-operator/tenant > tenant-values.yaml
# Optional: fetch the default values to customize
[root@master01 ~]# curl -sLo tenant-values.yaml https://raw.githubusercontent.com/minio/operator/master/helm/tenant/values.yaml
Configure the Tenant Helm chart. The tenant-values.yaml file:
tenant:
  name: minio
image:
repository: quay.io/minio/minio
tag: RELEASE.2024-08-17T01-24-54Z
pullPolicy: IfNotPresent
configSecret:
name: minio-env-configuration
accessKey: minio
secretKey: minio123
mountPath: /export
subPath: /data
pools:
- servers: 1
name: minio1
volumesPerServer: 4
size: 10Gi
storageClassName: local-storage
nodeSelector:
"kubernetes.io/hostname": minio1
- servers: 1
name: minio2
volumesPerServer: 4
size: 10Gi
storageClassName: local-storage
nodeSelector:
"kubernetes.io/hostname": minio2
- servers: 1
name: minio3
volumesPerServer: 4
size: 10Gi
storageClassName: local-storage
nodeSelector:
"kubernetes.io/hostname": minio3
ingress:
api:
enabled: false
ingressClassName: "nginx"
labels: { }
annotations: { }
tls: [ ]
host: minio.local
path: /
pathType: Prefix
console:
enabled: true
ingressClassName: "nginx"
labels: { }
annotations: { }
tls: [ ]
host: minio-console.local
path: /
pathType: Prefix
- Import the images
Check which image the MinIO Tenant uses:
[root@master01 ~]# tar -xzf tenant-6.0.3.tgz
[root@master01 ~]# cat tenant/values.yaml | grep image:
Sample output:
image: minio:RELEASE.2024-08-17T01-24-54Z
Pull the image (the chart's full repository is quay.io/minio/minio) and export it to a tar file:
[root@master01 ~]# docker pull quay.io/minio/minio:RELEASE.2024-08-17T01-24-54Z
[root@master01 ~]# docker save -o minio-RELEASE.2024-08-17T01-24-54Z.tar quay.io/minio/minio:RELEASE.2024-08-17T01-24-54Z
Copy the tar files to the target machines and import them into the container runtime:
[root@master01 ~]# ctr -n k8s.io images import minio-RELEASE.2024-08-17T01-24-54Z.tar
[root@master01 ~]# ctr -n k8s.io images import busybox-latest.tar
[root@master01 ~]# ctr -n k8s.io images import operator-sidecar-v6.0.2.tar
- Deploy the MinIO Tenant
# 1. Online install
helm install \
--namespace minio-tenant \
--create-namespace \
tenant-default minio-operator/tenant -f tenant-values.yaml
# 2. Local install
# Fetch the minio-tenant chart
# Option 1: download tenant-6.0.3.tgz
wget https://ghp.ci/https://raw.githubusercontent.com/minio/operator/master/helm-releases/tenant-6.0.3.tgz
tar xf tenant-6.0.3.tgz
# Option 2: pull the chart from the Helm repository (downloads and untars it)
helm pull minio-operator/tenant --untar
# Install from the extracted directory (release tenant-default in namespace
# minio-tenant, matching the online install above)
helm install \
--namespace minio-tenant \
--create-namespace \
tenant-default ./tenant
# Install from the offline package
helm install \
--namespace minio-tenant \
--create-namespace \
tenant-default tenant-6.0.3.tgz
# Install with custom values
helm install \
--namespace minio-tenant \
--create-namespace \
tenant-default tenant-6.0.3.tgz -f tenant-values.yaml
# Create the Secret referenced by `configuration` (the operator expects the
# secret to carry a config.env key, so create it from the config.env file
# prepared earlier rather than from separate literals)
[root@master01 ~]# kubectl create secret generic minio-env-configuration-prod \
--namespace minio-tenant \
--from-file=config.env
Verify
# Check
[root@master01 ~]# kubectl get all -n minio-tenant
NAME READY STATUS RESTARTS AGE
pod/minio-minio1-0 2/2 Running 0 49m
pod/minio-minio1-1 2/2 Running 0 49m
pod/minio-minio2-0 2/2 Running 0 49m
pod/minio-minio2-1 2/2 Running 0 49m
pod/minio-minio3-0 2/2 Running 0 49m
pod/minio-minio3-1 2/2 Running 0 49m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/minio ClusterIP 10.43.211.152 <none> 443/TCP 49m
service/minio-console ClusterIP 10.43.204.242 <none> 9443/TCP 49m
service/minio-hl ClusterIP None <none> 9000/TCP 49m
NAME READY AGE
statefulset.apps/minio-minio1 2/2 49m
statefulset.apps/minio-minio2 2/2 49m
statefulset.apps/minio-minio3 2/2 49m
[root@master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pv-minio1-disk0 10Gi RWO Retain Bound minio-tenant/data0-minio-minio1-0 local-storage <unset> 50m
pv-minio1-disk1 10Gi RWO Retain Bound minio-tenant/data1-minio-minio1-0 local-storage <unset> 50m
pv-minio1-disk2 10Gi RWO Retain Bound minio-tenant/data0-minio-minio1-1 local-storage <unset> 50m
pv-minio1-disk3 10Gi RWO Retain Bound minio-tenant/data1-minio-minio1-1 local-storage <unset> 50m
pv-minio2-disk0 10Gi RWO Retain Bound minio-tenant/data0-minio-minio2-0 local-storage <unset> 50m
pv-minio2-disk1 10Gi RWO Retain Bound minio-tenant/data0-minio-minio2-1 local-storage <unset> 50m
pv-minio2-disk2 10Gi RWO Retain Bound minio-tenant/data1-minio-minio2-1 local-storage <unset> 50m
pv-minio2-disk3 10Gi RWO Retain Bound minio-tenant/data1-minio-minio2-0 local-storage <unset> 50m
pv-minio3-disk0 10Gi RWO Retain Bound minio-tenant/data0-minio-minio3-1 local-storage <unset> 50m
pv-minio3-disk1 10Gi RWO Retain Bound minio-tenant/data0-minio-minio3-0 local-storage <unset> 50m
pv-minio3-disk2 10Gi RWO Retain Bound minio-tenant/data1-minio-minio3-0 local-storage <unset> 50m
pv-minio3-disk3 10Gi RWO Retain Bound minio-tenant/data1-minio-minio3-1 local-storage <unset> 50m
[root@minio1 tenant]# kubectl get pvc -n minio-tenant
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
data0-minio-minio1-0 Bound pv-minio1-disk0 10Gi RWO local-storage <unset> 51m
data0-minio-minio1-1 Bound pv-minio1-disk2 10Gi RWO local-storage <unset> 51m
data0-minio-minio2-0 Bound pv-minio2-disk0 10Gi RWO local-storage <unset> 51m
data0-minio-minio2-1 Bound pv-minio2-disk1 10Gi RWO local-storage <unset> 51m
data0-minio-minio3-0 Bound pv-minio3-disk1 10Gi RWO local-storage <unset> 51m
data0-minio-minio3-1 Bound pv-minio3-disk0 10Gi RWO local-storage <unset> 51m
data1-minio-minio1-0 Bound pv-minio1-disk1 10Gi RWO local-storage <unset> 51m
data1-minio-minio1-1 Bound pv-minio1-disk3 10Gi RWO local-storage <unset> 51m
data1-minio-minio2-0 Bound pv-minio2-disk3 10Gi RWO local-storage <unset> 51m
data1-minio-minio2-1 Bound pv-minio2-disk2 10Gi RWO local-storage <unset> 51m
data1-minio-minio3-0 Bound pv-minio3-disk2 10Gi RWO local-storage <unset> 51m
data1-minio-minio3-1 Bound pv-minio3-disk3 10Gi RWO local-storage <unset> 51m
With these steps, every disk on every machine is mounted individually into the containers, providing distributed storage, and MinIO stores its data under the data subpath.
Appendix: helper scripts for repeated testing
cat > clear.sh << EOF
helm uninstall tenant-default -n minio-tenant
kubectl delete pvc --all -n minio-tenant
kubectl delete pv --all
rm -rf /data/*
EOF
cat > start.sh << EOF
mkdir -p /data/disk{0..3}
kubectl apply -f /root/pv.yaml
helm install --namespace minio-tenant --create-namespace tenant-default minio-operator/tenant -f /root/tenant/tenant-values.yaml
EOF
# Reset the data directories between runs
rm -rf /data/* && mkdir -p /data/disk{0..3}
Deploy the MinIO Console
Using MinIO from outside the K8s cluster
First expose the minio-console and minio-hl Services via Ingress or NodePort so they can be reached from outside the cluster.
NodePort reference configuration
minio-console.yaml
# MinIO Console service (NodePort)
apiVersion: v1
kind: Service
metadata:
  name: tenant-console
  namespace: minio-tenant
  labels:
    v1.min.io/console: tenant-console
spec:
  type: NodePort
  ports:
    - name: https-console
      port: 80
      targetPort: 9443 # confirm the port the Pod actually listens on (9090 by default, 9443 with TLS)
      nodePort: 31090
      protocol: TCP
  selector:
    v1.min.io/tenant: tenant-default # must match the tenant Pods' label
  publishNotReadyAddresses: true
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  sessionAffinity: None
minio-hl.yaml
# MinIO API service (NodePort)
apiVersion: v1
kind: Service
metadata:
  name: tenant-hl
  namespace: minio-tenant
  labels:
    v1.min.io/tenant: tenant-hl
spec:
  type: NodePort
  ports:
    - name: https-minio
      port: 9000
      targetPort: 9000
      nodePort: 30001
      protocol: TCP
  selector:
    v1.min.io/tenant: tenant-default # must match the tenant Pods' label
  publishNotReadyAddresses: true
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
Console configuration
Configure buckets, accounts, and other settings in the console.
Log in to the MinIO Console at https://10.2.16.63:31090
Credentials: minio / minio123
1. Create a bucket; disable versioning and encryption
2. Create an Access Key and Secret Key
3. Create users
4. Set the Region
Scaling the tenant
Do not scale by adding another pool on machines that are already in use; add new machines instead.
Add a new machine, minio4, and mount the disk /data/disk0 on it first.
Update pv.yaml:
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio1-disk0
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio1
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio1-disk1
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio1
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio1-disk2
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk2
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio1
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio1-disk3
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk3
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio1
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio2-disk0
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio2
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio2-disk1
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio2
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio2-disk2
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk2
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio2
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio2-disk3
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk3
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio2
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio3-disk0
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio3
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio3-disk1
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio3
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio3-disk2
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk2
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio3
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio3-disk3
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk3
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio3
- apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-minio4-disk0
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/disk0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minio4
Apply the configuration
kubectl apply -f pv.yaml
Then adjust the tenant configuration and add a pool:
tenant:
name: minio
image:
repository: quay.io/minio/minio
tag: RELEASE.2024-08-17T01-24-54Z
pullPolicy: IfNotPresent
configSecret:
name: minio-env-configuration
accessKey: minio
secretKey: minio123
mountPath: /export
subPath: /data
pools:
- servers: 1
name: minio1
volumesPerServer: 4
size: 10Gi
storageClassName: local-storage
nodeSelector:
"kubernetes.io/hostname": minio1
- servers: 1
name: minio2
volumesPerServer: 4
size: 10Gi
storageClassName: local-storage
nodeSelector:
"kubernetes.io/hostname": minio2
- servers: 1
name: minio3
volumesPerServer: 4
size: 10Gi
storageClassName: local-storage
nodeSelector:
"kubernetes.io/hostname": minio3
- servers: 1
name: minio4
volumesPerServer: 1
size: 10Gi
storageClassName: local-storage
nodeSelector:
"kubernetes.io/hostname": minio4
ingress:
api:
enabled: false
ingressClassName: "nginx"
labels: { }
annotations: { }
tls: [ ]
host: minio.local
path: /
pathType: Prefix
console:
enabled: true
ingressClassName: "nginx"
labels: { }
annotations: { }
tls: [ ]
host: minio-console.local
path: /
pathType: Prefix
Upgrade the tenant
helm upgrade tenant-default ./tenant -f tenant-values.yaml -n minio-tenant
Notes
Partitioning a disk with parted and creating an XFS filesystem on CentOS 7.9
MinIO recommends the xfs filesystem for its disks; the steps below are a reference.
- Create a GPT partition table
Use parted to create a GPT partition table on the disk.
sudo parted /dev/vdb mklabel gpt
• parted: the disk partitioning tool.
• /dev/vdb: the disk device to operate on.
• mklabel gpt: create a GPT partition table.
- Create a primary partition
Create a single primary partition spanning the whole disk.
sudo parted /dev/vdb mkpart primary xfs 0% 100%
• mkpart primary xfs: create a primary partition with xfs as the filesystem type hint (this step does not actually format the partition).
• 0% 100%: the partition runs from the start of the disk to the end.
- Format the partition with XFS
Format the partition with mkfs.xfs.
sudo mkfs.xfs /dev/vdb1
• mkfs.xfs: the command that creates an XFS filesystem.
• /dev/vdb1: the partition to format.
- Create the mount point
Create a directory to serve as the mount point.
sudo mkdir -p /data/disk1
• mkdir -p: create the directory; -p creates parent directories as needed.
• /data/disk1: the mount point path.
- Mount the partition
Mount the partition on the mount point.
sudo mount /dev/vdb1 /data/disk1
• mount: the mount command.
• /dev/vdb1: the partition to mount.
• /data/disk1: the mount point.
- Add an fstab entry for mounting at boot
Add the partition's UUID to /etc/fstab so it is mounted automatically at boot.
UUID=$(sudo blkid -s UUID -o value /dev/vdb1)
echo "UUID=$UUID /data/disk1 xfs defaults 0 0" | sudo tee -a /etc/fstab
• blkid -s UUID -o value /dev/vdb1: read the partition's UUID.
• echo: build the fstab entry.
• tee -a: append the entry to /etc/fstab.
Notes
• Back up all important data before you begin.
• Mount partitions by UUID rather than by device name (e.g. /dev/sdb1) to avoid problems when disk ordering changes.
• Make sure the disk is not mounted while performing these operations.
Adjust the disk device (e.g. /dev/vdb) and mount point (e.g. /data/disk1) to your environment.
Shell script
#!/bin/bash
# Disk device and mount point
DISK="/dev/vdb"
MOUNT_POINT="/data/disk1"
# Create the mount point directory
if [ ! -d "$MOUNT_POINT" ]; then
    sudo mkdir -p "$MOUNT_POINT"
fi
# If the disk already has a partition table, replace it with a fresh GPT label
if sudo parted "$DISK" print | grep -q "Partition Table"; then
    echo "Disk $DISK is already partitioned; existing partitions will be removed."
    sudo parted -s "$DISK" mklabel gpt
fi
# Create a single new partition
echo "Creating a new partition..."
sudo parted -s "$DISK" mkpart primary xfs 0% 100%
sudo partprobe "$DISK" # re-read the partition table so the new device node appears
# Format the partition with XFS
echo "Formatting the partition as XFS..."
PARTITION="${DISK}1" # the partition device file
sudo mkfs.xfs "$PARTITION"
# Mount the partition
echo "Mounting the partition..."
sudo mount "$PARTITION" "$MOUNT_POINT"
# Add to fstab for mounting at boot
echo "Adding the partition to /etc/fstab..."
UUID=$(sudo blkid -s UUID -o value "$PARTITION")
if [ -n "$UUID" ]; then
    echo "UUID=$UUID $MOUNT_POINT xfs defaults 0 0" | sudo tee -a /etc/fstab
    echo "Disk $DISK has been partitioned, formatted, and mounted at $MOUNT_POINT."
else
    echo "Could not read the partition UUID; add it to /etc/fstab manually."
fi
Verify the result:
blkid