Introduction
In MinIO's Kubernetes implementation, the operator and the tenant play distinct but complementary roles.
MinIO Operator
- Definition: The MinIO Operator is a Kubernetes control-plane component that manages and automates the operational tasks of MinIO instances.
- Responsibilities:
• Lifecycle management: automates the full lifecycle of MinIO instances, including creation, upgrade, configuration, and deletion.
• High availability: keeps MinIO instances highly available through automatic replication and failover.
• Scalability: lets you scale a MinIO cluster easily as storage needs change.
• Custom resources: uses Custom Resource Definitions (CRDs) to manage MinIO tenant configuration.
- Components:
• MinIO Tenant CRD: a Kubernetes custom resource that defines and manages the configuration of a MinIO tenant.
• MinIO Pool CRD: defines the configuration of a MinIO pool; each pool can contain multiple MinIO instances.
MinIO Tenant
- Definition: A MinIO Tenant is the data-plane component that actually provides the storage service.
- Responsibilities:
• Storage service: provides object storage for storing and retrieving data.
• Data management: manages data storage, access, and backup.
• Users and permissions: manages user access and permission control.
- Components:
• MinIO Pods: the Kubernetes Pods that actually run the MinIO service.
• PVCs: PersistentVolumeClaims that provide persistent storage for MinIO.
• Services: Services used to access the MinIO Pods.
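For orientation, a minimal Tenant custom resource looks roughly like this (a sketch based on the minio.min.io/v2 API; field names can vary slightly between operator versions, and the Helm chart used below generates this object for you):

```yaml
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: myminio
  namespace: minio-tenant
spec:
  pools:
    - name: pool-0
      servers: 1
      volumesPerServer: 4
      volumeClaimTemplate:
        spec:
          storageClassName: local-storage
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
```

The operator watches objects of this kind and reconciles them into StatefulSets, Services, and PVCs.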
Machine roles
10.2.64.91 minio1 rke2+k8s1.31.1
10.2.64.53 minio2 rke2+k8s1.31.1
10.2.64.59 minio3 rke2+k8s1.31.1
Each host has four 10Gi disks formatted as xfs, mounted at
/data/disk0, /data/disk1, /data/disk2, /data/disk3
Important notes
1. Not everything is configured in the Helm chart's values.yaml.
The StorageClass and PVs must be created in advance. When local disks back the storage class there is no dynamic provisioning, so creating a PVC will not automatically generate and bind a PV; the PVs have to exist beforehand. A PV here simply declares a directory where a local disk is mounted. Also, format the disks as xfs, and note that erasure coding requires at least 4 disks.
2. If you repeat the experiment, delete and recreate the host directories first, otherwise startup will fail.
Deploying a tenant automatically creates a .minio.sys directory inside each mounted directory; removing just that directory also works.
3. When deploying a tenant, make sure servers × volumesPerServer does not exceed the number of PVs on the host.
pools:
  - servers: 1
    name: minio1
    volumesPerServer: 4
    size: 10Gi
    storageClassName: local-storage
    nodeSelector:
      "kubernetes.io/hostname": minio1
For example, in this lab the machine minio1 has 4 disks mounted, so you can use:
servers: 1, volumesPerServer: 4 — one MinIO instance on minio1, each instance using 4 volumes
servers: 4, volumesPerServer: 1 — four MinIO instances on minio1, each instance using 1 volume
servers: 2, volumesPerServer: 2 — two MinIO instances on minio1, each instance using 2 volumes
Of course, if you do not want to use all 4 disks on the host, servers: 1, volumesPerServer: 2 also works.
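The constraint from note 3 can be sanity-checked with a few lines of shell before editing values.yaml (the numbers below are example values for this lab; substitute your own):

```shell
# Intended pool settings (example values)
SERVERS=2
VOLUMES_PER_SERVER=2
# Number of local PVs created on the target node
PVS_ON_NODE=4

TOTAL=$((SERVERS * VOLUMES_PER_SERVER))
if [ "$TOTAL" -le "$PVS_ON_NODE" ]; then
  echo "OK: pool needs $TOTAL volumes, node has $PVS_ON_NODE PVs"
else
  echo "ERROR: pool needs $TOTAL volumes, node only has $PVS_ON_NODE PVs" >&2
fi
```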
4. The tenant creates PVCs automatically; you do not need to create them in advance.
The generated PVC names follow data{volume#}-{statefulSetName}-{pod#}, where the StatefulSet name is {tenantName}-{poolName}, e.g. 'data0-myminio-minio1-0'.
With servers: 1 and volumesPerServer: 4,
there is a single instance on host minio1: myminio-minio1-0.
Its PVCs are:
myminio-minio1-0:
• data0-myminio-minio1-0
• data1-myminio-minio1-0
• data2-myminio-minio1-0
• data3-myminio-minio1-0
With servers: 2 and volumesPerServer: 2,
there are two instances on host minio1: myminio-minio1-0 and myminio-minio1-1.
Their PVCs are:
myminio-minio1-0:
• data0-myminio-minio1-0
• data1-myminio-minio1-0
myminio-minio1-1:
• data0-myminio-minio1-1
• data1-myminio-minio1-1
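The naming rule above can be reproduced with a small loop, which is handy for predicting exactly which PVCs a pool will create (tenant and pool names are this lab's):

```shell
TENANT=myminio
POOL=minio1
SERVERS=2
VOLUMES_PER_SERVER=2

# Enumerate the PVC names the tenant will create for this pool:
# data{volume#}-{tenantName}-{poolName}-{pod#}
PVCS=$(for pod in $(seq 0 $((SERVERS - 1))); do
  for vol in $(seq 0 $((VOLUMES_PER_SERVER - 1))); do
    echo "data${vol}-${TENANT}-${POOL}-${pod}"
  done
done)
echo "$PVCS"
```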
5. mountPath is the path mounted inside the MinIO instance. One mount is created per volume, with a number appended automatically: for volumesPerServer: 4 you get /export0, /export1, /export2, /export3.
subPath is the subdirectory under each mount where MinIO keeps its data, e.g. /export0/data.
Note that the number appended to mountPath is only the volume's ordinal; it does not map one-to-one to the host's disk numbers. It does NOT mean container /export0 => host /data/disk0, /export1 => /data/disk1, /export2 => /data/disk2, /export3 => /data/disk3.
mountPath: /export
subPath: /data
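To make the mountPath/subPath behavior concrete, the in-container data directories for volumesPerServer: 4 can be listed like this:

```shell
MOUNT_PATH=/export
SUB_PATH=/data
VOLUMES_PER_SERVER=4

# MinIO's data directories inside the container: /export0/data ... /export3/data
DATA_PATHS=$(for i in $(seq 0 $((VOLUMES_PER_SERVER - 1))); do
  echo "${MOUNT_PATH}${i}${SUB_PATH}"
done)
echo "$DATA_PATHS"
```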
6. Configuring the account used to log in to minio-console
# Username/password for the login page. In production, do not put these directly
# in values.yaml; reference a pre-created secret instead.
configSecret:
  name: myminio-env-configuration
  accessKey: minio
  secretKey: minio123
# Recommended for production: configure only this `configuration` block in
# values.yaml and omit configSecret entirely.
configuration:
  name: myminio-env-configuration
# The myminio-env-configuration secret must be created in advance:
cat > config.env << EOF
export MINIO_ROOT_USER="minio"
export MINIO_ROOT_PASSWORD="minio123"
EOF
base64_content=$( cat config.env | base64 | tr -d '\n' )
cat > myminio-env-configuration.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: myminio-env-configuration
  namespace: minio-tenant
  annotations:
    meta.helm.sh/release-name: myminio
    meta.helm.sh/release-namespace: minio-tenant
  labels:
    app.kubernetes.io/managed-by: Helm
type: Opaque
data:
  config.env: $base64_content
EOF
kubectl apply -f myminio-env-configuration.yaml
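Before applying the secret, it is worth checking that the base64 payload decodes back to the original file (a sketch using GNU coreutils base64, run in a scratch directory):

```shell
cd "$(mktemp -d)"
cat > config.env << 'EOF'
export MINIO_ROOT_USER="minio"
export MINIO_ROOT_PASSWORD="minio123"
EOF

# Encode exactly as above, then decode and compare with the original
base64_content=$(base64 < config.env | tr -d '\n')
echo "$base64_content" | base64 -d > decoded.env
diff config.env decoded.env && echo "round-trip OK"
```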
7. Kubernetes versions vs. MinIO Operator versions
| kubernetes | minio operator |
| ---------- | -------------- |
| 1.16~1.21  | v4.x           |
| 1.18~1.22  | v5.x           |
| 1.21+      | v6.x           |
8. A cloud provider's object storage may be more cost-effective.
Deployment
- Create the StorageClass
Create a StorageClass (SC) definition. Contents of storageclass.yaml:
cat > storageclass.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
Apply it:
kubectl apply -f storageclass.yaml
- Create the PersistentVolumes (PVs)
Create a PersistentVolume (PV) definition for every disk on every machine. Contents of pv.yaml:
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio1-disk0
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk0
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio1
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio1-disk1
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk1
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio1
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio1-disk2
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk2
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio1
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio1-disk3
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk3
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio1
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio2-disk0
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk0
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio2
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio2-disk1
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk1
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio2
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio2-disk2
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk2
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio2
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio2-disk3
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk3
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio2
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio3-disk0
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk0
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio3
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio3-disk1
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk1
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio3
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio3-disk2
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk2
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio3
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio3-disk3
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk3
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio3
Apply it:
kubectl apply -f pv.yaml
- Add the MinIO Helm repository
helm repo add minio-operator https://operator.min.io
helm repo update
- Deploy the MinIO Operator
# Online install
helm install \
  --namespace minio-operator \
  --create-namespace \
  minio-operator minio-operator/operator
# Offline install
# Option 1:
wget https://ghp.ci/https://raw.githubusercontent.com/minio/operator/master/helm-releases/operator-6.0.3.tgz
tar xf operator-6.0.3.tgz
# Option 2:
helm pull minio-operator/operator --untar
helm install \
  --namespace minio-operator \
  --create-namespace \
  minio-operator ./operator
# Check
[root@minio1 ~]# helm list -n minio-operator
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
minio-operator minio-operator 1 2024-09-21 23:20:10.557744863 +0800 CST deployed operator-6.0.3 v6.0.3
[root@minio1 ~]# kubectl get all -n minio-operator
NAME READY STATUS RESTARTS AGE
pod/minio-operator-66cbc6c865-9clxj 1/1 Running 0 12h
pod/minio-operator-66cbc6c865-kvvkh 1/1 Running 0 12h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/operator ClusterIP 10.43.124.211 <none> 4221/TCP 12h
service/sts ClusterIP 10.43.62.198 <none> 4223/TCP 12h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/minio-operator 2/2 2 2 12h
NAME DESIRED CURRENT READY AGE
replicaset.apps/minio-operator-66cbc6c865 2 2 2 12h
- Deploy the MinIO Tenant
Pull the minio-tenant chart:
# Option 1:
wget https://ghp.ci/https://raw.githubusercontent.com/minio/operator/master/helm-releases/tenant-6.0.3.tgz
# Option 2:
helm pull minio-operator/tenant --untar
Configure the Tenant Helm chart. Contents of tenant-values.yaml:
tenant:
  name: myminio
  image:
    repository: quay.io/minio/minio
    tag: RELEASE.2024-08-17T01-24-54Z
    pullPolicy: IfNotPresent
  configSecret:
    name: myminio-env-configuration
    accessKey: minio
    secretKey: minio123
  mountPath: /export
  subPath: /data
  pools:
    - servers: 1
      name: minio1
      volumesPerServer: 4
      size: 10Gi
      storageClassName: local-storage
      nodeSelector:
        "kubernetes.io/hostname": minio1
    - servers: 1
      name: minio2
      volumesPerServer: 4
      size: 10Gi
      storageClassName: local-storage
      nodeSelector:
        "kubernetes.io/hostname": minio2
    - servers: 1
      name: minio3
      volumesPerServer: 4
      size: 10Gi
      storageClassName: local-storage
      nodeSelector:
        "kubernetes.io/hostname": minio3
ingress:
  api:
    enabled: false
    ingressClassName: "nginx"
    labels: { }
    annotations: { }
    tls: [ ]
    host: minio.local
    path: /
    pathType: Prefix
  console:
    enabled: true
    ingressClassName: "nginx"
    labels: { }
    annotations: { }
    tls: [ ]
    host: minio-console.local
    path: /
    pathType: Prefix
Apply it:
# Online install
helm install \
  --namespace minio-tenant \
  --create-namespace \
  myminio minio-operator/tenant -f tenant-values.yaml
# Offline install
helm install \
  --namespace minio-tenant \
  --create-namespace \
  myminio ./tenant -f tenant-values.yaml
(Note: the sample output below appears to come from a run with servers: 2 and volumesPerServer: 2 per pool, which is why each StatefulSet shows two pods.)
# Check
[root@minio1 tenant]# kubectl get all -n minio-tenant
NAME READY STATUS RESTARTS AGE
pod/myminio-minio1-0 2/2 Running 0 49m
pod/myminio-minio1-1 2/2 Running 0 49m
pod/myminio-minio2-0 2/2 Running 0 49m
pod/myminio-minio2-1 2/2 Running 0 49m
pod/myminio-minio3-0 2/2 Running 0 49m
pod/myminio-minio3-1 2/2 Running 0 49m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/minio ClusterIP 10.43.211.152 <none> 443/TCP 49m
service/myminio-console ClusterIP 10.43.204.242 <none> 9443/TCP 49m
service/myminio-hl ClusterIP None <none> 9000/TCP 49m
NAME READY AGE
statefulset.apps/myminio-minio1 2/2 49m
statefulset.apps/myminio-minio2 2/2 49m
statefulset.apps/myminio-minio3 2/2 49m
[root@minio1 tenant]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pv-minio1-disk0 10Gi RWO Retain Bound minio-tenant/data0-myminio-minio1-0 local-storage <unset> 50m
pv-minio1-disk1 10Gi RWO Retain Bound minio-tenant/data1-myminio-minio1-0 local-storage <unset> 50m
pv-minio1-disk2 10Gi RWO Retain Bound minio-tenant/data0-myminio-minio1-1 local-storage <unset> 50m
pv-minio1-disk3 10Gi RWO Retain Bound minio-tenant/data1-myminio-minio1-1 local-storage <unset> 50m
pv-minio2-disk0 10Gi RWO Retain Bound minio-tenant/data0-myminio-minio2-0 local-storage <unset> 50m
pv-minio2-disk1 10Gi RWO Retain Bound minio-tenant/data0-myminio-minio2-1 local-storage <unset> 50m
pv-minio2-disk2 10Gi RWO Retain Bound minio-tenant/data1-myminio-minio2-1 local-storage <unset> 50m
pv-minio2-disk3 10Gi RWO Retain Bound minio-tenant/data1-myminio-minio2-0 local-storage <unset> 50m
pv-minio3-disk0 10Gi RWO Retain Bound minio-tenant/data0-myminio-minio3-1 local-storage <unset> 50m
pv-minio3-disk1 10Gi RWO Retain Bound minio-tenant/data0-myminio-minio3-0 local-storage <unset> 50m
pv-minio3-disk2 10Gi RWO Retain Bound minio-tenant/data1-myminio-minio3-0 local-storage <unset> 50m
pv-minio3-disk3 10Gi RWO Retain Bound minio-tenant/data1-myminio-minio3-1 local-storage <unset> 50m
[root@minio1 tenant]# kubectl get pvc -n minio-tenant
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
data0-myminio-minio1-0 Bound pv-minio1-disk0 10Gi RWO local-storage <unset> 51m
data0-myminio-minio1-1 Bound pv-minio1-disk2 10Gi RWO local-storage <unset> 51m
data0-myminio-minio2-0 Bound pv-minio2-disk0 10Gi RWO local-storage <unset> 51m
data0-myminio-minio2-1 Bound pv-minio2-disk1 10Gi RWO local-storage <unset> 51m
data0-myminio-minio3-0 Bound pv-minio3-disk1 10Gi RWO local-storage <unset> 51m
data0-myminio-minio3-1 Bound pv-minio3-disk0 10Gi RWO local-storage <unset> 51m
data1-myminio-minio1-0 Bound pv-minio1-disk1 10Gi RWO local-storage <unset> 51m
data1-myminio-minio1-1 Bound pv-minio1-disk3 10Gi RWO local-storage <unset> 51m
data1-myminio-minio2-0 Bound pv-minio2-disk3 10Gi RWO local-storage <unset> 51m
data1-myminio-minio2-1 Bound pv-minio2-disk2 10Gi RWO local-storage <unset> 51m
data1-myminio-minio3-0 Bound pv-minio3-disk2 10Gi RWO local-storage <unset> 51m
data1-myminio-minio3-1 Bound pv-minio3-disk3 10Gi RWO local-storage <unset> 51m
With the steps above, every disk on every machine is mounted individually into the containers, giving distributed storage, and MinIO stores its data under the data subpath.
Appendix: helper scripts for repeated testing
cat > clear.sh << EOF
helm uninstall myminio -n minio-tenant
kubectl delete pvc --all -n minio-tenant
kubectl delete pv --all
rm -rf /data/*
EOF
cat > start.sh << EOF
mkdir -p /data/disk{0..3}
kubectl apply -f /root/pv.yaml
helm install --namespace minio-tenant --create-namespace myminio minio-operator/tenant -f /root/tenant/tenant-values.yaml
EOF
# Or as a one-liner (careful: rm -rf * deletes everything in the current directory):
rm -rf * && mkdir -p /data/disk{0..3}
Using MinIO from outside the k8s cluster
First expose the myminio-console and myminio-hl services for access from outside the cluster, via Ingress or NodePort.
NodePort reference configuration
myminio-console.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    v1.min.io/console: myminio-console
  name: minio-console-nodeport
  namespace: minio-tenant
spec:
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: https-console
      nodePort: 31074
      port: 80
      protocol: TCP
      targetPort: 9443
  selector:
    v1.min.io/tenant: myminio
  sessionAffinity: None
  type: NodePort
myminio-hl.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    v1.min.io/tenant: myminio
  name: minio-api
  namespace: minio-tenant
spec:
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: https-minio
      nodePort: 30001
      port: 9000
      protocol: TCP
      targetPort: 9000
  publishNotReadyAddresses: true
  selector:
    v1.min.io/tenant: myminio
  type: NodePort
Configure buckets, accounts, and so on in the console
Log in to the minio-console at https://10.2.64.91:31074
Credentials: minio/minio123
1. Create a bucket; disable versioning and encryption
2. Create an Access Key and Secret Key
3. Create users
4. Set the Region
Expanding the tenant
Do not expand by adding a new pool that reuses machines already serving existing pools.
Add a new machine, minio4, and mount its disk at /data/disk0 first.
Update pv.yaml:
The 12 PVs for minio1–minio3 stay exactly as in the pv.yaml above; append one more item for minio4:
apiVersion: v1
kind: List
items:
  # ... existing pv-minio1-disk0 through pv-minio3-disk3 entries unchanged ...
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-minio4-disk0
    spec:
      capacity:
        storage: 10Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /data/disk0
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - minio4
Apply it:
kubectl apply -f pv.yaml
Then adjust the tenant configuration, adding the new pool:
tenant-values.yaml stays the same as before except for the pools list, which gains one entry:
  pools:
    # ... the minio1, minio2 and minio3 pools are unchanged ...
    - servers: 1
      name: minio4
      volumesPerServer: 1
      size: 10Gi
      storageClassName: local-storage
      nodeSelector:
        "kubernetes.io/hostname": minio4
Upgrade the tenant:
helm upgrade myminio ./tenant -f tenant-values.yaml -n minio-tenant
Partitioning a disk with parted and creating an xfs filesystem (CentOS 7.9)
MinIO recommends the xfs filesystem for its disks; here is a reference procedure.
- Create a GPT partition table
Use parted to create a GPT partition table on the disk.
sudo parted /dev/vdb mklabel gpt
• parted: the disk partitioning tool.
• /dev/vdb: the disk device to operate on.
• mklabel gpt: creates a GPT partition table.
- Create a primary partition
Create a single primary partition spanning the whole disk.
sudo parted /dev/vdb mkpart primary xfs 0% 100%
• mkpart primary xfs: creates a primary partition with xfs as the filesystem type hint (this step does not actually format the partition).
• 0% 100%: the partition spans the disk from start to end.
- Format the partition as XFS
Format the partition with mkfs.xfs.
sudo mkfs.xfs /dev/vdb1
• mkfs.xfs: creates an XFS filesystem.
• /dev/vdb1: the partition to format.
- Create the mount point
Create a directory to use as the mount point.
sudo mkdir -p /data/disk1
• mkdir -p: creates the directory; -p also creates missing parent directories.
• /data/disk1: the mount point path.
- Mount the partition
Mount the partition onto the mount point.
sudo mount /dev/vdb1 /data/disk1
• mount: the mount command.
• /dev/vdb1: the partition to mount.
• /data/disk1: the mount point.
- Add it to fstab for automatic mounting at boot
Add the partition's UUID to /etc/fstab so it is mounted automatically at boot.
UUID=$(sudo blkid -s UUID -o value /dev/vdb1)
echo "UUID=$UUID /data/disk1 xfs defaults 0 0" | sudo tee -a /etc/fstab
• blkid -s UUID -o value /dev/vdb1: retrieves the partition's UUID.
• echo: builds the fstab entry.
• tee -a: appends the entry to /etc/fstab.
Notes
• Back up all important data before running these commands.
• Mounting by UUID rather than by device name (e.g. /dev/sdb1) avoids problems caused by disk-order changes.
• Make sure the disk is not mounted while performing these operations.
Adjust the disk device (e.g. /dev/vdb) and mount point (e.g. /data/disk1) in the commands above for your environment.
Shell script
#!/bin/bash
# Disk device and mount point
DISK="/dev/vdb"
MOUNT_POINT="/data/disk1"
# Create the mount point directory
if [ ! -d "$MOUNT_POINT" ]; then
  sudo mkdir -p "$MOUNT_POINT"
fi
# (Re)create the partition table; this wipes any existing partitions
echo "Creating a GPT partition table on $DISK..."
sudo parted -s "$DISK" mklabel gpt
# Create a single partition spanning the whole disk
echo "Creating the partition..."
sudo parted -s "$DISK" mkpart primary xfs 0% 100%
# Format the partition as XFS (-f overwrites any stale filesystem signature)
echo "Formatting the partition as XFS..."
PARTITION="${DISK}1"   # partition device file
sudo mkfs.xfs -f "$PARTITION"
# Mount the partition
echo "Mounting the partition..."
sudo mount "$PARTITION" "$MOUNT_POINT"
# Add it to fstab for automatic mounting at boot
echo "Adding the partition to /etc/fstab..."
UUID=$(sudo blkid -s UUID -o value "$PARTITION")
if [ -n "$UUID" ]; then
  echo "UUID=$UUID $MOUNT_POINT xfs defaults 0 0" | sudo tee -a /etc/fstab
  echo "Disk $DISK has been partitioned, formatted, and mounted at $MOUNT_POINT."
else
  echo "Could not obtain the partition UUID; add it to /etc/fstab manually."
fi
# Verify the result
blkid