Cloud Native (34) Kubernetes: Platform Storage System in Practice (RBD, PVC Expansion)

This article shows how to use the file storage (CephFS) service provided by Rook Ceph in Kubernetes, including setting up a StorageClass, creating and managing PersistentVolumeClaims (PVCs), testing with a Deployment, and performing dynamic PVC expansion. It also discusses the different needs of stateful and stateless applications for block storage versus shared storage.
```yaml
  storageClassName: "rook-ceph-block"
  resources:
    requests:
      storage: 20Mi
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sts-nginx
  namespace: default
spec:
  selector:
    app: sts-nginx
  type: ClusterIP
  ports:
  - name: sts-nginx
    port: 80
    targetPort: 80
    protocol: TCP
```


> Test: create the StatefulSet, modify the nginx data, delete the StatefulSet, then recreate it. Is the data lost? Is it shared between the replicas? See the sketch below.
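A minimal verification sketch with kubectl. The StatefulSet name `sts-nginx`, the manifest file name `sts-nginx.yaml`, and the mount path `/usr/share/nginx/html` are assumptions carried over from the block-storage example:

```bash
# Write marker data through one replica.
kubectl exec sts-nginx-0 -- sh -c 'echo "written by sts-nginx-0" > /usr/share/nginx/html/index.html'

# Delete and recreate the StatefulSet; PVCs created by volumeClaimTemplates are kept.
kubectl delete sts sts-nginx
kubectl apply -f sts-nginx.yaml

# The data written earlier is still there (not lost) ...
kubectl exec sts-nginx-0 -- cat /usr/share/nginx/html/index.html
# ... but it is not shared: each replica mounts its own RBD-backed PV.
kubectl exec sts-nginx-1 -- ls /usr/share/nginx/html
```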



### **3. File Storage (CephFS)**


#### **1. Configuration**


File storage is the most commonly used type. It supports the RWX (ReadWriteMany) access mode; for example, 10 Pods can read and write the same directory concurrently.


**Reference:** the Rook/Ceph documentation on shared filesystems (Ceph Docs)


 



```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph # namespace:cluster
spec:
  # The metadata pool spec. Must use replication.
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode:
        none
      # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # The list of data pool specs. Can use replication or erasure coding.
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
        # Disallow setting pool with replica 1, this could lead to data loss without recovery.
        # Make sure you're ABSOLUTELY CERTAIN that is what you want
        requireSafeReplicaSize: true
      parameters:
        # Inline compression mode for the data pool
        # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
        compression_mode:
          none
        # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
        # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
        #target_size_ratio: ".5"
  # Whether to preserve filesystem after CephFilesystem CRD deletion
  preserveFilesystemOnDelete: true
  # The metadata service (mds) configuration
  metadataServer:
    # The number of active MDS instances
    activeCount: 1
    # Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover.
    # If false, standbys will be available, but will not have a warm cache.
    activeStandby: true
    # The affinity rules to apply to the mds deployment
    placement:
      #  nodeAffinity:
      #    requiredDuringSchedulingIgnoredDuringExecution:
      #      nodeSelectorTerms:
      #      - matchExpressions:
      #        - key: role
      #          operator: In
      #          values:
      #          - mds-node
      #  topologySpreadConstraints:
      #  tolerations:
      #  - key: mds-node
      #    operator: Exists
      #  podAffinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-mds
            # topologyKey: kubernetes.io/hostname will place MDS across different hosts
            topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-mds
              # topologyKey: */zone can be used to spread MDS across different AZ
              # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> in k8s cluster if your cluster is v1.16 or lower
              # Use <topologyKey: topology.kubernetes.io/zone> in k8s cluster is v1.17 or upper
              topologyKey: topology.kubernetes.io/zone
    # A key/value list of annotations
    annotations:
    #  key: value
    # A key/value list of labels
    labels:
    #  key: value
    resources:
    # The requests and limits set here, allow the filesystem MDS Pod(s) to use half of one CPU core and 1 gigabyte of memory
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
    # priorityClassName: my-priority-class
  mirroring:
    enabled: false
```
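Once the CephFilesystem above is applied, a quick sanity check confirms that the filesystem and its MDS daemons came up. A sketch, where `myfs.yaml` is an assumed file name for the manifest above:

```bash
kubectl apply -f myfs.yaml
# The filesystem should eventually report a Ready phase.
kubectl -n rook-ceph get cephfilesystem myfs
# One active MDS plus one standby, per activeCount: 1 and activeStandby: true.
kubectl -n rook-ceph get pod -l app=rook-ceph-mds
```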




```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where the operator is deployed.
  clusterID: rook-ceph

  # CephFS filesystem name into which the volume shall be created
  fsName: myfs

  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph

reclaimPolicy: Delete
allowVolumeExpansion: true
```
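The StorageClass can then be applied and checked the same way. A sketch; the file name is an assumption:

```bash
kubectl apply -f storageclass-cephfs.yaml
# PROVISIONER should show rook-ceph.cephfs.csi.ceph.com and ALLOWVOLUMEEXPANSION should be true.
kubectl get storageclass rook-cephfs
```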


 


#### **2. Testing**



```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
  labels:
    app: nginx-deploy
spec:
  selector:
    matchLabels:
      app: nginx-deploy
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - name: nginx-deploy
        image: nginx
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: nginx-html-storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: nginx-html-storage
        persistentVolumeClaim:
          claimName: nginx-pv-claim
```

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pv-claim
  labels:
    app: nginx-deploy
spec:
  storageClassName: rook-cephfs
  accessModes:
    - ReadWriteMany  ## What would happen if this were ReadWriteOnce?
  resources:
    requests:
      storage: 10Mi
```



> Test: create the Deployment, modify the page, delete the Deployment, then create a new one. Does it bind to the PVC again, and is the data still there? See the sketch below.
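A kubectl sketch of that test; the manifest file name `nginx-deploy.yaml` is an assumption:

```bash
# Modify the page through any one replica; all replicas share the same CephFS-backed PV (RWX).
kubectl exec deploy/nginx-deploy -- sh -c 'echo "hello cephfs" > /usr/share/nginx/html/index.html'

# Delete only the Deployment; the PVC nginx-pv-claim and its PV remain.
kubectl delete deploy nginx-deploy
kubectl apply -f nginx-deploy.yaml

# The new Pods bind to the same PVC and still serve the modified page.
kubectl exec deploy/nginx-deploy -- cat /usr/share/nginx/html/index.html
```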



### **4. PVC Expansion**


Refer to the CSI (Container Storage Interface) documentation:


**Volume expansion:** see the Ceph CSI section of the Rook documentation (Ceph Docs)



#### Dynamic volume expansion



> Expansion was already enabled when the StorageClass was created (`allowVolumeExpansion: true`).
> 
> Test: in the container's mount directory, run `curl -O <some large file>`; by default the download fails because the volume is too small.
> 
> To fix this, edit the original PVC and increase its requested capacity, as sketched below.
> 
> Note: volumes can only be expanded, never shrunk.
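A sketch of the expansion step, reusing the `nginx-pv-claim` PVC from the test above (the target size is only an example):

```bash
# Raise the requested size; the CephFS CSI driver expands the volume online.
kubectl patch pvc nginx-pv-claim -p '{"spec":{"resources":{"requests":{"storage":"1Gi"}}}}'

# Wait until CAPACITY reflects the new size, then retry the large download inside the Pod.
kubectl get pvc nginx-pv-claim -w
```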


A stateful application (3 replicas) uses block storage: each replica operates on the PV mounted through its own PVC, and its data survives deletion and recreation.

A stateless application (3 replicas) uses shared storage: all replicas operate on the single PV mounted through one shared PVC, and the data likewise survives.



