Installing Ceph with Rook to Provide Storage for KubeVirt Virtual Machines (Part 2)

1. Environment preparation:

A Kubernetes environment must already be installed; if not, see: kubernetes单节点安装_闫利朋的博客-CSDN博客

A KubeVirt environment must already be installed; if not, see:

kubernetes安装kubevirt提供虚拟机功能(一)_闫利朋的博客-CSDN博客

Check whether any partition or device already has a filesystem on it:

# lsblk -f 
NAME            FSTYPE      LABEL UUID                                   MOUNTPOINT
sda                                                                      
├─sda1          vfat              2747-4FB6                              /boot/efi
├─sda2          xfs               f8093fc3-9851-4030-9e9c-22609686763b   /boot
└─sda3          LVM2_member       EyMCMm-nGBs-D22f-8w0e-pZLJ-4QGk-GDAhrp 
  ├─centos-root xfs               24a4c7a3-7f57-488b-8004-a8f988e97e54   /
  └─centos-swap swap              7c1714cb-c9a8-4be9-bd43-5ff9ff10969b   
sdb                                                                      
sdc                                                                      
sdd                                                                      
sde                                                                      
If the FSTYPE field is not empty, the device still has a filesystem on it and needs to be cleaned with the following script:

# vim clean.sh 

#!/usr/bin/env bash
DISK="/dev/sde"   # change to match the disk name on your system

# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean)
sgdisk --zap-all $DISK

# Wipe a large portion of the beginning of the disk to remove more LVM metadata that may be present
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync

# SSDs may be better cleaned with blkdiscard instead of dd
blkdiscard $DISK

# Inform the OS of partition table changes
partprobe $DISK
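
When more than one disk needs cleaning, the same script can be looped over all of them. A minimal sketch, assuming sdb through sde are the disks being handed to Ceph:

# vim clean-all.sh

#!/usr/bin/env bash
# Wipe every disk destined for Ceph; adjust the list to match your system.
for DISK in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    sgdisk --zap-all "$DISK"                                        # clear GPT/MBR metadata
    dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync   # wipe leading LVM metadata
    blkdiscard "$DISK" || true                                      # SSD discard; may fail on HDDs
    partprobe "$DISK"                                               # re-read the partition table
done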

2. Ceph installation:

Related resource download: rook-1.8.7相关资源-kubernetes文档类资源-CSDN下载

#  git clone --single-branch --branch v1.8.7 https://github.com/rook/rook.git

# cd rook/deploy/examples/

# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
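
Before creating the cluster, it helps to confirm the operator pod is up. A sketch, assuming the standard app=rook-ceph-operator label:

# kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s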

# kubectl create -f cluster.yaml

Check the installation status:

# kubectl get pod -n rook-ceph 
NAME                                                 READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-9tqf4                               3/3     Running     0          3h32m
csi-cephfsplugin-provisioner-6f54f6c477-b64z5        6/6     Running     0          3h32m
csi-rbdplugin-fgj8p                                  3/3     Running     0          3h32m
csi-rbdplugin-provisioner-6d765b47d5-7rrk9           6/6     Running     0          3h32m
rook-ceph-crashcollector-worker01-5f44fc68d4-cpjpf   1/1     Running     0          3h31m
rook-ceph-mgr-a-6f66b8d9bf-tqv2k                     1/1     Running     0          3h31m
rook-ceph-mon-a-dfd544ccf-t6g8r                      1/1     Running     0          3h32m
rook-ceph-operator-7bf8ff479-4nnql                   1/1     Running     0          3h32m
rook-ceph-osd-0-6d8c5877df-swzzd                     1/1     Running     0          3h31m
rook-ceph-osd-1-77464b6b94-97n55                     1/1     Running     0          3h31m
rook-ceph-osd-2-57f755955b-gpthb                     1/1     Running     0          3h31m
rook-ceph-osd-3-5fb47b9ccb-ml6tf                     1/1     Running     0          3h31m
rook-ceph-osd-prepare-worker01-dzdmm                 0/1     Completed   0          3h31m
rook-ceph-tools-5c6844fcd5-wm6js                     1/1     Running     0          3h28m
Install the toolbox:

# kubectl create -f toolbox.yaml
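
If the exec below fails because the pod is not ready yet, wait for the toolbox deployment to finish rolling out:

# kubectl -n rook-ceph rollout status deploy/rook-ceph-tools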

Exec into the toolbox container and check the Ceph cluster status:

# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

[rook@rook-ceph-tools-5c6844fcd5-wm6js /]$ ceph status
  cluster:
    id:     7d358d90-4176-4a8b-9c8c-f9d979df77a3
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum a (age 3h)
    mgr: a(active, since 3h)
    osd: 4 osds: 4 up (since 3h), 4 in (since 3h)
 
  data:
    pools:   2 pools, 33 pgs
    objects: 18 objects, 99 B
    usage:   22 MiB used, 4.4 TiB / 4.4 TiB avail
    pgs:     33 active+clean
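
Beyond ceph status, a few more commands inside the toolbox give a quick view of OSD and capacity state:

[rook@rook-ceph-tools-5c6844fcd5-wm6js /]$ ceph osd status   # per-OSD up/in state and usage
[rook@rook-ceph-tools-5c6844fcd5-wm6js /]$ ceph df           # cluster-wide and per-pool capacity
[rook@rook-ceph-tools-5c6844fcd5-wm6js /]$ ceph osd tree     # CRUSH hierarchy of hosts and OSDs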

3. Deploy the snapshot CRDs:

# kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml

# kubectl apply -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml

# kubectl apply -f snapshot.storage.k8s.io_volumesnapshots.yaml
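
These three manifests come from the kubernetes-csi external-snapshotter project (client/config/crd). To confirm the CRDs were registered:

# kubectl get crd | grep snapshot.storage.k8s.io

All three CRDs (volumesnapshotclasses, volumesnapshotcontents, volumesnapshots) should be listed.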

4. Install the external snapshotter:

# kubectl apply -f rbac-snapshot-controller.yaml

# kubectl apply -f setup-snapshot-controller.yaml
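
To verify the snapshot controller is running (assuming the upstream manifests, which deploy it into kube-system with the app=snapshot-controller label):

# kubectl -n kube-system get pod -l app=snapshot-controller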

5. Create a StorageClass, PVC, and related objects to verify the setup:

Create the StorageClass:

# kubectl apply -f storageclass-test.yaml 
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

# kubectl get sc 
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   3s

The contents of storageclass-test.yaml (replica size 1 with failureDomain: osd is only suitable for a test or single-node cluster, as the file's own comments warn):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph # namespace:cluster
spec:
  failureDomain: osd
  replicated:
    size: 1
    # Disallow setting pool with replica 1, this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: false
    # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
    # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #targetSizeRatio: .5
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com # driver:namespace:operator
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph # namespace:cluster

  # If you want to use erasure coded pool with RBD, you need to create
  # two pools. one erasure coded and one replicated.
  # You need to specify the replicated pool here in the `pool` parameter, it is
  # used for the metadata of the images.
  # The erasure coded pool must be set as the `dataPool` parameter below.
  #dataPool: ec-data-pool
  pool: replicapool

  # RBD image format. Defaults to "2".
  imageFormat: "2"

  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`.
  csi.storage.k8s.io/fstype: ext4
# uncomment the following to use rbd-nbd as mounter on supported nodes
#mounter: rbd-nbd
allowVolumeExpansion: true
reclaimPolicy: Delete
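
Once applied, the pool can be confirmed from the toolbox; replicapool should be listed with replica size 1:

[rook@rook-ceph-tools-5c6844fcd5-wm6js /]$ ceph osd pool ls detail | grep replicapool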

Create the PVC:

# kubectl apply -f pvc.yaml 
persistentvolumeclaim/rbd-pvc created
# kubectl get pvc 
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rbd-pvc   Bound    pvc-9a010782-3281-4df6-8f59-8e4fdd563f01   1Gi        RWO            rook-ceph-block   4s

The contents of pvc.yaml:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
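
For each bound PVC the CSI driver creates an RBD image in replicapool (ceph-csi names them csi-vol-<uuid>), which can be checked from the toolbox:

[rook@rook-ceph-tools-5c6844fcd5-wm6js /]$ rbd ls -p replicapool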

CSI volume cloning:

# kubectl apply -f pvc-clone.yaml 
persistentvolumeclaim/rbd-pvc-clone created
# kubectl get pvc 
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rbd-pvc         Bound    pvc-9a010782-3281-4df6-8f59-8e4fdd563f01   1Gi        RWO            rook-ceph-block   145m
rbd-pvc-clone   Bound    pvc-c91a5460-b5f4-495f-8328-a137bc3f8721   1Gi        RWO            rook-ceph-block   3s

The contents of pvc-clone.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: rbd-pvc
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Volume snapshots:

Create a snapshot:

# kubectl apply -f snapshotclass.yaml 
volumesnapshotclass.snapshot.storage.k8s.io/csi-rbdplugin-snapclass created

snapshotclass.yaml defines the csi-rbdplugin-snapclass VolumeSnapshotClass; the manifest below is the VolumeSnapshot itself (snapshot.yaml in the Rook examples), which must also be applied:

# kubectl apply -f snapshot.yaml

---
# 1.17 <= K8s <= v1.19
# apiVersion: snapshot.storage.k8s.io/v1beta1
# K8s >= v1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: rbd-pvc
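
After applying, confirm the snapshot is ready before restoring from it; the READYTOUSE column should show true:

# kubectl get volumesnapshot rbd-pvc-snapshot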

Restore from the snapshot:

# kubectl apply -f pvc-restore.yaml 
persistentvolumeclaim/rbd-pvc-restore created
# kubectl get pvc 
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rbd-pvc           Bound    pvc-9a010782-3281-4df6-8f59-8e4fdd563f01   1Gi        RWO            rook-ceph-block   147m
rbd-pvc-clone     Bound    pvc-c91a5460-b5f4-495f-8328-a137bc3f8721   1Gi        RWO            rook-ceph-block   106s
rbd-pvc-restore   Bound    pvc-ca239aa1-5fc5-4cbc-ab8b-25423fce0a0d   1Gi        RWO            rook-ceph-block   4s

The contents of pvc-restore.yaml:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

6. Teardown:

Clean up the resources created on the Ceph cluster:

# kubectl delete -n rook-ceph cephblockpool replicapool
# kubectl delete storageclass rook-ceph-block
Delete the CephCluster custom resource:

# kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"cleanupPolicy":{"confirmation":"yes-really-destroy-data"}}}'

# kubectl -n rook-ceph delete cephcluster rook-ceph

# for CRD in $(kubectl get crd -n rook-ceph | awk '/ceph.rook.io/ {print $1}'); do
    kubectl get -n rook-ceph "$CRD" -o name | \
    xargs -I {} kubectl patch -n rook-ceph {} --type merge -p '{"metadata":{"finalizers": [null]}}'
done

Delete the operator and related resources:

# kubectl delete -f operator.yaml
# kubectl delete -f common.yaml
# kubectl delete -f crds.yaml
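
To confirm the teardown completed, check that the Ceph resources and the namespace are gone:

# kubectl get crd | grep ceph.rook.io   # should return nothing once crds.yaml is deleted
# kubectl get ns rook-ceph              # should eventually report NotFound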

Delete the leftover data on the hosts:

DISK="/dev/sdX"

# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean)
sgdisk --zap-all $DISK

# Wipe a large portion of the beginning of the disk to remove more LVM metadata that may be present
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync

# SSDs may be better cleaned with blkdiscard instead of dd
blkdiscard $DISK

# Inform the OS of partition table changes
partprobe $DISK

# ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
# rm -rf /dev/ceph-*
# rm -rf /dev/mapper/ceph--*

# rm -rf /var/lib/rook/*
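
As a final check, re-run lsblk -f from section 1; the FSTYPE column should now be empty for every disk that was handed to Ceph:

# lsblk -f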

This completes the Ceph installation with Rook. If you run into problems during installation, feel free to leave a comment and discuss.

Reference: Ceph Docs

KubeVirt discussion group: 766168407
