Deploying Rook Ceph on Kubernetes

Environment: CentOS 7.6, Kubernetes 1.15.3, Rook 1.3.4

Deploying Rook Ceph

1. Deploy Rook Ceph
Download Rook from the official site, extract it, then cd rook-1.3.4/cluster/examples/kubernetes/ceph
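
Note: kb in the commands throughout this article is presumably a shell alias for kubectl; if you want the same shorthand, define it first:

alias kb=kubectl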

Deploy the CRDs

kb apply -f common.yaml

Deploy the operator

kb apply -f operator.yaml

Edit cluster.yaml; the key changes are useAllNodes: false, useAllDevices: false, and the nodes list:

...
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
...
# cluster level config
#        storeType: filestore
#    - name: "172.17.4.301"
#      deviceFilter: "^sd."

    nodes:
    - name: "10.1.1.160"
      devices:
        - name: "sdc"  #ceph osd 使用的磁盘名称
        - name: "sdd"
        - name: "sde"
        - name: "sdf"
        - name: "sdg"
        - name: "sdh"
      resources:
        limits:
          cpu: "5000m"
          memory: "12288Mi"
        requests:
          cpu: "3000m"
          memory: "6144Mi"
      config:
        metadataDevice: "sdb"   # osd 缓存使用的 ssd 硬盘,可以没有
       
    - name: "10.1.1.161"
      devices:
        - name: "sdc"
        - name: "sdd"
        - name: "sde"
        - name: "sdf"
        - name: "sdg"
        - name: "sdh"
      resources:
        limits:
          cpu: "5000m"
          memory: "12288Mi"
        requests:
          cpu: "3000m"
          memory: "6144Mi"
      config:
        metadataDevice: "sdb"
...
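
Before filling in the nodes and devices lists, confirm the actual device names on each node; the sdb..sdh names above are specific to this environment. A quick check (only unmounted raw disks should be handed to Rook):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT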

Deploy the CephCluster

kb apply -f cluster.yaml
[root@k8sGUPMaster01 ceph]# kb get po -n rook-ceph
NAME                                                    READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-829m7                                  3/3     Running     0          3h37m
csi-cephfsplugin-lv9dv                                  3/3     Running     0          3h37m
csi-cephfsplugin-provisioner-6ddffd9ddd-hs4kj           5/5     Running     0          3h37m
...
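
Besides the CSI plugin pods above, one OSD pod per configured disk should eventually reach Running:

kb -n rook-ceph get po -l app=rook-ceph-osd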

If you need to replace an earlier Rook deployment, run the cleanup steps below first:
2. Delete the pods that use PVCs, PVs, or CephFS, then delete the corresponding PVCs and PVs. Also delete the ConfigMaps and Secrets created by the RGW (otherwise they may be impossible to delete later).

3. Remove the Rook deployment from the Kubernetes cluster

Note: adjust cd /root/rook-master/cluster/examples/kubernetes/ceph to match where you extracted Rook

[root@k8sGUPMaster01 ceph]# cat /root/rook-master/cluster/examples/kubernetes/ceph/rook-destroy.sh 
#!/bin/sh
cd /root/rook-master/cluster/examples/kubernetes/ceph
kubectl delete -n rook-ceph cephblockpool replicapool
kubectl delete storageclass rook-ceph-block
kubectl delete -f csi/cephfs/kube-registry.yaml
kubectl delete storageclass csi-cephfs

kubectl -n rook-ceph delete cephcluster rook-ceph

kubectl delete -f operator.yaml
kubectl delete -f common.yaml

kubectl delete -f common.yaml may hang near the end; pressing Ctrl-C and re-running kubectl delete -f common.yaml worked in a quick test, though side effects are unknown.
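
The hang is typically caused by a finalizer left on the CephCluster resource. If re-running the delete does not help, removing the finalizer (as described in the Rook cleanup documentation) usually unblocks it:

kubectl -n rook-ceph patch cephclusters.ceph.rook.io rook-ceph -p '{"metadata":{"finalizers": []}}' --type=merge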

4. Run the cleanup on every OSD node; Ansible makes this convenient

Copy the script to every OSD node

ansible nodes -i inventory/*** -m copy -a "src=zap-disk.sh dest=/tmp/zap-disk.sh"

Run sh /tmp/zap-disk.sh on every OSD node to wipe the disks and configuration files

ansible nodes -i inventory/*** -m shell -a "sh /tmp/zap-disk.sh" --become

The contents of /tmp/zap-disk.sh:

[root@GPU01 ~]# cat /tmp/zap-disk.sh 
#!/usr/bin/env bash
i=1
while [ $i -lt 8 ]   # wipes /dev/sdb through /dev/sdh: the metadata disk plus the six OSD disks
do
  j=`echo $i|awk '{printf "%c",97+$i}'`   # map i=1..7 to drive letters b..h
  DISK="/dev/sd$j"
  sgdisk --zap-all $DISK
  dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
  i=$(($i+1))
done
# These steps only have to be run once on each node
# If rook sets up osds using ceph-volume, teardown leaves some devices mapped that lock the disks.
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
# ceph-volume setup can leave ceph-<UUID> directories in /dev (unnecessary clutter)
rm -rf /dev/ceph-*
rm -rf /var/lib/rook
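
After the script has run, verify on each node that no ceph-* LVM volumes remain on the wiped disks; if any do, re-run the dmsetup remove step above:

lsblk -f | grep ceph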

Notes:

Network problems in the Kubernetes cluster will cause the Rook installation to fail.

Reference:

Cleaning up a Cluster

Deploying the RBD StorageClass

Kubernetes >= 1.13:
[root@k8s01 ceph]# cat storageclass.yaml 
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
    # clusterID is the namespace where the rook cluster is running
    clusterID: rook-ceph
    # Ceph pool into which the RBD image shall be created
    pool: replicapool

    # RBD image format. Defaults to "2".
    imageFormat: "2"

    # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
    imageFeatures: layering

    # The secrets contain Ceph admin credentials.
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph

    # Specify the filesystem type of the volume. If not specified, csi-provisioner
    # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
    # in hyperconverged settings where the volume is mounted on the same node as the osds.
    csi.storage.k8s.io/fstype: ext4

# Delete the rbd volume when a PVC is deleted
reclaimPolicy: Delete

Create the StorageClass

cd /root/rook-1.3.4/cluster/examples/kubernetes/ceph/csi/rbd
kb create -f storageclass.yaml 
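
Before the full MySQL/WordPress test below, a minimal PVC is a quick way to verify the StorageClass works; the name test-rbd-pvc is just an example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply it and confirm it reaches Bound with kb get pvc test-rbd-pvc, then delete it.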
Kubernetes <= 1.12 (flex driver):
[root@k8s01 ceph]# cat /root/rook-1.3.4/cluster/examples/kubernetes/ceph/flex/storageclass.yaml 
#################################################################################################################
# Create a storage class with a pool that sets replication for a production environment.
# A minimum of 3 nodes with OSDs are required in this example since the default failureDomain is host.
#  kubectl create -f storageclass.yaml
#################################################################################################################

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
# Works for Kubernetes 1.14+
allowVolumeExpansion: true
parameters:
  blockPool: replicapool
  # Specify the namespace of the rook cluster from which to create volumes.
  # If not specified, it will use `rook` as the default namespace of the cluster.
  # This is also the namespace where the cluster will be
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
  # (Optional) Specify an existing Ceph user that will be used for mounting storage with this StorageClass.
  #mountUser: user1
  # (Optional) Specify an existing Kubernetes secret name containing just one key holding the Ceph user secret.
  # The secret must exist in each namespace(s) where the storage will be consumed.
  #mountSecret: ceph-user1-secret

Create the StorageClass

cd /root/rook-1.3.4/cluster/examples/kubernetes/ceph/flex/
kb create -f storageclass.yaml 
Testing the RBD StorageClass

Deploy MySQL and WordPress

cd /root/rook-1.3.4/cluster/examples/kubernetes
kb apply -f mysql.yaml
kb apply -f wordpress.yaml
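
Both example manifests request a PVC from the rook-ceph-block StorageClass (in the bundled examples the claims are named mysql-pv-claim and wp-pv-claim); the test passes once the claims bind and the pods start:

kb get pvc
kb get po -l app=wordpress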
Cleaning up

Delete the RBD StorageClass with caution; make sure the data is safe first

kubectl delete -f wordpress.yaml
kubectl delete -f mysql.yaml
kubectl delete -n rook-ceph cephblockpools.ceph.rook.io replicapool
kubectl delete storageclass rook-ceph-block
Deploying the Ceph object storage gateway

1. In object.yaml, set name: gpu-store

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: gpu-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: true
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    securePort:
    instances: 1

kubectl create -f object.yaml

Once the CephObjectStore is applied, the operator creates the necessary pools and related resources and starts the RGW service.
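
The gateway is up once the RGW pod is Running:

kb -n rook-ceph get po -l app=rook-ceph-rgw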

2. Create a bucket (optional)

Edit storageclass-bucket-delete.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: gpu-store
  objectStoreNamespace: rook-ceph
  region: us-east-1

kubectl create -f storageclass-bucket-delete.yaml

Edit object-bucket-claim-delete.yaml and create an ObjectBucketClaim (OBC); for every OBC created, Ceph creates a new bucket

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket

kubectl create -f object-bucket-claim-delete.yaml

3. Test object storage with s3cmd

export AWS_HOST=$(kubectl -n default get cm ceph-bucket -o yaml | grep BUCKET_HOST | awk '{print $2}')
export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret ceph-bucket -o yaml | grep AWS_ACCESS_KEY_ID | awk '{print $2}' | base64 --decode)
export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret ceph-bucket -o yaml | grep AWS_SECRET_ACCESS_KEY | awk '{print $2}' | base64 --decode)
export AWS_ENDPOINT=$AWS_HOST:80
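
Note that the actual bucket name is not rookbucket: because the OBC uses generateBucketName: ceph-bkt, the real name gets a generated suffix. It is published in the same ConfigMap, so it is safer to read it from there and use s3://${BUCKET_NAME} in place of s3://rookbucket below:

export BUCKET_NAME=$(kubectl -n default get cm ceph-bucket -o jsonpath='{.data.BUCKET_NAME}')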

Upload an object

echo "Hello Rook" > /tmp/rookObj
s3cmd put /tmp/rookObj --no-ssl --host=${AWS_HOST} --host-bucket=  s3://rookbucket

Download the object

s3cmd get s3://rookbucket/rookObj /tmp/rookObj-download --no-ssl --host=${AWS_HOST} --host-bucket=
cat /tmp/rookObj-download

Note:

If a resource in the Kubernetes cluster cannot be deleted, do not remove it directly from etcd.

Reference:

Object Storage

Using object storage from outside the cluster

Edit rgw-external.yaml and set rook_object_store: gpu-store

[root@k8s01 ceph]# cat rgw-external.yaml 
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-rgw-my-store-external
  namespace: rook-ceph
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: gpu-store
spec:
  ports:
  - name: rgw
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: gpu-store
  sessionAffinity: None
  type: NodePort

Deploy the NodePort Service

kb create -f rgw-external.yaml
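
Then look up the allocated node port and check that the gateway answers from outside the cluster (the node IP and port below are placeholders):

kb -n rook-ceph get svc rook-ceph-rgw-my-store-external
curl http://<node-ip>:<node-port>   # an XML response from the RGW means it is reachable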
Creating an object storage user

Edit object-user.yaml and set store: gpu-store

[root@k8s01 ceph]# cat object-user.yaml
#################################################################################################################
# Create an object store user for access to the s3 endpoint.
#  kubectl create -f object-user.yaml
#################################################################################################################

apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: uat-user
  namespace: rook-ceph
spec:
  store: gpu-store
  displayName: "auth by uat"

Deploy the CephObjectStoreUser; once it is created, the Rook operator automatically creates the corresponding RGW user for the CephObjectStore

kubectl create -f object-user.yaml

View the RGW user's AccessKey and SecretKey

kubectl -n rook-ceph get secret rook-ceph-object-user-gpu-store-uat-user -o yaml | grep AccessKey | awk '{print $2}' | base64 --decode
kubectl -n rook-ceph get secret rook-ceph-object-user-gpu-store-uat-user -o yaml | grep SecretKey | awk '{print $2}' | base64 --decode
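
With those keys and the NodePort service above, an S3 client can reach the store from outside the cluster; for example with s3cmd (angle-bracket values are placeholders):

s3cmd ls --no-ssl --host=<node-ip>:<node-port> --host-bucket= --access_key=<AccessKey> --secret_key=<SecretKey>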
Logging into the toolbox to run ceph commands

Deploy the toolbox pod

[root@k8s01 ceph]# kb apply -f toolbox.yaml 
deployment.apps/rook-ceph-tools created
[root@k8s01 ceph]# pwd
/root/rook-1.3.4/cluster/examples/kubernetes/ceph

Log in to the toolbox

[root@k8s01 ceph]# kb -n rook-ceph exec -it $(kubectl -n rook-ceph get po -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
[root@rook-ceph-tools-68b66b77db-jtb4q /]# ceph -s
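
Inside the toolbox the usual Ceph inspection commands are available, for example:

ceph status      # overall cluster health
ceph osd status  # per-OSD state and usage
ceph df          # pool-level capacity
rados df         # per-pool object counts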

Ceph dashboard

Enable the NodePort service

cd /root/rook-1.3.4/cluster/examples/kubernetes/ceph
kb apply -f dashboard-external-https.yaml

Get the login password

kb get secret rook-ceph-dashboard-password -n rook-ceph -o jsonpath='{.data.password}' | base64 -d
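
dashboard-external-https.yaml creates a NodePort service named rook-ceph-mgr-dashboard-external-https; look up its port and browse to https://<node-ip>:<node-port>, logging in as admin with the password above:

kb -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https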

Showing the object storage gateway in the dashboard

This could not be made to work by following the Rook 1.3.4 official documentation.
