Deploying Ceph with Rook: How To Set Up a Ceph Cluster within Kubernetes

This article walks through deploying a Ceph storage cluster on Kubernetes with Rook in order to persist data. It first explains the stateless nature of Kubernetes containers and the role Rook plays in storage management, then covers the preparation needed before deploying, including hardware and software requirements. It then guides you step by step through setting up Rook, creating a Ceph cluster, and adding block storage, and demonstrates how to create a MongoDB deployment backed by the rook-ceph-block storage class so that its data is persisted. Finally, it introduces the Rook Toolbox for checking the state of the Ceph cluster and troubleshooting. After completing this tutorial, you will know how to set up and manage Ceph storage on Kubernetes.

The author selected the Mozilla Foundation to receive a donation as part of the Write for DOnations program.

Introduction

Kubernetes containers are stateless as a core principle, but data must still be managed, preserved, and made accessible to other services. Stateless means that the container is running in isolation without any knowledge of past transactions, which makes it easy to replace, delete, or distribute the container. However, it also means that data will be lost for certain lifecycle events like restart or deletion.

Rook is a storage orchestration tool that provides a cloud-native, open source solution for a diverse set of storage providers. Rook uses the power of Kubernetes to turn a storage system into self-managing services that provide a seamless experience for saving Kubernetes application or deployment data.

Ceph is a highly scalable distributed-storage solution offering object, block, and file storage. Ceph clusters are designed to run on any hardware using the so-called CRUSH algorithm (Controlled Replication Under Scalable Hashing).

One main benefit of this deployment is that you get the highly scalable storage solution of Ceph without having to configure it manually using the Ceph command line, because Rook automatically handles it. Kubernetes applications can then mount block devices and filesystems from Rook to preserve and monitor their application data.

In this tutorial, you will set up a Ceph cluster using Rook and use it to persist data for a MongoDB database as an example.

Prerequisites

Before you begin this guide, you’ll need the following:

  • A DigitalOcean Kubernetes cluster with at least three nodes that each have 2 vCPUs and 4 GB of Memory. To create a cluster on DigitalOcean and connect to it, see the Kubernetes Quickstart.

  • The kubectl command-line tool installed on a development server and configured to connect to your cluster. You can read more about installing kubectl in its official documentation.

  • A DigitalOcean block storage Volume with at least 100 GB for each node of the cluster you just created—for example, if you have three nodes you will need three Volumes. Select Manually Format rather than automatic and then attach your Volume to the Droplets in your node pool. You can follow the Volumes Quickstart to achieve this.

Step 1 — Setting up Rook

After completing the prerequisite, you have a fully functional Kubernetes cluster with three nodes and three Volumes—you’re now ready to set up Rook.

In this section, you will clone the Rook repository, deploy your first Rook operator on your Kubernetes cluster, and validate the given deployment status. A Rook operator is a container that automatically bootstraps the storage clusters and monitors the storage daemons to ensure the storage clusters are healthy.

First, you will clone the Rook repository, so you have all the resources needed to start setting up your Rook cluster:

  • git clone --single-branch --branch release-1.3 https://github.com/rook/rook.git

This command will clone the Rook repository from GitHub and create a folder named rook in your current directory. Now enter the directory using the following command:

  • cd rook/cluster/examples/kubernetes/ceph

Next, you will continue by creating the common resources needed for your Rook deployment, which you can do by deploying the Kubernetes config file that is available by default in the directory:

  • kubectl create -f common.yaml

The resources you’ve created are mainly CustomResourceDefinitions (CRDs) and define new resources that the operator will later use. They contain resources like the ServiceAccount, Role, RoleBinding, ClusterRole, and ClusterRoleBinding.

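If you are curious which definitions were just added, you can list the CRDs in your cluster; the grep filter is only a convenience that narrows the output to the Ceph-specific ones (Rook's common.yaml also installs a few others):

  • kubectl get crds | grep ceph.rook.io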

Note: This standard file assumes that you will deploy the Rook operator and all Ceph daemons in the same namespace. If you want to deploy the operator in a separate namespace, see the comments throughout the common.yaml file.

After the common resources are created, the next step is to create the Rook operator.

Before deploying the operator.yaml file, you will need to change the CSI_RBD_GRPC_METRICS_PORT variable because your DigitalOcean Kubernetes cluster already uses the standard port by default. Open the file with the following command:

  • nano operator.yaml

Then search for the CSI_RBD_GRPC_METRICS_PORT variable, uncomment it by removing the #, and change the value from port 9001 to 9093:

operator.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  ROOK_CSI_ENABLE_CEPHFS: "true"
  ROOK_CSI_ENABLE_RBD: "true"
  ROOK_CSI_ENABLE_GRPC_METRICS: "true"
  CSI_ENABLE_SNAPSHOTTER: "true"
  CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
  ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"
  # Configure CSI CSI Ceph FS grpc and liveness metrics port
  # CSI_CEPHFS_GRPC_METRICS_PORT: "9091"
  # CSI_CEPHFS_LIVENESS_METRICS_PORT: "9081"
  # Configure CSI RBD grpc and liveness metrics port
  CSI_RBD_GRPC_METRICS_PORT: "9093"
  # CSI_RBD_LIVENESS_METRICS_PORT: "9080"

Once you’re done, save and exit the file.

Next, you can deploy the operator using the following command:

  • kubectl create -f operator.yaml

The command will output the following:


   
   
Output
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created

Again, you're using the kubectl create command with the -f flag to specify the file that you want to apply. It will take a few seconds for the operator to be up and running. You can verify the status using the following command:

  • kubectl get pod -n rook-ceph

You use the -n flag to get the pods of a specific Kubernetes namespace (rook-ceph in this example).

Once the operator deployment is ready, it will trigger the creation of the DaemonSets that are in charge of creating the rook-discovery agents on each worker node of your cluster. You’ll receive output similar to:


   
   
Output
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-599765ff49-fhbz9   1/1     Running   0          92s
rook-discover-6fhlb                   1/1     Running   0          55s
rook-discover-97kmz                   1/1     Running   0          55s
rook-discover-z5k2z                   1/1     Running   0          55s
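If the operator or discover pods do not reach the Running state, the operator's log is usually the quickest place to look. The deployment name below matches the one reported when you applied operator.yaml:

  • kubectl -n rook-ceph logs deployment/rook-ceph-operator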

You have successfully installed Rook and deployed your first operator. Next, you will create a Ceph cluster and verify that it is working.

Step 2 — Creating a Ceph Cluster

Now that you have successfully set up Rook on your Kubernetes cluster, you’ll continue by creating a Ceph cluster within the Kubernetes cluster and verifying its functionality.

First let’s review the most important Ceph components and their functionality:

  • Ceph Monitors, also known as MONs, are responsible for maintaining the maps of the cluster required for the Ceph daemons to coordinate with each other. There should always be more than one MON running to increase the reliability and availability of your storage service.

  • Ceph Managers, also known as MGRs, are runtime daemons responsible for keeping track of runtime metrics and the current state of your Ceph cluster. They run alongside your monitoring daemons (MONs) to provide additional monitoring and an interface to external monitoring and management systems.

  • Ceph Object Store Devices, also known as OSDs, are responsible for storing objects on a local file system and providing access to them over the network. These are usually tied to one physical disk of your cluster. Ceph clients interact with OSDs directly.

To interact with the data of your Ceph storage, a client will first make contact with the Ceph Monitors (MONs) to obtain the current version of the cluster map. The cluster map contains the data storage location as well as the cluster topology. The Ceph clients then use the cluster map to decide which OSD they need to interact with.

Rook enables Ceph storage to run on your Kubernetes cluster. All of these components are running in your Rook cluster and will directly interact with the Rook agents. This provides a more streamlined experience for administering your Ceph cluster by hiding Ceph components like placement groups and storage maps while still providing the options of advanced configurations.

Now that you have a better understanding of what Ceph is and how it is used in Rook, you will continue by setting up your Ceph cluster.

You can complete the setup by either running the example configuration, found in the examples directory of the Rook project, or by writing your own configuration. The example configuration is fine for most use cases and provides excellent documentation of optional parameters.

Now you’ll start the creation process of a Ceph cluster Kubernetes Object.

First, you need to create a YAML file:

  • nano cephcluster.yaml

The configuration defines how the Ceph cluster will be deployed. In this example, you will deploy three Ceph Monitors (MON) and enable the Ceph dashboard. The Ceph dashboard is out of scope for this tutorial, but you can use it later in your own individual project for visualizing the current status of your Ceph cluster.

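If you do explore the dashboard later, it is exposed through a Kubernetes Service created by the operator (typically named rook-ceph-mgr-dashboard, although the exact name and port depend on your configuration). Listing the services in the rook-ceph namespace will show it, and you can then use kubectl port-forward against that Service to open the dashboard in a browser:

  • kubectl -n rook-ceph get service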

Add the following content to define the apiVersion and the Kubernetes Object kind as well as the name and the namespace the Object should be deployed in:

cephcluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph

After that, add the spec key, which defines the model that Kubernetes will use to create your Ceph cluster. You’ll first define the image version you want to use and whether you allow unsupported Ceph versions or not:

cephcluster.yaml
spec:
  cephVersion:
    image: ceph/ceph:v14.2.8
    allowUnsupported: false

Then set the data directory where configuration files will be persisted using the dataDirHostPath key:

cephcluster.yaml
dataDirHostPath: /var/lib/rook

Next, you define if you want to skip upgrade checks and when you want to upgrade your cluster using the following parameters:

cephcluster.yaml
skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false

You configure the number of Ceph Monitors (MONs) using the mon key. The allowMultiplePerNode setting controls whether more than one MON may run on the same node; leaving it at false keeps the MONs spread across separate nodes:

cephcluster.yaml
mon:
    count: 3
    allowMultiplePerNode: false

Options for the Ceph dashboard are defined under the dashboard key. This gives you options to enable the dashboard, customize the port, and prefix it when using a reverse proxy:

cephcluster.yaml
dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
    # serve the dashboard at the given port.
    # port: 8443
    # serve the dashboard using SSL
    ssl: false

You can also enable monitoring of your cluster with the monitoring key (monitoring requires Prometheus to be pre-installed):

cephcluster.yaml
monitoring:
    enabled: false
    rulesNamespace: rook-ceph

RBD stands for RADOS (Reliable Autonomic Distributed Object Store) Block Device; RBDs are thin-provisioned, resizable Ceph block devices that store data across multiple nodes.

RBD images can be asynchronously shared between two Ceph clusters by enabling rbdMirroring. Since we’re working with one cluster in this tutorial, this isn’t necessary. The number of workers is therefore set to 0:

cephcluster.yaml
rbdMirroring:
    workers: 0

You can enable the crash collector for the Ceph daemons:

cephcluster.yaml
crashCollector:
    disable: false

The cleanup policy is only important if you want to delete your cluster. That is why this option has to be left empty:

cephcluster.yaml
cleanupPolicy:
    deleteDataDirOnHosts: ""
  removeOSDsIfOutAndSafeToRemove: false

The storage key lets you define the cluster level storage options; for example, which node and devices to use, the database size, and how many OSDs to create per device:

cephcluster.yaml
storage:
    useAllNodes: true
    useAllDevices: true
    config:
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024"  # uncomment if the disks are 20 GB or smaller

You use the disruptionManagement key to manage daemon disruptions during upgrade or fencing:

cephcluster.yaml
disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api

These configuration blocks will result in the final following file:

cephcluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.8
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
    # serve the dashboard at the given port.
    # port: 8443
    # serve the dashboard using SSL
    ssl: false
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  rbdMirroring:
    workers: 0
  crashCollector:
    disable: false
  cleanupPolicy:
    deleteDataDirOnHosts: ""
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024"  # uncomment if the disks are 20 GB or smaller
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api

Once you’re done, save and exit your file.

You can also customize your deployment by, for example, changing your database size or defining a custom port for the dashboard. You can find more options for your cluster deployment in the cluster example of the Rook repository.

Next, apply this manifest in your Kubernetes cluster:

  • kubectl apply -f cephcluster.yaml

Now check that the pods are running:

  • kubectl get pod -n rook-ceph

This usually takes a couple of minutes, so just refresh until your output reflects something like the following:


   
   
Output
NAME                                                   READY   STATUS    RESTARTS   AGE
csi-cephfsplugin-lz6dn                                 3/3     Running   0          3m54s
csi-cephfsplugin-provisioner-674847b584-4j9jw          5/5     Running   0          3m54s
csi-cephfsplugin-provisioner-674847b584-h2cgl          5/5     Running   0          3m54s
csi-cephfsplugin-qbpnq                                 3/3     Running   0          3m54s
csi-cephfsplugin-qzsvr                                 3/3     Running   0          3m54s
csi-rbdplugin-kk9sw                                    3/3     Running   0          3m55s
csi-rbdplugin-l95f8                                    3/3     Running   0          3m55s
csi-rbdplugin-provisioner-64ccb796cf-8gjwv             6/6     Running   0          3m55s
csi-rbdplugin-provisioner-64ccb796cf-dhpwt             6/6     Running   0          3m55s
csi-rbdplugin-v4hk6                                    3/3     Running   0          3m55s
rook-ceph-crashcollector-pool-33zy7-68cdfb6bcf-9cfkn   1/1     Running   0          109s
rook-ceph-crashcollector-pool-33zyc-565559f7-7r6rt     1/1     Running   0          53s
rook-ceph-crashcollector-pool-33zym-749dcdc9df-w4xzl   1/1     Running   0          78s
rook-ceph-mgr-a-7fdf77cf8d-ppkwl                       1/1     Running   0          53s
rook-ceph-mon-a-97d9767c6-5ftfm                        1/1     Running   0          109s
rook-ceph-mon-b-9cb7bdb54-lhfkj                        1/1     Running   0          96s
rook-ceph-mon-c-786b9f7f4b-jdls4                       1/1     Running   0          78s
rook-ceph-operator-599765ff49-fhbz9                    1/1     Running   0          6m58s
rook-ceph-osd-prepare-pool-33zy7-c2hww                 1/1     Running   0          21s
rook-ceph-osd-prepare-pool-33zyc-szwsc                 1/1     Running   0          21s
rook-ceph-osd-prepare-pool-33zym-2p68b                 1/1     Running   0          21s
rook-discover-6fhlb                                    1/1     Running   0          6m21s
rook-discover-97kmz                                    1/1     Running   0          6m21s
rook-discover-z5k2z                                    1/1     Running   0          6m21s
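Besides watching the pods, you can also query the CephCluster object you just created; once the deployment has settled, its health is reported in the resource's status:

  • kubectl -n rook-ceph get cephcluster rook-ceph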

You have now successfully set up your Ceph cluster and can continue by creating your first storage block.

Step 3 — Adding Block Storage

Block storage allows a single pod to mount storage. In this section, you will create a storage block that you can use later in your applications.

Before Ceph can provide storage to your cluster, you first need to create a storageclass and a cephblockpool. This will allow Kubernetes to interoperate with Rook when creating persistent volumes:

  • kubectl apply -f ./csi/rbd/storageclass.yaml

The command will output the following:


   
   
Output
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

Note: If you’ve deployed the Rook operator in a namespace other than rook-ceph you need to change the prefix in the provisioner to match the namespace you use.

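For reference, here is a trimmed-down sketch of roughly what ./csi/rbd/storageclass.yaml defines: a replicated CephBlockPool named replicapool and a StorageClass named rook-ceph-block whose provisioner is prefixed with the operator namespace. Your copy of the file may differ slightly and contains additional CSI secret parameters that are omitted here:

storageclass.yaml (abridged)
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
reclaimPolicy: Delete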

After successfully deploying the storageclass and cephblockpool, you will continue by defining the PersistentVolumeClaim (PVC) for your application. A PersistentVolumeClaim is a resource used to request storage from your cluster.

For that, you first need to create a YAML file:

  • nano pvc-rook-ceph-block.yaml

Add the following for your PersistentVolumeClaim:

pvc-rook-ceph-block.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

First, you need to set an apiVersion (v1 is the current stable version). Then you need to tell Kubernetes which type of resource you want to define using the kind key (PersistentVolumeClaim in this case).

The spec key defines the model that Kubernetes will use to create your PersistentVolumeClaim. Here you need to select the storage class you created earlier: rook-ceph-block. You can then define the access mode and limit the resources of the claim. ReadWriteOnce means the volume can only be mounted by a single node.

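Before deploying the claim, you can confirm that the storage class from the previous step exists and check which provisioner backs it:

  • kubectl get storageclass rook-ceph-block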

Now that you have defined the PersistentVolumeClaim, it is time to deploy it using the following command:

  • kubectl apply -f pvc-rook-ceph-block.yaml

You will receive the following output:


   
   
Output
persistentvolumeclaim/mongo-pvc created

You can now check the status of your PVC:

  • kubectl get pvc

When the PVC is bound, you are ready:


   
   
Output
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mongo-pvc   Bound    pvc-ec1ca7d1-d069-4d2a-9281-3d22c10b6570   5Gi        RWO            rook-ceph-block   16s

You have now successfully created a storage class and used it to create a PersistentVolumeClaim that you will mount to an application to persist data in the next section.

Step 4 — Creating a MongoDB Deployment with a rook-ceph-block

Now that you have successfully created a storage block and a persistent volume, you will put it to use by implementing it in a MongoDB application.

The configuration will contain a few things:

  • A single container deployment based on the latest version of the mongo image.

  • A persistent volume to preserve the data of the MongoDB database.

  • A service to expose the MongoDB port on port 31017 of every node so you can interact with it later.

First open the configuration file:

  • nano mongo.yaml

Start the manifest with the Deployment resource:

mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - image: mongo:latest
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        persistentVolumeClaim:
          claimName: mongo-pvc

...

For each resource in the manifest, you need to set an apiVersion. For the Deployment, use apiVersion: apps/v1, which is a stable version; the Service further down in this manifest uses the core apiVersion: v1. Then, tell Kubernetes which resource you want to define using the kind key. Each definition should also have a name defined in metadata.name.

The spec section tells Kubernetes what the desired final state of the deployment is. This definition requests that Kubernetes should create one pod with one replica.

Labels are key-value pairs that help you organize and cross-reference your Kubernetes resources. You can define them using metadata.labels and you can later search for them using selector.matchLabels.

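For example, once this Deployment is running, you will be able to address its pods by that label from the command line:

  • kubectl get pods -l app=mongo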

The spec.template key defines the model that Kubernetes will use to create each of your pods. Here you will define the specifics of your pod’s deployment like the image name, container ports, and the volumes that should be mounted. The image will then automatically be pulled from an image registry by Kubernetes.

Here you will use the PersistentVolumeClaim you created earlier to persist the data of the /data/db directory of the pods. You can also specify extra information like environment variables that will help you with further customizing your deployment.

Next, add the following code to the file to define a Kubernetes Service that exposes the MongoDB port on port 31017 of every node in your cluster:

mongo.yaml
...

---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    app: mongo
  type: NodePort
  ports:
    - port: 27017
      nodePort: 31017

Here you also define an apiVersion, but instead of using the Deployment type, you define a Service. The service will receive connections on port 31017 and forward them to the pods’ port 27017, where you can then access the application.

The service uses NodePort as the service type, which will expose the Service on each Node’s IP at a static port between 30000 and 32767 (31017 in this case).

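As an example, once the Service is deployed you could reach the database from outside the cluster with a locally installed MongoDB shell, using the public IP of any node (your_node_ip is a placeholder for one of your node addresses):

  • mongo --host your_node_ip --port 31017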

Now that you have defined the deployment, it is time to deploy it:

  • kubectl apply -f mongo.yaml

You will see the following output:


   
   
Output
deployment.apps/mongo created
service/mongo created

You can check the status of the deployment and service:

  • kubectl get svc,deployments

The output will be something like this:


   
   
Output
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
service/kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP           33m
service/mongo        NodePort    10.245.124.118   <none>        27017:31017/TCP   4m50s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongo   1/1     1            1           4m50s

After the deployment is ready, you can start saving data into your database. The easiest way to do so is by using the MongoDB shell, which is included in the MongoDB pod you just started. You can open it using kubectl.

For that you are going to need the name of the pod, which you can get using the following command:

  • kubectl get pods

The output will be similar to this:


   
   
Output
NAME                     READY   STATUS    RESTARTS   AGE
mongo-7654889675-mjcks   1/1     Running   0          13m

Now copy the name and use it in the exec command:

  • kubectl exec -it your_pod_name mongo

Now that you are in the MongoDB shell let’s continue by creating a database:

  • use test

The use command switches between databases or creates them if they don’t exist.


   
   
Output
switched to db test

Then insert some data into your new test database. You use the insertOne() method to insert a new document in the created database:

  • db.test.insertOne( {name: "test", number: 10 })

   
   
Output
{ "acknowledged" : true, "insertedId" : ObjectId("5f22dd521ba9331d1a145a58") }

The next step is retrieving the data to make sure it is saved, which can be done using the find command on your collection:

  • db.getCollection("test").find()

The output will be similar to this:


   
   
Output
{ "_id" : ObjectId("5f1b18e34e69b9726c984c51"), "name" : "test", "number" : 10 }

Now that you have saved some data into the database, it will be persisted in the underlying Ceph volume structure. One big advantage of this kind of deployment is the dynamic provisioning of the volume. Dynamic provisioning means that applications only need to request the storage and it will be automatically provided by Ceph instead of developers creating the storage manually by sending requests to their storage providers.

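You can see this dynamic provisioning in action by listing the PersistentVolumes in your cluster; you will find a volume that was created automatically and is bound to the mongo-pvc claim:

  • kubectl get pv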

Let’s validate this functionality by restarting the pod and checking if the data is still there. You can do this by deleting the pod, because it will be restarted to fulfill the state defined in the deployment:

  • kubectl delete pod -l app=mongo

Now let’s validate that the data is still there by connecting to the MongoDB shell and printing out the data. For that you first need to get your pod’s name and then use the exec command to open the MongoDB shell:

  • kubectl get pods

The output will be similar to this:


   
   
Output
NAME                     READY   STATUS    RESTARTS   AGE
mongo-7654889675-mjcks   1/1     Running   0          13m

Now copy the name and use it in the exec command:

  • kubectl exec -it your_pod_name mongo

After that, you can retrieve the data by connecting to the database and printing the whole collection:

  • use test

  • db.getCollection("test").find()

The output will look similar to this:


   
   
Output
{ "_id" : ObjectId("5f1b18e34e69b9726c984c51"), "name" : "test", "number" : 10 }

As you can see the data you saved earlier is still in the database even though you restarted the pod. Now that you have successfully set up Rook and Ceph and used them to persist the data of your deployment, let’s review the Rook toolbox and what you can do with it.

Step 5 — Running the Rook Toolbox

The Rook Toolbox is a tool that helps you get the current state of your Ceph deployment and troubleshoot problems when they arise. It also allows you to change your Ceph configurations like enabling certain modules, creating users, or pools.

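For example, from inside the toolbox you could enable a manager module or list the existing Ceph users; both are standard Ceph CLI commands and are shown purely as illustrations, since in a Rook-managed cluster pools and users are usually managed through Rook's CRDs instead:

  • ceph mgr module enable pg_autoscaler
  • ceph auth ls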

In this section, you will install the Rook Toolbox and use it to execute basic commands like getting the current Ceph status.

The toolbox can be started by deploying the toolbox.yaml file, which is in the examples/kubernetes/ceph directory:

  • kubectl apply -f toolbox.yaml

You will receive the following output:


   
   
Output
deployment.apps/rook-ceph-tools created

Now check that the pod is running:

  • kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"

Your output will be similar to this:


   
   
Output
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-7c5bf67444-bmpxc   1/1     Running   0          9s

Once the pod is running you can connect to it using the kubectl exec command:

  • kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash

Let’s break this command down for better understanding:

  1. The kubectl exec command lets you execute commands in a pod, like setting an environment variable or starting a service. Here you use it to open the Bash terminal in the pod. The command that you want to execute is defined at the end of the command.

  2. You use the -n flag to specify the Kubernetes namespace the pod is running in.

  3. The -i (interactive) and -t (tty) flags tell Kubernetes that you want to run the command in interactive mode with tty enabled. This lets you interact with the terminal you open.

  4. $() lets you define an expression in your command. That means that the expression will be evaluated (executed) before the main command and the resulting value will then be passed to the main command as an argument. Here we define another Kubernetes command to get a pod with the label app=rook-ceph-tools and read the name of the pod using jsonpath. We then use the name as an argument for our first command.

Note: As already mentioned this command will open a terminal in the pod, so your prompt will change to reflect this.

Now that you are connected to the pod you can execute Ceph commands for checking the current status or troubleshooting error messages. For example the ceph status command will give you the current health status of your Ceph configuration and more information like the running MONs, the current running data pools, the available and used storage, and the current I/O operations:

  • ceph status

Here is the output of the command:


   
   
Output
  cluster:
    id:     71522dde-064d-4cf8-baec-2f19b6ae89bf
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 23h)
    mgr: a(active, since 23h)
    osd: 3 osds: 3 up (since 23h), 3 in (since 23h)

  data:
    pools:   1 pools, 32 pgs
    objects: 61 objects, 157 MiB
    usage:   3.4 GiB used, 297 GiB / 300 GiB avail
    pgs:     32 active+clean

  io:
    client: 5.3 KiB/s wr, 0 op/s rd, 0 op/s wr

You can also query the status of specific items like your OSDs using the following command:

  • ceph osd status

This will print information about your OSD like the used and available storage and the current state of the OSD:


   
   
Output
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| id |    host    |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | node-3jis6 | 1165M | 98.8G |    0   |     0   |    0   |     0   | exists,up |
| 1  | node-3jisa | 1165M | 98.8G |    0   |  5734   |    0   |     0   | exists,up |
| 2  | node-3jise | 1165M | 98.8G |    0   |     0   |    0   |     0   | exists,up |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+

More information about the available commands and how you can use them to debug your Ceph deployment can be found in the official documentation.

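A few other standard commands that are often useful inside the toolbox are ceph df, which reports capacity per pool, ceph osd tree, which shows how the OSDs map to hosts, and ceph health detail, which expands on any warnings:

  • ceph df
  • ceph osd tree
  • ceph health detail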

You have now successfully set up a complete Rook Ceph cluster on Kubernetes that helps you persist the data of your deployments and share their state between the different pods without having to use some kind of external storage or provision storage manually. You also learned how to start the Rook Toolbox and use it to debug and troubleshoot your Ceph deployment.

Conclusion

In this article, you configured your own Rook Ceph cluster on Kubernetes and used it to provide storage for a MongoDB application. You extracted useful terminology and became familiar with the essential concepts of Rook so you can customize your deployment.

If you are interested in learning more, consider checking out the official Rook documentation and the example configurations provided in the repository for more configuration options and parameters.

You can also try out the other kinds of storage Ceph provides like shared file systems if you want to mount the same volume to multiple pods at the same time.

Translated from: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-ceph-cluster-within-kubernetes-using-rook
