Rook issues

ceph-volumeattacher: failed rbd single_major check, assuming it's unsupported: failed to check for rbd module single_major param: Failed to complete 'check kmod param': exit status 1. modinfo: ERROR: could not get modinfo from 'rbd': Exec format error

modinfo rbd
filename:       /lib/modules/3.10.0-693.21.1.el7.x86_64/kernel/drivers/block/rbd.ko.xz

Solution:

Install the xz utility inside the container so it can decompress the module, since on newer kernels the module files under /lib/modules are shipped as .xz archives.

Alternatively, decompress rbd.ko.xz on the host and replace rbd.ko.xz with a symlink to the decompressed rbd.ko, so the container never has to decompress anything (sketched below).
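A minimal sketch of the host-side option, assuming the module lives at the path shown in the modinfo output above (back up the compressed file before replacing it):

# Run on the host, not in the container
cd /lib/modules/$(uname -r)/kernel/drivers/block
xz -dk rbd.ko.xz              # decompress while keeping the original; produces rbd.ko
mv rbd.ko.xz rbd.ko.xz.bak    # keep the compressed module as a backup
ln -s rbd.ko rbd.ko.xz        # the container's modinfo now reads the uncompressed module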
------------------------------------------------------------

2018-04-21 04:03:27.450807 I | cephosd: skipping device sdb that is in use (not by rook). fs: , ownPartitions: false

2018-04-21 04:03:27.483015 I | cephosd: configuring osd devices: {"Entries":{}}

Solution:

Use a raw disk with no partitions or existing filesystem on it (a quick way to check and wipe the device is sketched below).
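Before handing a disk to Rook it is worth confirming it really is empty; a hedged sketch for the sdb device from the log above (the last two commands are destructive, so double-check the device name and that no data on it is needed):

lsblk /dev/sdb               # list any existing partitions
blkid /dev/sdb               # show any filesystem or RAID signatures
sgdisk --zap-all /dev/sdb    # wipe GPT/MBR partition tables
wipefs --all /dev/sdb        # clear remaining filesystem signatures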

------------------------------------------------------------

MountVolume.SetUp failed for volume "pvc-f002e1fe-469c-11e8-9dca-90b8d0599f2f" : mount command failed, status: Failure, reason: Rook: Error getting RPC client: error connecting to socket /usr/libexec/kubernetes/kubelet-plugins/volume/exec/rook.io~rook/.rook.sock: dial unix /usr/libexec/kubernetes/kubelet-plugins/volume/exec/rook.io~rook/.rook.sock: connect: no such file or directory

 

Solution:

In rook-operator.yml, point the operator at the FlexVolume plugin directory:

env:
- name: FLEXVOLUME_DIR_PATH
  value: "/var/lib/kubelet/volumeplugins"

Start kubelet with the matching flag:

--volume-plugin-dir=/var/lib/kubelet/volumeplugins
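How the flag reaches kubelet depends on how kubelet is launched; the following is only a sketch assuming a systemd-managed kubelet that reads KUBELET_EXTRA_ARGS from a drop-in (the drop-in path and variable name are assumptions and vary by distribution):

mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-flexvolume.conf   # hypothetical drop-in path
[Service]
Environment="KUBELET_EXTRA_ARGS=--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
EOF
systemctl daemon-reload
systemctl restart kubelet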

------------------------------------------------------------

pod event:

MountVolume.SetUp failed for volume "pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f" : mount command failed, status: Failure, reason: Rook: Mount volume failed: failed to attach volume replicapool/pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f: failed to map image replicapool/pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f cluster rook. failed to map image replicapool/pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f: Failed to complete 'rbd': signal: interrupt. . output: pod has unbound PersistentVolumeClaims (repeated 7 times)

rook-agent logs:

2018-04-23 03:11:28.618916 I | exec: Running command: rbd map replicapool/pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f --id admin --cluster=rook --keyring=/tmp/rook.keyring362403923 -m 10.254.233.241:6790 --conf=/dev/null

2018-04-23 03:12:28.620071 I | exec: Timeout waiting for process rbd to return. Sending interrupt signal to the process
2018-04-23 03:12:28.624230 E | flexdriver: Attach volume replicapool/pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f failed: failed to attach volume replicapool/pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f: failed to map image replicapool/pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f cluster rook. failed to map image replicapool/pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f: Failed to complete 'rbd': signal: interrupt. . output:

Entering the container and running the rbd map command by hand fails with:

rbd: sysfs write failed

Solution:
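One way to narrow this down (a suggestion, not a fix confirmed for this case): rerun the map by hand with the same arguments the agent used and check the kernel log. On a 3.10 kernel the usual culprit is an RBD image feature set that the kernel client does not support, which can be disabled per image; the keyring path below is a placeholder:

# Reproduce the map on the node (same arguments the rook-agent used)
rbd map replicapool/pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f --id admin --cluster=rook \
  -m 10.254.233.241:6790 --keyring=<keyring-file> --conf=/dev/null
dmesg | tail                 # the kernel log usually names the real reason for the sysfs write failure

# If dmesg reports unsupported image features, disable them for this image
# (pass the same --cluster/-m/--keyring arguments as above)
rbd feature disable replicapool/pvc-99dabdb1-46a4-11e8-9dca-90b8d0599f2f \
  exclusive-lock object-map fast-diff deep-flatten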

 

Reposted from: https://www.cnblogs.com/mhc-fly/p/8921169.html
