ceph-deploy osd activate fails with ERROR: error creating empty object store in xxx: Permission denied

While deploying Ceph (the jewel release), ceph-deploy osd activate failed with a permission-denied error:

ERROR: error creating empty object store in /ceph/osd: (13) Permission denied

The full output:

[ceph@admin-node my-cluster]$ ceph-deploy osd activate node2:/ceph/osd node3:/ceph/osd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /bin/ceph-deploy osd activate node2:/ceph/osd node3:/ceph/osd
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x17d2ea8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x17c5cf8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('node2', '/ceph/osd', None), ('node3', '/ceph/osd', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node2:/ceph/osd: node3:/ceph/osd:
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2 
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] activating host node2 disk /ceph/osd
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[node2][DEBUG ] find the location of an executable
[node2][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /ceph/osd
[node2][WARNIN] main_activate: path = /ceph/osd
[node2][WARNIN] activate: Cluster uuid is 46ac86e8-1efe-403c-b735-587f9d76a905
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node2][WARNIN] activate: Cluster name is ceph
[node2][WARNIN] activate: OSD uuid is f7ce1b0e-c579-41b8-ad66-3595b10bb1ca
[node2][WARNIN] activate: OSD id is 0
[node2][WARNIN] activate: Initializing OSD...
[node2][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /ceph/osd/activate.monmap
[node2][WARNIN] got monmap epoch 1
[node2][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /ceph/osd/activate.monmap --osd-data /ceph/osd --osd-journal /ceph/osd/journal --osd-uuid f7ce1b0e-c579-41b8-ad66-3595b10bb1ca --keyring /ceph/osd/keyring --setuser ceph --setgroup ceph
[node2][WARNIN] Traceback (most recent call last):
[node2][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[node2][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5371, in run
[node2][WARNIN]     main(sys.argv[1:])
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5322, in main
[node2][WARNIN]     args.func(args)
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3453, in main_activate
[node2][WARNIN]     init=args.mark_init,
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3273, in activate_dir
[node2][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3378, in activate
[node2][WARNIN]     keyring=keyring,
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2853, in mkfs
[node2][WARNIN]     '--setgroup', get_ceph_group(),
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2800, in ceph_osd_mkfs
[node2][WARNIN]     raise Error('%s failed : %s' % (str(arguments), error))
[node2][WARNIN] ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'0', '--monmap', '/ceph/osd/activate.monmap', '--osd-data', '/ceph/osd', '--osd-journal', '/ceph/osd/journal', '--osd-uuid', u'f7ce1b0e-c579-41b8-ad66-3595b10bb1ca', '--keyring', '/ceph/osd/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed : 2017-11-28 03:56:42.441317 7fd3a1d26800 -1 filestore(/ceph/osd) mkfs: write_version_stamp() failed: (13) Permission denied
[node2][WARNIN] 2017-11-28 03:56:42.441332 7fd3a1d26800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
[node2][WARNIN] 2017-11-28 03:56:42.441379 7fd3a1d26800 -1  ** ERROR: error creating empty object store in /ceph/osd: (13) Permission denied
[node2][WARNIN] 
[node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /ceph/osd
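
To confirm the cause, check who owns the data directory and whether the ceph user can write to it. A quick check on one node, assuming the path from the log above:

# On node2: the OSD data directory is likely still owned by root.
ls -ld /ceph/osd/
# Try writing as the ceph user; this fails with "Permission denied"
# while the directory is owned by root.
sudo -u ceph touch /ceph/osd/.writetest && sudo rm /ceph/osd/.writetest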

The mkfs step runs ceph-osd with --setuser ceph --setgroup ceph (visible in the command logged above), so the OSD data directory must be writable by the ceph user. Whichever directory you deployed the OSD into, give the ceph user ownership of it, on each OSD node:

sudo chown -R ceph:ceph /ceph/osd/
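
After applying the chown on both node2 and node3, re-run the activation from the admin node. The commands below assume the node names and paths from the log above:

# Verify ownership on each OSD node first; the directory and its
# contents should now belong to ceph:ceph.
ls -ld /ceph/osd/

# Then, from the admin node, retry the activation.
ceph-deploy osd activate node2:/ceph/osd node3:/ceph/osd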

