Installing, Deploying, and Testing a Jewel-Release Ceph Cluster on CentOS 7.6


Base environment:

CentOS 7.6

Service layout:

mon: ceph0, ceph2, ceph3 (note: use an odd number of mon nodes)
osd: ceph0, ceph1, ceph2, ceph3
rgw: ceph1
deploy: ceph0
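The layout above can be captured as shell variables and reused in later loops. This is just a bookkeeping sketch (nothing Ceph-specific); it assumes the full idcv-* hostnames used throughout this post, of which the short names above are shorthand.

```shell
# Role layout from the table above, using the full idcv-* hostnames.
mon_nodes="idcv-ceph0 idcv-ceph2 idcv-ceph3"   # odd count, for a clean quorum
osd_nodes="idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3"
rgw_node="idcv-ceph1"
deploy_node="idcv-ceph0"

# Print the layout for a quick review before deploying.
for n in $mon_nodes; do echo "mon: $n"; done
for n in $osd_nodes; do echo "osd: $n"; done
echo "rgw: $rgw_node"
echo "deploy: $deploy_node"
```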

Host resolution (all nodes)


[root@idcv-ceph0 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.20.1.138 idcv-ceph0
172.20.1.139 idcv-ceph1
172.20.1.140 idcv-ceph2
172.20.1.141 idcv-ceph3
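Since every node needs identical entries, they can be generated from a single IP-to-name list rather than edited by hand on each host. A minimal sketch (it only builds and prints the lines; in practice you would append them to /etc/hosts on each node):

```shell
# Build the /etc/hosts lines shown above from one ip=name list.
hosts="172.20.1.138=idcv-ceph0 172.20.1.139=idcv-ceph1 172.20.1.140=idcv-ceph2 172.20.1.141=idcv-ceph3"
entries=""
for h in $hosts; do
  entries="${entries}${h%%=*} ${h##*=}
"
done
printf '%s' "$entries"
```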

NTP time synchronization (all nodes)

[root@idcv-ceph0 ~]# yum install ntp ntpdate ntp-doc
[root@idcv-ceph0 ~]# ntpdate cn.pool.ntp.org

Passwordless SSH login (all nodes; root is used here, but you can create a dedicated account for Ceph instead)

[root@idcv-ceph0 ~]# yum install openssh-server
[root@idcv-ceph0 ~]# ssh-keygen
[root@idcv-ceph0 ~]# ssh-copy-id root@idcv-ceph1
[root@idcv-ceph0 ~]# ssh-copy-id root@idcv-ceph2
[root@idcv-ceph0 ~]# ssh-copy-id root@idcv-ceph3
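The three ssh-copy-id commands above can be looped over the node list. This dry-run sketch only prints the commands it would run (drop the final `printf`/collect step and invoke ssh-copy-id directly to execute for real):

```shell
# Dry-run: build the ssh-copy-id command for each non-deploy node.
nodes="idcv-ceph1 idcv-ceph2 idcv-ceph3"
cmds=""
for n in $nodes; do
  cmds="${cmds}ssh-copy-id root@${n}
"
done
printf '%s' "$cmds"
```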

Update the system (all nodes)

[root@idcv-ceph0 ~]# yum update

Disable the firewall and SELinux

[root@idcv-ceph0 ~]# systemctl disable firewalld
[root@idcv-ceph0 ~]# systemctl stop firewalld
[root@idcv-ceph0 ~]# sed -i 's/enforcing/disabled/g' /etc/selinux/config
[root@idcv-ceph0 ~]# setenforce 0
[root@idcv-ceph0 ~]# yum install yum-plugin-priorities
#If you prefer not to disable the firewall, allow the monitor service plus OSD and MDS traffic
sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent
sudo firewall-cmd --zone=public --add-service=ceph --permanent
sudo firewall-cmd --reload
#If you use iptables instead, open the ports: monitors default to 6789, OSDs to 6800-7300
sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
/sbin/service iptables save
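Equivalently, the default ports mentioned above can be opened by number with firewall-cmd when the named ceph/ceph-mon services are unavailable. A dry-run sketch that only prints the rules it would add:

```shell
# Default Ceph listening ports per the note above: mon 6789/tcp, OSD 6800-7300/tcp.
ports="6789/tcp 6800-7300/tcp"
rules=""
for p in $ports; do
  rules="${rules}firewall-cmd --zone=public --add-port=${p} --permanent
"
done
printf '%s' "$rules"
```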

Install and set up the deploy node

Configure the yum repository. This walkthrough installs the Jewel release; to install a different release, switch the repository accordingly.

[root@idcv-ceph0 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
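yum expands `$basearch` at install time; as a quick sanity check that the Jewel baseurl above is well formed, this sketch resolves it for x86_64 by hand:

```shell
# Resolve $basearch the way yum would on a 64-bit host.
baseurl='http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch'
arch=x86_64
resolved=$(echo "$baseurl" | sed "s/\$basearch/$arch/")
echo "$resolved"
```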

Install the ceph-deploy tool

[root@idcv-ceph0 ~]# yum install ceph-deploy
[root@idcv-ceph0 ~]# ceph-deploy --version
1.5.39
[root@idcv-ceph0 ~]# ceph -v
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

Create the deployment directory and deploy the cluster

[root@idcv-ceph0 ~]# mkdir cluster
[root@idcv-ceph0 ~]# cd cluster
#Create the cluster definition with its member nodes; ceph-deploy generates the configuration file automatically
[root@idcv-ceph0 cluster]# ceph-deploy new idcv-ceph0 idcv-ceph1  idcv-ceph2 idcv-ceph3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy new idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ]  username                      : None
[ceph_deploy.cli][INFO ]  func                          : <function new at 0x7f7c607aa5f0>
[ceph_deploy.cli][INFO ]  verbose                       : False
[ceph_deploy.cli][INFO ]  overwrite_conf                : False
[ceph_deploy.cli][INFO ]  quiet                         : False
[ceph_deploy.cli][INFO ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f7c5ff1bcf8>
[ceph_deploy.cli][INFO ]  cluster                       : ceph
[ceph_deploy.cli][INFO ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO ]  mon                           : ['idcv-ceph0', 'idcv-ceph1', 'idcv-ceph2', 'idcv-ceph3']
[ceph_deploy.cli][INFO ]  public_network                : None
[ceph_deploy.cli][INFO ]  ceph_conf                     : None
[ceph_deploy.cli][INFO ]  cluster_network               : None
[ceph_deploy.cli][INFO ]  default_release               : False
[ceph_deploy.cli][INFO ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /usr/sbin/ip link show
[idcv-ceph0][INFO ] Running command: /usr/sbin/ip addr show
[idcv-ceph0][DEBUG ] IP addresses found: [u'172.20.1.138']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph0
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph0 at 172.20.1.138
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph1][INFO ] Running command: ssh -CT -o BatchMode=yes idcv-ceph1
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ip link show
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ip addr show
[idcv-ceph1][DEBUG ] IP addresses found: [u'172.20.1.139']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph1
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph1 at 172.20.1.139
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph2][INFO ] Running command: ssh -CT -o BatchMode=yes idcv-ceph2
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo /usr/sbin/ip link show
[idcv-ceph2][INFO ] Running command: sudo /usr/sbin/ip addr show
[idcv-ceph2][DEBUG ] IP addresses found: [u'172.20.1.140']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph2
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph2 at 172.20.1.140
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph3][INFO ] Running command: ssh -CT -o BatchMode=yes idcv-ceph3
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo /usr/sbin/ip link show
[idcv-ceph3][INFO ] Running command: sudo /usr/sbin/ip addr show
[idcv-ceph3][DEBUG ] IP addresses found: [u'172.20.1.141']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph3
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph3 at 172.20.1.141
[ceph_deploy.new][DEBUG ] Monitor initial members are ['idcv-ceph0', 'idcv-ceph1', 'idcv-ceph2', 'idcv-ceph3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.20.1.138', '172.20.1.139', '172.20.1.140', '172.20.1.141']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

Deploy the mon service

1. Edit the ceph.conf file
Keep the mon count odd; in this run four mons were listed and one of them failed to deploy (see below). Also set public_network correctly, and raise the allowed clock drift between mons slightly (default 0.05 s, raised here to 2 s).

[root@idcv-ceph0 cluster]# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph1, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.139,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2
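A quick parity check on the member list catches the even-count problem before ceph-deploy does. The heredoc below mirrors the [global] section shown above (in practice you would read /etc/ceph/ceph.conf instead); with four members listed it prints a warning:

```shell
# Count mons in mon_initial_members and warn if the count is even.
conf=$(cat <<'EOF'
[global]
mon_initial_members = idcv-ceph0, idcv-ceph1, idcv-ceph2, idcv-ceph3
EOF
)
mons=$(echo "$conf" | awk -F' = ' '/mon_initial_members/ {print $2}' | tr ',' '\n' | wc -l)
echo "mon count: $mons"
if [ $((mons % 2)) -eq 0 ]; then
  echo "WARNING: even number of monitors; use an odd count to avoid quorum ties"
fi
```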

2. Start deploying the mon service

[root@idcv-ceph0 cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd263377368>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fd26335c6e0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph0 ...
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph0][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] deploying mon to idcv-ceph0
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] remote hostname: idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph0][DEBUG ] create the mon path if it does not exist
[idcv-ceph0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph0/done
[idcv-ceph0][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph0/done
[idcv-ceph0][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring
[idcv-ceph0][DEBUG ] create the monitor keyring file
[idcv-ceph0][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i idcv-ceph0 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring --setuser 167 --setgroup 167
[idcv-ceph0][DEBUG ] ceph-mon: renaming mon.noname-a 172.20.1.138:6789/0 to mon.idcv-ceph0
[idcv-ceph0][DEBUG ] ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph0][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph0 for mon.idcv-ceph0
[idcv-ceph0][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring
[idcv-ceph0][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph0][DEBUG ] create the init path if it does not exist
[idcv-ceph0][INFO ] Running command: systemctl enable ceph.target
[idcv-ceph0][INFO ] Running command: systemctl enable ceph-mon@idcv-ceph0
[idcv-ceph0][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph0.service to /usr/lib/systemd/system/ceph-mon@.service.
[idcv-ceph0][INFO ] Running command: systemctl start ceph-mon@idcv-ceph0
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[idcv-ceph0][DEBUG ] ********************************************************************************
[idcv-ceph0][DEBUG ] status for monitor: mon.idcv-ceph0
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "election_epoch": 0,
[idcv-ceph0][DEBUG ] "extra_probe_peers": [
[idcv-ceph0][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.140:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "monmap": {
[idcv-ceph0][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph0][DEBUG ] "epoch": 0,
[idcv-ceph0][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph0][DEBUG ] "modified": "2018-07-03 11:06:12.249491",
[idcv-ceph0][DEBUG ] "mons": [
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "rank": 0
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/1",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph1",
[idcv-ceph0][DEBUG ] "rank": 1
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/2",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph0][DEBUG ] "rank": 2
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/3",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph0][DEBUG ] "rank": 3
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ]
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "outside_quorum": [
[idcv-ceph0][DEBUG ] "idcv-ceph0"
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "quorum": [],
[idcv-ceph0][DEBUG ] "rank": 0,
[idcv-ceph0][DEBUG ] "state": "probing",
[idcv-ceph0][DEBUG ] "sync_provider": []
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ********************************************************************************
[idcv-ceph0][INFO ] monitor: mon.idcv-ceph0 is running
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph1 ...
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph1][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph1][DEBUG ] get remote short hostname
[idcv-ceph1][DEBUG ] deploying mon to idcv-ceph1
[idcv-ceph1][DEBUG ] get remote short hostname
[idcv-ceph1][DEBUG ] remote hostname: idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph2 ...
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph2][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] deploying mon to idcv-ceph2
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] remote hostname: idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph2][DEBUG ] create the mon path if it does not exist
[idcv-ceph2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph2/done
[idcv-ceph2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph2/done
[idcv-ceph2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring
[idcv-ceph2][DEBUG ] create the monitor keyring file
[idcv-ceph2][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i idcv-ceph2 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring --setuser 167 --setgroup 167
[idcv-ceph2][DEBUG ] ceph-mon: renaming mon.noname-c 172.20.1.140:6789/0 to mon.idcv-ceph2
[idcv-ceph2][DEBUG ] ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph2 for mon.idcv-ceph2
[idcv-ceph2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring
[idcv-ceph2][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph2][DEBUG ] create the init path if it does not exist
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph2
[idcv-ceph2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph2.service to /usr/lib/systemd/system/ceph-mon@.service.
[idcv-ceph2][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[idcv-ceph2][DEBUG ] ********************************************************************************
[idcv-ceph2][DEBUG ] status for monitor: mon.idcv-ceph2
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "election_epoch": 0,
[idcv-ceph2][DEBUG ] "extra_probe_peers": [
[idcv-ceph2][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "monmap": {
[idcv-ceph2][DEBUG ] "created": "2018-07-03 11:06:15.703352",
[idcv-ceph2][DEBUG ] "epoch": 0,
[idcv-ceph2][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph2][DEBUG ] "modified": "2018-07-03 11:06:15.703352",
[idcv-ceph2][DEBUG ] "mons": [
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph2][DEBUG ] "rank": 0
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "rank": 1
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "0.0.0.0:0/2",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph1",
[idcv-ceph2][DEBUG ] "rank": 2
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "0.0.0.0:0/3",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph2][DEBUG ] "rank": 3
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ]
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "outside_quorum": [
[idcv-ceph2][DEBUG ] "idcv-ceph0",
[idcv-ceph2][DEBUG ] "idcv-ceph2"
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "quorum": [],
[idcv-ceph2][DEBUG ] "rank": 1,
[idcv-ceph2][DEBUG ] "state": "probing",
[idcv-ceph2][DEBUG ] "sync_provider": []
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ********************************************************************************
[idcv-ceph2][INFO ] monitor: mon.idcv-ceph2 is running
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph3 ...
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph3][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] deploying mon to idcv-ceph3
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] remote hostname: idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph3][DEBUG ] create the mon path if it does not exist
[idcv-ceph3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph3/done
[idcv-ceph3][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph3/done
[idcv-ceph3][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring
[idcv-ceph3][DEBUG ] create the monitor keyring file
[idcv-ceph3][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i idcv-ceph3 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring --setuser 167 --setgroup 167
[idcv-ceph3][DEBUG ] ceph-mon: renaming mon.noname-d 172.20.1.141:6789/0 to mon.idcv-ceph3
[idcv-ceph3][DEBUG ] ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph3][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph3 for mon.idcv-ceph3
[idcv-ceph3][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring
[idcv-ceph3][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph3][DEBUG ] create the init path if it does not exist
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph3
[idcv-ceph3][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph3.service to /usr/lib/systemd/system/ceph-mon@.service.
[idcv-ceph3][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[idcv-ceph3][DEBUG ] ********************************************************************************
[idcv-ceph3][DEBUG ] status for monitor: mon.idcv-ceph3
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "election_epoch": 1,
[idcv-ceph3][DEBUG ] "extra_probe_peers": [
[idcv-ceph3][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.140:6789/0"
[idcv-ceph3][DEBUG ] ],
[idcv-ceph3][DEBUG ] "monmap": {
[idcv-ceph3][DEBUG ] "created": "2018-07-03 11:06:18.695039",
[idcv-ceph3][DEBUG ] "epoch": 0,
[idcv-ceph3][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph3][DEBUG ] "modified": "2018-07-03 11:06:18.695039",
[idcv-ceph3][DEBUG ] "mons": [
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph3][DEBUG ] "rank": 0
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph3][DEBUG ] "rank": 1
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "rank": 2
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "0.0.0.0:0/2",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph1",
[idcv-ceph3][DEBUG ] "rank": 3
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ]
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "outside_quorum": [],
[idcv-ceph3][DEBUG ] "quorum": [],
[idcv-ceph3][DEBUG ] "rank": 2,
[idcv-ceph3][DEBUG ] "state": "electing",
[idcv-ceph3][DEBUG ] "sync_provider": []
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ********************************************************************************
[idcv-ceph3][INFO ] monitor: mon.idcv-ceph3 is running
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors

3. Note that mon nodes must be an odd count; per the error above, one node did not get the mon service deployed, so idcv-ceph1 has to be removed

[root@idcv-ceph0 cluster]# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph1, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.139,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2
[root@idcv-ceph0 cluster]# ceph mon remove idcv-ceph1
removing mon.idcv-ceph1 at 0.0.0.0:0/1, there will be 3 monitors
[root@idcv-ceph0 cluster]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_ERR
64 pgs are stuck inactive for more than 300 seconds
64 pgs stuck inactive
64 pgs stuck unclean
no osds
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e1: 0 osds: 0 up, 0 in
flags sortbitwise,require_jewel_osds
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating

4. Alternatively, edit the ceph.conf file and rerun the deployment with --overwrite-conf

[root@idcv-ceph0 cluster]# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2
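After trimming to three mons, mon_initial_members and mon_host should stay in lockstep (one address per member, in the same order). A small sketch that compares the list lengths, mirroring the values in the conf above:

```shell
# Compare the lengths of the member and host lists from the conf above.
members="idcv-ceph0, idcv-ceph2, idcv-ceph3"
hosts="172.20.1.138,172.20.1.140,172.20.1.141"
m=$(echo "$members" | tr ',' '\n' | wc -l)
h=$(echo "$hosts" | tr ',' '\n' | wc -l)
echo "members=$m hosts=$h"
```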
[root@idcv-ceph0 cluster]# ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fce9cf7a368>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fce9cf5f6e0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts idcv-ceph0 idcv-ceph2 idcv-ceph3
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph0 ...
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph0][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] deploying mon to idcv-ceph0
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] remote hostname: idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph0][DEBUG ] create the mon path if it does not exist
[idcv-ceph0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph0/done
[idcv-ceph0][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph0][DEBUG ] create the init path if it does not exist
[idcv-ceph0][INFO ] Running command: systemctl enable ceph.target
[idcv-ceph0][INFO ] Running command: systemctl enable ceph-mon@idcv-ceph0
[idcv-ceph0][INFO ] Running command: systemctl start ceph-mon@idcv-ceph0
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[idcv-ceph0][DEBUG ] ********************************************************************************
[idcv-ceph0][DEBUG ] status for monitor: mon.idcv-ceph0
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "election_epoch": 8,
[idcv-ceph0][DEBUG ] "extra_probe_peers": [
[idcv-ceph0][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.140:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "monmap": {
[idcv-ceph0][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph0][DEBUG ] "epoch": 2,
[idcv-ceph0][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph0][DEBUG ] "modified": "2018-07-03 11:21:27.254076",
[idcv-ceph0][DEBUG ] "mons": [
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "rank": 0
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph0][DEBUG ] "rank": 1
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph0][DEBUG ] "rank": 2
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ]
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "outside_quorum": [],
[idcv-ceph0][DEBUG ] "quorum": [
[idcv-ceph0][DEBUG ] 0,
[idcv-ceph0][DEBUG ] 1,
[idcv-ceph0][DEBUG ] 2
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "rank": 0,
[idcv-ceph0][DEBUG ] "state": "leader",
[idcv-ceph0][DEBUG ] "sync_provider": []
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ********************************************************************************
[idcv-ceph0][INFO ] monitor: mon.idcv-ceph0 is running
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph2 ...
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph2][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] deploying mon to idcv-ceph2
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] remote hostname: idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph2][DEBUG ] create the mon path if it does not exist
[idcv-ceph2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph2/done
[idcv-ceph2][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph2][DEBUG ] create the init path if it does not exist
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[idcv-ceph2][DEBUG ] ********************************************************************************
[idcv-ceph2][DEBUG ] status for monitor: mon.idcv-ceph2
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "election_epoch": 8,
[idcv-ceph2][DEBUG ] "extra_probe_peers": [
[idcv-ceph2][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "monmap": {
[idcv-ceph2][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph2][DEBUG ] "epoch": 2,
[idcv-ceph2][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph2][DEBUG ] "modified": "2018-07-03 11:21:27.254076",
[idcv-ceph2][DEBUG ] "mons": [
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph2][DEBUG ] "rank": 0
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "rank": 1
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph2][DEBUG ] "rank": 2
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ]
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "outside_quorum": [],
[idcv-ceph2][DEBUG ] "quorum": [
[idcv-ceph2][DEBUG ] 0,
[idcv-ceph2][DEBUG ] 1,
[idcv-ceph2][DEBUG ] 2
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "rank": 1,
[idcv-ceph2][DEBUG ] "state": "peon",
[idcv-ceph2][DEBUG ] "sync_provider": []
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ********************************************************************************
[idcv-ceph2][INFO ] monitor: mon.idcv-ceph2 is running
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph3 ...
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph3][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] deploying mon to idcv-ceph3
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] remote hostname: idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph3][DEBUG ] create the mon path if it does not exist
[idcv-ceph3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph3/done
[idcv-ceph3][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph3][DEBUG ] create the init path if it does not exist
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[idcv-ceph3][DEBUG ] ********************************************************************************
[idcv-ceph3][DEBUG ] status for monitor: mon.idcv-ceph3
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "election_epoch": 8,
[idcv-ceph3][DEBUG ] "extra_probe_peers": [
[idcv-ceph3][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.140:6789/0"
[idcv-ceph3][DEBUG ] ],
[idcv-ceph3][DEBUG ] "monmap": {
[idcv-ceph3][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph3][DEBUG ] "epoch": 2,
[idcv-ceph3][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph3][DEBUG ] "modified": "2018-07-03 11:21:27.254076",
[idcv-ceph3][DEBUG ] "mons": [
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph3][DEBUG ] "rank": 0
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph3][DEBUG ] "rank": 1
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "rank": 2
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ]
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "outside_quorum": [],
[idcv-ceph3][DEBUG ] "quorum": [
[idcv-ceph3][DEBUG ] 0,
[idcv-ceph3][DEBUG ] 1,
[idcv-ceph3][DEBUG ] 2
[idcv-ceph3][DEBUG ] ],
[idcv-ceph3][DEBUG ] "rank": 2,
[idcv-ceph3][DEBUG ] "state": "peon",
[idcv-ceph3][DEBUG ] "sync_provider": []
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ********************************************************************************
[idcv-ceph3][INFO ] monitor: mon.idcv-ceph3 is running
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.idcv-ceph0
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[ceph_deploy.mon][INFO ] mon.idcv-ceph0 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.idcv-ceph2
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[ceph_deploy.mon][INFO ] mon.idcv-ceph2 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.idcv-ceph3
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[ceph_deploy.mon][INFO ] mon.idcv-ceph3 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpBqY1be
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] fetch remote file
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.admin
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-mds
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-mgr
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-osd
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpBqY1be
[root@idcv-ceph0 cluster]# ls
 ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.lo
 # List the users and keys the cluster has created
 [root@idcv-ceph0 cluster]# ceph auth list
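The mon_status JSON that ceph-deploy printed above can also be checked programmatically once captured (e.g. from the admin socket). A minimal sketch, not part of the original deployment, using data abridged from the log output above:

```python
import json

# Abridged mon_status output, copied from the ceph-deploy log above
mon_status = json.loads("""
{
  "monmap": {
    "mons": [
      {"addr": "172.20.1.138:6789/0", "name": "idcv-ceph0", "rank": 0},
      {"addr": "172.20.1.140:6789/0", "name": "idcv-ceph2", "rank": 1},
      {"addr": "172.20.1.141:6789/0", "name": "idcv-ceph3", "rank": 2}
    ]
  },
  "quorum": [0, 1, 2],
  "state": "peon"
}
""")

mons = mon_status["monmap"]["mons"]
quorum = mon_status["quorum"]

# A healthy cluster has every monitor rank present in the quorum list
assert sorted(m["rank"] for m in mons) == sorted(quorum)
print("quorum OK:", [m["name"] for m in mons if m["rank"] in quorum])
```

This is the same check ceph-deploy performs when it reports "all initial monitors are running and have formed quorum".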

Deploy the OSD role

There are two ways to create an OSD:

a. Use a raw disk as the storage space:
[root@idcv-ceph0 cluster]# ceph-deploy disk zap node1:/dev/sdb node2:/dev/sdb ## zap wipes the partition table and disk contents
[root@idcv-ceph0 cluster]# ceph-deploy osd prepare node1:/dev/sdb node2:/dev/sdb
[root@idcv-ceph0 cluster]# ceph-deploy osd activate node1:/dev/sdb node2:/dev/sdb
b. Use an existing filesystem, with a directory or partition as the storage space. The official recommendation is to give each OSD and its journal a dedicated disk or partition:
[root@idcv-ceph0 cluster]# ssh node1 "mkdir /data/osd0; chown -R ceph:ceph /data/osd0"
[root@idcv-ceph0 cluster]# ssh node2 "mkdir /data/osd0; chown -R ceph:ceph /data/osd0"
[root@idcv-ceph0 cluster]# ceph-deploy osd prepare node1:/data/osd0 node2:/data/osd0
[root@idcv-ceph0 cluster]# ceph-deploy osd activate node1:/data/osd0 node2:/data/osd0

This test uses raw disks: prepare first, then activate.
ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk
ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1
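For reference, the journal and filesystem parameters visible in the prepare log below (a 5120 MB colocated journal, mkfs with `-i size=2048`, and `noatime,inode64` mount options) are the Jewel defaults. They could be pinned explicitly in ceph.conf; this is a hedged sketch of such a fragment, not taken from this cluster's actual configuration:

```ini
[osd]
# 5 GB journal colocated on the data disk, matching "journal partition num 2 size 5120" in the log
osd journal size = 5120
# xfs options matching the log: mkfs -t xfs -f -i size=2048, mount -o noatime,inode64
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = noatime,inode64
```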

[root@idcv-ceph0 cluster]# ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] disk : [('idcv-ceph0', '/dev/sdb', None), ('idcv-ceph1', '/dev/sdb', None), ('idcv-ceph2', '/dev/sdb', None), ('idcv-ceph3', '/dev/sdb', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f103c7f35a8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f103c846f50>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : True
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks idcv-ceph0:/dev/sdb: idcv-ceph1:/dev/sdb: idcv-ceph2:/dev/sdb: idcv-ceph3:/dev/sdb:
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph0 disk /dev/sdb journal None activate False
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph0][WARNIN] Caution: invalid backup GPT header, but valid main header; regenerating
[idcv-ceph0][WARNIN] backup header from main header.
[idcv-ceph0][WARNIN]
[idcv-ceph0][DEBUG ] ****************************************************************************
[idcv-ceph0][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[idcv-ceph0][DEBUG ] verification and recovery are STRONGLY recommended.
[idcv-ceph0][DEBUG ] ****************************************************************************
[idcv-ceph0][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph0][DEBUG ] other utilities.
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph0][DEBUG ] Creating new GPT entries.
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:ca6594bd-a4b2-4be7-9aa5-69ba91ce7441 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph0][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441
[idcv-ceph0][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:3b210c8e-b2ac-4266-9e59-623c031ebb89 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph0][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph0][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph0][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph0][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph0][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph0][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph0][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph0][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph0][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph0][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph0][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.kvs_nq with options noatime,inode64
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/ceph_fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/ceph_fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/magic.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/magic.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/journal_uuid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/journal_uuid.2933.tmp
[idcv-ceph0][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.kvs_nq/journal -> /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph0][DEBUG ] Warning: The kernel is still using the old partition table.
[idcv-ceph0][DEBUG ] The new table will be used at the next reboot.
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph0][INFO ] checking OSD status...
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph0 is now ready for osd use.
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph1 disk /dev/sdb journal None activate False
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph1][DEBUG ] Creating new GPT entries.
[idcv-ceph1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph1][DEBUG ] other utilities.
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph1][DEBUG ] Creating new GPT entries.
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:09dad07a-985e-4733-a228-f7b1105b7385 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph1][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385
[idcv-ceph1][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:2809f370-e6ad-4d29-bf6b-57fe1f2004c6 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph1][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph1][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph1][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph1][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph1][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph1][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.HAg1vC with options noatime,inode64
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/ceph_fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/ceph_fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/magic.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/magic.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/journal_uuid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/journal_uuid.2415.tmp
[idcv-ceph1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.HAg1vC/journal -> /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph1][INFO ] checking OSD status...
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph1 is now ready for osd use.
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph2 disk /dev/sdb journal None activate False
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph2][DEBUG ] Creating new GPT entries.
[idcv-ceph2][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph2][DEBUG ] other utilities.
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph2][DEBUG ] Creating new GPT entries.
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:857f0966-30d5-4ad1-9e0c-abff0fbbbc4e --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph2][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e
[idcv-ceph2][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:dac63cc2-6876-4004-ba3b-7786be39d392 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph2][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph2][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph2][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph2][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph2][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph2][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph2][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph2][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph2][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.jhzVmR with options noatime,inode64
[idcv-ceph2][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/ceph_fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/ceph_fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/magic.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/magic.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/journal_uuid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/journal_uuid.2354.tmp
[idcv-ceph2][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.jhzVmR/journal -> /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph2][DEBUG ] Warning: The kernel is still using the old partition table.
[idcv-ceph2][DEBUG ] The new table will be used at the next reboot.
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph2][INFO ] checking OSD status...
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph2 is now ready for osd use.
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph3 disk /dev/sdb journal None activate False
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph3][DEBUG ] Creating new GPT entries.
[idcv-ceph3][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph3][DEBUG ] other utilities.
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph3][DEBUG ] Creating new GPT entries.
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:52677a68-3cf4-4d9a-b2d4-8c823e1cb901 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph3][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901
[idcv-ceph3][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:a85b0288-85ce-4887-8249-497ba880fe10 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph3][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph3][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph3][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph3][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph3][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph3][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph3][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph3][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph3][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph3][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph3][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.gjITlj with options noatime,inode64
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/ceph_fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/ceph_fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/magic.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/magic.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/journal_uuid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/journal_uuid.2372.tmp
[idcv-ceph3][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.gjITlj/journal -> /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph3][DEBUG ] Warning: The kernel is still using the old partition table.
[idcv-ceph3][DEBUG ] The new table will be used at the next reboot.
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph3][INFO ] checking OSD status...
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph3 is now ready for osd use.
# Activate the prepared OSDs
[root@idcv-ceph0 cluster]# ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1

2. A check shows that idcv-ceph1 was not added

[root@idcv-ceph0 cluster]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 99.5G 0 part
└─centos-root 253:0 0 99.5G 0 lvm /
sdb 8:16 0 100G 0 disk
├─sdb1 8:17 0 95G 0 part /var/lib/ceph/osd/ceph-0
└─sdb2 8:18 0 5G 0 part
sr0 11:0 1 1024M 0 rom
[root@idcv-ceph0 cluster]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_OK
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e14: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v27: 64 pgs, 1 pools, 0 bytes data, 0 objects
100 MB used, 284 GB / 284 GB avail
64 active+clean
[root@idcv-ceph0 cluster]#
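Four hosts were prepared, yet `ceph -s` reports only three OSDs. A small scripted sanity check can catch this early. The sketch below parses the num_osds field from the JSON that ceph osd stat --format=json prints (the same command ceph-deploy runs in the logs above); the sample JSON string here is an assumed stand-in for live output, and the exact field layout may differ between Ceph releases.

```shell
# Sanity check (sketch): compare the number of OSDs the cluster reports
# against the number of hosts we ran "osd prepare" on. On a live node you
# would capture the JSON with:  osd_stat=$(ceph osd stat --format=json)
# The string below is a sample standing in for that output.
osd_stat='{"epoch":14,"num_osds":3,"num_up_osds":3,"num_in_osds":3,"full":false,"nearfull":false}'

expected=4   # four hosts were prepared: ceph0..ceph3
actual=$(printf '%s' "$osd_stat" | sed -n 's/.*"num_osds":\([0-9]*\).*/\1/p')

if [ "$actual" -lt "$expected" ]; then
    echo "WARNING: only $actual of $expected OSDs registered"
else
    echo "all $expected OSDs registered"
fi
```

Here the check prints a warning, which matches what we see in the cluster: idcv-ceph1 never registered its OSD.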

3. Assign the OSD role to the node with the following method

[root@idcv-ceph0 cluster]# ceph-deploy install --no-adjust-repos --osd idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy install --no-adjust-repos --osd idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f19c0ebd440>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : False
[ceph_deploy.cli][INFO ] func : <function install at 0x7f19c1f96d70>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['idcv-ceph1']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : True
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts idcv-ceph1
[ceph_deploy.install][DEBUG ] Detecting platform for host idcv-ceph1 ...
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph1][INFO ] installing Ceph on idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo yum clean all
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[idcv-ceph1][DEBUG ] Cleaning up everything
[idcv-ceph1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[idcv-ceph1][DEBUG ] Cleaning up list of fastest mirrors
[idcv-ceph1][INFO ] Running command: sudo yum -y install ceph
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Determining fastest mirrors
[idcv-ceph1][DEBUG ] * base: mirrors.tuna.tsinghua.edu.cn
[idcv-ceph1][DEBUG ] * epel: mirrors.huaweicloud.com
[idcv-ceph1][DEBUG ] * extras: mirror.bit.edu.cn
[idcv-ceph1][DEBUG ] * updates: mirrors.huaweicloud.com
[idcv-ceph1][DEBUG ] 12 packages excluded due to repository priority protections
[idcv-ceph1][DEBUG ] Package 1:ceph-10.2.10-0.el7.x86_64 already installed and latest version
[idcv-ceph1][DEBUG ] Nothing to do
[idcv-ceph1][INFO ] Running command: sudo ceph --version
[idcv-ceph1][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

4. The OSD role still could not be installed on node ceph1, so we reinitialize ceph1 and add it again

ceph-deploy purge <node>
ceph-deploy purgedata <node>
These remove the installed packages and any residual data.
ceph-deploy install --no-adjust-repos --osd ceph1
This reinstalls the packages and assigns the OSD storage role; after that the OSD can be added again.
The exact steps:
ceph-deploy purge idcv-ceph1
ceph-deploy purgedata idcv-ceph1
ceph-deploy install --no-adjust-repos --osd idcv-ceph1
ceph-deploy --overwrite-conf osd prepare idcv-ceph1:/dev/sdb
ceph-deploy --overwrite-conf osd activate idcv-ceph1:/dev/sdb1
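The same sequence can be wrapped in a small helper so it can be replayed for any node. This is only a sketch: the DRY_RUN guard (my addition, not part of ceph-deploy) makes it print the commands instead of executing them; set DRY_RUN=0 on the deploy node to run them for real. The node name and disk are parameters, not fixed values.

```shell
# Sketch: replay the purge / reinstall / prepare / activate sequence for a
# node. With DRY_RUN=1 (the default here) the commands are only printed.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

readd_osd() {
    node=$1
    disk=$2
    run ceph-deploy purge "$node"
    run ceph-deploy purgedata "$node"
    run ceph-deploy install --no-adjust-repos --osd "$node"
    run ceph-deploy --overwrite-conf osd prepare "$node:$disk"
    run ceph-deploy --overwrite-conf osd activate "$node:${disk}1"
}

readd_osd idcv-ceph1 /dev/sdb
```

Note the helper assumes the journal layout used in this walkthrough (data on partition 1 of the given disk); adjust if your partitioning differs.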

5. With the OSD deployed successfully, check the cluster status

[root@idcv-ceph0 cluster]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_OK
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e27: 4 osds: 4 up, 4 in
flags sortbitwise,require_jewel_osds
pgmap v64: 104 pgs, 6 pools, 1588 bytes data, 171 objects
138 MB used, 379 GB / 379 GB avail
104 active+clean

Deploying the RGW service

1. Deploy ceph1 as the gateway service node

[root@idcv-ceph0 cluster]# ceph-deploy install --no-adjust-repos --rgw idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy install --no-adjust-repos --rgw idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fba6af12440>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : False
[ceph_deploy.cli][INFO ] func : <function install at 0x7fba6bfe9d70>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['idcv-ceph1']
[ceph_deploy.cli][INFO ] install_rgw : True
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts idcv-ceph1
[ceph_deploy.install][DEBUG ] Detecting platform for host idcv-ceph1 ...
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph1][INFO ] installing Ceph on idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo yum clean all
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[idcv-ceph1][DEBUG ] Cleaning up everything
[idcv-ceph1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[idcv-ceph1][DEBUG ] Cleaning up list of fastest mirrors
[idcv-ceph1][INFO ] Running command: sudo yum -y install ceph-radosgw
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Determining fastest mirrors
[idcv-ceph1][DEBUG ] * base: mirrors.aliyun.com
[idcv-ceph1][DEBUG ] * epel: mirrors.aliyun.com
[idcv-ceph1][DEBUG ] * extras: mirrors.aliyun.com
[idcv-ceph1][DEBUG ] * updates: mirror.bit.edu.cn
[idcv-ceph1][DEBUG ] 12 packages excluded due to repository priority protections
[idcv-ceph1][DEBUG ] Resolving Dependencies
[idcv-ceph1][DEBUG ] --> Running transaction check
[idcv-ceph1][DEBUG ] ---> Package ceph-radosgw.x86_64 1:10.2.10-0.el7 will be installed
[idcv-ceph1][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Dependencies Resolved
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] ================================================================================
[idcv-ceph1][DEBUG ] Package Arch Version Repository Size
[idcv-ceph1][DEBUG ] ================================================================================
[idcv-ceph1][DEBUG ] Installing:
[idcv-ceph1][DEBUG ] ceph-radosgw x86_64 1:10.2.10-0.el7 Ceph 266 k
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Transaction Summary
[idcv-ceph1][DEBUG ] ================================================================================
[idcv-ceph1][DEBUG ] Install 1 Package
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Total download size: 266 k
[idcv-ceph1][DEBUG ] Installed size: 795 k
[idcv-ceph1][DEBUG ] Downloading packages:
[idcv-ceph1][DEBUG ] Running transaction check
[idcv-ceph1][DEBUG ] Running transaction test
[idcv-ceph1][DEBUG ] Transaction test succeeded
[idcv-ceph1][DEBUG ] Running transaction
[idcv-ceph1][DEBUG ] Installing : 1:ceph-radosgw-10.2.10-0.el7.x86_64 1/1
[idcv-ceph1][DEBUG ] Verifying : 1:ceph-radosgw-10.2.10-0.el7.x86_64 1/1
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Installed:
[idcv-ceph1][DEBUG ] ceph-radosgw.x86_64 1:10.2.10-0.el7
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Complete!
[idcv-ceph1][INFO ] Running command: sudo ceph --version
[idcv-ceph1][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

2. Set up idcv-ceph1 as an admin gateway

[root@idcv-ceph0 cluster]# ceph-deploy admin idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy admin idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5f91222fc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['idcv-ceph1']
[ceph_deploy.cli][INFO ] func : <function admin at 0x7f5f9234f9b0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to idcv-ceph1
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

3. Create the gateway instance on idcv-ceph1

[root@idcv-ceph0 cluster]# ceph-deploy rgw create idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy rgw create idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] rgw : [('idcv-ceph1', 'rgw.idcv-ceph1')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6c86f85128>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function rgw at 0x7f6c8805a7d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts idcv-ceph1:rgw.idcv-ceph1
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph1][WARNIN] rgw keyring does not exist yet, creating one
[idcv-ceph1][DEBUG ] create a keyring file
[idcv-ceph1][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph1][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.idcv-ceph1 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.idcv-ceph1/keyring
[idcv-ceph1][INFO ] Running command: sudo systemctl enable ceph-radosgw@rgw.idcv-ceph1
[idcv-ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.idcv-ceph1.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[idcv-ceph1][INFO ] Running command: sudo systemctl start ceph-radosgw@rgw.idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host idcv-ceph1 and default port 7480

4. Test the gateway service

[root@idcv-ceph0 cluster]# curl 172.20.1.139:7480
 <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

Functional and performance testing of the Jewel Ceph cluster

Test goals:

Map and mount block storage with rbd and test performance
Map and mount striped block storage with rbd-nbd and test performance
Test object storage reads and writes with S3 Browser
Mount object storage with s3fs
Write via the object interface, then read via the block interface

I. Map and mount block storage with rbd and test performance

1. Create images

[root@idcv-ceph0 cluster]# ceph osd pool create test_pool 100
pool 'test_pool' created
[root@idcv-ceph0 cluster]# rados lspools
rbd
.rgw.root
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
default.rgw.users.keys
default.rgw.buckets.index
default.rgw.buckets.data
test_pool
[root@idcv-ceph0 cluster]# rbd list
[root@idcv-ceph0 cluster]# rbd create test_pool/testimage1 --size 40960
[root@idcv-ceph0 cluster]# rbd create test_pool/testimage2 --size 40960
[root@idcv-ceph0 cluster]# rbd create test_pool/testimage3 --size 40960
[root@idcv-ceph0 cluster]# rbd create test_pool/testimage4 --size 40960
[root@idcv-ceph0 cluster]# rbd list
[root@idcv-ceph0 cluster]# rbd list test_pool
testimage1
testimage2
testimage3
testimage4

2. Map the images

[root@idcv-ceph0 cluster]# rbd map test_pool/testimage1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address
[root@idcv-ceph0 cluster]# dmesg |tail
[113320.926463] rbd: loaded (major 252)
[113320.931044] libceph: mon2 172.20.1.141:6789 session established
[113320.931364] libceph: client4193 fsid 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[113320.936922] rbd: image testimage1: image uses unsupported features: 0x38
[113339.870548] libceph: mon1 172.20.1.140:6789 session established
[113339.870906] libceph: client4168 fsid 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[113339.877109] rbd: image testimage1: image uses unsupported features: 0x38
[113381.405453] libceph: mon2 172.20.1.141:6789 session established
[113381.405784] libceph: client4202 fsid 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[113381.411625] rbd: image testimage1: image uses unsupported features: 0x38

Fix for the error: disable the newer image features

[root@idcv-ceph0 cluster]# rbd info test_pool/testimage1
rbd image 'testimage1':
size 40960 MB in 10240 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10802ae8944a
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
[root@idcv-ceph0 cluster]# rbd feature disable test_pool/testimage1
rbd: at least one feature name must be specified
[root@idcv-ceph0 cluster]# rbd feature disable test_pool/testimage1 fast-diff
[root@idcv-ceph0 cluster]# rbd feature disable test_pool/testimage1 object-map
[root@idcv-ceph0 cluster]# rbd feature disable test_pool/testimage1 exclusive-lock
[root@idcv-ceph0 cluster]# rbd feature disable test_pool/testimage1 deep-flatten
[root@idcv-ceph0 cluster]# rbd info test_pool/testimage1
rbd image 'testimage1':
size 40960 MB in 10240 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10802ae8944a
format: 2
features: layering
flags:
[root@idcv-ceph0 cluster]# rbd map test_pool/testimage1
/dev/rbd0

同理操作testimage2\3\4,最终如下
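
The per-image, per-feature repetition can also be scripted. A minimal dry-run sketch (it only prints the commands; delete the leading `echo` to execute them against the cluster):

```bash
# print the rbd feature-disable commands for the remaining images;
# remove the 'echo' to actually run them
for img in testimage2 testimage3 testimage4; do
  for feat in fast-diff object-map exclusive-lock deep-flatten; do
    echo rbd feature disable "test_pool/$img" "$feat"
  done
done
```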

[root@idcv-ceph0 cluster]# rbd info test_pool/testimage2
[root@idcv-ceph0 cluster]# rbd info test_pool/testimage3
[root@idcv-ceph0 cluster]# rbd info test_pool/testimage4
[root@idcv-ceph0 cluster]# rbd showmapped
id pool image snap device
0 test_pool testimage1 - /dev/rbd0
1 test_pool testimage2 - /dev/rbd1
2 test_pool testimage3 - /dev/rbd2
3 test_pool testimage4 - /dev/rbd3

Note: shrinking the image size

[root@idcv-ceph0 ceph-disk0]#  rbd resize -p test_pool --image testimage1 -s 10240  --allow-shrink
Resizing image: 100% complete...done.
[root@idcv-ceph0 ceph-disk0]#  rbd resize -p test_pool --image testimage2 -s 10240  --allow-shrink
Resizing image: 100% complete...done.
[root@idcv-ceph0 ceph-disk0]#  rbd resize -p test_pool --image testimage3 -s 10240  --allow-shrink
Resizing image: 100% complete...done.
[root@idcv-ceph0 ceph-disk0]#  rbd resize -p test_pool --image testimage4 -s 10240  --allow-shrink
Resizing image: 100% complete...done.

3. Format and mount

[root@idcv-ceph0 ceph-disk0]# mkfs.xfs /dev/rbd0
[root@idcv-ceph0 ceph-disk0]# mount /dev/rbd0 /mnt/ceph-disk0

4. dd test

[root@idcv-ceph0 ceph-disk0]# dd if=/dev/zero of=/mnt/ceph-disk0/file0 count=1000 bs=4M conv=fsync
1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB) copied, 39.1407 s, 107 MB/s

II. Map and mount striped block storage with rbd-nbd and test performance

1. Create images
Per the official docs, striping tests require the --stripe-unit and --stripe-count parameters.
The plan: test block performance with object-size 4M and 4K at stripe count 1, and object-size 32M at stripe counts 8 and 16.
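
The stripe geometry can be sanity-checked with shell arithmetic before creating images: stripe_unit times stripe_count is the full stripe width, and object_size divided by stripe_unit is how many stripe units fit in one RADOS object (the values below are testimage8's parameters):

```bash
# stripe geometry for testimage8: object-size 32M, --stripe-unit 2097152, --stripe-count 16
object_size=$((32 * 1024 * 1024))
stripe_unit=2097152
stripe_count=16

# bytes written across the object set before wrapping back to the first object
stripe_width=$((stripe_unit * stripe_count))
echo "stripe width:     $stripe_width bytes"
echo "units per object: $((object_size / stripe_unit))"
```

Note this stripe width (33554432 bytes) is exactly the value the kernel later reports as the expected stripe unit when the map fails, which is the symptom of the kernel rbd driver's striping limitation.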

[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage5 --size 10240 --stripe-unit 2097152 --stripe-count 16
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage5
rbd image 'testimage5':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10c52ae8944a
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
stripe unit: 2048 kB
stripe count: 16
[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage6 --size 10240 --stripe-unit 4096 --stripe-count 4
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage6
rbd image 'testimage6':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10c82ae8944a
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
stripe unit: 4096 bytes
stripe count: 4
[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage7 --size 10240 --object-size 32M --stripe-unit 4194304 --stripe-count 4
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage7
rbd image 'testimage7':
size 10240 MB in 320 objects
order 25 (32768 kB objects)
block_name_prefix: rbd_data.107e238e1f29
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
stripe unit: 4096 kB
stripe count: 4
[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage8 --size 10240 --object-size 32M --stripe-unit 2097152 --stripe-count 16
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage8
rbd image 'testimage8':
size 10240 MB in 320 objects
order 25 (32768 kB objects)
block_name_prefix: rbd_data.109d2ae8944a
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
stripe unit: 2048 kB
stripe count: 16

[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage11 --size 10240 --object-size 4M
[root@idcv-ceph0 ceph-disk0]# rbd create test_pool/testimage12 --size 10240 --object-size 4K
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage11
rbd image 'testimage11':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10ac238e1f29
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
[root@idcv-ceph0 ceph-disk0]# rbd info test_pool/testimage12
rbd image 'testimage12':
size 10240 MB in 2621440 objects
order 12 (4096 bytes objects)
block_name_prefix: rbd_data.10962ae8944a
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:

2. Map the images

[root@idcv-ceph2 mnt]# rbd map test_pool/testimage8
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (22) Invalid argument
[root@idcv-ceph2 mnt]# dmesg |tail
[118760.024660] XFS (rbd0): Log I/O Error Detected. Shutting down filesystem
[118760.024710] XFS (rbd0): Please umount the filesystem and rectify the problem(s)
[118760.024766] XFS (rbd0): Unable to update superblock counters. Freespace may not be correct on next mount.
[118858.837102] XFS (rbd0): Mounting V5 Filesystem
[118858.872345] XFS (rbd0): Ending clean mount
[173522.968410] rbd: rbd0: encountered watch error: -107
[176701.031429] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)
[176827.317008] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)
[177423.107103] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)
[177452.820032] rbd: image testimage8: unsupported stripe unit (got 2097152 want 33554432)

3. Troubleshooting: the kernel rbd client does not support striping, so rbd-nbd is required

rbd-nbd supports all the newer image features, so later maps do not require disabling them. However, the stock Linux kernel ships without the nbd module, so it must be built from the kernel source; see https://blog.csdn.net/miaodichiyou/article/details/76050361

[root@idcv-ceph2 ~]# wget http://vault.centos.org/7.5.1804/updates/Source/SPackages/kernel-3.10.0-862.2.3.el7.src.rpm
[root@idcv-ceph2 ~]# rpm -ivh kernel-3.10.0-862.2.3.el7.src.rpm
[root@idcv-ceph2 ~]# cd /root/rpmbuild/
[root@idcv-ceph0 rpmbuild]# cd SOURCES/
[root@idcv-ceph0 SOURCES]# tar Jxvf linux-3.10.0-862.2.3.el7.tar.xz -C /usr/src/kernels/
[root@idcv-ceph0 SOURCES]# cd /usr/src/kernels/
[root@idcv-ceph0 kernels]# mv 3.10.0-862.6.3.el7.x86_64 3.10.0-862.6.3.el7.x86_64-old
[root@idcv-ceph0 kernels]# mv linux-3.10.0-862.2.3.el7 3.10.0-862.6.3.el7.x86_64
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# cd 3.10.0-862.6.3.el7.x86_64
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# mkdir mrproper
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# cp ../3.10.0-862.6.3.el7.x86_64-old/Module.symvers ./
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# cp /boot/config-3.10.0-862.2.3.el7.x86_64 ./.config
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# yum install elfutils-libelf-devel
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# make prepare
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# make scripts
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# make CONFIG_BLK_DEV_NBD=m M=drivers/block
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# modinfo nbd
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# cp drivers/block/nbd.ko /lib/modules/3.10.0-862.2.3.el7.x86_64/kernel/drivers/block/
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# depmod -a
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# modprobe nbd
[root@idcv-ceph0 3.10.0-862.6.3.el7.x86_64]# lsmod |grep nbd

4. Map an image with rbd-nbd

[root@idcv-ceph0 ~]# rbd-nbd map test_pool/testimage17
/dev/nbd0
[root@idcv-ceph0 ~]# rbd info test_pool/testimage17
rbd image 'testimage17':
size 10240 MB in 1280 objects
order 23 (8192 kB objects)
block_name_prefix: rbd_data.112d74b0dc51
format: 2
features: layering, striping
flags:
stripe unit: 1024 kB
stripe count: 8
[root@idcv-ceph0 ~]# mkfs.xfs /dev/nbd0
meta-data=/dev/nbd0 isize=512 agcount=4, agsize=655360 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=2621440, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@idcv-ceph0 ~]# mount /dev/nbd0 /mnt/ceph-8M/
[root@idcv-ceph0 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 100G 3.5G 96G 4% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 12M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda1 497M 150M 348M 31% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/0
/dev/sdb1 95G 40G 56G 42% /var/lib/ceph/osd/ceph-0
/dev/rbd0 10G 7.9G 2.2G 79% /mnt/ceph-disk0
/dev/rbd1 10G 7.9G 2.2G 79% /mnt/ceph-4M
/dev/nbd0 10G 33M 10G 1% /mnt/ceph-8M

5. dd performance test

object-size 8M:

[root@idcv-ceph0 ~]# dd if=/dev/zero of=/mnt/ceph-8M/file0-1 count=800 bs=10M conv=fsync
800+0 records in
800+0 records out
8388608000 bytes (8.4 GB) copied, 50.964 s, 165 MB/s
[root@idcv-ceph0 ~]# dd if=/dev/zero of=/mnt/ceph-8M/file0-1 count=80 bs=100M conv=fsync
80+0 records in
80+0 records out
8388608000 bytes (8.4 GB) copied, 26.3178 s, 319 MB/s

object-size 32M:

[root@idcv-ceph0 ceph-32M]# rbd info test_pool/testimage18
rbd image 'testimage18':
size 40960 MB in 1280 objects
order 25 (32768 kB objects)
block_name_prefix: rbd_data.11052ae8944a
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
stripe unit: 2048 kB
stripe count: 8

[root@idcv-ceph0 ceph-32M]# dd if=/dev/zero of=/mnt/ceph-32M/file0-1 count=2000 bs=10M conv=fsync
2000+0 records in
2000+0 records out
20971520000 bytes (21 GB) copied, 67.4266 s, 311 MB/s
[root@idcv-ceph0 ceph-32M]# dd if=/dev/zero of=/mnt/ceph-32M/file0-1 count=20000 bs=1M conv=fsync
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 61.7757 s, 339 MB/s

6. Test method summary
object-size 4M, count=1
object-size 4K, count=1
object-size 32M, count=8 and 16
dd block sizes:
1M
100M

32M /mnt/ceph-32M-8 /mnt/ceph-32M-16

rbd create test_pool/testimage8 --size 10240 --object-size 32M --stripe-unit 2097152 --stripe-count 16
dd if=/dev/zero of=/mnt/ceph-32M-16/file32M count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-32M-16/file32M count=8000 bs=1M conv=fsync

rbd create test_pool/testimage19 --size 10240 --object-size 32M --stripe-unit 4194304 --stripe-count 8
dd if=/dev/zero of=/mnt/ceph-32M-8/file32M count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-32M-8/file32M count=8000 bs=1M conv=fsync

4M /mnt/ceph-4M

rbd create test_pool/testimage11 --size 10240 --object-size 4M
dd if=/dev/zero of=/mnt/ceph-4M/file4M count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-4M/file4M count=8000 bs=1M conv=fsync

4K /mnt/ceph-4K

rbd create test_pool/testimage12 --size 10240 --object-size 4K
dd if=/dev/zero of=/mnt/ceph-4K/file4K count=80 bs=100M conv=fsync
dd if=/dev/zero of=/mnt/ceph-4K/file4K count=8000 bs=1M conv=fsync
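
The matrix above can be driven from a single loop. A dry-run sketch (it prints each dd invocation; remove the `echo` to perform the actual writes):

```bash
# one 100M-block pass and one 1M-block pass per test mount;
# 'echo' makes this a dry run -- delete it to really write
for mp in /mnt/ceph-32M-16 /mnt/ceph-32M-8 /mnt/ceph-4M /mnt/ceph-4K; do
  echo dd if=/dev/zero of="$mp/file" count=80   bs=100M conv=fsync
  echo dd if=/dev/zero of="$mp/file" count=8000 bs=1M   conv=fsync
done
```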

(image: dd test results)

7. Random-write testing with fio
Install fio first:

yum install libaio-devel
wget http://brick.kernel.dk/snaps/fio-2.1.10.tar.gz
tar zxf fio-2.1.10.tar.gz
cd fio-2.1.10/
make
make install

32M-8

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd4 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
Run status group 0 (all jobs):
WRITE: io=4096.0MB, aggrb=272729KB/s, minb=272729KB/s, maxb=272729KB/s, mint=15379msec, maxt=15379msec
Disk stats (read/write):
nbd4: ios=0/32280, merge=0/0, ticks=0/36624, in_queue=36571, util=97.61%

fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd4 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60
Run status group 0 (all jobs):
WRITE: io=4000.0MB, aggrb=326504KB/s, minb=326504KB/s, maxb=326504KB/s, mint=12545msec, maxt=12545msec
Disk stats (read/write):
nbd4: ios=0/31391, merge=0/0, ticks=0/1592756, in_queue=1597878, util=97.04%

32M-16

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd3 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/nbd3 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60

4M

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/rbd1 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=4G -filename=/dev/rbd1 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60

4K

fio -ioengine=libaio -bs=1m -direct=1 -thread -rw=randwrite -size=400M -filename=/dev/rbd2 -name="EBS 1m randwrite test" -iodepth=1 -runtime=60
fio -ioengine=libaio -bs=100m -direct=1 -thread -rw=randwrite -size=400M -filename=/dev/rbd2 -name="EBS 100m randwrite test" -iodepth=1 -runtime=60

(image: fio test results)

III. Test object storage reads and writes with S3 Browser

1. Create object-storage credentials

[root@idcv-ceph0 cluster]# radosgw-admin user create --uid=test --display-name="test" --access-key=123456 --secret=123456
[root@idcv-ceph0 cluster]# radosgw-admin user info --uid=test
{
"user_id": "test",
"display_name": "test",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "test",
"access_key": "123456",
"secret_key": "123456"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"temp_url_keys": []
}
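
These credentials can also be exercised from the command line. A sketch assuming s3cmd is installed (the config keys below are standard s3cmd options; the endpoint is the RGW host started earlier):

```bash
# write a minimal s3cmd config pointing at the RGW endpoint,
# using the access/secret keys created above
cat > /tmp/s3cfg-test <<'EOF'
[default]
access_key = 123456
secret_key = 123456
host_base = 172.20.1.139:7480
host_bucket = 172.20.1.139:7480/%(bucket)
use_https = False
EOF
# then, for example:
#   s3cmd -c /tmp/s3cfg-test mb s3://testbucket
#   s3cmd -c /tmp/s3cfg-test ls
```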

2. Install and configure S3 Browser

(screenshot: S3 Browser configuration)

3. Create a bucket and test upload/download

(screenshot: bucket upload/download test)

IV. Mount object storage with s3fs and test reads/writes

Test writing files via the object interface and reading the directory back via rbd.
1. Install and deploy

Reference: https://github.com/s3fs-fuse/s3fs-fuse/releases

Install as described in the README. On CentOS 7:

sudo yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install

Or:

[root@idcv-ceph0 ~]# wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.83.tar.gz
[root@idcv-ceph0 ~]# ls
[root@idcv-ceph0 ~]# tar zxvf v1.83.tar.gz
[root@idcv-ceph0 s3fs-fuse-1.83]# cd s3fs-fuse-1.83/
[root@idcv-ceph0 s3fs-fuse-1.83]# ls
[root@idcv-ceph0 s3fs-fuse-1.83]# vi README.md
[root@idcv-ceph0 s3fs-fuse-1.83]# yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
[root@idcv-ceph0 s3fs-fuse-1.83]# ./autogen.sh
[root@idcv-ceph0 s3fs-fuse-1.83]# ls
[root@idcv-ceph0 s3fs-fuse-1.83]# ./configure
[root@idcv-ceph0 s3fs-fuse-1.83]# make
[root@idcv-ceph0 s3fs-fuse-1.83]# make install
[root@idcv-ceph0 s3fs-fuse-1.83]# mkdir /mnt/s3
[root@idcv-ceph0 s3fs-fuse-1.83]# vi /root/.passwd-s3fs
[root@idcv-ceph0 s3fs-fuse-1.83]# chmod 600 /root/.passwd-s3fs
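
The `.passwd-s3fs` file that s3fs reads holds a single `ACCESS_KEY:SECRET_KEY` line. A sketch using the keys created earlier (written to /tmp here for illustration; the real file belongs at /root/.passwd-s3fs):

```bash
# the credentials file must contain ACCESS_KEY:SECRET_KEY and be mode 600
echo '123456:123456' > /tmp/passwd-s3fs-example
chmod 600 /tmp/passwd-s3fs-example
```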

2. Mount

[root@idcv-ceph0 ~]# s3fs testbucket /mnt/s3 -o url=http://172.20.1.139:7480 -o umask=0022  -o use_path_request_style
[root@idcv-ceph0 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sdb1                 95G   75G   21G  79% /var/lib/ceph/osd/ceph-0
/dev/rbd1                 10G  7.9G  2.2G  79% /mnt/ceph-4M
/dev/rbd2                 10G  814M  9.2G   8% /mnt/ceph-4K
/dev/nbd3                 10G  7.9G  2.2G  79% /mnt/ceph-32M-16
/dev/nbd4                 10G   33M   10G   1% /mnt/ceph-32M-8
s3fs                     256T     0  256T   0% /mnt/s3
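
To survive reboots, the mount can go in /etc/fstab. A sketch based on the fuse.s3fs fstab support documented in the s3fs README (`_netdev` defers the mount until the network is up; `passwd_file` points at the credentials file created earlier):

```
testbucket /mnt/s3 fuse.s3fs _netdev,url=http://172.20.1.139:7480,umask=0022,use_path_request_style,passwd_file=/root/.passwd-s3fs 0 0
```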

3. Verify reads and writes

[root@idcv-ceph0 ~]# ls /mnt/s3/images/
kernel-3.10.0-862.2.3.el7.src.rpm  nbd.ko  test.jpg
[root@idcv-ceph0 ~]# cp /etc/hosts
hosts        hosts.allow  hosts.deny
[root@idcv-ceph0 ~]# cp /etc/hosts /mnt/s3/images/
[root@idcv-ceph0 ~]# ls /mnt/s3/images/
hosts  kernel-3.10.0-862.2.3.el7.src.rpm  nbd.ko  test.jpg

Expanding the Ceph cluster

1. Add an OSD to the cluster

Prepare the new node with the base environment steps above, then run on the admin node:

ceph-deploy install idcv-ceph6
ceph-deploy osd  prepare idcv-ceph6:/data/osd0
ceph-deploy osd  activate idcv-ceph6:/data/osd0
# check status
ceph -s
ceph osd tree

2. Add a mon node

Prepare the new node with the base environment steps above, then run on the admin node:

ceph-deploy install idcv-ceph7
ceph-deploy mon add idcv-ceph7
# check status
ceph -s
ceph mon stat

3. Add a metadata (MDS) node

Prepare the new node with the base environment steps above, then run on the admin node:

ceph-deploy install idcv-ceph8
ceph-deploy mds create idcv-ceph8

Ceph's dashboard feature

Ceph gained a dashboard from the Kraken release onward; it is not yet fully featured. See https://docs.ceph.com/docs/master/mgr/dashboard/

Add to the newer ceph.conf:

[mgr]
mgr_modules = dashboard

Then set the dashboard IP and port:

ceph config-key put mgr/dashboard/$name/server_addr $IP
ceph config-key put mgr/dashboard/$name/server_port $PORT
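
A dry-run sketch with concrete values for this cluster ($name is the active mgr instance; `idcv-ceph0` and port 7000 are assumptions, and the `echo` only prints the commands -- drop it to run them):

```bash
# substitute the real mgr instance name for idcv-ceph0
echo ceph config-key put mgr/dashboard/idcv-ceph0/server_addr 172.20.1.138
echo ceph config-key put mgr/dashboard/idcv-ceph0/server_port 7000
```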

Finally, open http://IP:port in a browser.

For a general introduction to Ceph, see https://cloud.tencent.com/developer/news/275070
