Deploying Ceph over IPv6

Initial preparation

1. Configure an IPv6 address on each node (a minimal sketch follows the /etc/hosts example below)

2. Map the hostname to the IPv6 address in /etc/hosts

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
fdf8:f53d:0:82e4::2 ceph-10-10-176-131

10.10.176.131	crms-10-10-176-131.crms-10-10-176-131 crms-10-10-176-131
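
For step 1, a minimal sketch of adding the address used throughout this article (the interface name eth0 and the /64 prefix length are assumptions, adjust to your environment):

sudo ip -6 addr add fdf8:f53d:0:82e4::2/64 dev eth0   // takes effect immediately but does not survive a reboot

To make it persistent on CentOS 7, set IPV6INIT=yes and IPV6ADDR=fdf8:f53d:0:82e4::2/64 in /etc/sysconfig/network-scripts/ifcfg-eth0 and restart the network service.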

Installation

Install the packages

Disabling the firewall, installing the packages, and so on are the same as for an IPv4 deployment and are skipped here; for reference, a sketch is shown below.
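
A sketch of those steps on CentOS 7 (the package install assumes a Ceph Nautilus yum repository is already configured):

sudo systemctl stop firewalld && sudo systemctl disable firewalld   // open all ports for the test cluster
sudo setenforce 0                                                   // optional: permissive SELinux for a test setup
sudo yum install -y ceph ceph-deploy                                // ceph on every node, ceph-deploy on the admin node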

Install the mon

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ceph-deploy --username ceph new ceph-10-10-176-131
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --username ceph new ceph-10-10-176-131
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f6d12a38de8>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6d123bc6c8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-10-10-176-131']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-10-10-176-131][DEBUG ] connection detected need for sudo
[ceph-10-10-176-131][DEBUG ] connected to host: ceph@ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] detect platform information from remote host
[ceph-10-10-176-131][DEBUG ] detect machine type
[ceph-10-10-176-131][DEBUG ] find the location of an executable
[ceph-10-10-176-131][INFO  ] Running command: sudo /usr/sbin/ip link show
[ceph-10-10-176-131][INFO  ] Running command: sudo /usr/sbin/ip addr show
[ceph-10-10-176-131][DEBUG ] IP addresses found: [u'10.10.176.131', u'fdf8:f53d:0:82e4::2']
[ceph_deploy.new][DEBUG ] Resolving host ceph-10-10-176-131
[ceph_deploy.new][DEBUG ] Monitor ceph-10-10-176-131 at fdf8:f53d:0:82e4::2
[ceph_deploy.new][INFO  ] Monitors are IPv6, binding Messenger traffic on IPv6
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-10-10-176-131']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['[fdf8:f53d:0:82e4::2]']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ll
total 28
-rw-rw-r-- 1 ceph ceph   451 Apr 11 10:07 ceph.conf
-rw-rw-r-- 1 ceph ceph   470 Apr 11 10:00 ceph.conf.bak
-rw-rw-r-- 1 ceph ceph 15027 Apr 11 10:08 ceph-deploy-ceph.log
-rw------- 1 ceph ceph    73 Apr 11 10:05 ceph.mon.keyring
ceph@ceph-10-10-176-131[/home/ceph/cluster]$ cat ceph.conf  // edit ceph.conf as shown below
[global]
fsid = beb1836c-c3c1-43a2-a5aa-d7699c9c26ef
ms_bind_ipv6 = true
ms_bind_ipv4 = false  # required, otherwise the OSDs fail to start
mon_initial_members = ceph-10-10-176-131
mon_host = [fdf8:f53d:0:82e4::2]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = fdf8:f53d:0:82e4::2/120

mon_allow_pool_delete = true
osd_pool_default_size = 1
osd_pool_default_min_size = 1
mon_osd_full_ratio = 0.90
mon clock drift allowed = 2
mon clock drift warn backoff = 30

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ceph-deploy --username ceph --overwrite-conf mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --username ceph --overwrite-conf mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : ceph
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd2652eb1b8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7fd265558410>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-10-10-176-131
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-10-10-176-131 ...
[ceph-10-10-176-131][DEBUG ] connection detected need for sudo
[ceph-10-10-176-131][DEBUG ] connected to host: ceph@ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] detect platform information from remote host
[ceph-10-10-176-131][DEBUG ] detect machine type
[ceph-10-10-176-131][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.8.2003 Core
[ceph-10-10-176-131][DEBUG ] determining if provided host has same hostname in remote
[ceph-10-10-176-131][DEBUG ] get remote short hostname
[ceph-10-10-176-131][DEBUG ] deploying mon to ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] get remote short hostname
[ceph-10-10-176-131][DEBUG ] remote hostname: ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-10-10-176-131][DEBUG ] create the mon path if it does not exist
[ceph-10-10-176-131][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-10-10-176-131/done
[ceph-10-10-176-131][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-10-10-176-131][DEBUG ] create the init path if it does not exist
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl enable ceph-mon@ceph-10-10-176-131
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl start ceph-mon@ceph-10-10-176-131
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph-10-10-176-131][DEBUG ] ********************************************************************************
[ceph-10-10-176-131][DEBUG ] status for monitor: mon.ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] {
[ceph-10-10-176-131][DEBUG ]   "election_epoch": 0,
[ceph-10-10-176-131][DEBUG ]   "extra_probe_peers": [
[ceph-10-10-176-131][DEBUG ]     {
[ceph-10-10-176-131][DEBUG ]       "addrvec": [
[ceph-10-10-176-131][DEBUG ]         {
[ceph-10-10-176-131][DEBUG ]           "addr": "10.10.176.131:3300",
[ceph-10-10-176-131][DEBUG ]           "nonce": 0,
[ceph-10-10-176-131][DEBUG ]           "type": "v2"
[ceph-10-10-176-131][DEBUG ]         },
[ceph-10-10-176-131][DEBUG ]         {
[ceph-10-10-176-131][DEBUG ]           "addr": "10.10.176.131:6789",
[ceph-10-10-176-131][DEBUG ]           "nonce": 0,
[ceph-10-10-176-131][DEBUG ]           "type": "v1"
[ceph-10-10-176-131][DEBUG ]         }
[ceph-10-10-176-131][DEBUG ]       ]
[ceph-10-10-176-131][DEBUG ]     }
[ceph-10-10-176-131][DEBUG ]   ],
[ceph-10-10-176-131][DEBUG ]   "feature_map": {
[ceph-10-10-176-131][DEBUG ]     "mon": [
[ceph-10-10-176-131][DEBUG ]       {
[ceph-10-10-176-131][DEBUG ]         "features": "0x3ffddff8ffecffff",
[ceph-10-10-176-131][DEBUG ]         "num": 1,
[ceph-10-10-176-131][DEBUG ]         "release": "luminous"
[ceph-10-10-176-131][DEBUG ]       }
[ceph-10-10-176-131][DEBUG ]     ]
[ceph-10-10-176-131][DEBUG ]   },
[ceph-10-10-176-131][DEBUG ]   "features": {
[ceph-10-10-176-131][DEBUG ]     "quorum_con": "0",
[ceph-10-10-176-131][DEBUG ]     "quorum_mon": [],
[ceph-10-10-176-131][DEBUG ]     "required_con": "0",
[ceph-10-10-176-131][DEBUG ]     "required_mon": []
[ceph-10-10-176-131][DEBUG ]   },
[ceph-10-10-176-131][DEBUG ]   "monmap": {
[ceph-10-10-176-131][DEBUG ]     "created": "2024-04-10 16:49:38.033486",
[ceph-10-10-176-131][DEBUG ]     "epoch": 0,
[ceph-10-10-176-131][DEBUG ]     "features": {
[ceph-10-10-176-131][DEBUG ]       "optional": [],
[ceph-10-10-176-131][DEBUG ]       "persistent": []
[ceph-10-10-176-131][DEBUG ]     },
[ceph-10-10-176-131][DEBUG ]     "fsid": "86e7d941-2e39-4205-b3f3-9d08bfa7b79f",
[ceph-10-10-176-131][DEBUG ]     "min_mon_release": 0,
[ceph-10-10-176-131][DEBUG ]     "min_mon_release_name": "unknown",
[ceph-10-10-176-131][DEBUG ]     "modified": "2024-04-10 16:49:38.033486",
[ceph-10-10-176-131][DEBUG ]     "mons": [
[ceph-10-10-176-131][DEBUG ]       {
[ceph-10-10-176-131][DEBUG ]         "addr": ":/0",
[ceph-10-10-176-131][DEBUG ]         "name": "ceph-10-10-176-131",
[ceph-10-10-176-131][DEBUG ]         "public_addr": ":/0",
[ceph-10-10-176-131][DEBUG ]         "public_addrs": {
[ceph-10-10-176-131][DEBUG ]           "addrvec": []
[ceph-10-10-176-131][DEBUG ]         },
[ceph-10-10-176-131][DEBUG ]         "rank": 0
[ceph-10-10-176-131][DEBUG ]       }
[ceph-10-10-176-131][DEBUG ]     ]
[ceph-10-10-176-131][DEBUG ]   },
[ceph-10-10-176-131][DEBUG ]   "name": "ceph-10-10-176-131",
[ceph-10-10-176-131][DEBUG ]   "outside_quorum": [
[ceph-10-10-176-131][DEBUG ]     "ceph-10-10-176-131"
[ceph-10-10-176-131][DEBUG ]   ],
[ceph-10-10-176-131][DEBUG ]   "quorum": [],
[ceph-10-10-176-131][DEBUG ]   "rank": -1,
[ceph-10-10-176-131][DEBUG ]   "state": "probing",
[ceph-10-10-176-131][DEBUG ]   "sync_provider": []
[ceph-10-10-176-131][DEBUG ] }
[ceph-10-10-176-131][DEBUG ] ********************************************************************************
[ceph-10-10-176-131][INFO  ] monitor: mon.ceph-10-10-176-131 is currently at the state of probing
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] connection detected need for sudo
[ceph-10-10-176-131][DEBUG ] connected to host: ceph@ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] detect platform information from remote host
[ceph-10-10-176-131][DEBUG ] detect machine type
[ceph-10-10-176-131][DEBUG ] find the location of an executable
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph-10-10-176-131 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph-10-10-176-131 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph-10-10-176-131 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph-10-10-176-131 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph-10-10-176-131 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] ceph-10-10-176-131

This failure occurs because the address in the monmap is still the IPv4 one; it has to be replaced.

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ sudo systemctl stop ceph-mon@ceph-10-10-176-131   // stop the service

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ sudo ceph-mon -i ceph-10-10-176-131 --extract-monmap /tmp/monmap   // extract the monmap
2024-04-11 10:47:08.014 7f3c533bb1c0 -1 wrote monmap to /tmp/monmap

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ monmaptool --print /tmp/monmap  // inspect the monmap
monmaptool: monmap file /tmp/monmap
epoch 0
fsid 86e7d941-2e39-4205-b3f3-9d08bfa7b79f
last_changed 2024-04-10 16:49:38.033486
created 2024-04-10 16:49:38.033486
min_mon_release 0 (unknown)
0: [v2:10.10.176.131:3300/0,v1:10.10.176.131:6789/0] mon.noname-a

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ monmaptool --rm noname-a /tmp/monmap  // remove the IPv4 mon entry
monmaptool: monmap file /tmp/monmap
monmaptool: removing noname-a
monmaptool: writing epoch 0 to /tmp/monmap (0 monitors)

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ monmaptool --print /tmp/monmap // confirm the removal
monmaptool: monmap file /tmp/monmap
epoch 0
fsid 86e7d941-2e39-4205-b3f3-9d08bfa7b79f
last_changed 2024-04-10 16:49:38.033486
created 2024-04-10 16:49:38.033486
min_mon_release 0 (unknown)

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ monmaptool --add ceph-10-10-176-131 [fdf8:f53d:0:82e4::2]:6789 /tmp/monmap  // add the mon back with its IPv6 address
monmaptool: monmap file /tmp/monmap
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ monmaptool --print /tmp/monmap  // confirm the new entry
monmaptool: monmap file /tmp/monmap
epoch 0
fsid 86e7d941-2e39-4205-b3f3-9d08bfa7b79f
last_changed 2024-04-10 16:49:38.033486
created 2024-04-10 16:49:38.033486
min_mon_release 0 (unknown)
0: v1:[fdf8:f53d:0:82e4::2]:6789/0 mon.ceph-10-10-176-131

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ceph-mon -i ceph-10-10-176-131 --inject-monmap /tmp/monmap  // inject the new monmap
ceph@ceph-10-10-176-131[/home/ceph/cluster]$ sudo systemctl start ceph-mon@ceph-10-10-176-131
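
As an aside, monmaptool in Nautilus also accepts --addv with a full address vector, which records both the v2 and v1 endpoints in one step (a sketch, not what was run above; with the v2 endpoint already in the monmap, the enable-msgr2 step later should become unnecessary):

monmaptool --addv ceph-10-10-176-131 [v2:[fdf8:f53d:0:82e4::2]:3300,v1:[fdf8:f53d:0:82e4::2]:6789] /tmp/monmap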

Re-run mon create-initial and distribute the configuration

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ceph-deploy --username ceph --overwrite-conf mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --username ceph --overwrite-conf mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : ceph
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f773dcf11b8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f773df5e410>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-10-10-176-131
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-10-10-176-131 ...
[ceph-10-10-176-131][DEBUG ] connection detected need for sudo
[ceph-10-10-176-131][DEBUG ] connected to host: ceph@ceph-10-10-176-131 
[ceph-10-10-176-131][DEBUG ] detect platform information from remote host
[ceph-10-10-176-131][DEBUG ] detect machine type
[ceph-10-10-176-131][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.8.2003 Core
[ceph-10-10-176-131][DEBUG ] determining if provided host has same hostname in remote
[ceph-10-10-176-131][DEBUG ] get remote short hostname
[ceph-10-10-176-131][DEBUG ] deploying mon to ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] get remote short hostname
[ceph-10-10-176-131][DEBUG ] remote hostname: ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-10-10-176-131][DEBUG ] create the mon path if it does not exist
[ceph-10-10-176-131][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-10-10-176-131/done
[ceph-10-10-176-131][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-10-10-176-131][DEBUG ] create the init path if it does not exist
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl enable ceph-mon@ceph-10-10-176-131
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl start ceph-mon@ceph-10-10-176-131
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph-10-10-176-131][DEBUG ] ********************************************************************************
[ceph-10-10-176-131][DEBUG ] status for monitor: mon.ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] {
[ceph-10-10-176-131][DEBUG ]   "election_epoch": 3, 
[ceph-10-10-176-131][DEBUG ]   "extra_probe_peers": [], 
[ceph-10-10-176-131][DEBUG ]   "feature_map": {
[ceph-10-10-176-131][DEBUG ]     "mon": [
[ceph-10-10-176-131][DEBUG ]       {
[ceph-10-10-176-131][DEBUG ]         "features": "0x3ffddff8ffecffff", 
[ceph-10-10-176-131][DEBUG ]         "num": 1, 
[ceph-10-10-176-131][DEBUG ]         "release": "luminous"
[ceph-10-10-176-131][DEBUG ]       }
[ceph-10-10-176-131][DEBUG ]     ]
[ceph-10-10-176-131][DEBUG ]   }, 
[ceph-10-10-176-131][DEBUG ]   "features": {
[ceph-10-10-176-131][DEBUG ]     "quorum_con": "4611087854035861503", 
[ceph-10-10-176-131][DEBUG ]     "quorum_mon": [
[ceph-10-10-176-131][DEBUG ]       "kraken", 
[ceph-10-10-176-131][DEBUG ]       "luminous", 
[ceph-10-10-176-131][DEBUG ]       "mimic", 
[ceph-10-10-176-131][DEBUG ]       "osdmap-prune", 
[ceph-10-10-176-131][DEBUG ]       "nautilus"
[ceph-10-10-176-131][DEBUG ]     ], 
[ceph-10-10-176-131][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-10-10-176-131][DEBUG ]     "required_mon": [
[ceph-10-10-176-131][DEBUG ]       "kraken", 
[ceph-10-10-176-131][DEBUG ]       "luminous", 
[ceph-10-10-176-131][DEBUG ]       "mimic", 
[ceph-10-10-176-131][DEBUG ]       "osdmap-prune", 
[ceph-10-10-176-131][DEBUG ]       "nautilus"
[ceph-10-10-176-131][DEBUG ]     ]
[ceph-10-10-176-131][DEBUG ]   }, 
[ceph-10-10-176-131][DEBUG ]   "monmap": {
[ceph-10-10-176-131][DEBUG ]     "created": "2024-04-10 16:49:38.033486", 
[ceph-10-10-176-131][DEBUG ]     "epoch": 2, 
[ceph-10-10-176-131][DEBUG ]     "features": {
[ceph-10-10-176-131][DEBUG ]       "optional": [], 
[ceph-10-10-176-131][DEBUG ]       "persistent": [
[ceph-10-10-176-131][DEBUG ]         "kraken", 
[ceph-10-10-176-131][DEBUG ]         "luminous", 
[ceph-10-10-176-131][DEBUG ]         "mimic", 
[ceph-10-10-176-131][DEBUG ]         "osdmap-prune", 
[ceph-10-10-176-131][DEBUG ]         "nautilus"
[ceph-10-10-176-131][DEBUG ]       ]
[ceph-10-10-176-131][DEBUG ]     }, 
[ceph-10-10-176-131][DEBUG ]     "fsid": "86e7d941-2e39-4205-b3f3-9d08bfa7b79f", 
[ceph-10-10-176-131][DEBUG ]     "min_mon_release": 14, 
[ceph-10-10-176-131][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-10-10-176-131][DEBUG ]     "modified": "2024-04-11 10:52:44.538416", 
[ceph-10-10-176-131][DEBUG ]     "mons": [
[ceph-10-10-176-131][DEBUG ]       {
[ceph-10-10-176-131][DEBUG ]         "addr": "[fdf8:f53d:0:82e4::2]:6789/0", 
[ceph-10-10-176-131][DEBUG ]         "name": "ceph-10-10-176-131", 
[ceph-10-10-176-131][DEBUG ]         "public_addr": "[fdf8:f53d:0:82e4::2]:6789/0", 
[ceph-10-10-176-131][DEBUG ]         "public_addrs": {
[ceph-10-10-176-131][DEBUG ]           "addrvec": [
[ceph-10-10-176-131][DEBUG ]             {
[ceph-10-10-176-131][DEBUG ]               "addr": "[fdf8:f53d:0:82e4::2]:6789", 
[ceph-10-10-176-131][DEBUG ]               "nonce": 0, 
[ceph-10-10-176-131][DEBUG ]               "type": "v1"
[ceph-10-10-176-131][DEBUG ]             }
[ceph-10-10-176-131][DEBUG ]           ]
[ceph-10-10-176-131][DEBUG ]         }, 
[ceph-10-10-176-131][DEBUG ]         "rank": 0
[ceph-10-10-176-131][DEBUG ]       }
[ceph-10-10-176-131][DEBUG ]     ]
[ceph-10-10-176-131][DEBUG ]   }, 
[ceph-10-10-176-131][DEBUG ]   "name": "ceph-10-10-176-131", 
[ceph-10-10-176-131][DEBUG ]   "outside_quorum": [], 
[ceph-10-10-176-131][DEBUG ]   "quorum": [
[ceph-10-10-176-131][DEBUG ]     0
[ceph-10-10-176-131][DEBUG ]   ], 
[ceph-10-10-176-131][DEBUG ]   "quorum_age": 79, 
[ceph-10-10-176-131][DEBUG ]   "rank": 0, 
[ceph-10-10-176-131][DEBUG ]   "state": "leader", 
[ceph-10-10-176-131][DEBUG ]   "sync_provider": []
[ceph-10-10-176-131][DEBUG ] }
[ceph-10-10-176-131][DEBUG ] ********************************************************************************
[ceph-10-10-176-131][INFO  ] monitor: mon.ceph-10-10-176-131 is running
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] connection detected need for sudo
[ceph-10-10-176-131][DEBUG ] connected to host: ceph@ceph-10-10-176-131 
[ceph-10-10-176-131][DEBUG ] detect platform information from remote host
[ceph-10-10-176-131][DEBUG ] detect machine type
[ceph-10-10-176-131][DEBUG ] find the location of an executable
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph-10-10-176-131 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpeQD2Yc
[ceph-10-10-176-131][DEBUG ] connection detected need for sudo
[ceph-10-10-176-131][DEBUG ] connected to host: ceph@ceph-10-10-176-131 
[ceph-10-10-176-131][DEBUG ] detect platform information from remote host
[ceph-10-10-176-131][DEBUG ] detect machine type
[ceph-10-10-176-131][DEBUG ] get remote short hostname
[ceph-10-10-176-131][DEBUG ] fetch remote file
[ceph-10-10-176-131][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-10-10-176-131.asok mon_status
[ceph-10-10-176-131][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-10-10-176-131/keyring auth get client.admin
[ceph-10-10-176-131][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-10-10-176-131/keyring auth get client.bootstrap-mds
[ceph-10-10-176-131][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-10-10-176-131/keyring auth get client.bootstrap-mgr
[ceph-10-10-176-131][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-10-10-176-131/keyring auth get client.bootstrap-osd
[ceph-10-10-176-131][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-10-10-176-131/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] Replacing 'ceph.mon.keyring' and backing up old key as 'ceph.mon.keyring-20240411105407'
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpeQD2Yc

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ sudo ceph-deploy  admin ceph-10-10-176-131
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy admin ceph-10-10-176-131
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5fbadd76c8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-10-10-176-131']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f5fbb67b230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] connected to host: ceph-10-10-176-131 
[ceph-10-10-176-131][DEBUG ] detect platform information from remote host
[ceph-10-10-176-131][DEBUG ] detect machine type
[ceph-10-10-176-131][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ sudo chown -R ceph:ceph /etc/ceph  // so the ceph user can run ceph -s

Verify

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ceph mon enable-msgr2   // enable msgr2, otherwise ceph -s reports a health warning
ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ceph -s 
  cluster:
    id:     86e7d941-2e39-4205-b3f3-9d08bfa7b79f
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-10-10-176-131 (age 13m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
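
To double-check that the monitor now advertises both the v2 and v1 endpoints on the IPv6 address, the monmap can be dumped; the mon entry is expected to look like [v2:[fdf8:f53d:0:82e4::2]:3300/0,v1:[fdf8:f53d:0:82e4::2]:6789/0]:

ceph mon dump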

Install the mgr

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ceph-deploy --username ceph --overwrite-conf mgr create  ceph-10-10-176-131
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --username ceph --overwrite-conf mgr create ceph-10-10-176-131
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : ceph
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-10-10-176-131', 'ceph-10-10-176-131')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f66b3432a28>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f66b33f7140>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-10-10-176-131:ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] connection detected need for sudo
[ceph-10-10-176-131][DEBUG ] connected to host: ceph@ceph-10-10-176-131 
[ceph-10-10-176-131][DEBUG ] detect platform information from remote host
[ceph-10-10-176-131][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-10-10-176-131][WARNIN] mgr keyring does not exist yet, creating one
[ceph-10-10-176-131][DEBUG ] create a keyring file
[ceph-10-10-176-131][DEBUG ] create path recursively if it doesn't exist
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-10-10-176-131 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-10-10-176-131/keyring
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl enable ceph-mgr@ceph-10-10-176-131
[ceph-10-10-176-131][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-10-10-176-131.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl start ceph-mgr@ceph-10-10-176-131
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl enable ceph.target

Install the OSD

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ceph-deploy --username ceph --overwrite-conf osd create --data /dev/sdc ceph-10-10-176-131
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --username ceph --overwrite-conf osd create --data /dev/sdc ceph-10-10-176-131
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f424b987710>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : ceph
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-10-10-176-131
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f424bbdb8c0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdc
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdc
[ceph-10-10-176-131][DEBUG ] connection detected need for sudo
[ceph-10-10-176-131][DEBUG ] connected to host: ceph@ceph-10-10-176-131 
[ceph-10-10-176-131][DEBUG ] detect platform information from remote host
[ceph-10-10-176-131][DEBUG ] detect machine type
[ceph-10-10-176-131][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-10-10-176-131][WARNIN] osd keyring does not exist yet, creating one
[ceph-10-10-176-131][DEBUG ] create a keyring file
[ceph-10-10-176-131][DEBUG ] find the location of an executable
[ceph-10-10-176-131][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdc
[ceph-10-10-176-131][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-10-10-176-131][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new dea78fe4-54e6-4233-90e8-714b1bbb8930
[ceph-10-10-176-131][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-e823d31d-0c79-4caa-a8a3-a95b870b016a /dev/sdc
[ceph-10-10-176-131][WARNIN]  stdout: Physical volume "/dev/sdc" successfully created.
[ceph-10-10-176-131][WARNIN]  stdout: Volume group "ceph-e823d31d-0c79-4caa-a8a3-a95b870b016a" successfully created
[ceph-10-10-176-131][WARNIN] Running command: /sbin/lvcreate --yes -l 51199 -n osd-block-dea78fe4-54e6-4233-90e8-714b1bbb8930 ceph-e823d31d-0c79-4caa-a8a3-a95b870b016a
[ceph-10-10-176-131][WARNIN]  stdout: Logical volume "osd-block-dea78fe4-54e6-4233-90e8-714b1bbb8930" created.
[ceph-10-10-176-131][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-10-10-176-131][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-10-10-176-131][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-e823d31d-0c79-4caa-a8a3-a95b870b016a/osd-block-dea78fe4-54e6-4233-90e8-714b1bbb8930
[ceph-10-10-176-131][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-5
[ceph-10-10-176-131][WARNIN] Running command: /bin/ln -s /dev/ceph-e823d31d-0c79-4caa-a8a3-a95b870b016a/osd-block-dea78fe4-54e6-4233-90e8-714b1bbb8930 /var/lib/ceph/osd/ceph-0/block
[ceph-10-10-176-131][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-10-10-176-131][WARNIN]  stderr: 2024-04-11 11:12:53.415 7f7e59c4e700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-10-10-176-131][WARNIN] 2024-04-11 11:12:53.415 7f7e59c4e700 -1 AuthRegistry(0x7f7e54066308) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-10-10-176-131][WARNIN]  stderr: got monmap epoch 2
[ceph-10-10-176-131][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQAyVRdmmdUuMBAAK7E/2oDz8SRdYejZVQVcgA==
[ceph-10-10-176-131][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-10-10-176-131][WARNIN]  stdout: added entity osd.0 auth(key=AQAyVRdmmdUuMBAAK7E/2oDz8SRdYejZVQVcgA==)
[ceph-10-10-176-131][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-10-10-176-131][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-10-10-176-131][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid dea78fe4-54e6-4233-90e8-714b1bbb8930 --setuser ceph --setgroup ceph
[ceph-10-10-176-131][WARNIN]  stderr: 2024-04-11 11:12:54.145 7f103dd38a80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[ceph-10-10-176-131][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdc
[ceph-10-10-176-131][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-10-10-176-131][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-e823d31d-0c79-4caa-a8a3-a95b870b016a/osd-block-dea78fe4-54e6-4233-90e8-714b1bbb8930 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-10-10-176-131][WARNIN] Running command: /bin/ln -snf /dev/ceph-e823d31d-0c79-4caa-a8a3-a95b870b016a/osd-block-dea78fe4-54e6-4233-90e8-714b1bbb8930 /var/lib/ceph/osd/ceph-0/block
[ceph-10-10-176-131][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-10-10-176-131][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-5
[ceph-10-10-176-131][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-10-10-176-131][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-dea78fe4-54e6-4233-90e8-714b1bbb8930
[ceph-10-10-176-131][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-dea78fe4-54e6-4233-90e8-714b1bbb8930.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph-10-10-176-131][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ceph-10-10-176-131][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph-10-10-176-131][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph-10-10-176-131][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-10-10-176-131][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[ceph-10-10-176-131][INFO  ] checking OSD status...
[ceph-10-10-176-131][DEBUG ] find the location of an executable
[ceph-10-10-176-131][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-10-10-176-131 is now ready for osd use.

If the OSD fails to start at this point, it is because ms_bind_ipv4 = false was not set in ceph.conf; add it and restart the OSD service, as sketched below.
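
A sketch of that recovery (assuming the single OSD created above, with id 0):

// add ms_bind_ipv4 = false under [global] in ceph.conf, then:
ceph-deploy --username ceph --overwrite-conf config push ceph-10-10-176-131
sudo systemctl restart ceph-osd@0
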
ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ceph -s 
  cluster:
    id:     86e7d941-2e39-4205-b3f3-9d08bfa7b79f
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-10-10-176-131 (age 6s)
    mgr: ceph-10-10-176-131(active, since 10m)
    osd: 1 osds: 1 up (since 6m), 1 in (since 6m)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 199 GiB / 200 GiB avail
    pgs:

Install the RGW

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ ceph-deploy --username ceph --overwrite-conf rgw create ceph-10-10-176-131
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --username ceph --overwrite-conf rgw create ceph-10-10-176-131
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : ceph
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('ceph-10-10-176-131', 'rgw.ceph-10-10-176-131')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff52c3dfe18>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x7ff52ca24050>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph-10-10-176-131:rgw.ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] connection detected need for sudo
[ceph-10-10-176-131][DEBUG ] connected to host: ceph@ceph-10-10-176-131 
[ceph-10-10-176-131][DEBUG ] detect platform information from remote host
[ceph-10-10-176-131][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph-10-10-176-131
[ceph-10-10-176-131][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-10-10-176-131][WARNIN] rgw keyring does not exist yet, creating one
[ceph-10-10-176-131][DEBUG ] create a keyring file
[ceph-10-10-176-131][DEBUG ] create path recursively if it does not exist
[ceph-10-10-176-131][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph-10-10-176-131 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph-10-10-176-131/keyring
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl enable ceph-radosgw@rgw.ceph-10-10-176-131
[ceph-10-10-176-131][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph-10-10-176-131.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl start ceph-radosgw@rgw.ceph-10-10-176-131
[ceph-10-10-176-131][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ceph-10-10-176-131 and default port 7480

Checking shows that RGW is listening on both the IPv4 and the IPv6 address:

ceph@ceph-10-10-176-131[/home/ceph/cluster]$ lsof -i:7480
COMMAND    PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
radosgw 371639 ceph   44u  IPv4 5791024      0t0  TCP *:7480 (LISTEN)
radosgw 371639 ceph   45u  IPv6 5791026      0t0  TCP *:7480 (LISTEN)
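
If RGW should listen only on the IPv6 address, the Beast frontend endpoint can be pinned in ceph.conf (a sketch for this host; restart ceph-radosgw@rgw.ceph-10-10-176-131 afterwards):

[client.rgw.ceph-10-10-176-131]
rgw_frontends = beast endpoint=[fdf8:f53d:0:82e4::2]:7480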