OpenStack Cluster: Deploying a Ceph Cluster as the Storage Backend

This document walks through deploying a Ceph cluster on CentOS Linux with the ceph-deploy tool: configuring the yum repository, installing Ceph and its management components, adding OSDs, and configuring and starting the mgr. It also covers the preparation for integrating OpenStack with Ceph, namely creating pools, installing the Ceph client, and setting up authorization, followed by the integration configuration and verification of Glance, Cinder, and Nova against Ceph.

1. Install the Ceph Cluster

1.1 Configure the Ceph yum Repository

Ceph version: 12.2.13 (Luminous); ceph-deploy version: 2.0.1 (as shown by the installed packages below)
Note: here the mon and mgr daemons are deployed on the control nodes; the OSDs are deployed on the compute nodes.

[root@m&c:/root]# vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/$basearch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/SRPMS
enabled=0
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[root@m$c$:/root]# yum clean all && yum repolist
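
The `$basearch` placeholder in `baseurl` is expanded by yum at runtime. A minimal sketch of what the resolved URL looks like on an x86_64 host (the expansion below is simulated in bash; yum performs it itself):

```shell
# Simulate yum's $basearch substitution for an x86_64 host.
basearch=x86_64
baseurl='http://mirrors.163.com/ceph/rpm-luminous/el7/$basearch'
echo "${baseurl//\$basearch/$basearch}"
# → http://mirrors.163.com/ceph/rpm-luminous/el7/x86_64
```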


1.2 Install ceph-deploy (on the admin servers)

# Install the ceph-deploy tool on all planned control/management nodes
[root@cont01:/root]# yum -y install ceph-deploy
[root@cont02:/root]# yum -y install ceph-deploy
[root@cont03:/root]# yum -y install ceph-deploy
Note: if this errors out, first run yum install python-setuptools
[root@cont01:/root]# ceph-deploy --version
2.0.1

1.3 Install the Ceph Packages (from the admin server)

Note: install deltarpm on all servers first (yum install -y deltarpm)
[root@m&c:/root]#  yum install -y deltarpm
[root@cont03:/root]# ceph-deploy install --release=luminous cont01 cont02 cont03 comp01 comp02 comp03
[root@cont03:/root]# ceph -v
ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)
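
When scripting a version gate across nodes, only the bare version number is needed; it can be pulled out of the `ceph -v` line with awk (demonstrated here on the captured output above):

```shell
# Extract just the version field from a `ceph -v` output line.
line='ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)'
echo "$line" | awk '{print $3}'
# → 12.2.13
```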

[root@cont03:/root]#  rpm -qa | grep ceph
ceph-common-12.2.13-0.el7.x86_64
ceph-radosgw-12.2.13-0.el7.x86_64
ceph-deploy-2.0.1-0.noarch
ceph-mgr-12.2.13-0.el7.x86_64
libcephfs2-12.2.13-0.el7.x86_64
ceph-selinux-12.2.13-0.el7.x86_64
ceph-osd-12.2.13-0.el7.x86_64
centos-release-ceph-luminous-1.1-2.el7.centos.noarch
ceph-mon-12.2.13-0.el7.x86_64
ceph-12.2.13-0.el7.x86_64
ceph-mds-12.2.13-0.el7.x86_64
ceph-release-1-1.el7.noarch
python-cephfs-12.2.13-0.el7.x86_64
ceph-base-12.2.13-0.el7.x86_64

1.4 Create the Ceph Cluster

1.4.1 Create the mon & mgr

## First create the cluster with cont03 as the initial monitor (cont01 and cont02 will be added later)
[root@cont03:/root]# mkdir -p /etc/ceph && cd /etc/ceph
[root@cont03:/etc/ceph]# ceph-deploy new cont03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new cont03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f82f6448ed8>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f82f5bc42d8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['cont03']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[cont03][DEBUG ] connected to host: cont03 
[cont03][DEBUG ] detect platform information from remote host
[cont03][DEBUG ] detect machine type
[cont03][DEBUG ] find the location of an executable
[cont03][INFO  ] Running command: /usr/sbin/ip link show
[cont03][INFO  ] Running command: /usr/sbin/ip addr show
[cont03][DEBUG ] IP addresses found: [u'192.168.10.23', u'192.168.7.123']
[ceph_deploy.new][DEBUG ] Resolving host cont03
[ceph_deploy.new][DEBUG ] Monitor cont03 at 192.168.10.23
[ceph_deploy.new][DEBUG ] Monitor initial members are ['cont03']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.10.23']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@cont03:/etc/ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring  rbdmap


1.4.2 Modify the Cluster Configuration File (optional)

[root@cont03:/etc/ceph]# vim /etc/ceph/ceph.conf
[global]
fsid = bc616791-7d5a-4b1a-ab1d-30414312fcfd
mon_initial_members = cont03, cont02, cont01
mon_host = 192.168.10.23,192.168.10.22,192.168.10.21
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# The default replica count is 3
osd_pool_default_size = 3
public network = 192.168.10.0/24
cluster network = 192.168.7.0/24
# public network: the front-end network where the mons live and clients connect.
# Make sure public network and mon_host are on the same subnet, or
# initialization may fail.
## A Ceph cluster uses two networks: the public network serves clients, while
## the cluster network carries internal traffic, e.g. data migration between
## OSDs. Heartbeats run on both networks.
## When setting up hostname resolution, resolve hostnames to public-network
## addresses: ceph-deploy operates on the cluster as a client, and the cluster
## serves clients over the public network. The monitors likewise run on the
## public network; every Ceph client must reach a monitor, which would be
## impossible if the monitors lived on the cluster network.
# cluster network: the back-end network for OSD heartbeats, replication, and
# recovery traffic.
# By default a protection mechanism forbids deleting pools; relax it as needed:
[mon]
mon_allow_pool_delete = true
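
As the comments above note, the monitors must be reachable on the public network. A quick bash sanity check over the addresses from this ceph.conf (a simple /24 prefix match, which is sufficient for this flat layout; values are copied from the file above):

```shell
# Check that every mon_host address falls inside the public_network prefix.
public_prefix="192.168.10."
mon_hosts="192.168.10.23,192.168.10.22,192.168.10.21"
for ip in ${mon_hosts//,/ }; do
  case "$ip" in
    "$public_prefix"*) echo "$ip OK" ;;
    *)                 echo "$ip OUTSIDE public network" ;;
  esac
done
# → prints "… OK" for all three monitor addresses
```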

[root@cont03:/etc/ceph]# cat /etc/ceph/ceph.mon.keyring
[mon.]
key = AQAZOmpeAAAAABAA9k58FrBYzKXjC2F414eKkA==
caps mon = allow *

1.4.3 Deploy the Initial Monitor

[root@cont03:/etc/ceph]# ceph-deploy mon create cont03
[cont03][DEBUG ] ********************************************************************************
[cont03][INFO  ] monitor: mon.cont03 is running
[cont03][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cont03.asok mon_status

[root@cont03:/etc/ceph]# ps -ef | grep ceph
ceph       26332       1  0 17:21 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id cont03 --setuser ceph --setgroup ceph
root       26412   16878  0 17:22 pts/0    00:00:00 grep --color=auto ceph
[root@cont03:/etc/ceph]# netstat -anpl | grep 6789 | grep LISTEN
tcp        0      0 192.168.10.23:6789      0.0.0.0:*               LISTEN      26332/ceph-mon 

1.4.4 Gather the Ceph Keyrings

[root@cont03:/etc/ceph]# ceph-deploy gatherkeys cont03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy gatherkeys cont03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1498d4be60>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['cont03']
[ceph_deploy.cli][INFO  ]  func                          : <function gatherkeys at 0x7f14995c4aa0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpHbM7Ns
[cont03][DEBUG ] connected to host: cont03 
[cont03][DEBUG ] detect platform information from remote host
[cont03][DEBUG ] detect machine type
[cont03][DEBUG ] get remote short hostname
[cont03][DEBUG ] fetch remote file
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.cont03.asok mon_status
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get client.admin
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get-or-create client.admin osd allow * mds allow * mon allow * mgr allow *
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get client.bootstrap-mds
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get client.bootstrap-mgr
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get client.bootstrap-osd
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get client.bootstrap-rgw
[cont03][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-cont03/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpHbM7Ns

[root@cont03:/etc/ceph]# ll
total 88
-rw-------. 1 root root    71 Mar 12 22:23 ceph.bootstrap-mds.keyring
-rw-------. 1 root root    71 Mar 12 22:23 ceph.bootstrap-mgr.keyring
-rw-------. 1 root root    71 Mar 12 22:23 ceph.bootstrap-osd.keyring
-rw-------. 1 root root    71 Mar 12 22:23 ceph.bootstrap-rgw.keyring
-rw-------. 1 root root    63 Mar 12 22:23 ceph.client.admin.keyring
-rw-r--r--. 1 root root   423 Mar 12 22:11 ceph.conf
-rw-r--r--. 1 root root 54271 Mar 12 22:23 ceph-deploy-ceph.log
-rw-------. 1 root root    73 Mar 12 21:33 ceph.mon.keyring
-rw-r--r--. 1 root root    92 Jan 31 05:37 rbdmap
[root@cont03:/etc/ceph]# cat ceph.client.admin.keyring
[client.admin]
        key = AQDJRWpePC/0MRAAP1+o23HgRFOnUIvU+9F6Rw==
[root@cont03:/etc/ceph]# cat ceph.bootstrap-osd.keyring
[client.bootstrap-osd]
        key = AQDMRWpenbmIGRAA4tCcF2ZtAgmBUQWqeAgIUQ==
// The admin key is stored in the ceph.client.admin.keyring file and is supplied via --keyring
[root@cont03:/etc/ceph]# ceph --keyring ceph.client.admin.keyring -c ceph.conf -s
  cluster:
    id:     bc616791-7d5a-4b1a-ab1d-30414312fcfd
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum cont03
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:     

[root@cont03:/etc/ceph]# ceph --keyring ceph.client.admin.keyring -c ceph.conf auth get client.admin
exported keyring for client.admin
[client.admin]
        key = AQDJRWpePC/0MRAAP1+o23HgRFOnUIvU+9F6Rw==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
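
These keyring files are INI-style, so the key field is easy to extract when scripting client setup (for instance, feeding a key into libvirt during the Cinder/Nova integration later). A sketch using a scratch copy of the admin keyring shown above (`/tmp/demo.keyring` is a throwaway path):

```shell
# Write a sample keyring, then pull out just the base64 key with awk.
cat > /tmp/demo.keyring <<'EOF'
[client.admin]
        key = AQDJRWpePC/0MRAAP1+o23HgRFOnUIvU+9F6Rw==
EOF
awk '$1 == "key" {print $3}' /tmp/demo.keyring
# → AQDJRWpePC/0MRAAP1+o23HgRFOnUIvU+9F6Rw==
```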
        

1.4.5 Distribute the Ceph Keyring

Running admin commands requires the admin key (--keyring ceph.client.admin.keyring) and the configuration file (-c ceph.conf). In day-to-day operations we frequently need to run admin commands on one server or another, and supplying these arguments every time is tedious. In fact, Ceph looks for the keyring and ceph.conf in /etc/ceph/ by default, so we can simply place ceph.client.admin.keyring and ceph.conf in /etc/ceph/ on every server. ceph-deploy can do this for us.

[root@cont03:/etc/ceph]# ceph-deploy admin cont03 cont01 cont02 comp01 comp02 comp03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin cont03 cont01 cont02 comp01 comp02 comp03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f709694dc20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['cont03', 'cont01', 'cont02', 'comp01', 'comp02', 'comp03']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f70973fc320>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cont03
[cont03][DEBUG ] connected to host: cont03 
[cont03][DEBUG ] detect platform information from remote host
[cont03][DEBUG ] detect machine type
[cont03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cont01
[cont01][DEBUG ] connected to host: cont01 
[cont01][DEBUG ] detect platform information from remote host
[cont01][DEBUG ] detect machine type
[cont01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cont02
[cont02][DEBUG ] connected to host: cont02 
[cont02][DEBUG ] detect platform information from remote host
[cont02][DEBUG ] detect machine type
[cont02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to comp01
[comp01][DEBUG ] connected to host: comp01 
[comp01][DEBUG ] detect platform information from remote host
[comp01][DEBUG ] detect machine type
[comp01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to comp02
[comp02][DEBUG ] connected to host: comp02 
[comp02][DEBUG ] detect platform information from remote host
[comp02][DEBUG ] detect machine type
[comp02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to comp03
[comp03][DEBUG ] connected to host: comp03 
[comp03][DEBUG ] detect platform information from remote host
[comp03][DEBUG ] detect machine type
[comp03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

// Check each server: /etc/ceph/ now contains the two files ceph.client.admin.keyring and ceph.conf, so those arguments no longer need to be supplied:
[root@cont01:/etc/ceph]# ls
ceph.client.admin.keyring  ceph.conf  rbdmap  tmpH2C5VD
[root@cont01:/etc/ceph]# ceph -s
  cluster:
    id:      bc616791-7d5a-4b1a-ab1d-30414312fcfd
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum cont03
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:     
[root@cont01:/etc/ceph]# ceph auth get client.admin 
exported keyring for client.admin
[client.admin]
        key = AQDJRWpePC/0MRAAP1+o23HgRFOnUIvU+9F6Rw==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

[root@comp01:/root]# cd /etc/ceph
[root@comp01:/etc/ceph]# ls
ceph.client.admin.keyring  ceph.conf  rbdmap  tmpt3YKNe
[root@comp01:/etc/ceph]# ceph -s
  cluster:
    id:      bc616791-7d5a-4b1a-ab1d-30414312fcfd
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum cont03
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:     
[root@comp01:/etc/ceph]# ceph auth get client.admin
exported keyring for client.admin
[client.admin]
	key = AQA81Vxe/zKVOxAA0Y7VQWCoY2Wb9opdeIbk8Q==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"


1.4.6 Create the Ceph mgr

Starting with Ceph 12 (Luminous), a mgr daemon must be created for each monitor.

[root@cont03:/etc/ceph]# ceph-deploy mgr create cont03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create cont03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('cont03', 'cont03')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f70a79945f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f70a820b230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
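
Once cont01 and cont02 join as monitors, each should get its own mgr the same way. A dry-run sketch of the remaining commands (printed rather than executed, since they must run on the deploy node; hostnames are this deployment's):

```shell
# Print the mgr-create commands still to be run for the other monitors.
for host in cont01 cont02; do
  echo ceph-deploy mgr create "$host"
done
# → ceph-deploy mgr create cont01
# → ceph-deploy mgr create cont02
```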