08. Storage: Cinder → 5. Scenario Study → 12. Ceph Volume Provider → 1. Configuration

  1. Configure ceph (controller node). This only changes the controller node's configuration file; for the full installation procedure see 04. Setting up the lab environment → 2. Setting up the environment (devstack). (Before configuring the controller node, remember to run unstack.sh on the compute node so that all of its services are stopped and do not interfere with the controller node.)
    1. Add the ceph plugin to the controller node's local.conf (the snippet below is truncated; a fuller sketch follows it):
      stack@controller:~/devstack$ vim local.conf
      ...
      #ceph
      
      # use TryStack git mirror

    2. Run stack.sh
    3. If the error "umount: /var/lib/ceph/drives/sdb1: mountpoint not found" appears during installation, resolve it as follows (see the sketch after this list):
      1. This error usually shows up because a "Could not find a version that satisfies the requirement..." error occurred earlier in the installation; see 04. Setting up the lab environment → 2. Setting up the environment (devstack)
        1. After adding a new Python package index and re-running stack, that first problem was resolved,
        2. and then the new error "umount: /var/lib/ceph/drives/sdb1: mountpoint not found" appeared.
      2. In that case, run unstack first and then stack again. Because the added Python index already fixed the first problem, it no longer occurs, and the second problem does not appear either.
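      A sketch of that recovery sequence, assuming the new Python package index was added through a pip configuration file (the mirror URL is only an example; use whichever index fixed the first error for you):

      stack@controller:~$ cat ~/.pip/pip.conf
      [global]
      index-url = https://pypi.tuna.tsinghua.edu.cn/simple    # example mirror, an assumption
      stack@controller:~/devstack$ ./unstack.sh
      stack@controller:~/devstack$ ./stack.sh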
  2. Configure the compute node
    1. Reinstall the devstack environment on the compute node, then discover it from the controller: root@controller:~# /opt/stack/devstack/tools/discover_hosts.sh
    2. Install the ceph client: root@compute:~# apt-get install ceph-common
    3. Set up authorization:
      1. client.cinder key: root@controller:~# ceph auth get-or-create client.cinder | ssh root@compute tee /etc/ceph/ceph.client.cinder.keyring
        1. tee usage: tee reads from standard input and writes it both to standard output and to the given file
          root@cmp-2:~# tee zhao
          36
          36
          q
          q
          root@cmp-2:~# cat zhao
          36
          q
      2. libvirt key: root@controller:~# ceph auth get-key client.cinder | ssh root@compute tee /etc/ceph/client.cinder.key (the resulting key value is the same as the one in the keyring above)
        1. Configuration file
          root@compute:~# vim /etc/ceph/secret.xml
          <secret ephemeral='no' private='no'>
            <uuid></uuid>
            <usage type='ceph'>
              <name>client.cinder secret</name>
            </usage>
          </secret>
          The uuid here can be found in the controller node's nova.conf
        2. Define or modify the secret from the xml file:
          1. root@compute:~# virsh secret-define --file /etc/ceph/secret.xml 
        3. Set the secret value:
          1. root@compute:~# virsh secret-set-value --secret df0d0b60-047a-45f5-b5be-f7d2b4beadee --base64 $(cat /etc/ceph/client.cinder.key) 
        4. Check the secret: virsh secret-list (the secret is the same on all compute nodes); see the sketch after this list
        5. Remove the temporary files: rm /etc/ceph/client.cinder.key && rm /etc/ceph/secret.xml
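        A quick way to cross-check the uuid and the secret set above (a sketch; the uuid shown is the one used in this environment, yours will differ):

        # controller: the uuid nova expects for the ceph secret
        root@controller:~# grep rbd_secret_uuid /etc/nova/nova.conf
        # compute: confirm the libvirt secret exists and holds the key
        root@compute:~# virsh secret-list
        root@compute:~# virsh secret-get-value df0d0b60-047a-45f5-b5be-f7d2b4beadee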
      3. client.admin key
        1. root@controller:~# scp /etc/ceph/ceph.client.admin.keyring compute:/etc/ceph/
    4. Configuration files:
      Compute node:
      root@compute:~# vim /etc/ceph/ceph.conf
      [global]
      rbd default features = 1
      osd pool default size = 1
      osd journal size = 100
      osd crush chooseleaf type = 0
      filestore_xattr_use_omap = true
      auth_client_required = cephx
      auth_service_required = cephx
      auth_cluster_required = cephx
      mon_host = 172.16.1.17
      mon_initial_members = controller
      fsid = eab37548-7aef-466a-861c-3757a12ce9e8
      
      root@compute:~# vim /etc/nova/nova-cpu.conf
      [libvirt]
      images_rbd_ceph_conf = /etc/ceph/ceph.conf
      images_rbd_pool = 
      images_type = rbd
      disk_cachemodes = network=writeback
      inject_partition = -2
      inject_key = false
      rbd_secret_uuid = 
      rbd_user = cinder
      live_migration_uri = qemu+ssh://stack@%s/system
      cpu_mode = none
      virt_type = kvm
      (Edit nova-cpu.conf, not nova.conf.)
    5. Restart the compute services (then run the quick check below)
      1. root@compute:~# systemctl restart libvirtd.service
      2. root@compute:~# systemctl restart devstack@n-cpu.service
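      After the restart it is worth confirming that the compute node can actually reach the ceph cluster (a sketch; it relies on the keyrings copied in the authorization step, and the cinder check assumes client.cinder has at least read access to the monitor):

      # uses /etc/ceph/ceph.client.admin.keyring copied from the controller
      root@compute:~# ceph -s
      # the cinder keyring should also authenticate against the monitor
      root@compute:~# ceph health --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring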
    6. Verification
      1. Create a VM and confirm that it is created on the compute node (virsh list); use rbd ls vms to look at the VM's image file (see the sketch below)
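      A sketch of that verification flow (the image, flavor and network names are placeholders for whatever exists in your environment):

      # controller: boot a VM that gets scheduled onto the compute node
      stack@controller:~$ openstack server create --image cirros-0.3.5-x86_64-disk --flavor m1.tiny --network private test-vm
      # compute: the instance should show up in libvirt
      root@compute:~# virsh list
      # its disk should now be an rbd image in the vms pool
      root@compute:~# rbd ls vms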
  3. Configuration comparison: devstack environment built without the ceph plugin vs. devstack environment built with the ceph plugin
glance-api.conf
Without the ceph plugin:
[glance_store]
filesystem_store_datadir = /opt/stack/data/glance/images/
With the ceph plugin:
[glance_store]
rbd_store_pool = 
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
stores = file, http, rbd
default_store = rbd
filesystem_store_datadir = /opt/stack/data/glance/images/
nova.conf or nova-cpu.conf
Without the ceph plugin:
[libvirt]
live_migration_uri = qemu+ssh://stack@%s/system
cpu_mode = none
virt_type = kvm
With the ceph plugin:
[libvirt]
images_rbd_ceph_conf = /etc/ceph/ceph.conf
images_rbd_pool = 
images_type = rbd
disk_cachemodes = network=writeback
inject_partition = -2
inject_key = false
rbd_secret_uuid = 
rbd_user = cinder
live_migration_uri = qemu+ssh://stack@%s/system
cpu_mode = none
virt_type = kvm
cinder.conf
Without the ceph plugin:
[lvmdriver-1]
image_volume_cache_enabled = True
volume_clear = zero
lvm_type = auto
iscsi_helper = tgtadm
volume_group = stack-volumes-lvmdriver-1
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-1
With the ceph plugin:
[ceph]
image_volume_cache_enabled = True
volume_clear = zero
rbd_max_clone_depth = 5
rbd_flatten_volume_from_snapshot = False
rbd_secret_uuid = 
rbd_user = cinder
rbd_pool = 
rbd_ceph_conf = /etc/ceph/ceph.conf
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph

ceph.conf
Shown on the controller node after it has been configured:
[global]
rbd default features = 1
osd pool default size = 1
osd journal size = 100
osd crush chooseleaf type = 0
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_service_required = cephx
auth_cluster_required = cephx
mon_host = 172.16.1.17
mon_initial_members = controller
fsid = 
After the initial monitor has been created on the ceph cluster's admin node (the node where ceph-deploy is used to build the ceph storage cluster), several keyring files are generated. These keyrings, together with the ceph.conf configured on that node, must be distributed to all the other nodes (the other ceph nodes, compute nodes, controller nodes, ...); see the sketch below.
The fsid is filled in automatically.
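A minimal sketch of that distribution step, run from the admin node (the hostnames are placeholders; list the nodes of your own cluster):

# push ceph.conf and the client.admin keyring to the listed hosts
ceph-deploy admin controller compute
# or copy them by hand
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@compute:/etc/ceph/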

  4. Ceph log files
    root@controller:~# ll /var/log/ceph/    
    total 2856
    drwxrws--T  2 ceph ceph      4096 Jun 25 17:48 ./
    drwxrwxr-x 13 root syslog    4096 Jun 25 17:46 ../
    -rw-------  1 ceph ceph     35669 Jun 26 16:38 ceph.audit.log
    -rw-------  1 ceph ceph      4504 Jun 26 15:01 ceph.log
    -rw-r--r--  1 ceph ceph   2719445 Jun 26 17:06 ceph-mgr.x.log
    -rw-r--r--  1 root ceph     32990 Jun 25 19:31 ceph-mon.controller.log
    -rw-r--r--  1 ceph ceph    106920 Jun 26 14:29 ceph-osd.0.log

    1. Ceph log levels include ERR, WRN, and INFO (see the example below)
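    For example, to pull only the warnings and errors out of the cluster log listed above (the path is the one shown in the listing; adjust it if your log location differs):

    root@controller:~# grep -E 'WRN|ERR' /var/log/ceph/ceph.log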