Ceph Cluster Setup

08. Storage: Ceph — all of these notes use a Ceph cluster with 6 OSDs and 2 MONs
  1. Build the Ceph cluster on a DevStack environment with one controller and two compute nodes; each node has two spare volumes to be used as OSDs.
  2. CEPH-DEPLOY SETUP on all three nodes:
    # Add the release key
    root@controller:~# wget -q -O- 'http://mirrors.163.com/ceph/keys/release.asc' | apt-key add -
    OK
    # Add the Ceph packages to your repository.
    root@controller:~# echo deb http://mirrors.163.com/ceph/debian-luminous/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list
    deb http://mirrors.163.com/ceph/debian-luminous/ xenial main
    # Update your repository and install ceph-deploy
    root@controller:~# apt update
    root@controller:~# apt install ceph-deploy
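    As a quick sanity check (not shown in the original transcript), confirm the tool is installed before continuing; the `osd create --data ...` syntax used later requires ceph-deploy 2.0 or newer:
    root@controller:~# ceph-deploy --version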
  3. CEPH NODE SETUP → 1. INSTALL NTP:
    # We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift. 
    ########################
    # Controller node
    root@controller:~# apt install chrony -y
    root@controller:~# vim /etc/chrony/chrony.conf
    # Line 20: comment out this line
    # pool 2.debian.pool.ntp.org offline iburst
    # Replace it with a time server suited to your region (an Aliyun server is used here):
    server time4.aliyun.com offline iburst
    # Line 67: add the following so the other nodes in the subnet are allowed to sync time from the controller
    allow 10.110.31.0/24
    # Save and exit vim
    # Restart the chrony service
    root@controller:~# service chrony restart
    #########################
    # Compute nodes
    root@compute:~# apt install chrony -y
    root@compute:~# vim /etc/chrony/chrony.conf
     
    # Line 20: comment out this line
    # pool 2.debian.pool.ntp.org offline iburst
     
    # Add the controller's IP as the time source
    server 10.110.31.94 iburst
    # Save and exit vim
     
    root@compute:~# service chrony restart
    ###########################
    # Verify
    root@controller:~# chronyc sources
    210 Number of sources = 1
    MS Name/IP address         Stratum Poll Reach LastRx Last sample
    ===============================================================================
    ^* 203.107.6.88                  2   6    17    10   +192us[+1885us] +/-   13ms
    root@compute1:~# chronyc sources
    210 Number of sources = 1
    MS Name/IP address         Stratum Poll Reach LastRx Last sample
    ===============================================================================
    ^* controller                    3   6    17    19    -80us[ -284us] +/-   14ms
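    Besides chronyc sources, chronyc tracking gives a one-shot view of the current offset from the selected time source (an extra verification step, not part of the original transcript):
    root@controller:~# chronyc tracking
    # The "System time" line reports how far the local clock is from NTP time; Ceph monitors
    # raise clock-skew warnings once the drift between MONs exceeds 0.05 s by default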
  4. CEPH NODE SETUP → 2. Set up passwordless SSH login between the nodes, because the ceph-deploy utility must log in to each Ceph node as a user that has passwordless sudo privileges; it needs to install software and configuration files without prompting for passwords.
    # Only pushing the controller's public key to the compute nodes is shown; run the same steps on the compute nodes
    root@controller:~# ssh-keygen -t rsa
    root@controller:~# ssh-copy-id root@compute1
    root@controller:~# ssh-copy-id root@compute2
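    Before running ceph-deploy, it is worth confirming that the passwordless logins actually work (a verification step added here, not in the original notes):
    root@controller:~# ssh root@compute1 hostname
    root@controller:~# ssh root@compute2 hostname
    # Each command should print the remote hostname without prompting for a password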
  5. STARTING OVER
    #If at any point you run into trouble and you want to start over, execute the following to purge the Ceph packages, and erase all its data and configuration:
    ceph-deploy purge {ceph-node} [{ceph-node}]
    ceph-deploy purgedata {ceph-node} [{ceph-node}]
    ceph-deploy forgetkeys
    rm ceph.*
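    For this particular cluster the node list is the controller plus both compute nodes; a concrete instance of the commands above (run from the ~/my-cluster directory so the final rm removes the generated files) would look like:
    ceph-deploy purge controller compute1 compute2
    ceph-deploy purgedata controller compute1 compute2
    ceph-deploy forgetkeys
    rm ceph.*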
  6. CREATE A CLUSTER
    # Create a directory on the admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster.
    # Ensure you are in this directory when executing ceph-deploy.
    root@controller:~# cd my-cluster/
    root@controller:~/my-cluster# 
    # Create the cluster:
    root@controller:~/my-cluster# ceph-deploy new controller compute1
    root@controller:~/my-cluster# ll 
    total 24
    drwxr-xr-x 2 root root 4096 Sep  4 20:21 ./
    drwx------ 5 root root 4096 Sep  4 20:21 ../
    -rw-r--r-- 1 root root  223 Sep  4 20:21 ceph.conf
    -rw-r--r-- 1 root root 4098 Sep  4 20:21 ceph-deploy-ceph.log
    -rw------- 1 root root   73 Sep  4 20:21 ceph.mon.keyring
    # Install the Ceph packages. The ceph-deploy utility can install Ceph on each node:
    #   ceph-deploy install controller compute1 compute2
    # Here the packages were installed directly with apt (using the repository added earlier) on each node instead:
    root@controller:~/my-cluster# apt-get install ceph ceph-base ceph-common ceph-mds ceph-mgr ceph-mon ceph-osd libcephfs2 python-cephfs
    # Deploy the initial monitor(s) and gather the keys:
    root@controller:~/my-cluster# ceph-deploy mon create-initial
    root@controller:~/my-cluster# ll
    total 96
    drwxr-xr-x 2 root root  4096 Sep  5 08:54 ./
    drwx------ 5 root root  4096 Sep  4 20:26 ../
    -rw------- 1 root root    71 Sep  5 08:54 ceph.bootstrap-mds.keyring
    -rw------- 1 root root    71 Sep  5 08:54 ceph.bootstrap-mgr.keyring
    -rw------- 1 root root    71 Sep  5 08:54 ceph.bootstrap-osd.keyring
    -rw------- 1 root root    71 Sep  5 08:54 ceph.bootstrap-rgw.keyring
    -rw------- 1 root root    63 Sep  5 08:54 ceph.client.admin.keyring
    -rw-r--r-- 1 root root   223 Sep  4 20:24 ceph.conf
    -rw-r--r-- 1 root root 50696 Sep  5 08:54 ceph-deploy-ceph.log
    -rw------- 1 root root    73 Sep  4 20:21 ceph.mon.keyring
    -rw-r--r-- 1 root root  1645 Oct 16  2015 release.asc
    #Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
    root@controller:~/my-cluster# ceph-deploy admin controller compute1 compute2
    # /etc/ceph/ on a node before running the command above
    root@compute2:~# ll /etc/ceph/
    total 12
    drwxr-xr-x   2 root root 4096 Sep  4 20:44 ./
    drwxr-xr-x 105 root root 4096 Sep  4 20:44 ../
    -rw-r--r--   1 root root   92 Apr 11 21:18 rbdmap
    # /etc/ceph/ on the same node after running the command above
    root@compute2:~# ll /etc/ceph/
    total 20
    drwxr-xr-x   2 root root 4096 Sep  5 08:58 ./
    drwxr-xr-x 105 root root 4096 Sep  4 20:44 ../
    -rw-------   1 root root   63 Sep  5 08:58 ceph.client.admin.keyring
    -rw-r--r--   1 root root  223 Sep  5 08:58 ceph.conf
    -rw-r--r--   1 root root   92 Apr 11 21:18 rbdmap
    -rw-------   1 root root    0 Sep  5 08:58 tmpiamuoy
    
    #Deploy a manager daemon. (Required only for luminous+ builds):
    root@controller:~/my-cluster# ceph-deploy mgr create controller compute1
    #Add six OSDs
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdc controller
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdd controller
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdc compute1
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdd compute1
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdc compute2
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdd compute2
    # Check the cluster status
    root@controller:~/my-cluster# ceph health                                    
    HEALTH_OK
    root@controller:~/my-cluster# ceph -s
      cluster:
        id:     6afe180b-87c5-4f51-bdc6-53a8ecf85a9a
        health: HEALTH_OK
     
      services:
        mon: 2 daemons, quorum compute1,controller
        mgr: compute1(active)
        osd: 6 osds: 6 up, 6 in
     
      data:
        pools:   0 pools, 0 pgs
        objects: 0  objects, 0 B
        usage:   6.0 GiB used, 54 GiB / 60 GiB avail
        pgs:
    1. To satisfy the HA requirement, OSDs need to be spread across different nodes. With a replica count of 3, three OSD nodes are needed to hold the replicas; if the OSDs only span two nodes, PGs may still end up in the "active+undersized+degraded" state.
    2. If creating an OSD reports "config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite", the configuration files on the nodes are out of sync; running ceph-deploy --overwrite-conf mon create controller compute1 resolves it.
    3. If you are creating an OSD on an LVM volume, the argument to --data must be volume_group/lv_name, rather than the path to the volume's block device (i.e. an LVM volume can also be used as an OSD).
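    To confirm that the six OSDs really span all three hosts (relevant to note 1; a verification command, not part of the original transcript):
    root@controller:~/my-cluster# ceph osd tree
    # The output groups the OSDs into one host bucket per node; with two OSDs under each of
    # controller, compute1 and compute2, a size-3 pool can place one replica on every host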
  7. EXPANDING YOUR CLUSTER
  8. Integrating Ceph with Cinder
    1. CREATE A POOL
      root@controller:~# ceph osd pool create volumes 128
      pool 'volumes' created
      root@controller:~# ceph osd pool create images 128
      pool 'images' created
      root@controller:~# ceph osd pool create vms 128
      pool 'vms' created
      #Use the rbd tool to initialize the pools
      root@controller:~# rbd pool init volumes
      root@controller:~# rbd pool init images
      root@controller:~# rbd pool init vms
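      On Luminous, pg_num can be increased after creation but not decreased, so it is worth double-checking what was created (a verification step, not in the original notes):
      root@controller:~# ceph osd pool ls detail
      root@controller:~# ceph osd pool get volumes pg_num
      pg_num: 128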
    2. SETUP CEPH CLIENT AUTHENTICATION:
      # If you have cephx authentication enabled, create a dedicated user for Cinder:
      root@controller:~# ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'
      [client.cinder]
        key = AQA9k3Bdir5EMRAAqe+jAWqznUEHGSHU7qwAYA==
      # Add the keyrings for client.cinder
      root@controller:~# ceph auth get-or-create client.cinder |  tee  /etc/ceph/ceph.client.cinder.keyring
      [client.cinder]
        key = AQA9k3Bdir5EMRAAqe+jAWqznUEHGSHU7qwAYA==
      # So that VMs can attach Ceph volumes, configure a libvirt secret as follows (all three nodes can host VMs)
      root@controller:~# vim /etc/ceph/secret.xml
      <secret ephemeral='no' private='no'>
        <uuid>19e30f2b-0565-4194-82ea-f982982438a9</uuid>
        <usage type='ceph'>
          <name>client.cinder secret</name>
        </usage>
      </secret>
      
      
      root@controller:~# ll /etc/ceph/
      total 20
      drwxr-xr-x   2 root root 4096 Sep  5 12:43 ./
      drwxr-xr-x 113 root root 4096 Sep  5 10:43 ../
      -rw-------   1 root root   63 Sep  5 08:58 ceph.client.admin.keyring
      -rw-r--r--   1 root root  252 Sep  5 11:30 ceph.conf
      -rw-r--r--   1 root root   92 Jun  4 00:15 rbdmap
      -rw-------   1 root root    0 Sep  5 08:54 tmpRiOsBL
      root@controller:~# virsh secret-define --file /etc/ceph/secret.xml
      Secret 19e30f2b-0565-4194-82ea-f982982438a9 created
      
      root@controller:~# ceph auth get-key client.cinder | tee /etc/ceph/client.cinder.key
      AQA9k3Bdir5EMRAAqe+jAWqznUEHGSHU7qwAYA==
      root@controller:~# virsh secret-set-value --secret 19e30f2b-0565-4194-82ea-f982982438a9 --base64 $(cat /etc/ceph/client.cinder.key)
      Secret value set
      
      root@controller:~# virsh secret-list
       UUID                                  Usage
      --------------------------------------------------------------------------------
       19e30f2b-0565-4194-82ea-f982982438a9  ceph client.cinder secret
      
      root@controller:~# rm /etc/ceph/client.cinder.key && rm /etc/ceph/secret.xml
      root@controller:~# ll /etc/ceph/
      total 24
      drwxr-xr-x   2 root root 4096 Sep  5 17:19 ./
      drwxr-xr-x 113 root root 4096 Sep  5 10:43 ../
      -rw-------   1 root root   63 Sep  5 08:58 ceph.client.admin.keyring
      -rw-r--r--   1 root root   64 Sep  5 17:14 ceph.client.cinder.keyring
      -rw-r--r--   1 root root  252 Sep  5 11:30 ceph.conf
      -rw-r--r--   1 root root   92 Jun  4 00:15 rbdmap
      -rw-------   1 root root    0 Sep  5 08:54 tmpRiOsBL
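      The transcript above only covers the controller; since all three nodes can host VMs, the cinder keyring and the same libvirt secret (same UUID) also have to exist on the compute nodes. A sketch for compute1, assuming secret.xml is recreated there with the same contents as shown earlier and /tmp is used for the temporary files:
      root@controller:~# scp /etc/ceph/ceph.client.cinder.keyring compute1:/etc/ceph/
      root@controller:~# ceph auth get-key client.cinder | ssh compute1 "tee /tmp/client.cinder.key"
      # On compute1, recreate the same secret.xml, then:
      root@compute1:~# virsh secret-define --file /tmp/secret.xml
      root@compute1:~# virsh secret-set-value --secret 19e30f2b-0565-4194-82ea-f982982438a9 --base64 $(cat /tmp/client.cinder.key)
      root@compute1:~# rm /tmp/client.cinder.key /tmp/secret.xml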
    3. CONFIGURE CINDER TO USE CEPH
      1. For the full configuration, see the separate Cinder notes (08.存储Cinder → 5.场学 → 12. Ceph Volume Provider → 1. Configuration → Extension); a sketch of the relevant cinder.conf section follows below.
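      Since the full configuration lives in the referenced notes, this is only a minimal sketch of the cinder.conf RBD backend section (standard option names of the Ceph RBD driver; the UUID is the libvirt secret created earlier; restart the cinder-volume service after editing):
        # /etc/cinder/cinder.conf (sketch)
        [DEFAULT]
        enabled_backends = ceph

        [ceph]
        volume_driver = cinder.volume.drivers.rbd.RBDDriver
        volume_backend_name = ceph
        rbd_pool = volumes
        rbd_ceph_conf = /etc/ceph/ceph.conf
        rbd_user = cinder
        rbd_secret_uuid = 19e30f2b-0565-4194-82ea-f982982438a9
        rbd_flatten_volume_from_snapshot = false
        rbd_max_clone_depth = 5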
  9. DASHBOARD PLUGIN
    1. Confirm that all three nodes are running the same Ceph version; the dashboard installation steps differ between versions.
      root@controller:~# ceph --version
      ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
      root@compute1:~# ceph --version
      ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
      root@compute2:~# ceph --version
      ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
    2. Installation
      #Within a running Ceph cluster, the dashboard manager module is enabled with
      root@controller:~# ceph mgr module enable dashboard
      root@compute1:~# ceph mgr module enable dashboard
      
      # Set the dashboard bind address and port
      root@controller:~# ceph config-key put mgr/dashboard/server_addr 10.110.31.94 
      set mgr/dashboard/server_addr
      root@controller:~# ceph config-key put mgr/dashboard/server_port 7000
      set mgr/dashboard/server_port
      # Restart the mgr service on the controller (the mgr on the compute node does not need a restart)
      root@controller:~# systemctl restart ceph-mgr@controller 
      
      # Check the listening port (in netstat -tunlp, t means TCP and u means UDP)
      root@controller:~# netstat -tunlp|grep 7000
      tcp        0      0 10.110.31.94:7000       0.0.0.0:*               LISTEN      239170/ceph-mgr 
      # Query the dashboard URL
      root@controller:~# ceph mgr services
      {
          "dashboard": "http://10.110.31.94:7000/"
      }
      1. To restart the dashboard, run systemctl restart ceph-mgr@controller.
      2. If the restart fails, first run systemctl reset-failed ceph-mgr@controller and then systemctl start ceph-mgr@controller.
      3. After the restart the dashboard page should be reachable again.
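      A simple reachability check from the command line (an extra verification step, not in the original notes):
      root@controller:~# curl -sI http://10.110.31.94:7000/ | head -n1
      # Any HTTP response line here confirms the mgr dashboard is serving on the configured address and port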
    3. Access the dashboard UI

(screenshot: the Ceph dashboard opened in a browser)
