Ceph OSD Expansion and Shrinking

      In a production environment, disks fill up over time and nodes fail. Since OSDs are where the data actually lives, being able to expand and shrink the OSD layer is essential.

As the amount of data grows, we will eventually need to expand the OSDs. There are two ways to do this: horizontal expansion and vertical expansion (a quick way to check current usage is shown after the list).

  • Horizontal expansion (scale out): add more nodes, so the cluster is made up of more machines
  • Vertical expansion (scale up): add more disks to existing nodes to increase storage capacity
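
Before deciding which way to expand, it is worth checking current capacity and per-OSD utilization. Both commands below are standard, read-only Ceph CLI calls:

  # Cluster-wide and per-pool usage
  ceph df
  # Per-OSD utilization, weight and variance, laid out along the CRUSH tree
  ceph osd df tree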

1. Horizontal Expansion (Scale-Out)

Horizontal expansion essentially means installing another Ceph OSD node from scratch. I only run through the installation briefly here; the detailed steps are covered in the separate deployment article.

Add the new machine

Run the node initialization steps on it:

  # NTP SERVER (the NTP server syncs with Aliyun's public NTP servers)
  # First configure the NTP server; here it runs on ceph-01
  yum install -y ntp
  systemctl start ntpd
  systemctl enable ntpd
  timedatectl set-timezone Asia/Shanghai
  # Write the current UTC time to the hardware clock
  timedatectl set-local-rtc 0
  # Restart services that depend on the system time
  systemctl restart rsyslog
  systemctl restart crond
  # The NTP server now syncs with the upstream servers (a * in front of an address means it is in sync)
  [root@ceph-01 ~]# ntpq -pn
       remote           refid      st t when poll reach   delay   offset  jitter
  ==============================================================================
   120.25.115.20   10.137.53.7      2 u    8   64   17   40.203  -24.837   0.253
  *203.107.6.88    100.107.25.114   2 u    8   64   17   14.998  -22.611   0.186

  # NTP AGENT (the agents sync their time from the NTP server)
  # On each agent, point the server directive at our NTP server
  [root@ceph-02 ~]# vim /etc/ntp.conf
  server 192.168.31.20 iburst
  #server 0.centos.pool.ntp.org iburst
  #server 1.centos.pool.ntp.org iburst
  #server 2.centos.pool.ntp.org iburst
  #server 3.centos.pool.ntp.org iburst
  # Comment out the default servers and add our NTP server's address
  [root@ceph-02 ~]# systemctl restart ntpd
  [root@ceph-02 ~]# systemctl enable ntpd
  # Wait a few minutes; a * means the sync is complete
  [root@ceph-02 ~]# ntpq -pn
       remote           refid      st t when poll reach   delay   offset  jitter
  ==============================================================================
  *192.168.31.20   120.25.115.20    3 u   13   64    1    0.125  -19.095   0.095
  # Repeat the same steps on ceph-03

  # Add a periodic sync job on the NTP agent nodes
  $ crontab -e
  */5 * * * * /usr/sbin/ntpdate 192.168.31.20

  # Once NTP is set up, set the timezone and sync the hardware clock on all nodes
  timedatectl set-timezone Asia/Shanghai
  # Write the current UTC time to the hardware clock
  timedatectl set-local-rtc 0
  # Restart services that depend on the system time
  systemctl restart rsyslog
  systemctl restart crond

  # Verify the time
  [root@ceph-01 ~]# date
  Tue Sep 8 17:35:43 CST 2020
  [root@ceph-02 ~]# date
  Tue Sep 8 17:35:46 CST 2020
  [root@ceph-03 ~]# date
  Tue Sep 8 17:35:47 CST 2020

  # Add a hosts entry for the new node
  [root@ceph-01 ceph-deploy]# vim /etc/hosts
  192.168.31.23 ceph-04

  # Disable the firewall and SELinux
  systemctl stop firewalld
  systemctl disable firewalld
  iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
  iptables -P FORWARD ACCEPT
  setenforce 0
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

  # Configure the yum repositories (CentOS, EPEL and Ceph)
  curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
  wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
  wget -O /etc/yum.repos.d/ceph.repo http://down.i4t.com/ceph/ceph.repo
  yum clean all
  yum makecache

  # Install ceph
  yum install -y ceph vim wget

  # On the mon/deploy node, push the config and create the OSD on the new node
  [root@ceph-01 ceph-deploy]# cd /root/ceph-deploy/
  [root@ceph-01 ceph-deploy]# ceph-deploy --overwrite-conf config push ceph-04
  [root@ceph-01 ceph-deploy]# ceph-deploy osd create --data /dev/sdb ceph-04

  # The new node is now part of the cluster and the cluster health is OK
  [root@ceph-01 ceph-deploy]# ceph osd tree
  ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
  -1       0.22449 root default
  -3       0.07809     host ceph-01
   0   hdd 0.04880         osd.0        up  1.00000 1.00000
   3   hdd 0.02930         osd.3        up  1.00000 1.00000
  -5       0.04880     host ceph-02
   1   hdd 0.04880         osd.1        up  1.00000 1.00000
  -7       0.04880     host ceph-03
   2   hdd 0.04880         osd.2        up  1.00000 1.00000
  -9       0.04880     host ceph-04
   4   hdd 0.04880         osd.4        up  1.00000 1.00000
  [root@ceph-01 ceph-deploy]#
  [root@ceph-01 ceph-deploy]# ceph -s
    cluster:
      id:     c8ae7537-8693-40df-8943-733f82049642
      health: HEALTH_OK

    services:
      mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03 (age 11m)
      mgr: ceph-03(active, since 8d), standbys: ceph-02, ceph-01
      mds: cephfs-abcdocker:1 {0=ceph-02=up:active} 2 up:standby
      osd: 5 osds: 5 up (since 7m), 5 in (since 7m)
      rgw: 1 daemon active (ceph-01)

    task status:

    data:
      pools:   9 pools, 384 pgs
      objects: 320 objects, 141 MiB
      usage:   5.5 GiB used, 224 GiB / 230 GiB avail
      pgs:     384 active+clean

2. Vertical Expansion (Scale-Up)

Vertical expansion simply means adding a new disk (here I add a single 30 GB disk to the ceph-01 server only).

If the new disk already contains data or partitions, it needs to be wiped first; this can be done with the commands below.

 
  [root@ceph-01 ~]# fdisk -l /dev/sdc    # check the new disk
  Disk /dev/sdc: 32.2 GB, 32212254720 bytes, 62914560 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  [root@ceph-01 ~]# cd ceph-deploy    # must be run from the directory containing ceph.conf, otherwise the command errors out
  [root@ceph-01 ceph-deploy]# ceph-deploy disk zap ceph-01 /dev/sdc    # zap the disk: ceph-01 is the node, /dev/sdc the disk to initialize
  [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
  [ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy disk zap ceph-01 /dev/sdc
  [ceph_deploy.cli][INFO ] ceph-deploy options:
  [ceph_deploy.cli][INFO ] username : None
  [ceph_deploy.cli][INFO ] verbose : False
  [ceph_deploy.cli][INFO ] debug : False
  [ceph_deploy.cli][INFO ] overwrite_conf : False
  [ceph_deploy.cli][INFO ] subcommand : zap
  [ceph_deploy.cli][INFO ] quiet : False
  [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x283f290>
  [ceph_deploy.cli][INFO ] cluster : ceph
  [ceph_deploy.cli][INFO ] host : ceph-01
  [ceph_deploy.cli][INFO ] func : <function disk at 0x282c7d0>
  [ceph_deploy.cli][INFO ] ceph_conf : None
  [ceph_deploy.cli][INFO ] default_release : False
  [ceph_deploy.cli][INFO ] disk : ['/dev/sdc']
  [ceph_deploy.osd][DEBUG ] zapping /dev/sdc on ceph-01
  [ceph-01][DEBUG ] connected to host: ceph-01
  [ceph-01][DEBUG ] detect platform information from remote host
  [ceph-01][DEBUG ] detect machine type
  [ceph-01][DEBUG ] find the location of an executable
  [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.4.1708 Core
  [ceph-01][DEBUG ] zeroing last few blocks of device
  [ceph-01][DEBUG ] find the location of an executable
  [ceph-01][INFO ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdc
  [ceph-01][WARNIN] --> Zapping: /dev/sdc
  [ceph-01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
  [ceph-01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
  [ceph-01][WARNIN] stderr: 10+0 records in
  [ceph-01][WARNIN] 10+0 records out
  [ceph-01][WARNIN] 10485760 bytes (10 MB) copied
  [ceph-01][WARNIN] stderr: , 0.378842 s, 27.7 MB/s
  [ceph-01][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdc>

All the command above really does is run dd to zero out the first few megabytes of the device, wiping its partition table.
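
If you are not using ceph-deploy, a roughly equivalent manual wipe looks like the sketch below. The dd line mirrors what ceph-volume ran in the log above; wipefs is an extra option I am adding here, not something ceph-deploy executed.

  # Zero the first 10 MB of the disk, destroying the partition table (same effect as the zap above)
  dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
  # Alternatively, remove all filesystem/RAID/partition-table signatures
  wipefs -a /dev/sdc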

Next, run the expansion command. In the command below, ceph-01 is the node being expanded and --data points at the new disk on that node.
  [root@ceph-01 ceph-deploy]# ceph-deploy osd create ceph-01 --data /dev/sdc
  [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
  [ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy osd create ceph-01 --data /dev/sdc
  [ceph_deploy.cli][INFO ] ceph-deploy options:
  [ceph_deploy.cli][INFO ] verbose : False
  [ceph_deploy.cli][INFO ] bluestore : None
  [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x20e33b0>
  [ceph_deploy.cli][INFO ] cluster : ceph
  [ceph_deploy.cli][INFO ] fs_type : xfs
  [ceph_deploy.cli][INFO ] block_wal : None
  [ceph_deploy.cli][INFO ] default_release : False
  [ceph_deploy.cli][INFO ] username : None
  [ceph_deploy.cli][INFO ] journal : None
  [ceph_deploy.cli][INFO ] subcommand : create
  [ceph_deploy.cli][INFO ] host : ceph-01
  [ceph_deploy.cli][INFO ] filestore : None
  [ceph_deploy.cli][INFO ] func : <function osd at 0x20cf758>
  [ceph_deploy.cli][INFO ] ceph_conf : None
  [ceph_deploy.cli][INFO ] zap_disk : False
  [ceph_deploy.cli][INFO ] data : /dev/sdc
  [ceph_deploy.cli][INFO ] block_db : None
  [ceph_deploy.cli][INFO ] dmcrypt : False
  [ceph_deploy.cli][INFO ] overwrite_conf : False
  [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
  [ceph_deploy.cli][INFO ] quiet : False
  [ceph_deploy.cli][INFO ] debug : False
  [ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdc
  [ceph-01][DEBUG ] connected to host: ceph-01
  [ceph-01][DEBUG ] detect platform information from remote host
  [ceph-01][DEBUG ] detect machine type
  [ceph-01][DEBUG ] find the location of an executable
  [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.4.1708 Core
  [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-01
  [ceph-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  [ceph-01][DEBUG ] find the location of an executable
  [ceph-01][INFO ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdc
  [ceph-01][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
  [ceph-01][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 40cc4038-1d7b-4ec8-a78b-6dc939b9dd01
  [ceph-01][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-6adf3ba2-9b6f-4f62-b9c9-0e6414bd1c96 /dev/sdc
  [ceph-01][WARNIN] stdout: Physical volume "/dev/sdc" successfully created.
  [ceph-01][WARNIN] stdout: Volume group "ceph-6adf3ba2-9b6f-4f62-b9c9-0e6414bd1c96" successfully created
  [ceph-01][WARNIN] Running command: /sbin/lvcreate --yes -l 7679 -n osd-block-40cc4038-1d7b-4ec8-a78b-6dc939b9dd01 ceph-6adf3ba2-9b6f-4f62-b9c9-0e6414bd1c96
  [ceph-01][WARNIN] stdout: Logical volume "osd-block-40cc4038-1d7b-4ec8-a78b-6dc939b9dd01" created.
  [ceph-01][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
  [ceph-01][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
  [ceph-01][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-6adf3ba2-9b6f-4f62-b9c9-0e6414bd1c96/osd-block-40cc4038-1d7b-4ec8-a78b-6dc939b9dd01
  [ceph-01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
  [ceph-01][WARNIN] Running command: /bin/ln -s /dev/ceph-6adf3ba2-9b6f-4f62-b9c9-0e6414bd1c96/osd-block-40cc4038-1d7b-4ec8-a78b-6dc939b9dd01 /var/lib/ceph/osd/ceph-3/block
  [ceph-01][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
  [ceph-01][WARNIN] stderr: 2022-02-15 15:32:34.732 7f433f903700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
  [ceph-01][WARNIN] 2022-02-15 15:32:34.732 7f433f903700 -1 AuthRegistry(0x7f4338066aa8) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
  [ceph-01][WARNIN] stderr: got monmap epoch 3
  [ceph-01][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQARVwtiym1ZMRAAVHbevWt3Mr3VfpnOkCQnEg==
  [ceph-01][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-3/keyring
  [ceph-01][WARNIN] added entity osd.3 auth(key=AQARVwtiym1ZMRAAVHbevWt3Mr3VfpnOkCQnEg==)
  [ceph-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
  [ceph-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
  [ceph-01][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 40cc4038-1d7b-4ec8-a78b-6dc939b9dd01 --setuser ceph --setgroup ceph
  [ceph-01][WARNIN] stderr: 2022-02-15 15:32:35.229 7fa862665a80 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
  [ceph-01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdc
  [ceph-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
  [ceph-01][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6adf3ba2-9b6f-4f62-b9c9-0e6414bd1c96/osd-block-40cc4038-1d7b-4ec8-a78b-6dc939b9dd01 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
  [ceph-01][WARNIN] Running command: /bin/ln -snf /dev/ceph-6adf3ba2-9b6f-4f62-b9c9-0e6414bd1c96/osd-block-40cc4038-1d7b-4ec8-a78b-6dc939b9dd01 /var/lib/ceph/osd/ceph-3/block
  [ceph-01][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
  [ceph-01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
  [ceph-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
  [ceph-01][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-3-40cc4038-1d7b-4ec8-a78b-6dc939b9dd01
  [ceph-01][WARNIN] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-40cc4038-1d7b-4ec8-a78b-6dc939b9dd01.service to /usr/lib/systemd/system/ceph-volume@.service.
  [ceph-01][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@3
  [ceph-01][WARNIN] stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service to /usr/lib/systemd/system/ceph-osd@.service.
  [ceph-01][WARNIN] Running command: /bin/systemctl start ceph-osd@3
  [ceph-01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 3
  [ceph-01][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
  [ceph-01][INFO ] checking OSD status...
  [ceph-01][DEBUG ] find the location of an executable
  [ceph-01][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
  [ceph_deploy.osd][DEBUG ] Host ceph-01 is now ready for osd use.

# Once the expansion is done, check the cluster status; the OSD layout has changed

  [root@ceph-01 ceph-deploy]# ceph -s
    cluster:
      id:     c8ae7537-8693-40df-8943-733f82049642
      health: HEALTH_OK

    services:
      mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03 (age 19m)
      mgr: ceph-03(active, since 7d), standbys: ceph-02, ceph-01
      mds: cephfs-abcdocker:1 {0=ceph-02=up:active} 2 up:standby
      osd: 4 osds: 4 up (since 2m), 4 in (since 2m)    # the OSD count is now 4, all up
      rgw: 1 daemon active (ceph-01)

    task status:

    data:
      pools:   9 pools, 384 pgs
      objects: 320 objects, 141 MiB
      usage:   4.5 GiB used, 176 GiB / 180 GiB avail   # raw capacity grew from the original 150 GiB to 180 GiB
      pgs:     384 active+clean

# ceph osd tree now shows three hosts with four OSDs in total, two of which are on ceph-01

[root@ceph-01 ceph-deploy]# ceph osd tree
  ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
  -1       0.17569 root default
  -3       0.07809     host ceph-01
   0   hdd 0.04880         osd.0        up  1.00000 1.00000
   3   hdd 0.02930         osd.3        up  1.00000 1.00000
  -5       0.04880     host ceph-02
   1   hdd 0.04880         osd.1        up  1.00000 1.00000
  -7       0.04880     host ceph-03
   2   hdd 0.04880         osd.2        up  1.00000 1.00000

3. Data Rebalancing

How rebalancing works
When a new OSD is added to a Ceph storage cluster, the cluster map is updated to include it. Going back to how PG IDs are calculated, this changes the cluster map, and since the map is an input to the placement calculation, it changes where objects are placed. During rebalancing some, but not all, PGs are migrated from the existing OSDs (say OSD 1 and OSD 2) to the new OSD (OSD 3); on a large cluster the effect is proportionally much smaller. Even while rebalancing is in progress, many placement groups keep their original mapping and each OSD gains some free capacity, so there is no load spike on the new OSD once rebalancing completes.

PGs hold objects; recomputing placement object by object would be expensive, so Ceph migrates whole PGs to keep the cluster balanced.
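
To actually watch PGs being remapped while capacity is added, a couple of read-only commands are enough (a small sketch; run them from any admin node):

  # Stream cluster events, including PGs moving through backfill/recovery states
  ceph -w
  # One-line summary of PG states; backfilling/recovering counts shrink as rebalancing completes
  ceph pg stat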

 

# Create a 10 GB file with dd

[root@ceph-01 abcdocker]# dd if=/dev/zero of=abcdocker.img bs=1M count=10240

We copy the file into the CephFS file store; once it lands there, the PG sync activity shows up in the cluster health check.

OSD rebalancing does not start moving data immediately; there is roughly a ten-minute wait. During the abnormal period you can see how many objects are affected before the sync actually begins.
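
For completeness, the CephFS mount itself is not shown above. A minimal kernel-client mount sketch, assuming the mon runs on ceph-01 (192.168.31.20), that /abcdocker is the mount point used in the prompt above, and that the admin key has been saved to /etc/ceph/admin.secret:

  # Mount CephFS with the kernel client (mon address, mount point and secret file are assumptions)
  mkdir -p /abcdocker
  mount -t ceph 192.168.31.20:6789:/ /abcdocker -o name=admin,secretfile=/etc/ceph/admin.secret
  # The dd above then writes abcdocker.img straight into CephFS, generating the object traffic being watched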

[root@ceph-02 ~]# ceph -s
    cluster:
      id:     c8ae7537-8693-40df-8943-733f82049642
      health: HEALTH_WARN
              Degraded data redundancy: 8/8640 objects degraded (0.093%), 2 pgs degraded

    services:
      mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03 (age 71s)
      mgr: ceph-03(active, since 8d), standbys: ceph-02, ceph-01
      mds: cephfs-abcdocker:1 {0=ceph-03=up:active} 2 up:standby
      osd: 5 osds: 5 up (since 15s), 5 in (since 3h)
      rgw: 1 daemon active (ceph-01)

    task status:

    data:
      pools:   9 pools, 384 pgs
      objects: 2.88k objects, 10 GiB
      usage:   36 GiB used, 194 GiB / 230 GiB avail
      pgs:     8/8640 objects degraded (0.093%)
               382 active+clean
               1   active+recovery_wait+degraded
               1   active+recovering+degraded

    io:
      recovery: 0 B/s, 1 objects/s
      client:   32 KiB/s rd, 0 B/s wr, 31 op/s rd, 21 op/s wr

Once the PG data sync finishes, the cluster health returns to HEALTH_OK.

A word of warning: while OSDs are rebalancing, normal client writes to the cluster are affected. When upgrading or replacing OSD nodes, do them one node at a time, or temporarily disable rebalancing.

Rebalancing traffic and client traffic can be carried on two separate NICs, and in production it is recommended to configure Ceph with two networks: cluster_network carries OSD replication, recovery and rebalance traffic, while public_network carries client traffic. Splitting them reduces the impact of rebalancing.
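
A sketch of the relevant ceph.conf options: the public subnet matches the node addresses used in this article, while the cluster subnet is a placeholder. After changing it, push the config to all nodes and restart the OSDs one at a time.

  [global]
  # Client-facing network (matches the 192.168.31.x addresses used in this article)
  public_network  = 192.168.31.0/24
  # OSD replication / recovery / rebalance traffic (placeholder subnet)
  cluster_network = 172.16.0.0/24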

If rebalancing is already in progress and is affecting normal reads and writes on the cluster, it can be paused temporarily as follows.

[root@ceph-01 ceph-deploy]# ceph osd set norebalance
  norebalance is set
  [root@ceph-01 ceph-deploy]# ceph osd set nobackfill
  nobackfill is set
  # With norebalance and nobackfill set, Ceph pauses the rebalance and client traffic returns to normal
  [root@ceph-01 ceph-deploy]# ceph -s
    cluster:
      id:     c8ae7537-8693-40df-8943-733f82049642
      health: HEALTH_WARN
              nobackfill,norebalance flag(s) set

    services:
      mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03 (age 38m)
      mgr: ceph-03(active, since 8d), standbys: ceph-02, ceph-01
      mds: cephfs-abcdocker:1 {0=ceph-03=up:active} 2 up:standby
      osd: 5 osds: 5 up (since 37m), 5 in (since 3h)
           flags nobackfill,norebalance
      rgw: 1 daemon active (ceph-01)

    task status:

    data:
      pools:   9 pools, 384 pgs
      objects: 2.88k objects, 10 GiB
      usage:   36 GiB used, 194 GiB / 230 GiB avail
      pgs:     384 active+clean

# Re-enable rebalancing as follows

[root@ceph-01 ceph-deploy]# ceph osd unset nobackfill
  nobackfill is unset
  [root@ceph-01 ceph-deploy]# ceph osd unset norebalance
  norebalance is unset
  [root@ceph-01 ceph-deploy]# ceph -s
    cluster:
      id:     c8ae7537-8693-40df-8943-733f82049642
      health: HEALTH_OK

    services:
      mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03 (age 40m)
      mgr: ceph-03(active, since 8d), standbys: ceph-02, ceph-01
      mds: cephfs-abcdocker:1 {0=ceph-03=up:active} 2 up:standby
      osd: 5 osds: 5 up (since 39m), 5 in (since 3h)
      rgw: 1 daemon active (ceph-01)

    task status:

    data:
      pools:   9 pools, 384 pgs
      objects: 2.88k objects, 10 GiB
      usage:   36 GiB used, 194 GiB / 230 GiB avail
      pgs:     384 active+clean

4. OSD Removal (Scale-In)

Sooner or later an OSD server is hit by some external factor: a disk needs replacing, or a node goes down, and the OSD has to be removed from the cluster by hand.

Current OSD state:

[root@ceph-01 ~]# ceph osd tree
  ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
  -1       0.22449 root default
  -3       0.07809     host ceph-01
   0   hdd 0.04880         osd.0        up  1.00000 1.00000
   3   hdd 0.02930         osd.3        up  1.00000 1.00000
  -5       0.04880     host ceph-02
   1   hdd 0.04880         osd.1        up  1.00000 1.00000
  -7       0.04880     host ceph-03
   2   hdd 0.04880         osd.2        up  1.00000 1.00000
  -9       0.04880     host ceph-04
   4   hdd 0.04880         osd.4        up  1.00000 1.00000

We currently have four Ceph nodes, with two OSDs on ceph-01. Suppose ceph-04 suffers a software or hardware failure and needs to be removed from the cluster.

ceph osd perf shows per-OSD latency; if a particular disk shows high latency in production, it can be picked out and removed by hand.
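
For reference, the latency check itself is a single read-only command run from any admin node; it prints per-OSD commit and apply latency in milliseconds:

  # Per-OSD latency; abnormally high commit/apply latency usually points at a failing disk
  ceph osd perf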

After a failure, if the failed OSD comes back online within a certain time, the affected PGs go through the following flow:

  1. The failed OSD comes online, notifies the Monitor and registers; before coming online it reads the PGLog persisted on its device.
  2. The Monitor recognizes the OSD's old id, so the previous PG mapping is kept, and the PGs that became Degraded when this OSD went down are notified that the OSD has rejoined.
  3. Two cases follow; in both, the PG marks itself Peering and temporarily stops serving requests:
  • Case 1: the failed OSD holds the Primary for the PG. As the authority for this data, it sends a PG-metadata query to every node holding a Replicate of the PG. A Replicate node that was promoted to Primary while the failed OSD was down holds the authoritative PGLog and replies to the query. The returning Primary compares the replica's metadata and PG version, sees that it is behind, merges this into the PGLog to build the authoritative PGLog, and builds a missing list marking the stale data.
  • Case 2: the failed OSD holds a Replicate of the PG. After coming online it receives the query from the Primary PG and sends back its stale metadata and PGLog. The Primary compares them, sees that this replica is behind, and builds the missing list from the PGLog.
  4. The PG starts accepting I/O again, but the failed node still holds stale data. The Primary PG on the failed node issues Pull requests to the Replicate nodes to fetch up-to-date data, while its Replicate PGs receive Push requests from Primary PGs on other OSDs to recover theirs.
  5. When recovery completes, the PG marks itself Clean.

Step 3 is the only phase in which the PG does not serve requests, and it usually finishes within 1 second to keep the unavailable window short. One wrinkle remains: during recovery the failed OSD maintains the missing list, and if an I/O happens to touch data on that list, the PG "jumps the queue" for it, pulling that data from the Replicate PG ahead of schedule. This adds roughly tens of milliseconds of latency.
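
To watch this flow on a live cluster, a few read-only commands are enough to inspect peering and recovery state; the PG id 1.2f below is only a placeholder:

  # Explain which PGs are degraded/undersized and why
  ceph health detail
  # List PGs stuck in an unclean state
  ceph pg dump_stuck unclean
  # Dump full peering/recovery information for a single PG (1.2f is a placeholder id)
  ceph pg 1.2f query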


First, simulate a failure on ceph-04. There are many kinds of failures; here I simply power off the ceph-04 node.

Step 1: shut down ceph-04

Step 2: check the cluster state

[root@ceph-01 ~]# ceph osd tree
  ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
  -1       0.22449 root default
  -3       0.07809     host ceph-01
   0   hdd 0.04880         osd.0        up  1.00000 1.00000
   3   hdd 0.02930         osd.3        up  1.00000 1.00000
  -5       0.04880     host ceph-02
   1   hdd 0.04880         osd.1        up  1.00000 1.00000
  -7       0.04880     host ceph-03
   2   hdd 0.04880         osd.2        up  1.00000 1.00000
  -9       0.04880     host ceph-04
   4   hdd 0.04880         osd.4      down  1.00000 1.00000
  # The OSD on ceph-04 is now in the down state
  [root@ceph-01 ~]# ceph -s
    cluster:
      id:     c8ae7537-8693-40df-8943-733f82049642
      health: HEALTH_WARN
              1 osds down
              1 host (1 osds) down
              Degraded data redundancy: 2154/8640 objects degraded (24.931%), 133 pgs degraded

    services:
      mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03 (age 16h)
      mgr: ceph-03(active, since 8d), standbys: ceph-02, ceph-01
      mds: cephfs-abcdocker:1 {0=ceph-03=up:active} 2 up:standby
      osd: 5 osds: 4 up (since 23s), 5 in (since 20h)
      rgw: 1 daemon active (ceph-01)

    task status:

    data:
      pools:   9 pools, 384 pgs
      objects: 2.88k objects, 10 GiB
      usage:   36 GiB used, 194 GiB / 230 GiB avail
      pgs:     2154/8640 objects degraded (24.931%)
               166 active+undersized
               133 active+undersized+degraded
               85  active+clean
  # About 2154 objects are affected

ceph -s shows that the failed OSD is on ceph-04 and is osd.4. Next run ceph osd out, which drops the OSD's reweight to 0:

 
  [root@ceph-01 ~]# ceph osd out osd.4
  # With its weight gone, Ceph no longer places data on this OSD

Step 3: remove the OSD from the CRUSH map. By default, ceph osd out does not remove the entry from CRUSH.

 
  [root@ceph-01 ~]# ceph osd crush dump|head
  {
      "devices": [
          {
              "id": 0,
              "name": "osd.0",
              "class": "hdd"
          },
          {
              "id": 1,
              "name": "osd.1",

Remove this OSD's record from the CRUSH map:

 
  [root@ceph-01 ~]# ceph osd crush rm osd.4
  removed item id 4 name 'osd.4' from crush map

The OSD now holds no data placement at all, but the cluster still keeps a record of it.

Step 4: remove the failed OSD from the cluster

 
  # It no longer serves any data, but the OSD entry still exists
  [root@ceph-01 ~]# ceph osd rm osd.4
  removed osd.4
  [root@ceph-01 ~]# ceph osd tree
  ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
  -1       0.17569 root default
  -3       0.07809     host ceph-01
   0   hdd 0.04880         osd.0        up  1.00000 1.00000
   3   hdd 0.02930         osd.3        up  1.00000 1.00000
  -5       0.04880     host ceph-02
   1   hdd 0.04880         osd.1        up  1.00000 1.00000
  -7       0.04880     host ceph-03
   2   hdd 0.04880         osd.2        up  1.00000 1.00000
  -9             0     host ceph-04

ceph -s now reports 4 OSDs:

 
  [root@ceph-01 ~]# ceph -s|grep osd
  osd: 4 osds: 4 up (since 10m), 4 in (since 9m); 27 remapped pgs

Step 5: remove the OSD's key from auth

 
  # ceph auth list still shows an entry for osd.4, because its key has not been deleted yet
  # Remove the key with the commands below
  # List the auth entries
  [root@ceph-01 ~]# ceph auth list|grep osd
  installed auth entries:
  caps: [osd] allow rwx
  caps: [osd] allow rwx
  caps: [osd] allow rwx
  osd.0
  caps: [mgr] allow profile osd
  caps: [mon] allow profile osd
  caps: [osd] allow *
  osd.1
  caps: [mgr] allow profile osd
  caps: [mon] allow profile osd
  caps: [osd] allow *
  osd.2
  caps: [mgr] allow profile osd
  caps: [mon] allow profile osd
  caps: [osd] allow *
  osd.3
  caps: [mgr] allow profile osd
  caps: [mon] allow profile osd
  caps: [osd] allow *
  osd.4
  caps: [mgr] allow profile osd
  caps: [mon] allow profile osd
  caps: [osd] allow *
  caps: [osd] allow *
  client.bootstrap-osd
  caps: [mon] allow profile bootstrap-osd
  caps: [osd] allow rwx
  caps: [osd] allow *
  caps: [osd] allow *
  caps: [osd] allow *
  # Remove osd.4
  [root@ceph-01 ~]# ceph auth rm osd.4
  # Be careful to remove only osd.4, not the other osd entries
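
One optional follow-up not covered above: removing osd.4 from the cluster does not touch the disk on the failed host. If ceph-04 ever comes back and its disk is to be reused, a host-side cleanup sketch (assuming /dev/sdb was the disk behind osd.4) would be:

  # Run on ceph-04: make sure the old OSD service is gone, then destroy the ceph-volume LVM volume (destructive!)
  systemctl stop ceph-osd@4
  systemctl disable ceph-osd@4
  ceph-volume lvm zap --destroy /dev/sdb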
