Triggering Recovery on a Three-Replica Ceph Cluster


Environment: three nodes, Ceph configured with three replicas.

Check the minimum replica count (min_size) of the Ceph pools:

[root@node-1 ~]# ceph osd dump | grep -E " size | min_size "
pool 1 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 3 flags hashpspool stripe_width 0 application rgw
pool 2 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 228 lfor 0/59 flags hashpspool stripe_width 0 application cinder-volume
pool 3 'compute' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 372 lfor 0/57 flags hashpspool stripe_width 0
pool 4 'rbd' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 236 flags hashpspool stripe_width 0
pool 5 'ssdpool' replicated size 3 min_size 1 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 63 lfor 0/62 flags hashpspool stripe_width 0
pool 6 'metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 35 flags hashpspool stripe_width 0
pool 7 'backups' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 229 flags hashpspool stripe_width 0 application cinder-backup
pool 8 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 42 flags hashpspool stripe_width 0 application rgw
pool 9 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 44 flags hashpspool stripe_width 0 application rgw
pool 10 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 46 flags hashpspool stripe_width 0 application rgw
pool 11 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 53 flags hashpspool stripe_width 0 application rgw
pool 12 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 55 flags hashpspool stripe_width 0 application rgw
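
If a pool's replication parameters need to be changed, size and min_size can be queried and set per pool. A minimal, illustrative example using the rbd pool from the dump above (the values shown are not part of the original test setup):

ceph osd pool get rbd min_size
ceph osd pool set rbd min_size 2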

1. List the images in the rbd pool

rbd ls

2. Create a 1 GiB image in the rbd pool

rbd create -s 1G rbd/test
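
A quick sanity check (not part of the original steps) that the image exists with the expected size:

rbd info rbd/test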

3. Disable an OSD on node-6

kubectl edit nodes node-6
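
The step above takes the OSD on node-6 down by hand-editing the Kubernetes node object (the exact change is not shown). On a bare-metal, systemd-managed deployment, a rough equivalent would be to stop the OSD daemon directly; the OSD id below is hypothetical:

ceph osd set noout          # optional: keep the cluster from marking the stopped OSD out
systemctl stop ceph-osd@5   # run on node-6; replace 5 with the actual OSD id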

4. Check the usage help for rbd bench

rbd help bench

5. Check the size of rbd/test

rbd du rbd/test

6. Fill rbd/test with write I/O

rbd bench --io-size 4096 --io-type write rbd/test
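
To bound the run explicitly and fill the 1 GiB image with sequential writes, the total and pattern can be given in full. A sketch; option availability may vary slightly by Ceph release:

rbd bench --io-type write --io-size 4096 --io-threads 16 --io-total 1G --io-pattern seq rbd/test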

7. Re-enable the OSD on node-6

kubectl edit nodes node-6
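
After the OSD comes back up, it is worth confirming that it rejoined and that the cluster is recovering or healthy again:

ceph osd tree
ceph -s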

Environment: four nodes, Ceph configured with three replicas.

1. Check the OSD tree

[root@node-1 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME               STATUS REWEIGHT PRI-AFF 
 -2       0.49213 root ssdpool                                    
 -3       0.12303     host node-1_ssdpool                         
  1   hdd 0.12303         osd.1             down        0 1.00000 
 -5       0.12303     host node-2_ssdpool                         
  3   hdd 0.12303         osd.3               up  1.00000 1.00000 
-17       0.12303     host node-3_ssdpool                         
  5   hdd 0.12303         osd.5               up  1.00000 1.00000 
 -4       0.12303     host node-4_ssdpool                         
  7   hdd 0.12303         osd.7               up  1.00000 1.00000 
 -1       0.85931 root default                                    
-14       0.21483     host node-1                                 
  0   ssd 0.21483         osd.0             down        0 1.00000 
-28       0.21483     host node-2                                 
  2   ssd 0.21483         osd.2               up  1.00000 1.00000 
 -6       0.21483     host node-3                                 
  4   ssd 0.21483         osd.4               up  1.00000 1.00000 
-13       0.21483     host node-4                                 
  6   ssd 0.21483         osd.6               up  1.00000 1.00000 

2. Mark one OSD out

[root@node-1 ~]# ceph osd out 3
marked out osd.3. 
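
Marking the OSD out sets its reweight to 0, so its placement groups are remapped and data starts moving to the remaining OSDs. The progress can be followed live with, for example:

ceph -w        # stream cluster events, including recovery/backfill progress
ceph pg stat   # one-line summary of PG states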

3. Ceph enters the recovery state

[root@node-1 ~]# ceph -s
  cluster:
    id:     3927e644-f606-4cf6-b166-1beccc4c3047
    health: HEALTH_WARN
            1465/33586 objects misplaced (4.362%)
            Reduced data availability: 8 pgs inactive, 29 pgs peering
 
  services:
    mon:        3 daemons, quorum node-1,node-3,node-4
    mgr:        node-4(active), standbys: node-3, node-1
    osd:        8 osds: 6 up, 5 in; 84 remapped pgs
                flags nodeep-scrub
    rbd-mirror: 1 daemon active
    rgw:        3 daemons active
 
  data:
    pools:   12 pools, 360 pgs
    objects: 16.79k objects, 46.8GiB
    usage:   99.8GiB used, 818GiB / 918GiB avail
    pgs:     12.778% pgs not active
             1465/33586 objects misplaced (4.362%)
             276 active+clean
             46  remapped+peering
             37  active+remapped+backfill_wait
             1   active+remapped+backfilling
 
  io:
    client:   550KiB/s wr, 0op/s rd, 46op/s wr
    recovery: 17.0MiB/s, 5objects/s
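
If the recovery traffic competes too heavily with client I/O, the backfill and recovery concurrency can be reduced. A hedged example for a Luminous-era cluster (option names and defaults may differ in other releases):

ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'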

4. Mark the OSD back in

[root@node-1 ~]# ceph osd in 3
marked in osd.3. 
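
Once osd.3 is marked in again, the remapped placement groups recover back onto it. The cluster state can be verified with:

ceph health
ceph osd tree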

 
