Ceph disaster recovery validation

This article records the setup of a Ceph cluster in detail, including the creation and verification of the object storage, block device, and filesystem interfaces. By simulating OSD node failure and recovery, it tests the cluster's stability and data-recovery ability under different degrees of replica loss. It also covers the loss of monitor nodes and verification of cluster recovery after an IP change. The results show that Ceph can effectively protect data and restore normal service under a variety of failure scenarios.

Validation playbook

  1. Ceph environment setup: build a version-14 Ceph cluster that provides three data interfaces: 1. object storage 2. block device 3. filesystem
  2. Data import: import the existing Ceph (object storage) data into the new cluster
  3. OSD node disaster recovery: for each of the three data interfaces, remove OSD nodes, with a three-replica configuration:
    Case 1: based on where one file's replicas are placed, remove 1 of its OSDs
    Conclusion: with one node removed, the file can still be read and written; once the three replicas are restored, reads and writes continue to work
    Case 2: remove 2 OSD nodes
    Conclusion: with two nodes removed, the file can no longer be written, only read; once the three replicas are restored, reads and writes work normally again
  4. After changing IP addresses, verify that the Ceph cluster recovers
    Conclusion: after the IP change, the three data interfaces from step 3 can still read the files normally, and the md5 sums match (a quick check is sketched just after this list)
  5. Monitor node loss recovery: with 5 monitor nodes, verify how the cluster recovers after losing 1 node and after losing 3 nodes
    Case 1: 1 to 2 monitor nodes lost
    Conclusion: with two nodes removed, Ceph still serves all three data interfaces normally
    Case 2: 3 monitor nodes lost
    Conclusion: Ceph can no longer serve the data interfaces; after one node is restored, Ceph serves all three data interfaces normally again
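A quick way to check the md5 consistency called out in step 4 (a minimal sketch; it assumes the originally uploaded file is still available locally as bcsj.txt):

rados -p test_rbd get bcsj.txt bcsj.txt.check    #download the object again after the change
md5sum bcsj.txt bcsj.txt.check                   #the two sums should be identical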

Cluster setup

See: the Ceph cluster setup walkthrough

Creating the data interfaces

Creating a block device

See: the "mounting a block device" part of the Ceph cluster setup walkthrough
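For reference, a minimal sketch of the block device steps (the pool and image names match those used later in this document; the size and mount point are assumptions):

ceph osd pool create test_rbd 32
ceph osd pool application enable test_rbd rbd
rbd create test_rbd/test_rbd_image_1 --size 10G     #size chosen for illustration
rbd map test_rbd/test_rbd_image_1                   #prints a device name such as /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/test_rbd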

Creating the CephFS filesystem
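A minimal sketch (the full transcript of what was actually run appears in the CephFS section under Verification below):

ceph fs volume create cephfs     #creates the filesystem together with its metadata and data pools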

Verification

Block device

rados command reference

Tracing where RBD data is stored on disk

[root@ceph101 test_rbd]# rados -p test_rbd ls
rbd_data.2836b2f53ea86.000000000000042d
rbd_data.2836b2f53ea86.0000000000000436
rbd_data.2836b2f53ea86.0000000000000421
rbd_data.2836b2f53ea86.0000000000000800
rbd_object_map.2836b2f53ea86
rbd_data.2836b2f53ea86.0000000000000429
rbd_data.2836b2f53ea86.0000000000000439
rbd_data.2836b2f53ea86.0000000000000060
rbd_data.2836b2f53ea86.00000000000009e1
rbd_data.2836b2f53ea86.000000000000042b
rbd_data.2836b2f53ea86.000000000000043d
rbd_data.2836b2f53ea86.0000000000000427
rbd_data.2836b2f53ea86.000000000000042c
rbd_directory
rbd_header.2836b2f53ea86
rbd_data.2836b2f53ea86.0000000000000422
rbd_data.2836b2f53ea86.0000000000000423
rbd_data.2836b2f53ea86.00000000000000a0
rbd_data.2836b2f53ea86.0000000000000430
rbd_data.2836b2f53ea86.000000000000042f
rbd_info
rbd_data.2836b2f53ea86.0000000000000020
rbd_data.2836b2f53ea86.000000000000043c
rbd_data.2836b2f53ea86.0000000000000420
rbd_data.2836b2f53ea86.0000000000000320
rbd_data.2836b2f53ea86.00000000000009ff
rbd_data.2836b2f53ea86.0000000000000437
rbd_data.2836b2f53ea86.0000000000000360
rbd_data.2836b2f53ea86.000000000000042a
rbd_data.2836b2f53ea86.0000000000000200
rbd_data.2836b2f53ea86.0000000000000433
rbd_data.2836b2f53ea86.0000000000000600
rbd_data.2836b2f53ea86.000000000000043a
rbd_data.2836b2f53ea86.0000000000000620
rbd_data.2836b2f53ea86.000000000000043b
rbd_data.2836b2f53ea86.0000000000000434
rbd_data.2836b2f53ea86.000000000000043e
rbd_data.2836b2f53ea86.00000000000000e0
rbd_data.2836b2f53ea86.0000000000000438
rbd_data.2836b2f53ea86.0000000000000425
rbd_data.2836b2f53ea86.0000000000000120
rbd_data.2836b2f53ea86.00000000000009e0
rbd_data.2836b2f53ea86.0000000000000400
rbd_data.2836b2f53ea86.0000000000000001
rbd_data.2836b2f53ea86.0000000000000428
rbd_data.2836b2f53ea86.00000000000009c1
rbd_data.2836b2f53ea86.0000000000000432
rbd_data.2836b2f53ea86.0000000000000426
rbd_data.2836b2f53ea86.00000000000009c0
rbd_data.2836b2f53ea86.0000000000000431
rbd_data.2836b2f53ea86.0000000000000435
rbd_id.test_rbd_image_1
rbd_data.2836b2f53ea86.000000000000042e
rbd_data.2836b2f53ea86.000000000000043f
rbd_data.2836b2f53ea86.0000000000000009
rbd_data.2836b2f53ea86.0000000000000424
rbd_data.2836b2f53ea86.0000000000000000

[root@ceph101 test_rbd]# ceph osd map test_rbd  rbd_id.test_rbd_image_1
osdmap e3035 pool 'test_rbd' (43) object 'rbd_id.test_rbd_image_1' -> pg 43.ce44bb37 (43.17) -> up ([3,1,8], p3) acting ([3,1,8], p3)
[root@ceph101 ~]# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME         STATUS  REWEIGHT  PRI-AFF
 -1         5.85956  root default
 -3         0.97659      host ceph101
  0    hdd  0.48830          osd.0         up   1.00000  1.00000
  1    hdd  0.48830          osd.1         up   1.00000  1.00000
 -5         0.97659      host ceph102
  2    hdd  0.48830          osd.2         up   1.00000  1.00000
  3    hdd  0.48830          osd.3         up   1.00000  1.00000
 -7         0.97659      host ceph103
  4    hdd  0.48830          osd.4         up   1.00000  1.00000
  5    hdd  0.48830          osd.5         up   1.00000  1.00000
 -9         0.97659      host ceph104
  6    hdd  0.48830          osd.6         up   1.00000  1.00000
  7    hdd  0.48830          osd.7         up   1.00000  1.00000
-11         0.97659      host ceph105
  8    hdd  0.48830          osd.8         up   1.00000  1.00000
  9    hdd  0.48830          osd.9         up   1.00000  1.00000
-13         0.97659      host ceph106
 10    hdd  0.48830          osd.10        up   1.00000  1.00000
 11    hdd  0.48830          osd.11        up   1.00000  1.00000
#list the pg distribution (cluster-wide)
[root@ceph101 ~]# ceph pg ls 
##or list by osd
[root@ceph101 ~]# ceph pg ls-by-osd osd.0
##or extract just the PG id and acting set
[root@ceph101 ~]# ceph pg ls | awk '{printf"%s\t%s\n",$1,$15}'
PG     OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES     OMAP_BYTES*  OMAP_KEYS*  LOG   STATE         SINCE  VERSION      REPORTED       UP           ACTING       SCRUB_STAMP                      DEEP_SCRUB_STAMP
1.0          0         0          0        0         0            0           0     0  active+clean     4h          0'0      3046:3060    [7,5,2]p7    [7,5,2]p7  2022-05-09T06:30:28.561204+0000  2022-05-05T20:49:31.217124+0000
24.0         0         0          0        0         0            0           0     0  active+clean     4h          0'0      3046:1865    [9,0,6]p9    [9,0,6]p9  2022-05-09T21:57:21.361105+0000  2022-05-04T21:21:44.529734+0000
24.1         2         0          0        0      1134            0           0    24  active+clean     4h      2610'24      3046:1898    [3,9,4]p3    [3,9,4]p3  2022-05-10T05:08:18.923492+0000  2022-05-09T02:37:15.839476+0000
24.2         1         0          0        0       939            0           0    13  active+clean     4h      2610'13
..................
43.17        4         0          0        0  12582929            0           0     4  active+clean     4h       3034'4        3046:77    [3,1,8]p3    [3,1,8]p3  2022-05-10T02:36:03.721277+0000  2022-05-10T02:36:03.721277+0000
.....................
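The lookup above can also be scripted: rbd info reports the image's block_name_prefix, and each data object can then be mapped to its PG and OSDs (a sketch using the image from this test):

rbd info test_rbd/test_rbd_image_1 | grep block_name_prefix    #e.g. rbd_data.2836b2f53ea86
for obj in $(rados -p test_rbd ls | grep '^rbd_data.2836b2f53ea86'); do
    ceph osd map test_rbd "$obj"
done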

Ceph replica verification

Block device
[root@ceph101 ~]# ceph osd pool create test_rbd_1 32
pool 'test_rbd_1' created
[root@ceph101 ~]# ceph osd pool application enable  test_rbd_1 rbd
enabled application 'rbd' on pool 'test_rbd_1'
[root@ceph101 ~]# rados -p test_rbd_1 put bcsj.note caoyong/bcsj.txt
[root@ceph101 ~]# rados -p test_rbd_1 ls
bcsj.note
[root@ceph101 ~]# ceph osd map test_rbd  bcsj.txt
osdmap e3048 pool 'test_rbd' (43) object 'bcsj.txt' -> pg 43.c7ddb025 (43.5) -> up ([11,9,4], p11) acting ([11,9,4], p11)
#disable replica re-creation (recovery/backfill), so removing OSDs really drops copies
[root@ceph101 test_rbd]# ceph osd set noout
noout is set
[root@ceph101 test_rbd]# ceph osd set norebalance
norebalance is set
[root@ceph101 test_rbd]# ceph osd set norecover
norecover is set
[root@ceph101 test_rbd]# ceph osd set nobackfill
nobackfill is set
[root@ceph101 test_rbd]# rados -p test_rbd append bcsj.txt append.txt
[root@ceph101 test_rbd]# rados -p test_rbd get bcsj.txt bcsj.txt.download
[root@ceph101 test_rbd]# ls
append.txt  bcsj.txt.download
[root@ceph101 test_rbd]# tail -10 bcsj.txt.download

就如同夜幕降临,白日西沉。

更多精彩好书,更多原创手机电子书,请登陆奇书网--Www.Qisuu.Com1
[root@ceph101 test_rbd]# ceph orch daemon rm osd.9 --force
Removed osd.9 from host 'ceph105'
[root@ceph101 test_rbd]# ceph osd rm osd.9
removed osd.9
[root@ceph101 test_rbd]# ceph osd out osd.9
osd.9 does not exist.
#with two replicas left, the object can still be read and written normally
[root@ceph101 test_rbd]# ceph osd map test_rbd  bcsj.txt
osdmap e3054 pool 'test_rbd' (43) object 'bcsj.txt' -> pg 43.c7ddb025 (43.5) -> up ([11,4], p11) acting ([11,4], p11)
[root@ceph101 test_rbd]# cat append.txt
append 2  copies
---------------2-------copies-------------
[root@ceph101 test_rbd]# rados -p test_rbd append bcsj.txt append.txt
[root@ceph101 test_rbd]# rados -p test_rbd get bcsj.txt bcsj.txt.2.copies
[root@ceph101 test_rbd]# tail -10 bcsj.txt.2.copies

更多精彩好书,更多原创手机电子书,请登陆奇书网--Www.Qisuu.Com1
append 2  copies
---------------2-------copies-------------
[root@ceph101 test_rbd]# ceph orch daemon rm osd.4 --force
Removed osd.4 from host 'ceph103'
#with a single replica left, reads and writes still work
[root@ceph101 test_rbd]# ceph osd map test_rbd  bcsj.txt
osdmap e3056 pool 'test_rbd' (43) object 'bcsj.txt' -> pg 43.c7ddb025 (43.5) -> up ([11], p11) acting ([11], p11)
[root@ceph101 test_rbd]# ceph osd out osd.4
marked out osd.4.
[root@ceph101 test_rbd]# ceph osd rm osd.4
removed osd.4
[root@ceph101 test_rbd]# cat append.txt
append 1  copies
---------------1-------copies-------------
[root@ceph101 test_rbd]# rados -p test_rbd append bcsj.txt append.txt
[root@ceph101 test_rbd]# rados -p test_rbd get bcsj.txt bcsj.txt.1.copies
[root@ceph101 test_rbd]# tail -12 bcsj.txt.1.copies

更多精彩好书,更多原创手机电子书,请登陆奇书网--Www.Qisuu.Com1
append 2  copies
---------------2-------copies-------------
append 1  copies
---------------1-------copies-------------

#restore the cluster
[root@ceph101 ~]# ceph osd set noout
noout is set
[root@ceph101 ~]# ceph osd set norebalance
norebalance is set
[root@ceph101 ~]# ceph osd set norecover
norecover is set
[root@ceph101 ~]# ceph osd set nobackfill
nobackfill is set
#after the removed OSDs are re-added, the object is back to three replicas (the re-add commands are sketched below)
[root@ceph101 test_rbd]# ceph osd map test_rbd  bcsj.txt
osdmap e3048 pool 'test_rbd' (43) object 'bcsj.txt' -> pg 43.c7ddb025 (43.5) -> up ([11,9,4], p11) acting ([11,9,4], p11)
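The re-add step itself is not shown in the transcript; a sketch of it, reusing the orchestrator commands listed under "Other useful commands" below (the device name is an assumption, adjust to the disk each OSD actually lived on):

ceph orch device zap ceph105 /dev/vdb --force    #wipe the disk that held osd.9
ceph orch daemon add osd ceph105:/dev/vdb        #re-create the OSD
ceph orch device zap ceph103 /dev/vdb --force    #same for osd.4 on ceph103
ceph orch daemon add osd ceph103:/dev/vdb
#then clear the flags so recovery can proceed
ceph osd unset nobackfill
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset noout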

Object gateway

Following the same approach as for the block device, we can locate how replicas are distributed under the object gateway interface.

[root@ceph1 ~]# ceph osd lspools
1 device_health_metrics
2 .rgw.root
43 knowdee
54 cp_pool
57 zone_02.rgw.log
58 zone_03.rgw.log
59 zone_02.rgw.meta
60 zone_02.rgw.control
61 zone_02.rgw.buckets.index
64 zone_02.rgw.buckets.data

[root@ceph1 ~]# radosgw-admin bucket stats --bucket caoyong
{
    "bucket": "caoyong",
    "num_shards": 11,
    "tenant": "",
    "zonegroup": "9eadce15-982d-49b2-8b5f-b640ab920e82",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "b80f8394-1f06-47e9-b2a0-bc6cfd2fc26a.134229.1",
    "marker": "b80f8394-1f06-47e9-b2a0-bc6cfd2fc26a.134229.1",
    "index_type": "Normal",
    "owner": "rgw_admin",
    "ver": "0#1,1#1,2#3,3#1,4#1,5#1,6#1,7#1,8#1,9#1,10#1",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0",
    "mtime": "2022-04-27T08:50:27.397030Z",
    "creation_time": "2022-04-27T08:50:22.362309Z",
    "max_marker": "0#,1#,2#00000000002.14.5,3#,4#,5#,6#,7#,8#,9#,10#",
    "usage": {
        "rgw.main": {
            "size": 19098141,
            "size_actual": 19099648,
            "size_utilized": 19098141,
            "size_kb": 18651,
            "size_kb_actual": 18652,
            "size_kb_utilized": 18651,
            "num_objects": 2
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

[root@ceph1 ~]# ceph osd map zone_02.rgw.buckets.index .dir.b80f8394-1f06-47e9-b2a0-bc6cfd2fc26a.134229.1
osdmap e4183 pool 'zone_02.rgw.buckets.index' (61) object '.dir.b80f8394-1f06-47e9-b2a0-bc6cfd2fc26a.134229.1' -> pg 61.aaf566ea (61.2) -> up ([0,1,6], p0) acting ([0,1,6], p0)
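The bucket's data objects can be traced the same way: objects in the data pool carry the bucket marker id shown above as a prefix, so they can be listed and mapped to OSDs (a sketch):

for obj in $(rados -p zone_02.rgw.buckets.data ls | grep 'b80f8394-1f06-47e9-b2a0-bc6cfd2fc26a.134229.1'); do
    ceph osd map zone_02.rgw.buckets.data "$obj"
done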

CephFS filesystem

Following the same approach as for the block device, we can locate how replicas are distributed under the CephFS interface.

[root@ceph1 ~]# ceph osd pool create cephfs_data 32
pool 'cephfs_data' created
[root@ceph1 ~]# ceph osd pool create cephfs_metadata 32
pool 'cephfs_metadata' created
[root@ceph1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
[root@ceph1 ~]# ceph fs volume create cephfs
new fs with metadata pool 68 and data pool 67
[root@ceph1 ~]# ceph fs status
cephfs - 0 clients
======
RANK  STATE           MDS             ACTIVITY     DNS    INOS
 0    active  cephfs.ceph6.cqdgnc  Reqs:    0 /s    10     13
       POOL           TYPE     USED  AVAIL
cephfs.cephfs.meta  metadata  1536k  1895G
cephfs.cephfs.data    data       0   1895G
    STANDBY MDS
cephfs.ceph2.afksaf
MDS version: ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)

[root@ceph1 ~]# ceph auth get-or-create client.cephfs mon 'allow *' mds 'allow *' osd 'allow * pool=cephfs_data, allow * pool=cephfs_metadata'
[client.cephfs]
        key = AQAG9oFimrR/ORAA0mOMbh8uknrRLrA0DD/SCA==
[root@ceph1 ~]# ceph auth get client.cephfs
exported keyring for client.cephfs
[client.cephfs]
        key = AQAG9oFimrR/ORAA0mOMbh8uknrRLrA0DD/SCA==
        caps mds = "allow rw"
        caps mon = "allow r"
        caps osd = "allow rw pool=cephfs_data, allow rw pool=cephfs_metadata"
[root@ceph1 ~]# pwd
/root
[root@ceph1 ~]# mkdir cephfs
[root@ceph1 ~]# mount -t ceph 172.70.10.161:6789:/ /root/cephfs -o name=cephfs,secret=AQAG9oFimrR/ORAA0mOMbh8uknrRLrA0DD/SCA==
[root@ceph1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 7.9G     0  7.9G    0% /dev
tmpfs                    7.9G     0  7.9G    0% /dev/shm
tmpfs                    7.9G  860M  7.0G   11% /run
tmpfs                    7.9G     0  7.9G    0% /sys/fs/cgroup
/dev/mapper/centos-root   50G   17G   34G   34% /
/dev/vda1               1014M  176M  839M   18% /boot
/dev/mapper/centos-home  441G   33M  441G    1% /home
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/29b83745237d0b8c1998f9e8b8a4e237d2851c8fdc3085358a72e3d4279bdda2/merged
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/8164d8132289057cfc142839d15a4ef22e2d6e39e89a93f610786a2469e23cd5/merged
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/73d1c01c8bbf1c88189540fdb06a6c1ede1a52720925e6c0fc5bd4d94c9fa960/merged
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/87c970108e45f919a9f2e6d6634f09747cfaffb68e2ce6fe90fa7997cf8abc33/merged
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/4d5db84cd24c09eaf39d2dde96639a87c4053cae625bc99fe431326052e95d0f/merged
tmpfs                    1.6G     0  1.6G    0% /run/user/0
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/875dad643dce157f200a03d9c4d8b0ddcdaef834c7df971d8976312a95b68871/merged
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/3a9b6217e2acc2536547dcbf8501766469188fc33af82dfbe08bae7d06a9cc2b/merged
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/97b51af97503d53caf12b3a0bc5aa96c0b8e664be225c201302f093d62caa429/merged
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/2744f55d60cd71d70df6b99ce870a25e41024465b2deeea38fba93f0f831c9ed/merged
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/9ed5a8d1866530c548c469961413b9d9c4ac86328cff32ed9382847be5916ea1/merged
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/cc51e991ec059837464d744d9d8f36fbca8e13d5a30b36cbdc52e04993d16593/merged
overlay                   50G   17G   34G   34% /var/lib/docker/overlay2/308e2d063bab1c5c97bf6796da05f7ed83c4adc4c846a1a6c232f31fccef70c6/merged
172.70.10.161:6789:/     1.9T     0  1.9T    0% /root/cephfs
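Passing the key on the command line works but leaves it in the shell history; mount.ceph also accepts a secretfile option (a sketch, the key file path is an assumption):

mount -t ceph 172.70.10.161:6789:/ /root/cephfs -o name=cephfs,secretfile=/etc/ceph/client.cephfs.secret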

[root@ceph1 cephfs]# pwd
/root/cephfs
[root@ceph1 cephfs]# cat caoyong.txt
iii
[root@ceph1 cephfs]# rados ls -p cephfs_data
10000000004.00000000
[root@ceph1 cephfs]# ceph osd map cephfs_data 10000000004.00000000
osdmap e4202 pool 'cephfs_data' (67) object '10000000004.00000000' -> pg 67.17c5e6c5 (67.5) -> up ([0,4,10], p0) acting ([0,4,10], p0)
[root@ceph1 cephfs]# rados -p cephfs_data get 10000000004.00000000 bcsj.txt.2.copies
[root@ceph1 cephfs]# ll
total 1
-rw-r--r-- 1 root root 4 May 16 15:03 bcsj.txt.2.copies
-rw-r--r-- 1 root root 4 May 16 14:59 caoyong.txt
[root@ceph1 cephfs]# md5sum bcsj.txt.2.copies
3b7c423ba4e937ec11b6f7a7814b9960  bcsj.txt.2.copies
[root@ceph1 cephfs]# md5sum caoyong.txt
3b7c423ba4e937ec11b6f7a7814b9960  caoyong.txt
[root@ceph1 cephfs]# vim bcsj.txt.2.copies
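The RADOS object name is derived from the file itself: the prefix is the file's inode number in hexadecimal and the suffix is the chunk index, so a file can be traced to its objects directly (a sketch):

printf '%x\n' $(stat -c %i /root/cephfs/caoyong.txt)     #should print 10000000004 for this file
rados -p cephfs_data ls | grep $(printf '%x' $(stat -c %i /root/cephfs/caoyong.txt))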

Other useful commands

ceph config set osd osd_pool_default_min_size 1
 
ceph osd set noout
ceph osd set norebalance
ceph osd set norecover
ceph osd set nobackfill
 

 ceph osd pool get test_rbd min_size
ceph orch device zap ceph102 /dev/vdb --force
ceph orch daemon add osd ceph102:/dev/vdb

ceph orch daemon rm osd.6 --force
ceph osd out osd.6
ceph osd rm osd.6

rados -p test_rbd_1 append bcsj.txt append.txt
ceph osd pool get test_rbd_1 min_size
ceph osd pool delete test_rbd test_rbd --yes-i-really-really-mean-it
[root@ceph101 ~]# ceph pg map 24.1
osdmap e3304 pg 24.1 (24.1) -> up [3,9,4] acting [3,9,4]

#set min_size = 1
$ bin/ceph osd pool set test_pool min_size 1
set pool 1 min_size to 1

ceph osd pool get test_pool min_size 
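The flag commands above are normally set and cleared together around maintenance; a small sketch wrapping them up:

maintenance_on()  { for f in noout norebalance norecover nobackfill; do ceph osd set   "$f"; done; }
maintenance_off() { for f in noout norebalance norecover nobackfill; do ceph osd unset "$f"; done; }
ceph osd dump | grep flags     #check which flags are currently set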

Cluster not healthy:

Symptom 1

[root@ceph1 ~]# ceph health detail
HEALTH_WARN 2 failed cephadm daemon(s); 3 daemons have recently crashed
[WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
    daemon rgw.realm_1.zone_02.ceph1.bnqyza on ceph1 is in unknown state
    daemon rgw.realm_1.zone_03.ceph1.liwnmb on ceph1 is in unknown state
[WRN] RECENT_CRASH: 3 daemons have recently crashed
    client.rgw.realm_1.zone_02.ceph1.bnqyza crashed on host ceph1 at 2022-05-13T12:36:17.438998Z
    client.rgw.realm_1.zone_03.ceph1.liwnmb crashed on host ceph1 at 2022-05-13T12:38:12.049014Z
    client.rgw.realm_1.zone_03.ceph1.liwnmb crashed on host ceph1 at 2022-05-13T12:38:12.324601Z

Fix:

#remove the unused service/daemon
[root@ceph1 ~]# ceph orch daemon rm rgw.realm_1.zone_02.ceph1.bnqyza --force
Removed rgw.realm_1.zone_02.ceph1.bnqyza from host 'ceph1'
#archive the crash reports of the crashed daemons
[root@ceph1 ~]# ceph crash archive-all
[root@ceph1 ~]# ceph crash ls-new
#after this, the cluster is healthy again
[root@ceph1 ~]# ceph health detail
HEALTH_OK

See: reference log

Notes

Commonly used flags and commands for replica operations on the cluster:

noin         usually set together with noout to keep OSDs from flapping between up and down
noout        after 300 seconds (mon_osd_down_out_interval) the MONs automatically mark a down OSD as out; once it is out, data migration starts. Set this flag while handling a failure to avoid that migration.
             (Rule number one of failure handling: set noout to prevent data migration. ceph osd set noout, ceph osd unset noout)
noup         usually set together with nodown to deal with OSDs flapping between up and down
nodown       network problems can disturb the heartbeats between Ceph daemons; sometimes an OSD process is still alive but gets reported down by its peers, causing needless churn. If you are sure the OSD processes are healthy,
             set nodown to keep OSDs from being wrongly marked down.
full         if the cluster is about to fill up, you can mark it FULL in advance; note that this stops all writes.
             (whether it actually takes effect needs to be tested)
pause        stops all client reads and writes, while the cluster itself keeps running normally.
nobackfill
norebalance  usually set together with nobackfill above and norecover below; when working on the cluster (taking down an OSD or a whole node), set these if you do not want recovery or data migration during the operation, and remember to unset them afterwards.
norecover    likewise prevents data recovery while working on disks.
noscrub      the cluster performs no OSD scrubbing
nodeep-scrub sometimes scrubbing hurts recovery performance while the cluster is recovering; set this together with noscrub to stop scrubs. Generally not recommended to leave on.
notieragent  stops the tiering agent from finding cold data and flushing it to the backing storage.

ceph osd set {option}    set a cluster-wide OSD flag
ceph osd unset {option}  clear a cluster-wide OSD flag

Use the commands below to repair PGs and OSDs:
ceph osd repair <osd-id>      repair a specific OSD
ceph pg repair <pg-id>        repair a specific PG; this may affect user data, use with care.
ceph pg scrub <pg-id>         scrub the specified PG
ceph pg deep-scrub <pg-id>    deep-scrub the specified PG
ceph osd set pause            when moving to another machine room you can pause reads and writes temporarily; once clients have finished their I/O, the cluster can be shut down.
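For illustration only, applied to pg 24.1 from the 'ceph pg map 24.1' output above:

ceph pg scrub 24.1
ceph pg deep-scrub 24.1
ceph pg repair 24.1      #may touch user data, use with care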