ceph17: one-way and two-way remote replication for RBD block storage

Goals

1. Set up two Ceph clusters and replicate one-way from cluster A to cluster B
2. Set up two Ceph clusters with two-way mirroring, so cluster A and cluster B replicate to each other

Prepare two clusters (Ubuntu 20 + Ceph 17.2.7)

        Create a storage pool named mirrortest
        Enable RBD mirroring on the pool in image mode

        Set the default image features (mainly the exclusive-lock and journaling features)

Cluster A: u196-198 (ceph version 17.2.7)
root@u196:~# ceph -s  
  cluster:  
    id:     4101504e-e364-11ee-a68e-99388f3ce351  
    health: HEALTH_OK  
  
  services:  
    mon: 3 daemons, quorum u196,u197,u198 (age 5h)  
    mgr: u197.ymxalp(active, since 5h), standbys: u196.tipvej  
    osd: 6 osds: 6 up (since 5h), 6 in (since 22h)  
  
  data:  
    pools:   1 pools, 1 pgs  
    objects: 2 objects, 449 KiB  
    usage:   2.1 GiB used, 51 GiB / 53 GiB avail  
    pgs:     1 active+clean  

root@u196:~# ceph osd pool create mirrortest 64
pool 'mirrortest' created

root@u196:~# ceph osd pool application enable mirrortest rbd
enabled application 'rbd' on pool 'mirrortest'

root@u196:~# rbd mirror pool enable mirrortest image

root@u196:~# ceph config set global rbd_default_features 125
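
The value 125 is simply the sum of the standard RBD feature bits, so every newly created image carries the features journal-based mirroring needs:

        layering       = 1
        exclusive-lock = 4
        object-map     = 8
        fast-diff      = 16
        deep-flatten   = 32
        journaling     = 64
        -------------------
        total          = 125
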
Cluster B: u201-203 (ceph version 17.2.7)
root@u201:~# ceph -s
  cluster:
    id:     d449eb1e-eb79-11ee-a6d1-8bc03479c953
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum u201,u202,u203 (age 13m)
    mgr: u201.pakibn(active, since 13m), standbys: u202.tjbmcy
    osd: 6 osds: 6 up (since 13m), 6 in (since 24m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   1.7 GiB used, 118 GiB / 120 GiB avail
    pgs:     1 active+clean

root@u201:~# ceph osd pool create mirrortest 64
pool 'mirrortest' created

root@u201:~# ceph osd pool application enable mirrortest rbd
enabled application 'rbd' on pool 'mirrortest'

root@u201:~# rbd mirror pool enable mirrortest image

One-way replication
1. Deploy the rbd-mirror service on cluster B
root@u201:~# ceph orch apply rbd-mirror --placement=u201,u202
Scheduled rbd-mirror update...
root@u201:~# ceph orch ps |grep mirror
rbd-mirror.u201.vgvzrk  u201               running (31s)    23s ago  31s    12.8M        -  17.2.7   89d8d0b224fc  68fba00f74aa
rbd-mirror.u202.tmcurw  u202               running (34s)    24s ago  34s    12.9M        -  17.2.7   89d8d0b224fc  81a523d65ad1

2. Create a bootstrap token on cluster A and copy it over to cluster B

root@u196:~# rbd mirror pool peer bootstrap create --site-name site-a mirrortest > /tmp/bootstrap_token-site-a
root@u196:~# cat /tmp/bootstrap_token-site-a
eyJmc2lkIjoiNDEwMTUwNGUtZTM2NC0xMWVlLWE2OGUtOTkzODhmM2NlMzUxIiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFCNEJ5MW1aemI2SkJBQTJZbVBqZ1RwYUg0clNqSkRDUDdoZFE9PSIsIm1vbl9ob3N0IjoiW3YyOjEwLjIwMy41MC4xOTY6MzMwMC8wLHYxOjEwLjIwMy41MC4xOTY6Njc4OS8wXSxbdjI6MTAuMjAzLjUwLjE5NzozMzAwLzAsdjE6MTAuMjAzLjUwLjE5Nzo2Nzg5LzBdLFt2MjoxMC4yMDMuNTAuMTk4OjMzMDAvMCx2MToxMC4yMDMuNTAuMTk4OjY3ODkvMF0ifQ==
root@u196:~# scp /tmp/bootstrap_token-site-a u201:/tmp/
3. Import the token on cluster B and establish one-way replication (A -> B, rx-only)
root@u201:~# rbd mirror pool peer bootstrap import --site-name  site-b --direction rx-only mirrortest /tmp/bootstrap_token-site-a
root@u201:~# rbd mirror pool status mirrortest
health: OK
daemon health: OK
image health: OK
images: 0 total
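
To double-check that the bootstrap import registered the peer, the pool mirroring info can be queried on cluster B; it should list site-a as an rx-only peer (a quick sanity check, not shown in the original run):

root@u201:~# rbd mirror pool info mirrortest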
4. Create two RBD images on cluster A, each replicating with a different mode
    disk1 uses journal-based mirroring (the default mode); disk2 uses snapshot-based mirroring
root@u196:~# rbd create mirrortest/disk1 --size 2G
root@u196:~# rbd create mirrortest/disk2 --size 3G
root@u196:~# rbd info mirrortest/disk1
rbd image 'disk1':
        size 2 GiB in 512 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: d43ac10f0a56
        block_name_prefix: rbd_data.d43ac10f0a56
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, journaling
        op_features:
        flags:
        create_timestamp: Sat Apr 27 22:18:35 2024
        access_timestamp: Sat Apr 27 22:18:35 2024
        modify_timestamp: Sat Apr 27 22:18:35 2024
        journal: d43ac10f0a56
        mirroring state: disabled
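
Because rbd_default_features was set to 125 above, disk1 already has the exclusive-lock and journaling features that journal-based mirroring requires. If an image had been created without them, the features could be enabled afterwards, for example (sketch only, not part of the original run):

root@u196:~# rbd feature enable mirrortest/disk1 exclusive-lock
root@u196:~# rbd feature enable mirrortest/disk1 journaling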

root@u196:~# rbd ls -p mirrortest -l
NAME   SIZE   PARENT  FMT  PROT  LOCK
disk1  2 GiB            2
disk2  3 GiB            2

root@u196:~# rbd mirror image enable mirrortest/disk1
Mirroring enabled

root@u196:~# rbd mirror image enable mirrortest/disk2 snapshot
Mirroring enabled
root@u196:~#
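After enabling, rbd info on cluster A should report mirroring as enabled (journal mode for disk1, snapshot mode for disk2); a quick check, not shown in the original run:

root@u196:~# rbd info mirrortest/disk1 | grep mirroring
root@u196:~# rbd info mirrortest/disk2 | grep mirroring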
5. The mirrortest pool on cluster B can already see both images; they were replicated automatically
root@u201:~# rbd ls -p mirrortest -l
NAME   SIZE   PARENT  FMT  PROT  LOCK
disk1  2 GiB            2        excl
disk2  3 GiB            2        excl
6. Map the two RBD images, format them, and write test data
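If rbd-nbd is not installed on the node yet, install it first (package name as provided by Ubuntu 20.04):

root@u196:~# apt install -y rbd-nbd
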
root@u196:~# rbd-nbd map mirrortest/disk1   (requires the rbd-nbd package)
/dev/nbd0
root@u196:~# rbd-nbd map mirrortest/disk2
/dev/nbd1
root@u196:~# mkdir /d1 /d2


root@u196:~# mkfs.xfs /dev/nbd0
meta-data=/dev/nbd0              isize=512    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

root@u196:~# mkfs.xfs /dev/nbd1
meta-data=/dev/nbd1              isize=512    agcount=4, agsize=196608 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=786432, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

root@u196:~# mount /dev/nbd0 /d1
root@u196:~# mount /dev/nbd1 /d2

root@u196:~# df -h | grep nbd
/dev/nbd0       2.0G   47M  2.0G   3% /d1
/dev/nbd1       3.0G   54M  3.0G   2% /d2

Write some data into the /d1 and /d2 directories (a hypothetical example is shown below), then check image usage on cluster A:
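A minimal sketch of the test writes (hypothetical commands; the original run does not show them, and the sizes are only illustrative):

root@u196:~# dd if=/dev/urandom of=/d1/test.bin bs=1M count=100
root@u196:~# dd if=/dev/urandom of=/d2/test.bin bs=1M count=50
root@u196:~# sync
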
root@u196:~# rbd du mirrortest/disk1
NAME   PROVISIONED  USED
disk1        2 GiB  108 MiB
root@u196:~# rbd du mirrortest/disk2
NAME   PROVISIONED  USED
disk2        3 GiB  52 MiB
7. Check the size of both images on cluster B
root@u201:~# rbd du mirrortest/disk1
NAME   PROVISIONED  USED
disk1        2 GiB  108 MiB
root@u201:~# rbd du mirrortest/disk2
NAME   PROVISIONED  USED
disk2        3 GiB   0 B

8. The journal-based disk1 has already replicated its data, but the snapshot-based disk2 has not synced automatically yet, so manually create a mirror snapshot for disk2

root@u196:~# rbd mirror image snapshot mirrortest/disk2
Snapshot ID: 7

Checking disk2 on cluster B again, the data has now been replicated

root@u201:~# rbd  info mirrortest/disk2
rbd image 'disk2':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 1
        id: 1220abd2661b9
        block_name_prefix: rbd_data.1220abd2661b9
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, journaling, non-primary
        op_features:
        flags:
        create_timestamp: Sat Apr 27 22:25:04 2024
        access_timestamp: Sat Apr 27 22:25:04 2024
        modify_timestamp: Sat Apr 27 22:25:04 2024
        journal: 1220abd2661b9
        mirroring state: enabled
        mirroring mode: snapshot
        mirroring global id: 971e7f87-0662-4955-b92e-cdd3fb241df5
        mirroring primary: false

root@u201:~# rbd mirror image status mirrortest/disk2
disk2:
  global_id:   971e7f87-0662-4955-b92e-cdd3fb241df5
  state:       up+replaying
  description: replaying, {"bytes_per_second":0.0,"bytes_per_snapshot":19741696.0,"last_snapshot_bytes":39483392,"last_snapshot_sync_seconds":0,"local_snapshot_timestamp":1714228938,"remote_snapshot_timestamp":1714228938,"replay_state":"idle"}
  service:     u201.vgvzrk on u201
  last_update: 2024-04-27 22:43:22

root@u201:~# rbd du mirrortest/disk2
NAME   PROVISIONED  USED
disk2        3 GiB  52 MiB
9. Add a mirror snapshot schedule for disk2 so that data is synced over automatically on a timer
root@u196:~# rbd mirror snapshot schedule add --pool mirrortest --image disk2 3m

root@u196:~# rbd mirror snapshot schedule ls --pool mirrortest --image disk2
every 3m
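
The scheduler can also be asked when the next mirror snapshot is due (a sanity check, not part of the original run):

root@u196:~# rbd mirror snapshot schedule status --pool mirrortest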
10. Any further data written to /d2 will be replicated to cluster B automatically on the schedule
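
To verify on cluster B, the same commands used earlier apply (not part of the original run): rbd du mirrortest/disk2 should grow after each scheduled snapshot, and rbd mirror image status mirrortest/disk2 shows the timestamp of the most recently replicated snapshot.

root@u201:~# rbd du mirrortest/disk2
root@u201:~# rbd mirror image status mirrortest/disk2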