Introduction to Object Storage
Overview
Unlike a disk carrying a file system, object storage cannot be accessed directly by the operating system; it is reached only through an application-level API. Ceph is a distributed storage system that provides an object storage interface through the Ceph Object Gateway, also known as the RADOS Gateway (RGW), built on top of Ceph's RADOS layer. RGW uses librgw (the RADOS Gateway library) and librados to let applications connect to the Ceph object store. RGW offers applications a RESTful S3/Swift-compatible interface for storing data as objects in a Ceph cluster. Ceph also supports multi-tenant object storage, accessible over the RESTful API. In addition, RGW supports the Ceph Admin API, so the Ceph storage cluster can be managed through native API calls.
The two interfaces of Ceph object storage
- S3-compatible
- Swift-compatible
Ceph object storage uses the Ceph Object Gateway daemon (radosgw), an HTTP server that interacts with the Ceph storage cluster. Because it provides interfaces compatible with OpenStack Swift and Amazon S3, the Ceph Object Gateway maintains its own user management. The gateway can store data in the same Ceph cluster that holds data from Ceph file system clients or Ceph block device clients. The S3 and Swift APIs share a common namespace, so you can write data with one API and retrieve it with the other.
RGW's internal processing layers
- HTTP frontend
- Common REST API handling layer
- API operation execution layer
- RADOS adaptation layer
- librados interface layer
- The HTTP frontend receives request data and stores it in the corresponding data structures.
- The common REST API layer parses the S3 or Swift semantics out of the HTTP request and performs a series of checks.
- Once the checks pass, a different processing flow is executed depending on the API operation requested.
- When data must be read from or written to the RADOS cluster, the RGW-to-RADOS adaptation layer calls the librados interface to send the request to the RADOS cluster.
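The four layers above can be sketched, purely for illustration, as a layered dispatch. All names here are hypothetical; this is not RGW's actual code, only a toy model of frontend → REST parsing → op execution → storage call:

```python
# Toy model of RGW's layered request handling (hypothetical names).

def frontend_receive(raw):
    # HTTP frontend: capture request data into a structure
    method, _, path = raw.partition(" ")
    return {"method": method, "path": path}

def rest_parse(req):
    # REST layer: extract bucket/object from an S3-style path and run basic checks
    parts = [p for p in req["path"].split("/") if p]
    if not parts:
        raise ValueError("bucket name required")
    return {"op": req["method"], "bucket": parts[0],
            "object": parts[1] if len(parts) > 1 else None}

def execute_op(op, store):
    # Op layer: dispatch per operation; `store` stands in for the RADOS cluster
    if op["op"] == "PUT" and op["object"]:
        store[(op["bucket"], op["object"])] = b"data"   # stands in for a librados write
        return 200
    if op["op"] == "GET" and op["object"]:
        return 200 if (op["bucket"], op["object"]) in store else 404
    return 400

store = {}
put = execute_op(rest_parse(frontend_receive("PUT /demo0219/ceph.conf")), store)
get = execute_op(rest_parse(frontend_receive("GET /demo0219/ceph.conf")), store)
print(put, get)   # 200 200
```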
RGW's three basic logical entities
- User
- Bucket
- Object
User
RGW users are compatible with both S3 and Swift.
User data includes:
* authentication credentials
* access control (permission) information
* quota information
Bucket
A bucket is a container for objects: a first-level management unit introduced to make it convenient to manage and operate on a class of objects sharing common attributes.
Bucket information includes:
* Basic information (stored in the data portion of the corresponding RADOS object): the information RGW cares about, such as the bucket quota (maximum object count or maximum total object size), the bucket placement rule, the number of index objects in the bucket, and so on.
* Extended information: information that is opaque to RGW, such as user-defined metadata.
The bucket placement rule covers the bucket's index objects, its object data, and the intermediate data of multipart uploads.
Object
An application object in RGW maps to RADOS objects. Application objects can be uploaded either whole or in parts (multipart upload), and the mapping from application objects to RADOS objects differs between the two upload modes.
Concepts
- zone: a logical concept containing one or more RGW instances. A zone cannot span clusters, and all data of one zone is stored in the same set of pools. Within a zonegroup, zones run in active-active mode, replicating data to each other for disaster recovery.
- zonegroup: a zonegroup contains one or more zones. If it contains more than one, a master zone must be designated to handle bucket and user creation. A cluster may hold several zonegroups, and a zonegroup may also span multiple clusters.
- realm: a realm contains one or more zonegroups. If it contains more than one, a master zonegroup must be designated to handle system-level operations. A system may contain multiple realms, and resources in different realms are fully isolated.
- user: a consumer of object storage. By default a user may create up to 1000 buckets.
- bucket: a container used to manage objects.
- object: the stored item, such as a document, image, or video file.
S3 authentication flow
- Before sending a request, the application signs the request content with the user's secret_key, using the signing algorithm agreed with RGW, then embeds the resulting digital signature together with the user's access_key in the request sent to the RGW gateway.
- On receiving the request, RGW uses the access_key as an index to read the user's record from the RADOS cluster and extracts the user's secret_key from it.
- RGW signs the request content with that secret_key, using the same algorithm as the application.
- RGW compares the signature it computed with the one carried in the request; if they match, the request is accepted and the user is authenticated.
- After authentication, the user must also hold the corresponding access permissions, i.e. the ACL, to access the object.
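A minimal sketch of the signing handshake described above, using Python's standard library and following the AWS Signature v2 scheme (HMAC-SHA1 over a string-to-sign, base64-encoded). The keys reuse the test user from this document; the date and resource are made up for illustration:

```python
# Sketch of S3 v2 request signing: both sides build the same string-to-sign
# and HMAC it with the secret_key; RGW looks the secret up by access_key.
import base64
import hashlib
import hmac

def sign_v2(secret_key, method, content_md5, content_type, date, resource):
    string_to_sign = "\n".join([method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1)
    return base64.b64encode(digest.digest()).decode()

# Client side: sign the request and attach access_key:signature
sig_client = sign_v2("test", "GET", "", "", "Wed, 19 Feb 2020 07:30:30 GMT", "/demo0219/")
auth_header = f"AWS test:{sig_client}"

# Server side: recompute with the stored secret and compare
sig_server = sign_v2("test", "GET", "", "", "Wed, 19 Feb 2020 07:30:30 GMT", "/demo0219/")
print(auth_header, sig_client == sig_server)
```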
Data retrieval paths
[root@ceph1 ~]# radosgw-admin zone get
{
"id": "95010954-503b-4b17-87e5-023f5d344fdf",
"name": "default",
"domain_root": "default.rgw.meta:root",
"control_pool": "default.rgw.control",
"gc_pool": "default.rgw.log:gc",
"lc_pool": "default.rgw.log:lc",
"log_pool": "default.rgw.log",
"intent_log_pool": "default.rgw.log:intent",
"usage_log_pool": "default.rgw.log:usage",
"reshard_pool": "default.rgw.log:reshard",
"user_keys_pool": "default.rgw.meta:users.keys",
"user_email_pool": "default.rgw.meta:users.email",
"user_swift_pool": "default.rgw.meta:users.swift",
"user_uid_pool": "default.rgw.meta:users.uid",
"system_key": {
"access_key": "",
"secret_key": ""
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "default.rgw.buckets.index",
"data_pool": "default.rgw.buckets.data",
"data_extra_pool": "default.rgw.buckets.non-ec",
"index_type": 0,
"compression": ""
}
}
],
"metadata_heap": "",
"tier_config": [],
"realm_id": ""
}
Storage pools
The zone and zonegroup created by default are both named default.
[root@ceph1 ~]# radosgw-admin zonegroup list
{
"default_info": "5a473f02-65df-40df-98ff-bac62ea419ca",
"zonegroups": [
"default"
]
}
[root@ceph1 ~]# radosgw-admin zone list
{
"default_info": "95010954-503b-4b17-87e5-023f5d344fdf",
"zones": [
"default"
]
}
- .rgw.root: stores the cluster's namespace information
- {zone}.rgw.control: ensures data consistency when one zone is served by multiple RGW instances
- {zone}.rgw.meta: stores user information such as keys and e-mail addresses
- {zone}.rgw.log: operation log information
- {zone}.rgw.buckets.index: object index information; RADOS creates one object per bucket, named .dir.{bucket_id}
- {zone}.rgw.buckets.data: stores object data
- {zone}.rgw.buckets.non-ec: intermediate data of multipart uploads (the data_extra_pool)
[root@ceph1 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
420GiB 411GiB 9.33GiB 2.22
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
.rgw.root 1 3.10KiB 0 189GiB 8
default.rgw.control 2 0B 0 189GiB 8
default.rgw.meta 3 899B 0 189GiB 7
default.rgw.log 4 0B 0 189GiB 175
default.rgw.buckets.index 5 0B 0 189GiB 1
default.rgw.buckets.data 6 100MiB 0.05 189GiB 29
default.rgw.buckets.non-ec 7 0B 0 189GiB 0
{zone}.rgw.meta
Multiple storage spaces are isolated within this single pool via RADOS namespaces:
[root@ceph1 ~]# radosgw-admin zone get | grep meta:
"domain_root": "default.rgw.meta:root",
"user_keys_pool": "default.rgw.meta:users.keys",
"user_email_pool": "default.rgw.meta:users.email",
"user_swift_pool": "default.rgw.meta:users.swift",
"user_uid_pool": "default.rgw.meta:users.uid",
- root: bucket and bucket-instance metadata
- users.keys: user keys
- users.email: user e-mail addresses
- users.swift: Swift accounts
- users.uid: S3 users and their bucket information
Object storage: testing the relationship between users, buckets, and data
Preparation
- A running Ceph cluster
[root@ceph1 ~]# ceph -s
cluster:
id: 4c687eb0-1f15-4eb2-9cbb-cbaf2e439015
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3
mgr: ceph1(active), standbys: ceph3, ceph2
osd: 9 osds: 9 up, 9 in
rgw: 3 daemons active
data:
pools: 6 pools, 216 pgs
objects: 191 objects, 3.10KiB
usage: 9.11GiB used, 411GiB / 420GiB avail
pgs: 216 active+clean
io:
client: 93.1KiB/s rd, 0B/s wr, 93op/s rd, 62op/s wr
- The cluster holds no data yet
[root@ceph1 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
420GiB 411GiB 9.11GiB 2.17
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
.rgw.root 1 3.10KiB 0 189GiB 8
default.rgw.control 2 0B 0 189GiB 8
default.rgw.meta 3 0B 0 189GiB 0
default.rgw.log 4 0B 0 189GiB 175
default.rgw.buckets.index 5 0B 0 189GiB 0
default.rgw.buckets.data 6 0B 0 189GiB 0
- The RGW service is up
[root@ceph1 ~]# netstat -ntlp | grep 7480
tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN 4994/radosgw
- No users or buckets exist yet
[root@ceph1 ~]# radosgw-admin user list
[]
[root@ceph1 ~]# radosgw-admin bucket list
[]
Preparing data
Create a user
# Create the user "test" with both access key and secret key set to "test", for use by the s3 and swift clients
[root@ceph1 ~]# radosgw-admin user create --uid=test --display-name=test --access-key=test --secret=test
{
"user_id": "test",
"display_name": "test",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "test",
"access_key": "test",
"secret_key": "test"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}
- Metadata
user, bucket, and bucket.instance entries are stored in the default.rgw.meta pool
[root@ceph1 ~]# radosgw-admin zone get
{
"id": "95010954-503b-4b17-87e5-023f5d344fdf",
"name": "default",
"domain_root": "default.rgw.meta:root",
"control_pool": "default.rgw.control",
"gc_pool": "default.rgw.log:gc",
"lc_pool": "default.rgw.log:lc",
"log_pool": "default.rgw.log",
"intent_log_pool": "default.rgw.log:intent",
"usage_log_pool": "default.rgw.log:usage",
"reshard_pool": "default.rgw.log:reshard",
"user_keys_pool": "default.rgw.meta:users.keys",
"user_email_pool": "default.rgw.meta:users.email",
"user_swift_pool": "default.rgw.meta:users.swift",
"user_uid_pool": "default.rgw.meta:users.uid",
"system_key": {
"access_key": "",
"secret_key": ""
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "default.rgw.buckets.index",
"data_pool": "default.rgw.buckets.data",
"data_extra_pool": "default.rgw.buckets.non-ec",
"index_type": 0,
"compression": ""
}
}
],
"metadata_heap": "",
"tier_config": [],
"realm_id": ""
}
Create a bucket
Use the s3cmd client to create a bucket.
- Install and configure s3cmd
# Install the package
[root@ceph1 ~]# yum -y install s3cmd
# Generate a base configuration file by following the prompts
[root@ceph1 ~]# s3cmd --configure
# Edit the following entries in the configuration file
[root@ceph1 ~]# vim /root/.s3cfg
cloudfront_host = 192.168.186.101:7480
host_base = 192.168.186.101:7480
host_bucket = 192.168.186.101:7480/%(bucket)
- Create the bucket
[root@ceph1 ~]# s3cmd mb s3://demo0219
Bucket 's3://demo0219/' created
# The default.rgw.buckets.index pool now holds one object: OBJECTS is 1
[root@ceph1 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
420GiB 411GiB 9.11GiB 2.17
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
.rgw.root 1 3.10KiB 0 189GiB 8
default.rgw.control 2 0B 0 189GiB 8
default.rgw.meta 3 665B 0 189GiB 5
default.rgw.log 4 0B 0 189GiB 175
default.rgw.buckets.index 5 0B 0 189GiB 1
default.rgw.buckets.data 6 0B 0 189GiB 0
Relationships between users, buckets, objects, and RADOS
Users and RADOS objects
Above we created the radosgw user test, whose data is stored in the default.rgw.meta pool. The commands below show three metadata sections in that pool: bucket, bucket.instance, and user. The bucket section contains the bucket we created (demo0219), the user section contains the user we created (test), and the bucket.instance section contains the concatenated instance ID. Every user created gets its own RADOS object in the user section of the default.rgw.meta pool.
[root@ceph1 dir]# radosgw-admin metadata list
[
"bucket",
"bucket.instance",
"user"
]
[root@ceph1 dir]# radosgw-admin metadata list bucket
[
"demo0219"
]
[root@ceph1 dir]# radosgw-admin metadata list bucket.instance
[
"demo0219:95010954-503b-4b17-87e5-023f5d344fdf.44112.1"
]
[root@ceph1 dir]# radosgw-admin metadata list user
[
"test"
]
Buckets and RADOS objects
Buckets relate to RADOS objects the same way users do: creating a bucket also creates bucket and bucket.instance entries in the default.rgw.meta pool. The RADOS object stored under bucket.instance is named {bucket-name}:{bucket-id} and holds the bucket's metadata.
# The id of bucket demo0219 is 95010954-503b-4b17-87e5-023f5d344fdf.44112.1
[root@ceph1 dir]# radosgw-admin metadata get bucket:demo0219
{
"key": "bucket:demo0219",
"ver": {
"tag": "_L_p3GSNo0xaaVd4EV5q8jBz",
"ver": 1
},
"mtime": "2020-02-19 07:30:30.926620Z",
"data": {
"bucket": {
"name": "demo0219",
"marker": "95010954-503b-4b17-87e5-023f5d344fdf.44112.1",
"bucket_id": "95010954-503b-4b17-87e5-023f5d344fdf.44112.1",
"tenant": "",
"explicit_placement": {
"data_pool": "",
"data_extra_pool": "",
"index_pool": ""
}
},
"owner": "test",
"creation_time": "2020-02-19 07:30:30.921594Z",
"linked": "true",
"has_bucket_info": "false"
}
}
[root@ceph1 ~]# radosgw-admin metadata get bucket.instance:demo0219:95010954-503b-4b17-87e5-023f5d344fdf.44112.1
{
"key": "bucket.instance:demo0219:95010954-503b-4b17-87e5-023f5d344fdf.44112.1",
"ver": {
"tag": "_geGF06BtYLaOAiWYOX5r2gR",
"ver": 1
},
"mtime": "2020-02-19 07:30:30.924367Z",
"data": {
"bucket_info": {
"bucket": {
"name": "demo0219",
"marker": "95010954-503b-4b17-87e5-023f5d344fdf.44112.1",
"bucket_id": "95010954-503b-4b17-87e5-023f5d344fdf.44112.1",
"tenant": "",
"explicit_placement": {
"data_pool": "",
"data_extra_pool": "",
"index_pool": ""
}
},
"creation_time": "2020-02-19 07:30:30.921594Z",
"owner": "test",
"flags": 0,
"zonegroup": "5a473f02-65df-40df-98ff-bac62ea419ca",
"placement_rule": "default-placement",
"has_instance_obj": "true",
"quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"num_shards": 0,
"bi_shard_hash_type": 0,
"requester_pays": "false",
"has_website": "false",
"swift_versioning": "false",
"swift_ver_location": "",
"index_type": 0,
"mdsearch_config": [],
"reshard_status": 0,
"new_bucket_instance_id": ""
},
"attrs": [
{
"key": "user.rgw.acl",
"val": "AgJ7AAAAAwIQAAAABAAAAHRlc3QEAAAAdGVzdAQDXwAAAAEBAAAABAAAAHRlc3QPAAAAAQAAAAQAAAB0ZXN0BQM0AAAAAgIEAAAAAAAAAAQAAAB0ZXN0AAAAAAAAAAACAgQAAAAPAAAABAAAAHRlc3QAAAAAAAAAAAAAAAAAAAAA"
},
{
"key": "user.rgw.idtag",
"val": ""
}
]
}
}
Users and buckets
A user can own multiple buckets. When a bucket is created, the owning user is identified by the keys passed in, and the owner is then recorded in the bucket's metadata.
An open question: here the user information lives in the default.rgw.meta pool and can only be inspected with radosgw-admin metadata commands, whereas some online sources describe it being stored in a default.rgw.users.uid pool, where the users and the buckets they created are easy to see directly.
[root@ceph1 dir]# radosgw-admin metadata get bucket:demo0219
{
"key": "bucket:demo0219",
"ver": {
"tag": "_L_p3GSNo0xaaVd4EV5q8jBz",
"ver": 1
},
"mtime": "2020-02-19 07:30:30.926620Z",
"data": {
"bucket": {
"name": "demo0219",
"marker": "95010954-503b-4b17-87e5-023f5d344fdf.44112.1",
"bucket_id": "95010954-503b-4b17-87e5-023f5d344fdf.44112.1",
"tenant": "",
"explicit_placement": {
"data_pool": "",
"data_extra_pool": "",
"index_pool": ""
}
},
"owner": "test",
"creation_time": "2020-02-19 07:30:30.921594Z",
"linked": "true",
"has_bucket_info": "false"
}
}
Buckets and data
- Upload data with the s3cmd client
[root@ceph1 ~]# ls
0.4.2.c4 ceph.conf dir
[root@ceph1 ~]# s3cmd put ceph.conf s3://demo0219
upload: 'ceph.conf' -> 's3://demo0219/ceph.conf' [1 of 1]
368 of 368 100% in 0s 5.63 kB/s done
# Note that one object now appears in both the index pool and the data pool (the OBJECTS column)
[root@ceph1 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
420GiB 411GiB 9.11GiB 2.17
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
.rgw.root 1 3.10KiB 0 189GiB 8
default.rgw.control 2 0B 0 189GiB 8
default.rgw.meta 3 899B 0 189GiB 7
default.rgw.log 4 0B 0 189GiB 175
default.rgw.buckets.index 5 0B 0 189GiB 1
default.rgw.buckets.data 6 368B 0 189GiB 1
- Inspect the uploaded data
# Data objects are named {bucket-id}_{file-name}
[root@ceph1 ~]# rados -p default.rgw.buckets.data ls
95010954-503b-4b17-87e5-023f5d344fdf.44112.1_ceph.conf
# Index objects are named .dir.{bucket-id}
[root@ceph1 ~]# rados -p default.rgw.buckets.index ls
.dir.95010954-503b-4b17-87e5-023f5d344fdf.44112.1
# The .dir object is the bucket's index object; it can be used to list every object entry in the bucket
[root@ceph1 ~]# rados -p default.rgw.buckets.index listomapkeys .dir.95010954-503b-4b17-87e5-023f5d344fdf.44112.1
ceph.conf
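The two naming rules in the comments above can be captured as small helpers. This covers the whole-object upload case; the bucket id is the one from this cluster:

```python
# Naming rules for a whole-object upload, as observed in the listings above.

def data_object_name(bucket_id, key):
    # object in {zone}.rgw.buckets.data: {bucket-id}_{file-name}
    return f"{bucket_id}_{key}"

def index_object_name(bucket_id):
    # bucket index object in {zone}.rgw.buckets.index: .dir.{bucket-id}
    return f".dir.{bucket_id}"

bucket_id = "95010954-503b-4b17-87e5-023f5d344fdf.44112.1"
print(data_object_name(bucket_id, "ceph.conf"))
print(index_object_name(bucket_id))
```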
Reassembling a file from the underlying RADOS objects
Uploads are split into 15 MB parts by default, so a large file is split up and uploaded in pieces.
- Prepare a somewhat larger file: 100 MB
[root@ceph1 ~]# dd if=/dev/zero of=file count=10 bs=10M
10+0 records in
10+0 records out
104857600 bytes (105 MB) copied, 0.170556 s, 615 MB/s
- Upload the file
The 100 MB file is uploaded in 7 parts:
[root@ceph1 ~]# s3cmd put file s3://demo0219
upload: 'file' -> 's3://demo0219/file' [part 1 of 7, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 15.55 MB/s done
upload: 'file' -> 's3://demo0219/file' [part 2 of 7, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 31.97 MB/s done
upload: 'file' -> 's3://demo0219/file' [part 3 of 7, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 35.19 MB/s done
upload: 'file' -> 's3://demo0219/file' [part 4 of 7, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 40.15 MB/s done
upload: 'file' -> 's3://demo0219/file' [part 5 of 7, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 24.77 MB/s done
upload: 'file' -> 's3://demo0219/file' [part 6 of 7, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 15.54 MB/s done
upload: 'file' -> 's3://demo0219/file' [part 7 of 7, 10MB] [1 of 1]
10485760 of 10485760 100% in 0s 34.61 MB/s done
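The 7-part split follows directly from the 15 MiB default chunk size; a quick sanity check of the arithmetic:

```python
# Split a 100 MiB file into 15 MiB multipart chunks, as s3cmd did above.
PART = 15 * 1024 * 1024          # default multipart chunk size (15728640 bytes)
size = 100 * 1024 * 1024         # the 100 MiB test file

parts = []
off = 0
while off < size:
    parts.append(min(PART, size - off))
    off += PART

print(len(parts), parts[-1] // (1024 * 1024))   # 7 parts, last one 10 MiB
```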
- List the bucket's objects for this file
[root@ceph1 ~]# rados -p default.rgw.buckets.data ls | grep file
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.6
95010954-503b-4b17-87e5-023f5d344fdf.44112.1_file
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.3_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.3_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.5
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.5_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.1_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.3_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.1_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.6_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.4_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.7_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.6_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.2_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.7_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.6_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.1_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.4
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.7
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.2_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.2_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.4_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.5_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.5_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.4_1
- Save the bucket's objects in sorted order
[root@ceph1 ~]# cat rados.txt
95010954-503b-4b17-87e5-023f5d344fdf.44112.1_file
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.1_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.1_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.1_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.2_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.2_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.2_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.3_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.3_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.3_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.4
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.4_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.4_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.4_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.5
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.5_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.5_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.5_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.6
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.6_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.6_2
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.6_3
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__multipart_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.7
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.7_1
95010954-503b-4b17-87e5-023f5d344fdf.44112.1__shadow_file.2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz.7_2
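The ordering in rados.txt can be produced with a small, hypothetical sort key: the head object first, then each part's __multipart object followed by its __shadow stripes in sequence. The bucket id and upload id below are the ones from the listing above:

```python
# Sort multipart RADOS object names into concatenation order (hypothetical helper).
import re

def stripe_order_key(name):
    m = re.search(r"__(multipart|shadow)_.*\.(\d+)(?:_(\d+))?$", name)
    if not m:
        return (0, 0, 0)                      # head object sorts first
    kind, part, stripe = m.group(1), int(m.group(2)), m.group(3)
    # per part: the __multipart object, then its __shadow stripes _1, _2, ...
    return (1, part, 0 if kind == "multipart" else int(stripe))

bid = "95010954-503b-4b17-87e5-023f5d344fdf.44112.1"
uid = "2~tqwjASmgvsRrn1Odjo6cmbX2b4z3MOz"     # the multipart upload id
listing = [                                    # unordered, as `rados ls` returns it
    f"{bid}__shadow_file.{uid}.1_2",
    f"{bid}__multipart_file.{uid}.2",
    f"{bid}_file",
    f"{bid}__shadow_file.{uid}.1_1",
    f"{bid}__multipart_file.{uid}.1",
]
for name in sorted(listing, key=stripe_order_key):
    print(name)
```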
- Concatenate them
for i in `cat rados.txt`;do rados -p default.rgw.buckets.data get $i $i; cat $i >> file.txt;done
- Compare md5 checksums
The md5 checksums match:
[root@ceph1 ~]# md5sum file
2f282b84e7e608d5852449ed940bfc51 file
[root@ceph1 ~]# md5sum file.txt
2f282b84e7e608d5852449ed940bfc51 file.txt
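Because the file came from /dev/zero, the checksum can also be verified independently of the cluster by hashing 100 MiB of zero bytes:

```python
# md5 over exactly 104857600 zero bytes, the same content dd produced above.
import hashlib

digest = hashlib.md5(b"\x00" * (100 * 1024 * 1024)).hexdigest()
print(digest)   # matches the md5sum output shown above
```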