Ceph Object Storage (S3)

Ceph object storage

Unlike a disk that the operating system mounts as a filesystem, object storage cannot be accessed directly by the OS; it is accessed only through an application-level API. Ceph is a distributed object storage system that exposes an object storage interface through the Ceph Object Gateway, also known as the RADOS Gateway (RGW), which is built on top of the Ceph RADOS layer. RGW uses librgw (the RADOS Gateway library) and librados, allowing applications to establish a connection to the Ceph object store. RGW provides applications with a RESTful, S3/Swift-compatible interface for storing data as objects in the Ceph cluster. Ceph also supports multi-tenant object storage, accessible via the RESTful API. In addition, RGW supports the Ceph Admin API, so the Ceph storage cluster can be managed with native API calls.

The librados library is very flexible: user applications can access the Ceph storage cluster directly through its C, C++, Java, Python, and PHP bindings. Ceph object storage also has multi-site capability, which provides a solution for disaster recovery.
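
As a quick illustration of the librados Python binding mentioned above, here is a minimal sketch that connects to the cluster and lists its pools. It assumes the rados Python module is available (it ships with the Ceph packages) and that it runs on a node holding /etc/ceph/ceph.conf and the client.admin keyring; the paths are illustrative, not part of the original walkthrough.

import rados

# Connect using the cluster configuration and the admin keyring on this node.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      conf=dict(keyring='/etc/ceph/ceph.client.admin.keyring'))
cluster.connect()

print(cluster.get_fsid())      # cluster id, as also shown by `ceph -s`
print(cluster.list_pools())    # pool names, including the RGW pools created later

cluster.shutdown()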

Deploying object storage

Install ceph-radosgw

[ceph-admin@ceph-node1 my-cluster]$ sudo yum install ceph-radosgw

Deploy RGW

[ceph-admin@ceph-node1 my-cluster]$ ceph-deploy rgw create ceph-node1 ceph-node2 ceph-node3
[ceph-admin@ceph-node1 my-cluster]$ sudo netstat -tnlp |grep 7480
tcp        0      0 0.0.0.0:7480            0.0.0.0:*               LISTEN      15418/radosgw 
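
Besides checking the listening socket, you can confirm that the gateway is actually serving HTTP: an unauthenticated GET against the endpoint typically returns an XML ListAllMyBucketsResult for the anonymous user. A minimal sketch using Python's requests library (an extra dependency, not part of the original steps):

import requests

# Default civetweb frontend; adjust the port if rgw_frontends is later changed to 80.
resp = requests.get("http://ceph-node1:7480", timeout=5)
print(resp.status_code)   # 200 is expected while RGW is up
print(resp.text)          # typically an anonymous ListAllMyBucketsResult XML document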

To switch the gateway to port 80, edit the configuration file and then restart the RGW service:

vim /etc/ceph/ceph.conf
[client.rgw.ceph-node1]
rgw_frontends = "civetweb port=80"

sudo systemctl restart ceph-radosgw@rgw.ceph-node1.service

Create the pools

[ceph-admin@ceph-node1 my-cluster]$ wget https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/rgw/pool
[ceph-admin@ceph-node1 my-cluster]$ wget  https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/rgw/create_pool.sh
[ceph-admin@ceph-node1 my-cluster]$ cat create_pool.sh
#!/bin/bash

PG_NUM=30
PGP_NUM=30
SIZE=3

for i in `cat /home/ceph-admin/my-cluster/pool`
        do
        ceph osd pool create $i $PG_NUM
        ceph osd pool set $i size $SIZE
        done

for i in `cat /home/ceph-admin/my-cluster/pool`
        do
        ceph osd pool set $i pgp_num $PGP_NUM
        done

[ceph-admin@ceph-node1 my-cluster]$ chmod +x create_pool.sh 
[ceph-admin@ceph-node1 my-cluster]$ ./create_pool.sh

Verify that the RGW keyring can access the Ceph cluster

[ceph-admin@ceph-node1 my-cluster]$ sudo ls -l /var/lib/ceph/
total 0
drwxr-x--- 2 ceph ceph  6 Jan 31 00:48 bootstrap-mds
drwxr-x--- 2 ceph ceph 26 Feb 14 13:30 bootstrap-mgr
drwxr-x--- 2 ceph ceph 26 Feb 14 13:21 bootstrap-osd
drwxr-x--- 2 ceph ceph  6 Jan 31 00:48 bootstrap-rbd
drwxr-x--- 2 ceph ceph 26 Feb 15 14:13 bootstrap-rgw
drwxr-x--- 2 ceph ceph  6 Jan 31 00:48 mds
drwxr-x--- 3 ceph ceph 29 Feb 14 13:30 mgr
drwxr-x--- 3 ceph ceph 29 Feb 14 12:01 mon
drwxr-x--- 5 ceph ceph 48 Feb 14 13:22 osd
drwxr-xr-x 3 root root 33 Feb 15 14:13 radosgw
[ceph-admin@ceph-node1 my-cluster]$ sudo cp /var/lib/ceph/radosgw/ceph-rgw.ceph-node1/keyring ./
[ceph-admin@ceph-node1 my-cluster]$ ceph -s -k keyring --name client.rgw.ceph-node1
  cluster:
    id:     cde2c9f7-009e-4bb4-a206-95afa4c43495
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node1(active), standbys: ceph-node2, ceph-node3
    osd: 9 osds: 9 up, 9 in
    rgw: 3 daemons active
 
  data:
    pools:   18 pools, 550 pgs
    objects: 240 objects, 114MiB
    usage:   9.45GiB used, 171GiB / 180GiB avail
    pgs:     550 active+clean
 
  io:
    client:   0B/s rd, 0op/s rd, 0op/s wr

Accessing Ceph object storage with the S3 API

Create a radosgw user

[ceph-admin@ceph-node1 my-cluster]$ radosgw-admin user create --uid=radosgw --display-name="radosgw"
{
    "user_id": "radosgw",
    "display_name": "radosgw",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "radosgw",
            "access_key": "DKOORDOMS6YHR2OW5M23",
            "secret_key": "OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

Install the s3cmd client

[root@localhost ~]# yum install -y s3cmd
[root@localhost ~]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: DKOORDOMS6YHR2OW5M23
Secret Key: OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4
Default Region [US]: ZH

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: 
Path to GPG program [/usr/bin/gpg]: 

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: 

New settings:
  Access Key: DKOORDOMS6YHR2OW5M23
  Secret Key: OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4
  Default Region: ZH
  S3 Endpoint: s3.amazonaws.com
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.amazonaws.com
  Encryption password: 
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name: 
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

Edit the s3cmd configuration file so that it points at the RGW endpoint rather than Amazon S3; the settings that matter are host_base and host_bucket:

[root@localhost ~]# cat .s3cfg 
[default]
access_key = DKOORDOMS6YHR2OW5M23
access_token = 
add_encoding_exts = 
add_headers = 
bucket_location = US
ca_certs_file = 
cache_file = 
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
content_disposition = 
content_type = 
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encrypt = False
expiry_date = 
expiry_days = 
expiry_prefix = 
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = 
guess_mime_type = True
host_base = ceph-node1:7480
host_bucket = %(bucket)s.ceph-node1:7480
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key = 
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix = 
long_listing = False
max_delete = -1
mime_type = 
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host = 
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class = 
throttle_max = 100
upload_id = 
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error = 
website_index = index.html

Create a bucket and upload a file

[root@localhost ~]# s3cmd mb s3://first-bucket
Bucket 's3://first-bucket/' created
[root@localhost ~]# s3cmd ls
2019-02-15 07:45  s3://first-bucket
[root@localhost ~]# s3cmd put /etc/hosts s3://first-bucket
upload: '/etc/hosts' -> 's3://first-bucket/hosts'  [1 of 1]
 239 of 239   100% in    1s   175.80 B/s  done
[root@localhost ~]# s3cmd ls s3://first-bucket
2019-02-15 07:47       239   s3://first-bucket/hosts
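
The same bucket and object operations can also be driven from code with any S3-compatible SDK. Below is a minimal, illustrative boto3 sketch (an assumption: boto3 is not part of the original walkthrough), pointed at the RGW endpoint and the radosgw user's keys generated above; the bucket name boto3-bucket is hypothetical.

import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-node1:7480",            # RGW endpoint, not Amazon
    aws_access_key_id="DKOORDOMS6YHR2OW5M23",
    aws_secret_access_key="OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4",
    config=Config(s3={"addressing_style": "path"}),   # no DNS-style bucket names are set up
)

s3.create_bucket(Bucket="boto3-bucket")               # like: s3cmd mb s3://boto3-bucket
s3.upload_file("/etc/hosts", "boto3-bucket", "hosts") # like: s3cmd put /etc/hosts s3://boto3-bucket

for b in s3.list_buckets()["Buckets"]:                # like: s3cmd ls
    print(b["Name"])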

Accessing Ceph object storage with the Swift API

Create a Swift API subuser

[ceph-admin@ceph-node1 my-cluster]$ radosgw-admin subuser create --uid=radosgw --subuser=radosgw:swift --access=full
{
    "user_id": "radosgw",
    "display_name": "radosgw",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "radosgw:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "radosgw",
            "access_key": "DKOORDOMS6YHR2OW5M23",
            "secret_key": "OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4"
        }
    ],
    "swift_keys": [
        {
            "user": "radosgw:swift",
            "secret_key": "bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

Install the Swift API client

[root@localhost ~]# yum install python-pip -y
[root@localhost ~]# pip install --upgrade python-swiftclient

Test Swift access

[root@localhost ~]# swift -A http://ceph-node1:7480/auth/1.0 -U radosgw:swift -K bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD list
first-bucket
[root@localhost ~]# swift -A http://ceph-node1:7480/auth/1.0 -U radosgw:swift -K bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD post second-bucket
[root@localhost ~]# swift -A http://ceph-node1:7480/auth/1.0 -U radosgw:swift -K bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD list
first-bucket
second-bucket
[root@localhost ~]# s3cmd ls
2019-02-15 07:45  s3://first-bucket
2019-02-15 08:18  s3://second-bucket
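
Since python-swiftclient is already installed, the same checks can be done from Python instead of the swift CLI. A minimal sketch using swiftclient's Connection with the subuser and Swift secret key created above; the container name third-bucket and the object contents are purely illustrative.

from swiftclient.client import Connection

conn = Connection(
    authurl="http://ceph-node1:7480/auth/1.0",   # same v1.0 auth endpoint as the swift CLI
    user="radosgw:swift",
    key="bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD",
)

conn.put_container("third-bucket")                              # like: swift post third-bucket
conn.put_object("third-bucket", "hello.txt", contents="hello")  # upload a small object

headers, containers = conn.get_account()                        # like: swift list
for c in containers:
    print(c["name"])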

Reposted from: https://www.cnblogs.com/freeitlzx/p/11281763.html
