Learning Ceph: s3cmd

  1. Installing s3cmd

s3cmd can be installed directly with yum or pip; if pip is missing, install pip first.

yum install s3cmd

pip install s3cmd

1: Install s3cmd
#yum install s3cmd -y

2: Configure s3cmd to test the S3 API and initialize the local s3cmd environment. Fill in the access_key and secret_key of the user created earlier with radosgw-admin, using the actual values from your own cluster.

#vim ~/.s3cfg       Change the highlighted values to your own bucket configuration and the address of your radosgw instance.

Example .s3cfg (RGW at node1:7480):

[default]
access_key = WEHZDTZCGV2NYYB0TRKF
access_token = 
add_encoding_exts = 
add_headers = 
bucket_location = US
ca_certs_file = 
cache_file = 
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = node1:7480
content_disposition = 
content_type = 
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encrypt = False
expiry_date = 
expiry_days = 
expiry_prefix = 
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = 
guess_mime_type = True
host_base = node1:7480
host_bucket = %(bucket)s.rgw.ptengine.cn
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key = 
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix = 
long_listing = False
max_delete = -1
mime_type = 
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host = 
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 4096
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = bIgqgyq5Z7hAVspqkLhqn4V2J9XouMxU8fUsli3l
send_chunk = 4096
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = rgw.ptengine.cn
skip_existing = False
socket_timeout = 10
stats = False
stop_on_error = False
storage_class = 
throttle_max = 100
upload_id = 
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://node1:7480/
website_error = 
website_index = index.html

A second example .s3cfg (RGW at 10.10.10.17:7480):

[default]
access_key = SOxxxxx
access_token = xxxxxx
add_encoding_exts = 
add_headers = 
bucket_location = US
ca_certs_file = 
cache_file = 
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
connection_pooling = True
content_disposition = 
content_type = 
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encrypt = False
expiry_date = 
expiry_days = 
expiry_prefix = 
follow_symlinks = False
force = False
get_continue = False
gpg_command = /bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = 
guess_mime_type = True
host_base = 10.10.10.17:7480
host_bucket = 10.10.10.17:7480/%(bucket)s
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key = 
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix = 
long_listing = False
max_delete = -1
mime_type = 
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host = 
proxy_port = 0
public_url_use_https = False
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = 7xxxxxxxxxxxxxpK
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class = 
throttle_max = 100
upload_id = 
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error = 
website_index = index.html

Generate keys by creating a user:

radosgw-admin user create --uid=test1 --display-name="test 1" --email=test1@abc.com

View the user:

radosgw-admin user info --uid=test1
{
    "user_id": "test1",
    "display_name": "test 1",
    "email": "test1@abc.com",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "test1",
            "access_key": "LEEJ5TSHT0PHWGKYB3NM",
            "secret_key": "TbzEYCWsdM0j9JYXPYS6qMF3ur1hT9VBPkXongGt"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
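
The access_key / secret_key pair under "keys" is exactly what goes into ~/.s3cfg. A short Python sketch for pulling the credentials out of this JSON (the sample above is hard-coded here; in practice you would capture the stdout of radosgw-admin):

```python
import json

# A sketch: extract the S3 credentials from the JSON printed by
# `radosgw-admin user info --uid=test1` (abridged sample hard-coded).
user_info = json.loads("""
{
    "user_id": "test1",
    "keys": [
        {
            "user": "test1",
            "access_key": "LEEJ5TSHT0PHWGKYB3NM",
            "secret_key": "TbzEYCWsdM0j9JYXPYS6qMF3ur1hT9VBPkXongGt"
        }
    ]
}
""")

# The first entry under "keys" is what ~/.s3cfg expects.
access_key = user_info["keys"][0]["access_key"]
secret_key = user_info["keys"][0]["secret_key"]
print(access_key)
```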

  2. Configuring s3cmd

Before use, configure the Access Key ID and Secret Access Key.

vi ~/.s3cfg


[default]
access_key = *
secret_key = *
host_base = 127.0.0.1:7480
host_bucket = 127.0.0.1:7480/%(bucket)s
use_https = False
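
This minimal configuration can also be generated programmatically. A sketch using Python's configparser (the output file name s3cfg.example is a placeholder; s3cmd's own parser is more lenient, but this key = value format is compatible):

```python
import configparser

# A sketch: generate the minimal ~/.s3cfg above programmatically.
# Interpolation is disabled so the %(bucket)s template in host_bucket
# is written out literally rather than expanded.
cfg = configparser.ConfigParser(interpolation=None)
cfg["default"] = {
    "access_key": "LEEJ5TSHT0PHWGKYB3NM",   # keys from radosgw-admin user create
    "secret_key": "TbzEYCWsdM0j9JYXPYS6qMF3ur1hT9VBPkXongGt",
    "host_base": "127.0.0.1:7480",
    "host_bucket": "127.0.0.1:7480/%(bucket)s",
    "use_https": "False",
}
with open("s3cfg.example", "w") as f:       # use ~/.s3cfg in practice
    cfg.write(f)
```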

s3cmd --configure

---------------------------------------------------------------------------------------------------

  3. Basic s3cmd usage

3.1. List all buckets (a bucket is roughly a top-level folder).

s3cmd ls

3.2. Create a bucket. Bucket names must be unique and cannot be reused.

s3cmd mb s3://my-bucket-name

3.3. Delete an empty bucket.

s3cmd rb s3://my-bucket-name

3.4. List the contents of a bucket.

s3cmd ls s3://my-bucket-name

3.5. Upload file.txt to a bucket.

s3cmd put file.txt s3://my-bucket-name/file.txt

3.6. Upload a file and make it world-readable.

s3cmd put --acl-public file.txt s3://my-bucket-name/file.txt

3.7. Upload files in bulk.

s3cmd put ./* s3://my-bucket-name/

3.8. Download a file.

s3cmd get s3://my-bucket-name/file.txt file.txt

3.9. Download files in bulk.

s3cmd get s3://my-bucket-name/* ./

3.10. Delete a file.

s3cmd del s3://my-bucket-name/file.txt

3.11. Show the space used by a bucket.

s3cmd du -H s3://my-bucket-name

3.12. Set public permission on an S3 bucket.

s3cmd setacl s3://myexamplebucket.calvium.com/ --acl-public --recursive

3.13. Create a user.

radosgw-admin user create --uid=test1 --display-name="test 1" --email=test1@abc.com

3.14. Configuration file.

vi ~/.s3cfg


[default]
access_key = xxxxxx
secret_key = xxxxxxxxxxxx
host_base = 127.0.0.1:7480
host_bucket = 127.0.0.1:7480/bucketname
use_https = False

3.15. View user info.

radosgw-admin user info --uid=test1

 

3: #s3cmd --configure
This asks a series of questions:
    the S3 access key and secret key
    an encryption password for encrypting data transferred to and from S3
    the path to the GPG program used for encryption (e.g. /usr/bin/gpg)
    whether to use the https protocol
    the name and port of the http proxy, if one is used

The configuration is saved as plain text in ~/.s3cfg.
Since everything is already configured, press Enter through the prompts, and answer y to the final Test question.


3.16. Set public permission and a bucket policy

 s3cmd setacl s3://testpublic --acl-public        # make the bucket public
 s3cmd setpolicy policy.json s3://testpublic      # apply a policy to testpublic

policy.json is as follows:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": [
      "arn:aws:s3:::testpublic/*"
    ]
  }]
}
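
Before applying the policy, it is worth a quick sanity check that it really grants anonymous read. A small Python sketch over the same document:

```python
import json

# A sketch: sanity-check the policy before `s3cmd setpolicy` -- it should
# allow anonymous (Principal "*") GetObject on every key in the bucket.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": ["arn:aws:s3:::testpublic/*"]
  }]
}
""")

stmt = policy["Statement"][0]
assert stmt["Effect"] == "Allow" and stmt["Principal"] == "*"
print("public read on", stmt["Resource"][0])
```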

s3cmd setpolicy policy.json s3://publicbucket

# Set CORS rules on publicbucket

s3cmd setcors rules.xml s3://publicbucket

cat  rules.xml

<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <!-- <AllowedOrigin>*</AllowedOrigin> is sometimes not supported -->
    <AllowedOrigin>www.xxx.xxx.cn</AllowedOrigin>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
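
A quick way to catch XML mistakes before running `s3cmd setcors` is to parse rules.xml first. A Python sketch over a minimal rule set of the same shape (note the S3 CORS schema uses a default namespace, which ElementTree must be told about):

```python
import xml.etree.ElementTree as ET

# A minimal CORS document in the same shape as rules.xml above.
rules_xml = """<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>"""

ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
root = ET.fromstring(rules_xml)

# Collect (methods, origins) per rule -- a quick sanity check before upload.
parsed = [([m.text for m in rule.findall("s3:AllowedMethod", ns)],
           [o.text for o in rule.findall("s3:AllowedOrigin", ns)])
          for rule in root.findall("s3:CORSRule", ns)]
print(parsed)
```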

Creating a bucket from Python (boto) instead of s3cmd



import boto
import boto.s3.connection

access_key = 'xxxx'
secret_key = 'xxxxxxxxxx'

conn = boto.connect_s3(
        aws_access_key_id = access_key,
        aws_secret_access_key = secret_key,
        host = '10.10.10.27',
        port = 7480,
        is_secure=False,               # plain http; no SSL on the RGW port
        calling_format = boto.s3.connection.OrdinaryCallingFormat(),
        )

#bucket = conn.create_bucket('hgnter')
bucket = conn.get_bucket('publicbucke')

for key in bucket.list():
        print("{name}\t{size}\t{modified}".format(
                name = key.name,
                size = key.size,
                modified = key.last_modified,
                ))

Listing the files in a bucket with Python (boto3):

import boto3

ACCESS_KEY='SO47H87845Y87GE4QPQ4'
SECRET_KEY='7xTtMNHeTMr0kUJmvvKKID1pjNCagaiBibncsmpK'

s3 = boto3.resource('s3',
#        endpoint_url='http://127.0.0.1:7480',
        endpoint_url='https://bj-qa.lx.360.net',
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY)


bucket = s3.Bucket('publicbucke')

#s3.Object('publicbucke', 'hello1.txt').put(Body=open('/tmp/hello1.txt', 'rb'))
s3.Object('publicbucke', 'yujia.txt').put(Body=open('/tmp/yujia.txt', 'rb'))


for s3_file in bucket.objects.all():
    print(s3_file.key)
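
s3cmd can also mint time-limited download links with `s3cmd signurl s3://bucket/object <expiry>`. The older signature-v2 query-string URL (the scheme selected by `signature_v2 = True` in ~/.s3cfg) can be built by hand with the standard library; a sketch with placeholder host and credentials:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_url_v2(host, bucket, key, access_key, secret_key, expires):
    """Build an AWS signature-v2 presigned GET URL by hand."""
    # v2 string-to-sign for a query-string-authenticated GET
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest), safe="")
    return (f"http://{host}/{bucket}/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={signature}")

# Placeholder host and credentials -- substitute your own.
url = sign_url_v2("127.0.0.1:7480", "publicbucke", "yujia.txt",
                  "SOxxxxx", "7xxxxxxxxxxxxxpK", 1700000000)
print(url)
```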

 
