Common s3cmd Commands

Record: 319

Scenario: On CentOS 7.9, use s3cmd to operate a Ceph distributed storage system: uploading, downloading, listing, and managing data.

Versions:

Operating system: CentOS 7.9

Ceph: Ceph version 13.2.10

Terms:

Amazon S3, short for Amazon Simple Storage Service, is an object storage service that stores any number of objects under unique keys.

S3cmd is a free command-line tool and client for uploading, downloading, retrieving, and managing data on Amazon S3 and on other cloud storage providers that speak the S3 protocol. Ceph, for example, supports the S3 protocol.

Ceph is an open-source distributed storage system.

bucket (literally, a pail or container): in Ceph's object storage model, a bucket is addressed as s3://BUCKET, analogous to a first-level directory under the root; an object inside a bucket is addressed as s3://BUCKET/OBJECT.

1. Basic Environment

1.1 Ceph cluster (server side)

The rgw component has been deployed in the Ceph cluster and its service is running; the service name is radosgw.

1.2 Ceph client

In this example the client is deployed on app166, a machine outside the cluster.

(1) Install the Ceph client

Install command: yum install -y ceph-common

(2) Generate client keys

Generation command:

radosgw-admin user create --uid="hangzhou" --display-name="hangzhou"

Explanation: the radosgw-admin user create command creates a user; --uid specifies the user ID; --display-name specifies the display name.

Record the key information from the output:

......
"user": "hangzhou",
"access_key": "0DEWPO0TLWQWVN9ZU4LW",
"secret_key": "D3ZXtteflGqnPTcjFRlQEuTO4qtbiTFIQHl2pBEZ"
......
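
If the keys are misplaced later, they can be retrieved again from the gateway at any time; a quick sketch, using the uid created above:

radosgw-admin user info --uid="hangzhou"

This prints the full user record, including the keys array that holds access_key and secret_key.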

1.3 Install the s3cmd client

(1) Install s3cmd

Install s3cmd: yum install -y s3cmd

Install the helper package: yum install -y python-pip

(2) Configure s3cmd

Configuration command: vi /root/.s3cfg

Contents:

[default]
access_key=0DEWPO0TLWQWVN9ZU4LW
secret_key=D3ZXtteflGqnPTcjFRlQEuTO4qtbiTFIQHl2pBEZ
host_base=192.168.19.161:7480
host_bucket=192.168.19.161:7480/%(bucket)
cloudfront_host=192.168.19.161:7480
use_https=False

Explanation: 192.168.19.161:7480 is the host and port of the Ceph radosgw deployment; access_key and secret_key are the values generated by the radosgw-admin user create command above.

Note: avoid putting extraneous content, such as # comments, in the /root/.s3cfg configuration file.
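
Before moving on, it is worth a quick end-to-end check of the configuration; a minimal smoke test, assuming the gateway at 192.168.19.161:7480 is reachable:

s3cmd ls

With correct credentials and endpoint this returns the bucket list (empty output for a brand-new user) rather than an authentication or connection error.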

2. Common s3cmd Commands

(1) Help command

Command: s3cmd --help

Function: shows every command and option s3cmd supports. In day-to-day work, this built-in reference is indispensable.

(2) Configure s3cmd

Command: s3cmd --configure

Function: an interactive configuration tool.

(3) Create a bucket

Command: s3cmd mb s3://hangzhou

Function: creates a bucket named hangzhou. Bucket names must be unique; duplicates are not allowed.
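
On success s3cmd confirms the creation; the output typically looks like:

Bucket 's3://hangzhou/' created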

(4) Delete an empty bucket

Command: s3cmd rb s3://hangzhou

Function: deletes only an empty bucket; if the bucket still has contents, delete the contents first and then the bucket, as sketched below.
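
One common pattern for removing a non-empty bucket is to empty it and then remove it:

s3cmd del --recursive --force s3://hangzhou
s3cmd rb s3://hangzhou

Newer s3cmd releases also accept s3cmd rb --recursive s3://hangzhou as a single step.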

(5) List buckets and objects

Command: s3cmd ls

Function: lists the buckets.

Command: s3cmd ls s3://hangzhou/

Function: lists the objects in the given bucket.
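
For reference, each listing line carries a timestamp, the size in bytes, and the full object URI, roughly in this shape (values illustrative):

2022-11-17 10:30  143142634   s3://hangzhou/jdk-8u281-linux-x64.tar.gz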

(6) List buckets including their contents

Command: s3cmd la

Function: lists all buckets under s3:// together with the objects inside them.

(7) Upload a file to a bucket

Command: s3cmd put /home/jdk-8u281-linux-x64.tar.gz s3://hangzhou

Function: uploads a file into a Ceph bucket for storage.
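
put uploads a single file by default; to upload a whole directory tree, add --recursive. A sketch, assuming a local directory /home/apps/software/ exists:

s3cmd put --recursive /home/apps/software/ s3://hangzhou/

As with rsync-style tools, the trailing slash on the source usually matters: with it, the directory's contents are uploaded directly; without it, the directory name itself becomes a prefix inside the bucket.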

(8) Download a file from a bucket

Command: s3cmd get s3://hangzhou/jdk-8u281-linux-x64.tar.gz

Function: downloads a file from a Ceph bucket to the local machine.
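
By default the object is saved under its own name in the current directory. An explicit destination path can be given, and --force allows overwriting an existing local file; a sketch with an illustrative local path:

s3cmd get --force s3://hangzhou/jdk-8u281-linux-x64.tar.gz /tmp/jdk-8u281-linux-x64.tar.gz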

(9) Delete a file from a bucket

Command: s3cmd del s3://hangzhou/jdk-8u281-linux-x64.tar.gz

Command: s3cmd rm s3://hangzhou/jdk-8u281-linux-x64.tar.gz

Function: deletes a file from a bucket; rm is an alias for del.
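
Both forms delete a single object. To delete everything under a prefix, combine --recursive with --force; a sketch with an illustrative prefix:

s3cmd del --recursive --force s3://hangzhou/software/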

(10) Show a bucket's used space

Command: s3cmd du -H s3://hangzhou

Function: shows how much space the objects in the bucket occupy.
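
-H (--human-readable-sizes) prints sizes such as 136M instead of raw byte counts; without it the output is the byte total followed by the object count and bucket URI, roughly:

143142634   1 objects s3://hangzhou/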

(11) Get information about a bucket or object

Command: s3cmd info s3://hangzhou

Function: shows information about the bucket.

Command: s3cmd info s3://hangzhou/jdk-8u281-linux-x64.tar.gz

Function: shows information about an object in the bucket.

(12) Copy a file between buckets

Command: s3cmd cp s3://hangzhou/jdk-8u281-linux-x64.tar.gz s3://hangzhou_new

Function: copies a file from one bucket to another.
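
cp can also rename the object in flight by giving an explicit destination key; a sketch, assuming the target bucket hangzhou_new already exists:

s3cmd cp s3://hangzhou/jdk-8u281-linux-x64.tar.gz s3://hangzhou_new/jdk.tar.gz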

(13) Move a file between buckets

Command: s3cmd mv s3://hangzhou/jdk-8u281-linux-x64.tar.gz s3://hangzhou_new

Function: moves a file from one bucket to another.

(14) Sync a local directory to a bucket

Command: s3cmd sync /home/apps/software/ s3://hangzhou

Function: synchronizes the files under the given local directory into a Ceph bucket.
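
sync also works in the opposite direction, pulling bucket contents down to a local directory; a sketch with an illustrative local path:

s3cmd sync s3://hangzhou/ /home/apps/backup/

Only files whose size or MD5 checksum differ are transferred, which keeps repeated runs cheap.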

(15) Compare a local directory against a bucket

Command: s3cmd sync --dry-run /home/apps/software/ s3://hangzhou

Function: compares the files in the local directory with those in the bucket without transferring anything.

(16) Sync a local directory to a bucket and delete remote-only files

Command: s3cmd sync --delete-removed /home/apps/software/ s3://hangzhou_new

Function: compares the local directory with the bucket; files present in the bucket but absent from the directory are deleted, and files present in the directory but absent from the bucket are uploaded.
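
Because --delete-removed destroys data on the remote side, it pays to preview the plan first; a cautious sketch combining it with --dry-run:

s3cmd sync --dry-run --delete-removed /home/apps/software/ s3://hangzhou_new

The dry run prints the uploads and deletions that would happen without performing any of them.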

3. Full Help Listing

Command: s3cmd --help

Shows every command and option s3cmd supports; in day-to-day work, this built-in reference is indispensable.

Usage: s3cmd [options] COMMAND [parameters]

S3cmd is a tool for managing objects in Amazon S3 storage. It allows for
making and removing "buckets" and uploading, downloading and removing
"objects" from these buckets.

Options:
  -h, --help            show this help message and exit
  --configure           Invoke interactive (re)configuration tool. Optionally
                        use as '--configure s3://some-bucket' to test access
                        to a specific bucket instead of attempting to list
                        them all.
  -c FILE, --config=FILE
                        Config file name. Defaults to $HOME/.s3cfg
  --dump-config         Dump current configuration after parsing config files
                        and command line options and exit.
  --access_key=ACCESS_KEY
                        AWS Access Key
  --secret_key=SECRET_KEY
                        AWS Secret Key
  --access_token=ACCESS_TOKEN
                        AWS Access Token
  -n, --dry-run         Only show what should be uploaded or downloaded but
                        don't actually do it. May still perform S3 requests to
                        get bucket listings and other information though (only
                        for file transfer commands)
  -s, --ssl             Use HTTPS connection when communicating with S3.
                        (default)
  --no-ssl              Don't use HTTPS.
  -e, --encrypt         Encrypt files before uploading to S3.
  --no-encrypt          Don't encrypt files.
  -f, --force           Force overwrite and other dangerous operations.
  --continue            Continue getting a partially downloaded file (only for
                        [get] command).
  --continue-put        Continue uploading partially uploaded files or
                        multipart upload parts.  Restarts parts/files that
                        don't have matching size and md5.  Skips files/parts
                        that do.  Note: md5sum checks are not always
                        sufficient to check (part) file equality.  Enable this
                        at your own risk.
  --upload-id=UPLOAD_ID
                        UploadId for Multipart Upload, in case you want
                        continue an existing upload (equivalent to --continue-
                        put) and there are multiple partial uploads.  Use
                        s3cmd multipart [URI] to see what UploadIds are
                        associated with the given URI.
  --skip-existing       Skip over files that exist at the destination (only
                        for [get] and [sync] commands).
  -r, --recursive       Recursive upload, download or removal.
  --check-md5           Check MD5 sums when comparing files for [sync].
                        (default)
  --no-check-md5        Do not check MD5 sums when comparing files for [sync].
                        Only size will be compared. May significantly speed up
                        transfer but may also miss some changed files.
  -P, --acl-public      Store objects with ACL allowing read for anyone.
  --acl-private         Store objects with default ACL allowing access for you
                        only.
  --acl-grant=PERMISSION:EMAIL or USER_CANONICAL_ID
                        Grant stated permission to a given amazon user.
                        Permission is one of: read, write, read_acp,
                        write_acp, full_control, all
  --acl-revoke=PERMISSION:USER_CANONICAL_ID
                        Revoke stated permission for a given amazon user.
                        Permission is one of: read, write, read_acp,
                        write_acp, full_control, all
  -D NUM, --restore-days=NUM
                        Number of days to keep restored file available (only
                        for 'restore' command). Default is 1 day.
  --restore-priority=RESTORE_PRIORITY
                        Priority for restoring files from S3 Glacier (only for
                        'restore' command). Choices available: bulk, standard,
                        expedited
  --delete-removed      Delete destination objects with no corresponding
                        source file [sync]
  --no-delete-removed   Don't delete destination objects [sync]
  --delete-after        Perform deletes AFTER new uploads when delete-removed
                        is enabled [sync]
  --delay-updates       *OBSOLETE* Put all updated files into place at end
                        [sync]
  --max-delete=NUM      Do not delete more than NUM files. [del] and [sync]
  --limit=NUM           Limit number of objects returned in the response body
                        (only for [ls] and [la] commands)
  --add-destination=ADDITIONAL_DESTINATIONS
                        Additional destination for parallel uploads, in
                        addition to last arg.  May be repeated.
  --delete-after-fetch  Delete remote objects after fetching to local file
                        (only for [get] and [sync] commands).
  -p, --preserve        Preserve filesystem attributes (mode, ownership,
                        timestamps). Default for [sync] command.
  --no-preserve         Don't store FS attributes
  --exclude=GLOB        Filenames and paths matching GLOB will be excluded
                        from sync
  --exclude-from=FILE   Read --exclude GLOBs from FILE
  --rexclude=REGEXP     Filenames and paths matching REGEXP (regular
                        expression) will be excluded from sync
  --rexclude-from=FILE  Read --rexclude REGEXPs from FILE
  --include=GLOB        Filenames and paths matching GLOB will be included
                        even if previously excluded by one of
                        --(r)exclude(-from) patterns
  --include-from=FILE   Read --include GLOBs from FILE
  --rinclude=REGEXP     Same as --include but uses REGEXP (regular expression)
                        instead of GLOB
  --rinclude-from=FILE  Read --rinclude REGEXPs from FILE
  --files-from=FILE     Read list of source-file names from FILE. Use - to
                        read from stdin.
  --region=REGION, --bucket-location=REGION
                        Region to create bucket in. As of now the regions are:
                        us-east-1, us-west-1, us-west-2, eu-west-1, eu-
                        central-1, ap-northeast-1, ap-southeast-1, ap-
                        southeast-2, sa-east-1
  --host=HOSTNAME       HOSTNAME:PORT for S3 endpoint (default:
                        s3.amazonaws.com, alternatives such as s3-eu-
                        west-1.amazonaws.com). You should also set --host-
                        bucket.
  --host-bucket=HOST_BUCKET
                        DNS-style bucket+hostname:port template for accessing
                        a bucket (default: %(bucket)s.s3.amazonaws.com)
  --reduced-redundancy, --rr
                        Store object with 'Reduced redundancy'. Lower per-GB
                        price. [put, cp, mv]
  --no-reduced-redundancy, --no-rr
                        Store object without 'Reduced redundancy'. Higher per-
                        GB price. [put, cp, mv]
  --storage-class=CLASS
                        Store object with specified CLASS (STANDARD,
                        STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER
                        or DEEP_ARCHIVE). [put, cp, mv]
  --access-logging-target-prefix=LOG_TARGET_PREFIX
                        Target prefix for access logs (S3 URI) (for [cfmodify]
                        and [accesslog] commands)
  --no-access-logging   Disable access logging (for [cfmodify] and [accesslog]
                        commands)
  --default-mime-type=DEFAULT_MIME_TYPE
                        Default MIME-type for stored objects. Application
                        default is binary/octet-stream.
  -M, --guess-mime-type
                        Guess MIME-type of files by their extension or mime
                        magic. Fall back to default MIME-Type as specified by
                        --default-mime-type option
  --no-guess-mime-type  Don't guess MIME-type and use the default type
                        instead.
  --no-mime-magic       Don't use mime magic when guessing MIME-type.
  -m MIME/TYPE, --mime-type=MIME/TYPE
                        Force MIME-type. Override both --default-mime-type and
                        --guess-mime-type.
  --add-header=NAME:VALUE
                        Add a given HTTP header to the upload request. Can be
                        used multiple times. For instance set 'Expires' or
                        'Cache-Control' headers (or both) using this option.
  --remove-header=NAME  Remove a given HTTP header.  Can be used multiple
                        times.  For instance, remove 'Expires' or 'Cache-
                        Control' headers (or both) using this option. [modify]
  --server-side-encryption
                        Specifies that server-side encryption will be used
                        when putting objects. [put, sync, cp, modify]
  --server-side-encryption-kms-id=KMS_KEY
                        Specifies the key id used for server-side encryption
                        with AWS KMS-Managed Keys (SSE-KMS) when putting
                        objects. [put, sync, cp, modify]
  --encoding=ENCODING   Override autodetected terminal and filesystem encoding
                        (character set). Autodetected: UTF-8
  --add-encoding-exts=EXTENSIONs
                        Add encoding to these comma delimited extensions i.e.
                        (css,js,html) when uploading to S3 )
  --verbatim            Use the S3 name as given on the command line. No pre-
                        processing, encoding, etc. Use with caution!
  --disable-multipart   Disable multipart upload on files bigger than
                        --multipart-chunk-size-mb
  --multipart-chunk-size-mb=SIZE
                        Size of each chunk of a multipart upload. Files bigger
                        than SIZE are automatically uploaded as multithreaded-
                        multipart, smaller files are uploaded using the
                        traditional method. SIZE is in Mega-Bytes, default
                        chunk size is 15MB, minimum allowed chunk size is 5MB,
                        maximum is 5GB.
  --list-md5            Include MD5 sums in bucket listings (only for 'ls'
                        command).
  --list-allow-unordered
                        Not an AWS standard. Allow the listing results to be
                        returned in unsorted order. This may be faster when
                        listing very large buckets.
  -H, --human-readable-sizes
                        Print sizes in human readable form (eg 1kB instead of
                        1234).
  --ws-index=WEBSITE_INDEX
                        Name of index-document (only for [ws-create] command)
  --ws-error=WEBSITE_ERROR
                        Name of error-document (only for [ws-create] command)
  --expiry-date=EXPIRY_DATE
                        Indicates when the expiration rule takes effect. (only
                        for [expire] command)
  --expiry-days=EXPIRY_DAYS
                        Indicates the number of days after object creation the
                        expiration rule takes effect. (only for [expire]
                        command)
  --expiry-prefix=EXPIRY_PREFIX
                        Identifying one or more objects with the prefix to
                        which the expiration rule applies. (only for [expire]
                        command)
  --progress            Display progress meter (default on TTY).
  --no-progress         Don't display progress meter (default on non-TTY).
  --stats               Give some file-transfer stats.
  --enable              Enable given CloudFront distribution (only for
                        [cfmodify] command)
  --disable             Disable given CloudFront distribution (only for
                        [cfmodify] command)
  --cf-invalidate       Invalidate the uploaded filed in CloudFront. Also see
                        [cfinval] command.
  --cf-invalidate-default-index
                        When using Custom Origin and S3 static website,
                        invalidate the default index file.
  --cf-no-invalidate-default-index-root
                        When using Custom Origin and S3 static website, don't
                        invalidate the path to the default index file.
  --cf-add-cname=CNAME  Add given CNAME to a CloudFront distribution (only for
                        [cfcreate] and [cfmodify] commands)
  --cf-remove-cname=CNAME
                        Remove given CNAME from a CloudFront distribution
                        (only for [cfmodify] command)
  --cf-comment=COMMENT  Set COMMENT for a given CloudFront distribution (only
                        for [cfcreate] and [cfmodify] commands)
  --cf-default-root-object=DEFAULT_ROOT_OBJECT
                        Set the default root object to return when no object
                        is specified in the URL. Use a relative path, i.e.
                        default/index.html instead of /default/index.html or
                        s3://bucket/default/index.html (only for [cfcreate]
                        and [cfmodify] commands)
  -v, --verbose         Enable verbose output.
  -d, --debug           Enable debug output.
  --version             Show s3cmd version (2.3.0) and exit.
  -F, --follow-symlinks
                        Follow symbolic links as if they are regular files
  --cache-file=FILE     Cache FILE containing local source MD5 values
  -q, --quiet           Silence output on stdout
  --ca-certs=CA_CERTS_FILE
                        Path to SSL CA certificate FILE (instead of system
                        default)
  --ssl-cert=SSL_CLIENT_CERT_FILE
                        Path to client own SSL certificate CRT_FILE
  --ssl-key=SSL_CLIENT_KEY_FILE
                        Path to client own SSL certificate private key
                        KEY_FILE
  --check-certificate   Check SSL certificate validity
  --no-check-certificate
                        Do not check SSL certificate validity
  --check-hostname      Check SSL certificate hostname validity
  --no-check-hostname   Do not check SSL certificate hostname validity
  --signature-v2        Use AWS Signature version 2 instead of newer signature
                        methods. Helpful for S3-like systems that don't have
                        AWS Signature v4 yet.
  --limit-rate=LIMITRATE
                        Limit the upload or download speed to amount bytes per
                        second.  Amount may be expressed in bytes, kilobytes
                        with the k suffix, or megabytes with the m suffix
  --no-connection-pooling
                        Disable connection re-use
  --requester-pays      Set the REQUESTER PAYS flag for operations
  -l, --long-listing    Produce long listing [ls]
  --stop-on-error       stop if error in transfer
  --content-disposition=CONTENT_DISPOSITION
                        Provide a Content-Disposition for signed URLs, e.g.,
                        "inline; filename=myvideo.mp4"
  --content-type=CONTENT_TYPE
                        Provide a Content-Type for signed URLs, e.g.,
                        "video/mp4"

Commands:
  Make bucket
      s3cmd mb s3://BUCKET
  Remove bucket
      s3cmd rb s3://BUCKET
  List objects or buckets
      s3cmd ls [s3://BUCKET[/PREFIX]]
  List all object in all buckets
      s3cmd la 
  Put file into bucket
      s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
  Get file from bucket
      s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
  Delete file from bucket
      s3cmd del s3://BUCKET/OBJECT
  Delete file from bucket (alias for del)
      s3cmd rm s3://BUCKET/OBJECT
  Restore file from Glacier storage
      s3cmd restore s3://BUCKET/OBJECT
  Synchronize a directory tree to S3 (checks files freshness using size and md5 checksum, unless overridden by options, see below)
      s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR or s3://BUCKET[/PREFIX] s3://BUCKET[/PREFIX]
  Disk usage by buckets
      s3cmd du [s3://BUCKET[/PREFIX]]
  Get various information about Buckets or Files
      s3cmd info s3://BUCKET[/OBJECT]
  Copy object
      s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
  Modify object metadata
      s3cmd modify s3://BUCKET1/OBJECT
  Move object
      s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
  Modify Access control list for Bucket or Files
      s3cmd setacl s3://BUCKET[/OBJECT]
  Modify Bucket Policy
      s3cmd setpolicy FILE s3://BUCKET
  Delete Bucket Policy
      s3cmd delpolicy s3://BUCKET
  Modify Bucket CORS
      s3cmd setcors FILE s3://BUCKET
  Delete Bucket CORS
      s3cmd delcors s3://BUCKET
  Modify Bucket Requester Pays policy
      s3cmd payer s3://BUCKET
  Show multipart uploads
      s3cmd multipart s3://BUCKET [Id]
  Abort a multipart upload
      s3cmd abortmp s3://BUCKET/OBJECT Id
  List parts of a multipart upload
      s3cmd listmp s3://BUCKET/OBJECT Id
  Enable/disable bucket access logging
      s3cmd accesslog s3://BUCKET
  Sign arbitrary string using the secret key
      s3cmd sign STRING-TO-SIGN
  Sign an S3 URL to provide limited public access with expiry
      s3cmd signurl s3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>
  Fix invalid file names in a bucket
      s3cmd fixbucket s3://BUCKET[/PREFIX]
  Create Website from bucket
      s3cmd ws-create s3://BUCKET
  Delete Website
      s3cmd ws-delete s3://BUCKET
  Info about Website
      s3cmd ws-info s3://BUCKET
  Set or delete expiration rule for the bucket
      s3cmd expire s3://BUCKET
  Upload a lifecycle policy for the bucket
      s3cmd setlifecycle FILE s3://BUCKET
  Get a lifecycle policy for the bucket
      s3cmd getlifecycle s3://BUCKET
  Remove a lifecycle policy for the bucket
      s3cmd dellifecycle s3://BUCKET
  Upload a notification policy for the bucket
      s3cmd setnotification FILE s3://BUCKET
  Get a notification policy for the bucket
      s3cmd getnotification s3://BUCKET
  Remove a notification policy for the bucket
      s3cmd delnotification s3://BUCKET
  List CloudFront distribution points
      s3cmd cflist 
  Display CloudFront distribution point parameters
      s3cmd cfinfo [cf://DIST_ID]
  Create CloudFront distribution point
      s3cmd cfcreate s3://BUCKET
  Delete CloudFront distribution point
      s3cmd cfdelete cf://DIST_ID
  Change CloudFront distribution point parameters
      s3cmd cfmodify cf://DIST_ID
  Display CloudFront invalidation request(s) status
      s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]

For more information, updates and news, visit the s3cmd website:
http://s3tools.org

That concludes this article. Thank you.

November 17, 2022
