s3cmd Operations Manual

s3cmd(1)                                                              s3cmd(1)

NAME
       s3cmd  -  tool  for  managing Amazon S3 storage space and Amazon CloudFront content delivery
       network

SYNOPSIS
       s3cmd [OPTIONS] COMMAND [PARAMETERS]

DESCRIPTION
       s3cmd is a command line client for copying files to/from Amazon S3 (Simple Storage  Service)
       and  performing  other  related  tasks,  for instance creating and removing buckets, listing
       objects, etc.
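
       For example, a minimal session might look like the following sketch (the bucket and file
       names are hypothetical):
             s3cmd --configure
             s3cmd mb s3://example-bucket
             s3cmd put report.pdf s3://example-bucket/
             s3cmd ls s3://example-bucket/
             s3cmd get s3://example-bucket/report.pdf report-copy.pdf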

COMMANDS
       s3cmd can do several actions specified by the following commands.

       s3cmd mb s3://BUCKET
              Make bucket

       s3cmd rb s3://BUCKET
              Remove bucket

       s3cmd ls [s3://BUCKET[/PREFIX]]
              List objects or buckets

       s3cmd la
               List all objects in all buckets

       s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
              Put file into bucket

       s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
              Get file from bucket

       s3cmd del s3://BUCKET/OBJECT
              Delete file from bucket

       s3cmd rm s3://BUCKET/OBJECT
              Delete file from bucket (alias for del)

       s3cmd restore s3://BUCKET/OBJECT
              Restore file from Glacier storage

       s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
               Synchronize a directory tree to S3 (checks file freshness using size and md5
               checksum, unless overridden by options; see below)

       s3cmd du [s3://BUCKET[/PREFIX]]
              Disk usage by buckets

       s3cmd info s3://BUCKET[/OBJECT]
              Get various information about Buckets or Files

       s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
              Copy object

       s3cmd modify s3://BUCKET1/OBJECT
              Modify object metadata

       s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
              Move object

       s3cmd setacl s3://BUCKET[/OBJECT]
              Modify Access control list for Bucket or Files

       s3cmd setpolicy FILE s3://BUCKET
              Modify Bucket Policy

       s3cmd delpolicy s3://BUCKET
              Delete Bucket Policy

       s3cmd multipart s3://BUCKET [Id]
              Show multipart uploads

       s3cmd abortmp s3://BUCKET/OBJECT Id
              Abort a multipart upload

       s3cmd listmp s3://BUCKET/OBJECT Id
              List parts of a multipart upload
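
       For example, to inspect and clean up a stale multipart upload, a sketch (the object name
       is hypothetical; take the actual UploadId from the multipart listing):
             s3cmd multipart s3://example-bucket
             s3cmd listmp s3://example-bucket/big-archive.tar UPLOAD_ID
             s3cmd abortmp s3://example-bucket/big-archive.tar UPLOAD_ID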

       s3cmd accesslog s3://BUCKET
              Enable/disable bucket access logging

       s3cmd sign STRING-TO-SIGN
              Sign arbitrary string using the secret key

       s3cmd signurl s3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>
              Sign an S3 URL to provide limited public access with expiry
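
       For example, to generate a link valid for one hour, a sketch (the object name is
       hypothetical; +3600 is an expiry offset in seconds from now):
             s3cmd signurl s3://example-bucket/report.pdf +3600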

       s3cmd fixbucket s3://BUCKET[/PREFIX]
              Fix invalid file names in a bucket

       s3cmd expire s3://BUCKET
              Set or delete expiration rule for the bucket

       s3cmd setlifecycle s3://BUCKET
              Upload a lifecycle policy for the bucket

       s3cmd dellifecycle s3://BUCKET
              Remove a lifecycle policy for the bucket
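
       For example, to expire objects under a given prefix 30 days after creation, a sketch
       using the [expire] options described in OPTIONS below (bucket and prefix are
       hypothetical):
             s3cmd expire --expiry-days=30 --expiry-prefix=logs/ s3://example-bucket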

       Commands for static WebSites configuration

       s3cmd ws-create s3://BUCKET
              Create Website from bucket

       s3cmd ws-delete s3://BUCKET
              Delete Website

       s3cmd ws-info s3://BUCKET
              Info about Website
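
       For example, to publish a bucket as a static website, a sketch (document names are
       hypothetical; see the --ws-index and --ws-error options below):
             s3cmd ws-create --ws-index=index.html --ws-error=error.html s3://example-bucket
             s3cmd ws-info s3://example-bucket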

       Commands for CloudFront management

       s3cmd cflist
              List CloudFront distribution points

       s3cmd cfinfo [cf://DIST_ID]
              Display CloudFront distribution point parameters

       s3cmd cfcreate s3://BUCKET
              Create CloudFront distribution point

       s3cmd cfdelete cf://DIST_ID
              Delete CloudFront distribution point

       s3cmd cfmodify cf://DIST_ID
              Change CloudFront distribution point parameters

       s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]
              Display CloudFront invalidation request(s) status
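
       For example, to put a bucket behind CloudFront and later inspect an invalidation, a
       sketch (DIST_ID is a placeholder taken from the cflist output):
             s3cmd cfcreate s3://example-bucket
             s3cmd cflist
             s3cmd cfinvalinfo cf://DIST_ID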

OPTIONS
       Some of the options below can have their default values set in the s3cmd config file (by
       default $HOME/.s3cfg). As it is a simple text file, feel free to open it with your
       favorite text editor and make any changes you like.

       -h, --help
              show this help message and exit

       --configure
               Invoke interactive (re)configuration tool. Optionally use as '--configure
               s3://some-bucket' to test access to a specific bucket instead of attempting to
               list them all.

       -c FILE, --config=FILE
              Config file name. Defaults to $HOME/.s3cfg

       --dump-config
              Dump current configuration after parsing config files and command  line  options  and
              exit.

       --access_key=ACCESS_KEY
              AWS Access Key

       --secret_key=SECRET_KEY
              AWS Secret Key

       -n, --dry-run
               Only show what should be uploaded or downloaded but don't actually do it. May still
              perform S3 requests to get bucket listings and other  information  though  (only  for
              file transfer commands)

       -s, --ssl
              Use HTTPS connection when communicating with S3.

       --no-ssl
               Don't use HTTPS. (default)

       -e, --encrypt
              Encrypt files before uploading to S3.

       --no-encrypt
               Don't encrypt files.

       -f, --force
              Force overwrite and other dangerous operations.

       --continue
              Continue getting a partially downloaded file (only for [get] command).

       --continue-put
               Continue uploading partially uploaded files or multipart upload parts.  Restarts
               files/parts that don't have matching size and md5.  Skips files/parts that do.
               Note: md5sum checks are not always sufficient to check (part) file equality.
               Enable this at your own risk.

       --upload-id=UPLOAD_ID
               UploadId for Multipart Upload, in case you want to continue an existing upload
               (equivalent to --continue-put) and there are multiple partial uploads.  Use
               s3cmd multipart [URI] to see which UploadIds are associated with the given URI.

       --skip-existing
              Skip over files that exist at the destination (only for [get] and [sync] commands).

       -r, --recursive
              Recursive upload, download or removal.

       --check-md5
              Check MD5 sums when comparing files for [sync].  (default)

       --no-check-md5
              Do not check MD5 sums when comparing files for [sync].  Only size will  be  compared.
              May significantly speed up transfer but may also miss some changed files.

       -P, --acl-public
              Store objects with ACL allowing read for anyone.

       --acl-private
              Store objects with default ACL allowing access for you only.

       --acl-grant=PERMISSION:EMAIL or USER_CANONICAL_ID
               Grant stated permission to a given Amazon user.  Permission is one of: read,
               write, read_acp, write_acp, full_control, all

       --acl-revoke=PERMISSION:USER_CANONICAL_ID
               Revoke stated permission for a given Amazon user.  Permission is one of: read,
               write, read_acp, write_acp, full_control, all
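
       For example, to grant and later revoke read access on an object, a sketch (the e-mail
       address, canonical id and object name are hypothetical):
             s3cmd setacl --acl-grant=read:user@example.com s3://example-bucket/report.pdf
             s3cmd setacl --acl-revoke=read:USER_CANONICAL_ID s3://example-bucket/report.pdf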

       -D NUM, --restore-days=NUM
               Number of days to keep restored file available (only for 'restore' command).

       --delete-removed
              Delete remote objects with no corresponding local file [sync]

       --no-delete-removed
               Don't delete remote objects.

       --delete-after
              Perform deletes after new uploads [sync]

       --delay-updates
              Put all updated files into place at end [sync]

       --max-delete=NUM
              Do not delete more than NUM files. [del] and [sync]
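
       For example, a mirroring sync that removes remote files deleted locally, with
       --max-delete as a safety net, a sketch (paths are hypothetical):
             s3cmd sync --delete-removed --delete-after --max-delete=100 \
                 /local/path/ s3://example-bucket/backup/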

       --add-destination=ADDITIONAL_DESTINATIONS
              Additional  destination  for  parallel  uploads,  in  addition  to  last arg.  May be
              repeated.

       --delete-after-fetch
              Delete remote objects after fetching to local file (only for [get]  and  [sync]  com-
              mands).

       -p, --preserve
              Preserve filesystem attributes (mode, ownership, timestamps). Default for [sync] com-
              mand.

       --no-preserve
               Don't store FS attributes

       --exclude=GLOB
              Filenames and paths matching GLOB will be excluded from sync

       --exclude-from=FILE
              Read --exclude GLOBs from FILE

       --rexclude=REGEXP
              Filenames and paths matching REGEXP (regular expression) will be excluded from sync

       --rexclude-from=FILE
              Read --rexclude REGEXPs from FILE

       --include=GLOB
              Filenames and paths matching GLOB will be included even if previously excluded by one
              of --(r)exclude(-from) patterns

       --include-from=FILE
              Read --include GLOBs from FILE

       --rinclude=REGEXP
              Same as --include but uses REGEXP (regular expression) instead of GLOB

       --rinclude-from=FILE
              Read --rinclude REGEXPs from FILE

       --ignore-failed-copy
               Don't exit unsuccessfully because of missing keys

       --files-from=FILE
              Read list of source-file names from FILE. Use - to read from stdin.

       --region=REGION, --bucket-location=REGION
               Region to create bucket in.  As of now the regions are: us-east-1, us-west-1,
               us-west-2, eu-west-1, eu-central-1, ap-northeast-1, ap-southeast-1,
               ap-southeast-2, sa-east-1

       --reduced-redundancy, --rr
               Store object with 'Reduced redundancy'. Lower per-GB price. [put, cp, mv]

       --no-reduced-redundancy, --no-rr
               Store object without 'Reduced redundancy'. Higher per-GB price. [put, cp, mv]

       --access-logging-target-prefix=LOG_TARGET_PREFIX
              Target prefix for access logs (S3 URI) (for [cfmodify] and [accesslog] commands)

       --no-access-logging
              Disable access logging (for [cfmodify] and [accesslog] commands)

       --default-mime-type=DEFAULT_MIME_TYPE
              Default MIME-type for stored objects. Application default is binary/octet-stream.

       -M, --guess-mime-type
              Guess MIME-type of files by their extension or mime magic. Fall back to default MIME-
              Type as specified by --default-mime-type option

       --no-guess-mime-type
               Don't guess MIME-type and use the default type instead.

       --no-mime-magic
               Don't use mime magic when guessing MIME-type.

       -m MIME/TYPE, --mime-type=MIME/TYPE
              Force MIME-type. Override both --default-mime-type and --guess-mime-type.

       --add-header=NAME:VALUE
               Add a given HTTP header to the upload request.  Can be used multiple times.  For
               instance, set 'Expires' or 'Cache-Control' headers (or both) using this option.

       --remove-header=NAME
               Remove a given HTTP header.  Can be used multiple times.  For instance, remove
               'Expires' or 'Cache-Control' headers (or both) using this option. [modify]
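
       For example, to set a cache header at upload time and strip it again later, a sketch
       (file and bucket names are hypothetical):
             s3cmd put --add-header="Cache-Control: max-age=86400" page.html s3://example-bucket/
             s3cmd modify --remove-header=Cache-Control s3://example-bucket/page.html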

       --server-side-encryption
              Specifies that server-side encryption will be used when putting objects. [put,  sync,
              cp, modify]

       --encoding=ENCODING
              Override autodetected terminal and filesystem encoding (character set). Autodetected:
              UTF-8

       --add-encoding-exts=EXTENSIONs
               Add encoding to these comma-delimited extensions, e.g. (css,js,html), when
               uploading to S3.

       --verbatim
               Use the S3 name as given on the command line. No pre-processing, encoding, etc. Use
              with caution!

       --disable-multipart
              Disable multipart upload on files bigger than --multipart-chunk-size-mb

       --multipart-chunk-size-mb=SIZE
               Size of each chunk of a multipart upload. Files bigger than SIZE are automatically
               uploaded as multithreaded-multipart, smaller files are uploaded using the
               traditional method.  SIZE is in Mega-Bytes, default chunk size is 15MB, minimum
               allowed chunk size is 5MB, maximum is 5GB.
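
       For example, to upload a large file in 50MB chunks, or to skip multipart entirely, a
       sketch (the file name is hypothetical):
             s3cmd put --multipart-chunk-size-mb=50 big-archive.tar s3://example-bucket/
             s3cmd put --disable-multipart big-archive.tar s3://example-bucket/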

       --list-md5
               Include MD5 sums in bucket listings (only for 'ls' command).

       -H, --human-readable-sizes
               Print sizes in human-readable form (e.g. 1kB instead of 1234).

       --ws-index=WEBSITE_INDEX
              Name of index-document (only for [ws-create] command)

       --ws-error=WEBSITE_ERROR
              Name of error-document (only for [ws-create] command)

       --expiry-date=EXPIRY_DATE
              Indicates when the expiration rule takes effect. (only for [expire] command)

       --expiry-days=EXPIRY_DAYS
              Indicates  the number of days after object creation the expiration rule takes effect.
              (only for [expire] command)

       --expiry-prefix=EXPIRY_PREFIX
               A prefix identifying one or more objects to which the expiration rule applies.
               (only for [expire] command)

       --progress
              Display progress meter (default on TTY).

       --no-progress
               Don't display progress meter (default on non-TTY).

       --enable
              Enable given CloudFront distribution (only for [cfmodify] command)

       --disable
               Disable given CloudFront distribution (only for [cfmodify] command)

       --cf-invalidate
               Invalidate the uploaded files in CloudFront. Also see [cfinval] command.

       --cf-invalidate-default-index
              When using Custom Origin and S3 static website, invalidate the default index file.

       --cf-no-invalidate-default-index-root
               When using Custom Origin and S3 static website, don't invalidate the path to the
              default index file.

       --cf-add-cname=CNAME
              Add given CNAME to a CloudFront distribution (only for [cfcreate] and [cfmodify] com-
              mands)

       --cf-remove-cname=CNAME
              Remove given CNAME from a CloudFront distribution (only for [cfmodify] command)

       --cf-comment=COMMENT
              Set  COMMENT  for a given CloudFront distribution (only for [cfcreate] and [cfmodify]
              commands)

       --cf-default-root-object=DEFAULT_ROOT_OBJECT
              Set the default root object to return when no object is specified in the URL.  Use  a
              relative   path,   i.e.    default/index.html   instead   of  /default/index.html  or
              s3://bucket/default/index.html (only for [cfcreate] and [cfmodify] commands)

       -v, --verbose
              Enable verbose output.

       -d, --debug
              Enable debug output.

       --version
              Show s3cmd version (1.5.2) and exit.

       -F, --follow-symlinks
              Follow symbolic links as if they are regular files

       --cache-file=FILE
              Cache FILE containing local source MD5 values

       -q, --quiet
              Silence output on stdout

       --ca-certs=CA_CERTS_FILE
              Path to SSL CA certificate FILE (instead of system default)

       --check-certificate
              Check SSL certificate validity

       --no-check-certificate
               Do not check SSL certificate validity.

       --signature-v2
              Use AWS Signature version 2 instead of newer signature methods. Helpful  for  S3-like
               systems that don't have AWS Signature v4 yet.

EXAMPLES
       One  of  the  most  powerful commands of s3cmd is s3cmd sync used for synchronising complete
       directory trees to or from remote S3 storage. To some extent s3cmd put and s3cmd get share a
       similar behaviour with sync.

       Basic usage common in backup scenarios is as simple as:
            s3cmd sync /local/path/ s3://test-bucket/backup/

       This  command will find all files under /local/path directory and copy them to corresponding
       paths under s3://test-bucket/backup on the remote side.  For example:
            /local/path/file1.ext         ->  s3://bucket/backup/file1.ext
            /local/path/dir123/file2.bin  ->  s3://bucket/backup/dir123/file2.bin

       However if the local path doesn't end with a slash, the last directory's name is used on the
       remote side as well. Compare these with the previous example:
            s3cmd sync /local/path s3://test-bucket/backup/
       will sync:
            /local/path/file1.ext         ->  s3://bucket/backup/path/file1.ext
            /local/path/dir123/file2.bin  ->  s3://bucket/backup/path/dir123/file2.bin

       To retrieve the files back from S3 use inverted syntax:
            s3cmd sync s3://test-bucket/backup/ /tmp/restore/
       that will download files:
            s3://bucket/backup/file1.ext         ->  /tmp/restore/file1.ext
            s3://bucket/backup/dir123/file2.bin  ->  /tmp/restore/dir123/file2.bin

       Without  the trailing slash on source the behaviour is similar to what has been demonstrated
       with upload:
            s3cmd sync s3://test-bucket/backup /tmp/restore/
       will download the files as:
            s3://bucket/backup/file1.ext         ->  /tmp/restore/backup/file1.ext
            s3://bucket/backup/dir123/file2.bin  ->  /tmp/restore/backup/dir123/file2.bin

       All source file names are matched against exclude rules, and those that match are then
       re-checked against include rules to see whether they should be excluded or kept in the
       source list.

       For the purpose of --exclude and --include matching, only the file names relative to the
       source root are used.  For instance only path/file1.ext is tested against the patterns,
       not /local/path/file1.ext

       Both --exclude and --include work with shell-style wildcards (a.k.a. GLOB).  For greater
       flexibility s3cmd provides regular-expression versions of the two exclude options, named
       --rexclude and --rinclude.  The options with the ...-from suffix (e.g. --rinclude-from)
       expect a filename as an argument.  Each line of such a file is treated as one pattern.

       There is only one set of patterns built from all --(r)exclude(-from) options, and
       similarly for the include variant.  Any file excluded with e.g. --exclude can be put back
       with a pattern found in the --rinclude-from list.

       Run s3cmd with --dry-run to verify that your rules work as expected.  Use it together
       with --debug to get detailed information about matching file names against exclude and
       include rules.
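
       A sketch of such a verification run (paths reuse the examples above):
             s3cmd sync --dry-run --debug --exclude '*.tmp' /local/path/ s3://test-bucket/backup/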

       For example to exclude all files with ".jpg" extension except those beginning with a  number
       use:

             --exclude '*.jpg' --rinclude '[0-9].*.jpg'

       To exclude all files except "*.jpg" extension, use:

             --exclude '*' --include '*.jpg'

       To exclude local directory 'somedir', be sure to use a trailing forward slash, as such:

             --exclude 'somedir/'

SEE ALSO
       For the most up to date list of options run: s3cmd --help
       For  more  info  about  usage,  examples  and  other related info visit project homepage at:
       http://s3tools.org

DONATIONS
       Please consider a donation if you have found s3cmd useful:
       http://s3tools.org/donate

AUTHOR
       Written by Michal Ludvig and contributors

CONTACT, SUPPORT
       Preferred way to get support is our mailing list:
       s3tools-general@lists.sourceforge.net
       or visit the project homepage:
       http://s3tools.org

REPORTING BUGS
       Report bugs to s3tools-bugs@lists.sourceforge.net

COPYRIGHT
       Copyright © 2007-2014 TGRMN Software - http://www.tgrmn.com - and contributors

LICENSE
       This program is free software; you can redistribute it and/or modify it under the  terms  of
       the  GNU General Public License as published by the Free Software Foundation; either version
       2 of the License, or (at your option) any later version.  This program is distributed in the
       hope  that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
       more details.

                                                                      s3cmd(1)