Using Amazon S3

I. Command line (AWS CLI)

1. Create a bucket
    The following create-bucket example creates a bucket named my-bucket:
    aws s3api create-bucket \
        --bucket my-bucket \
        --region us-east-1
2. Delete a bucket:
    $ aws s3 rb s3://bucket-name --force
    The --force flag first deletes all of the objects in the bucket and then removes the bucket itself.
3. Upload an object:
    The following example uses the put-object command to upload an object to Amazon S3:
        aws s3api put-object --bucket text-content --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2
    The following example shows the upload of a video file (the file is specified using Windows file-system syntax):
        aws s3api put-object --bucket text-content --key dir-1/big-video-file.mp4 --body e:\media\videos\f-sharp-3-data-services.mp4

II. Java SDK


#1. Create a bucket:
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.GetBucketLocationRequest;
import java.io.IOException;
    public class CreateBucket {
        public static void main(String[] args) throws IOException {
            Regions clientRegion = Regions.DEFAULT_REGION;
            String bucketName = "*** Bucket name ***";
            try {
                AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                        .withCredentials(new ProfileCredentialsProvider())
                        .withRegion(clientRegion)
                        .build();
                if (!s3Client.doesBucketExistV2(bucketName)) {
                    // Because the CreateBucketRequest object doesn't specify a region, the
                    // bucket is created in the region specified in the client.
                    s3Client.createBucket(new CreateBucketRequest(bucketName));
                    // Verify that the bucket was created by retrieving it and checking its location.
                    String bucketLocation = s3Client.getBucketLocation(new GetBucketLocationRequest(bucketName));
                    System.out.println("Bucket location: " + bucketLocation);
                }
            } catch (AmazonServiceException e) {
                // The call was transmitted successfully, but Amazon S3 couldn't process
                // it and returned an error response.
                e.printStackTrace();
            } catch (SdkClientException e) {
                // Amazon S3 couldn't be contacted for a response, or the client
                // couldn't parse the response from Amazon S3.
                e.printStackTrace();
            }
        }
    }
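createBucket will reject names that violate Amazon S3's bucket naming rules (3 to 63 characters; lowercase letters, digits, hyphens, and dots; starting and ending with a letter or digit). A minimal pre-flight check, sketched as a hypothetical helper with no SDK dependency (S3 additionally rejects names formatted like IP addresses, which this sketch ignores):

```java
import java.util.regex.Pattern;

public class BucketNameCheck {
    // General-purpose S3 bucket naming rules: 3-63 characters; lowercase
    // letters, digits, dots, and hyphens; must start and end with a letter
    // or digit. (The IP-address-shaped-name check is omitted for brevity.)
    private static final Pattern VALID =
            Pattern.compile("^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$");

    public static boolean isValidBucketName(String name) {
        return name != null && VALID.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidBucketName("my-bucket")); // true
        System.out.println(isValidBucketName("My_Bucket")); // false: uppercase and underscore
        System.out.println(isValidBucketName("ab"));        // false: too short
    }
}
```

Running this check before calling createBucket turns an AmazonServiceException at request time into an early local failure.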


#2. Delete a bucket
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import java.util.Iterator;
    public class DeleteBucket {
        public static void main(String[] args) {
            Regions clientRegion = Regions.DEFAULT_REGION;
            String bucketName = "*** Bucket name ***";
            try {
                AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withCredentials(new ProfileCredentialsProvider()).withRegion(clientRegion).build();
                // Delete all objects from the bucket. This is sufficient
                // for unversioned buckets. For versioned buckets, when you attempt to delete objects, Amazon S3 inserts
                // delete markers for all objects, but doesn't delete the object versions.
                // To delete objects from versioned buckets, delete all of the object versions before deleting
                // the bucket (see below for an example).
                ObjectListing objectListing = s3Client.listObjects(bucketName);
                while (true) {
                    Iterator<S3ObjectSummary> objIter =
                    objectListing.getObjectSummaries().iterator();
                    while (objIter.hasNext()) {
                        s3Client.deleteObject(bucketName, objIter.next().getKey());
                    }
                    // If the bucket contains many objects, the listObjects() call
                    // might not return all of the objects in the first listing. Check to
                    // see whether the listing was truncated. If so, retrieve the next
                    // page of objects and delete them.
                    if (objectListing.isTruncated()) {
                        objectListing = s3Client.listNextBatchOfObjects(objectListing);
                    } else {
                        break;
                    }
                }
                // Delete all object versions (required for versioned buckets).
                VersionListing versionList = s3Client.listVersions(
                        new ListVersionsRequest().withBucketName(bucketName));
                while (true) {
                    Iterator<S3VersionSummary> versionIter = versionList.getVersionSummaries().iterator();
                    while (versionIter.hasNext()) {
                        S3VersionSummary vs = versionIter.next();
                        s3Client.deleteVersion(bucketName, vs.getKey(), vs.getVersionId());
                    }
                    if (versionList.isTruncated()) {
                        versionList = s3Client.listNextBatchOfVersions(versionList);
                    } else {
                        break;
                    }
                }
                // After all objects and object versions are deleted, delete the bucket.
                s3Client.deleteBucket(bucketName);
            } catch (AmazonServiceException e) {
                // The call was transmitted successfully, but Amazon S3 couldn't process
                // it, so it returned an error response.
                e.printStackTrace();
            } catch (SdkClientException e) {
                // Amazon S3 couldn't be contacted for a response, or the client couldn't
                // parse the response from Amazon S3.
                e.printStackTrace();
            }
        }
    }
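The delete loops above both follow S3's truncated-listing pattern: process one page of results, and while isTruncated() is true fetch the next page. The control flow can be sketched independently of the SDK using a stub listing (the Page class and collectAllKeys name here are hypothetical, not part of the SDK):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PaginationSketch {
    // Hypothetical stand-in for one page of an S3 object listing: a batch of
    // keys plus a flag saying whether more pages follow.
    static class Page {
        final List<String> keys;
        final boolean truncated;
        Page(List<String> keys, boolean truncated) {
            this.keys = keys;
            this.truncated = truncated;
        }
    }

    // Walks pages until the listing is no longer truncated -- the same
    // control flow as the deleteObject / deleteVersion loops above.
    static List<String> collectAllKeys(List<Page> pages) {
        List<String> all = new ArrayList<>();
        int i = 0;
        Page page = pages.get(i);
        while (true) {
            all.addAll(page.keys);     // stands in for deleting each key
            if (page.truncated) {
                page = pages.get(++i); // stands in for listNextBatchOfObjects(...)
            } else {
                break;
            }
        }
        return all;
    }

    public static void main(String[] args) {
        List<Page> pages = Arrays.asList(
                new Page(Arrays.asList("a.txt", "b.txt"), true),
                new Page(Arrays.asList("c.txt"), false));
        System.out.println(collectAllKeys(pages)); // [a.txt, b.txt, c.txt]
    }
}
```

Forgetting the isTruncated() check silently processes only the first page, which is why both loops in DeleteBucket end with it.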


#3. Manage a bucket lifecycle configuration
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration.Transition;
import com.amazonaws.services.s3.model.StorageClass;
import com.amazonaws.services.s3.model.Tag;
import com.amazonaws.services.s3.model.lifecycle.LifecycleAndOperator;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;
import com.amazonaws.services.s3.model.lifecycle.LifecycleTagPredicate;
import java.io.IOException;
import java.util.Arrays;
    public class LifecycleConfiguration {
            public static void main(String[] args) throws IOException {
            Regions clientRegion = Regions.DEFAULT_REGION;
            String bucketName = "*** Bucket name ***";
            // Create a rule to archive objects with the "glacierobjects/" prefix
            // to Glacier immediately.
            BucketLifecycleConfiguration.Rule rule1 = new BucketLifecycleConfiguration.Rule()
                    .withId("Archive immediately rule")
                    .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("glacierobjects/")))
                    .addTransition(new Transition().withDays(0).withStorageClass(StorageClass.Glacier))
                    .withStatus(BucketLifecycleConfiguration.ENABLED);
            // Create a rule to transition objects to the Standard-Infrequent Access
            // storage class after 30 days, then to Glacier after 365 days. Amazon S3
            // will delete the objects after 3650 days.
            // The rule applies to all objects with the tag "archive" set to "true".
            BucketLifecycleConfiguration.Rule rule2 = new BucketLifecycleConfiguration.Rule()
                    .withId("Archive and then delete rule")
                    .withFilter(new LifecycleFilter(new LifecycleTagPredicate(new Tag("archive", "true"))))
                    .addTransition(new Transition().withDays(30).withStorageClass(StorageClass.StandardInfrequentAccess))
                    .addTransition(new Transition().withDays(365).withStorageClass(StorageClass.Glacier))
                    .withExpirationInDays(3650)
                    .withStatus(BucketLifecycleConfiguration.ENABLED);
            // Add the rules to a new BucketLifecycleConfiguration.
            BucketLifecycleConfiguration configuration = new BucketLifecycleConfiguration().withRules(Arrays.asList(rule1, rule2));
            try {
                AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new ProfileCredentialsProvider())
                .withRegion(clientRegion)
                .build();
                // Save the configuration.
                s3Client.setBucketLifecycleConfiguration(bucketName, configuration);
                // Retrieve the configuration.
                configuration = s3Client.getBucketLifecycleConfiguration(bucketName);
                // Add a new rule with both a prefix predicate and a tag predicate.
                configuration.getRules().add(new BucketLifecycleConfiguration.Rule()
                        .withId("NewRule")
                        .withFilter(new LifecycleFilter(new LifecycleAndOperator(Arrays.asList(
                                new LifecyclePrefixPredicate("YearlyDocuments/"),
                                new LifecycleTagPredicate(new Tag("expire_after", "ten_years"))))))
                        .withExpirationInDays(3650)
                        .withStatus(BucketLifecycleConfiguration.ENABLED));
                // Save the configuration.
                s3Client.setBucketLifecycleConfiguration(bucketName, configuration);
                // Retrieve the configuration.
                configuration = s3Client.getBucketLifecycleConfiguration(bucketName);
                // Verify that the configuration now has three rules.
                System.out.println("Expected # of rules = 3; found: " + configuration.getRules().size());
                // Delete the configuration.
                s3Client.deleteBucketLifecycleConfiguration(bucketName);
                // Verify that the configuration has been deleted by attempting to retrieve it.
                configuration = s3Client.getBucketLifecycleConfiguration(bucketName);
                String s = (configuration == null) ? "No configuration found." : "Configuration found.";
                System.out.println(s);
            } catch (AmazonServiceException e) {
                // The call was transmitted successfully, but Amazon S3 couldn't process
                // it, so it returned an error response.
                e.printStackTrace();
            } catch (SdkClientException e) {
                // Amazon S3 couldn't be contacted for a response, or the client
                // couldn't parse the response from Amazon S3.
                e.printStackTrace();
            }
        
        }
    }
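Rule 2 above defines a timeline for each tagged object: Standard until day 30, Standard-IA until day 365, Glacier until day 3650, then expired. A small sketch of that timeline in plain Java (no SDK; the method and state names are illustrative only):

```java
public class LifecycleTimeline {
    // Returns the state that rule 2 above would leave an object in after
    // ageInDays days: 30 -> Standard-IA, 365 -> Glacier, 3650 -> expired.
    static String stateAt(int ageInDays) {
        if (ageInDays >= 3650) return "EXPIRED";
        if (ageInDays >= 365)  return "GLACIER";
        if (ageInDays >= 30)   return "STANDARD_IA";
        return "STANDARD";
    }

    public static void main(String[] args) {
        System.out.println(stateAt(10));   // STANDARD
        System.out.println(stateAt(100));  // STANDARD_IA
        System.out.println(stateAt(400));  // GLACIER
        System.out.println(stateAt(4000)); // EXPIRED
    }
}
```

Each withDays threshold is measured from the object's creation time, so later transitions must use larger day counts than earlier ones, as rule 2 does.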
