AWS SAP-C02 Tutorial 11 - Solutions

This chapter walks through solutions to common scenarios and summaries of specific aspects of AWS, showing how AWS components work together, how to choose components for a solution, and how to work around service limits to meet requirements.

1 Handling High Concurrency (Handling Extreme Rates)

Following a request from the moment it arrives until the data is finally returned, these are the request-rate limits at each layer.

  • Network layer
    1) Route53 for global routing
    2) CloudFront for caching, which can handle on the order of 100,000 requests/s
    3) ALB / API Gateway: around 10,000 requests/s
  • Compute layer
    1) ASG, ECS: scalable, but instances are relatively slow to launch
    2) Fargate: scalable, with fast startup
    3) Lambda: roughly 1,000 concurrent executions by default (a soft limit that can be raised)
  • Storage layer
    1) Databases: RDS, Aurora, ElasticSearch (hard to scale quickly; capacity must be provisioned up front)
    2) DynamoDB: scales automatically, including an on-demand capacity mode
    3) EBS: 16,000 IOPS (gp2), 64,000 IOPS (io1)
    4) Instance Store: local storage on the EC2 host, capable of millions of IOPS
    5) EFS: shared file storage; the Max I/O performance mode can be enabled for highly parallel access to many files
    6) S3: 3,500 PUT and 5,500 GET requests/s per prefix (with KMS encryption, the KMS quota of about 10,000 requests/s per Region also applies)
  • Caching layer
    1) Redis: scales up to 200 nodes
    2) Memcached: scales up to 20 nodes
    3) DAX: up to 10 nodes (a primary node plus replicas)
  • Decoupling layer
    1) SQS and SNS: effectively unlimited throughput
    2) SQS FIFO: 3,000 messages/s with batching, or 300 messages/s without
    3) Kinesis: 2 MB/s out and 1 MB/s in per shard
  • Static data layer
    1) CloudFront edge locations
    2) S3: 3,500 PUT and 5,500 GET requests/s per prefix (with KMS encryption, the KMS quota of about 10,000 requests/s per Region also applies)
    Note: moving through this chain from the network layer toward the storage layer, processing takes longer and longer, so handling and returning a request as early in the chain as possible is both the best and the cheapest approach. Making good use of caching is therefore a key step; a cache-aside sketch follows below.
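To make the "use the cache first" advice concrete, here is a minimal cache-aside sketch in Python (redis-py against an ElastiCache endpoint, boto3 against DynamoDB). The endpoint, table, and key names are hypothetical placeholders, not values from this article.

```python
# Minimal cache-aside sketch: answer from the cache layer when possible,
# fall back to DynamoDB on a miss, then populate the cache.
import json
import boto3
import redis

cache = redis.Redis(host="my-redis.example.cache.amazonaws.com", port=6379)  # assumed ElastiCache endpoint
table = boto3.resource("dynamodb").Table("Products")                          # assumed table name

def get_product(product_id: str) -> dict:
    cache_key = f"product:{product_id}"
    cached = cache.get(cache_key)
    if cached:                                   # cache hit: no trip to the storage layer
        return json.loads(cached)
    item = table.get_item(Key={"productId": product_id}).get("Item", {})      # cache miss
    cache.setex(cache_key, 300, json.dumps(item, default=str))                # cache for 5 minutes
    return item
```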

2 Log Management (AWS Managed Logs)

This section covers the logs that AWS services produce natively; knowing where each log type can be delivered helps you use AWS more effectively.

  • Load Balancer Access Logs: can be delivered to S3 (see the sketch after this list)
  • CloudTrail Logs: can be delivered to S3 or CloudWatch Logs
  • VPC Flow Logs: can be delivered to S3 or CloudWatch Logs
  • Route53 Access Logs: can be delivered to CloudWatch Logs
  • S3 Access Logs: can be delivered to S3
  • CloudFront Access Logs: can be delivered to S3
  • AWS Config: configuration history can be delivered to S3 and used as an audit trail
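As a small illustration of the first item, the following hedged boto3 sketch turns on ALB access-log delivery to S3; the load balancer ARN, bucket, and prefix are made-up placeholders, and the bucket policy must already allow ELB log delivery.

```python
# Enable ALB access logs to an S3 bucket via the elbv2 API.
import boto3

elbv2 = boto3.client("elbv2")
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-access-logs"},  # bucket must grant ELB log delivery
        {"Key": "access_logs.s3.prefix", "Value": "prod"},
    ],
)
```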

3 Deployment Solutions (Deployment Comparisons)

  • Plain EC2 deployment: fully custom, but inefficient and slow
  • AMI-based deployment: a custom image (which can include user data); faster and repeatable
  • Auto Scaling Group: deploys from an AMI and adds automatic scaling
  • CodeDeploy: deploys the application itself, not an AMI (see the sketch after this list)
    1) onto EC2 instances
    2) onto an ASG
    3) traffic shifting for AWS Lambda
    4) ECS task sets with traffic shifting
  • Elastic Beanstalk
    1) on-premises-to-cloud migrations
    2) rolling updates
    3) Blue/Green deployments
  • OpsWorks
    1) for Chef/Puppet-based deployments
    2) can manage ELB and EC2
    3) cannot manage ASGs
  • SAM Framework: deploys Lambda on top of CloudFormation and CodeDeploy
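For the CodeDeploy item above, a rough boto3 sketch of starting a deployment of an application revision stored in S3 might look like this; the application, deployment group, bucket, and key names are hypothetical.

```python
# Kick off a CodeDeploy deployment of an S3-hosted revision, one instance at a time.
import boto3

codedeploy = boto3.client("codedeploy")
response = codedeploy.create_deployment(
    applicationName="my-app",
    deploymentGroupName="my-app-prod",
    deploymentConfigName="CodeDeployDefault.OneAtATime",  # roll through the fleet one instance at a time
    revision={
        "revisionType": "S3",
        "s3Location": {"bucket": "my-deploy-bucket", "key": "releases/app-1.2.3.zip", "bundleType": "zip"},
    },
)
print(response["deploymentId"])
```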

4 High Performance Computing (HPC)

AWS encourages pay-for-what-you-use consumption, so HPC topics appear more and more often on the exam. The subsections below walk through the main architecture choices for HPC.

4.1 Data Management and Transfer

  • AWS Direct Connect: move data at GB/s rates over a private, secure connection
  • Snowball Family: migrate PB-scale data by physically shipping devices
  • AWS DataSync: sync data from on-premises storage into S3, EFS, or FSx for Windows on AWS

4.2 Compute and Networking

  • EC2 instances
    1) use CPU-optimized or GPU instance types
    2) use Spot Instances / Spot Fleets for cost savings and easy scaling
    3) use Cluster Placement Groups to keep EC2 instances in the same AZ/rack for low-latency networking, ideal for tightly coupled workloads that exchange a lot of data (a sketch follows this list)
  • EC2 networking
    1) EC2 Enhanced Networking (SR-IOV) with the Elastic Network Adapter (ENA) raises network throughput to up to 100 Gbps
    2) the Elastic Fabric Adapter (EFA) is an enhanced ENA built for HPC, suited to tightly coupled workloads such as MPI-style distributed computing; Linux only
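A hedged sketch of the placement-group and EFA items above, using boto3; the AMI, subnet, security group, and instance type are placeholders, and EFA assumes an EFA-capable instance type plus a Linux AMI with the EFA driver installed.

```python
# Create a cluster placement group and launch instances into it with an EFA interface.
import boto3

ec2 = boto3.client("ec2")
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",                 # EFA-capable instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster-pg"},   # same rack / low-latency networking
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
        "InterfaceType": "efa",                  # attach an Elastic Fabric Adapter
    }],
)
```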

4.3 Storage

  • Instance-attached storage
    1) EBS: up to 256,000 IOPS (io2 Block Express)
    2) Instance Store: millions of IOPS, but data is lost when the instance is stopped or fails
  • Network storage
    1) S3: for large objects; it is not a file system
    2) EFS: IOPS scale with the file system, or can be provisioned
    3) FSx for Lustre: high-performance IOPS purpose-built for HPC; can be backed by S3 (sketched below)
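To illustrate the FSx for Lustre item (and the lazy-loading pattern in the first example question below), here is a hedged boto3 sketch that creates a scratch file system linked to an S3 bucket; the bucket and subnet IDs are placeholders.

```python
# Create a scratch FSx for Lustre file system backed by S3. File metadata is
# imported at creation; object data is loaded lazily the first time it is read.
import boto3

fsx = boto3.client("fsx")
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                        # GiB, the minimum for SCRATCH_2
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://my-hpc-dataset",             # source bucket to lazy-load from
        "ExportPath": "s3://my-hpc-dataset/results",     # where changed files can be exported
    },
)
print(fs["FileSystem"]["FileSystemId"])
```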

Example question: A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region.
A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed data for the duration of the 72-hour run.
Which solution will provide the LARGEST overall cost reduction while meeting these requirements?
A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.
Answer: A
Explanation: see https://aws.amazon.com/cn/blogs/storage/new-enhancements-for-moving-data-between-amazon-fsx-for-lustre-and-amazon-s3/

Example question: A company has built a high performance computing (HPC) cluster in AWS for a tightly coupled workload that generates a large number of shared files stored in Amazon EFS. The cluster was performing well when the number of Amazon EC2 instances in the cluster was 100. However, when the company increased the cluster size to 1,000 EC2 instances, overall performance was well below expectations.
Which collection of design choices should a solutions architect make to achieve the maximum performance from the HPC cluster? (Choose three.)
A. Ensure the HPC cluster is launched within a single Availability Zone.
B. Launch the EC2 instances and attach elastic network interfaces in multiples of four.
C. Select EC2 instance types with an Elastic Fabric Adapter (EFA) enabled.
D. Ensure the cluster is launched across multiple Availability Zones.
E. Replace Amazon EFS with multiple Amazon EBS volumes in a RAID array.
F. Replace Amazon EFS with Amazon FSx for Lustre.
Answer: ACF
Explanation: The question asks for an HPC cluster design with maximum performance. Option A keeps the cluster in a single AZ, minimizing inter-node latency, so it fits. Option B improves availability rather than performance. Option C fits because EFA is an HPC-oriented enhancement of ENA. Option D is the opposite of A. Option E does not fit because EBS volumes in a RAID array are not a shared file system. Option F fits because FSx for Lustre delivers high-performance IOPS purpose-built for HPC. Therefore ACF.

4.4 Automation and Orchestration

  • AWS Batch: runs a job as massively parallel tasks (see the sketch after this list)
  • AWS ParallelCluster: a cluster-management tool purpose-built for parallel HPC workloads; it automates creation of the VPC, subnets, and the rest of the infrastructure
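A hedged boto3 sketch of the AWS Batch item: submitting an array job so one job definition fans out into many parallel tasks. The queue and job definition names are assumed to have been registered elsewhere.

```python
# Submit an AWS Batch array job; each child job receives AWS_BATCH_JOB_ARRAY_INDEX.
import boto3

batch = boto3.client("batch")
job = batch.submit_job(
    jobName="nightly-simulation",
    jobQueue="hpc-spot-queue",                   # assumed, pre-created job queue
    jobDefinition="monte-carlo-sim:3",           # assumed, pre-registered job definition
    arrayProperties={"size": 500},               # fan out into 500 parallel child jobs
    containerOverrides={"environment": [{"name": "ITERATIONS", "value": "100000"}]},
)
print(job["jobId"])
```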

5 Application-Layer Architecture Solutions

  • Stateful service deployment architectures (architecture / description / drawback)
    1) EIP + EC2: the simplest and cheapest option; drawback: failover to a new EC2 instance is manual and there is no scalability
    2) Route53 + EC2 fleet: automated failover plus some scalability; drawback: scaling is slow and must be triggered manually
    3) ALB + ASG: automated failover and automatic scaling; drawback: scale-out is relatively slow
    4) ALB + ECS on EC2 (backed by ASG): automated failover, automatic scaling, faster scale-out; drawback: orchestration is harder because both ECS and the underlying EC2 capacity must be managed
    5) ALB + ECS on Fargate: automated failover, automatic scaling, fast scale-out, and no host orchestration; drawback: higher cost
    6) API Gateway + HTTP backend: integrates API Gateway features such as authentication and throttling; drawback: you still deploy and maintain the backend yourself
  • Stateless service deployment architectures (architecture / description / drawback)
    1) ALB + Lambda: automatic, fast scaling and easy integration with WAF and similar services; drawback: Lambda limits apply
    2) API Gateway + Lambda: automatic scaling and deployment, true pay-per-use, plus API Gateway authentication and throttling; drawback: API Gateway limits apply (a handler sketch follows this list)
    3) API Gateway + AWS Service: for "no-code" backends, API Gateway can call an AWS service directly (for example SQS or SNS), still with API Gateway authentication and throttling; drawback: only fits fairly specific use cases
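For the "API Gateway + Lambda" row, a minimal Python handler for the Lambda proxy integration might look like the sketch below; the route and payload shape are illustrative only.

```python
# Minimal handler for an API Gateway Lambda proxy integration.
import json

def handler(event, context):
    # API Gateway passes method, path, query string, and body in the event.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```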

6 Other Solutions from the Exam

Example question: A company runs a proprietary stateless ETL application on Amazon EC2 Linux instances. The application is a Linux binary, and the source code cannot be modified. The application is single-threaded, uses 2 GB of RAM, and is highly CPU intensive. The application is scheduled to run every 4 hours and runs for up to 20 minutes. A solutions architect wants to revise the architecture for the solution.
Which strategy should the solutions architect use?
A. Use AWS Lambda to run the application. Use Amazon CloudWatch Logs to invoke the Lambda function every 4 hours.
B. Use AWS Batch to run the application. Use an AWS Step Functions state machine to invoke the AWS Batch job every 4 hours.
C. Use AWS Fargate to run the application. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke the Fargate task every 4 hours.
D. Use Amazon EC2 Spot Instances to run the application. Use AWS CodeDeploy to deploy and run the application every 4 hours.
Answer: C
Explanation: The workload is a stateless job that runs every 4 hours for up to 20 minutes, single-threaded, with 2 GB of RAM and high CPU demand. Option A fails because Lambda's maximum execution time is 15 minutes. Option B would work, but AWS Batch targets large batch/parallel workloads, so it is not the best fit for a single-threaded job. Option D can use Spot capacity, but redeploying a fresh Spot Instance with CodeDeploy every run is far heavier than simply running a scheduled Fargate task. Therefore C.

Example question: A company has migrated its forms-processing application to AWS. When users interact with the application, they upload scanned forms as files through a web application. A database stores user metadata and references to files that are stored in Amazon S3. The web application runs on Amazon EC2 instances and an Amazon RDS for PostgreSQL database.
When forms are uploaded, the application sends notifications to a team through Amazon Simple Notification Service (Amazon SNS). A team member then logs in and processes each form. The team member performs data validation on the form and extracts relevant data before entering the information into another system that uses an API.
A solutions architect needs to automate the manual processing of the forms. The solution must provide accurate form extraction, minimize time to market, and minimize long-term operational overhead.
Which solution will meet these requirements?
A. Develop custom libraries to perform optical character recognition (OCR) on the forms. Deploy the libraries to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster as an application tier. Use this tier to process the forms when forms are uploaded. Store the output in Amazon S3. Parse this output by extracting the data into an Amazon DynamoDB table. Submit the data to the target system’s API. Host the new application tier on EC2 instances.
B. Extend the system with an application tier that uses AWS Step Functions and AWS Lambda. Configure this tier to use artificial intelligence and machine learning (AI/ML) models that are trained and hosted on an EC2 instance to perform optical character recognition (OCR) on the forms when forms are uploaded. Store the output in Amazon S3. Parse this output by extracting the data that is required within the application tier. Submit the data to the target system’s API.
C. Host a new application tier on EC2 instances. Use this tier to call endpoints that host artificial intelligence and machine learning (AI/ML) models that are trained and hosted in Amazon SageMaker to perform optical character recognition (OCR) on the forms. Store the output in Amazon ElastiCache. Parse this output by extracting the data that is required within the application tier. Submit the data to the target system’s API.
D. Extend the system with an application tier that uses AWS Step Functions and AWS Lambda. Configure this tier to use Amazon Textract and Amazon Comprehend to perform optical character recognition (OCR) on the forms when forms are uploaded. Store the output in Amazon S3. Parse this output by extracting the data that is required within the application tier. Submit the data to the target system’s API.
Answer: D
Explanation: Key requirements: accurate form extraction, minimize time to market, and minimize long-term operational overhead. All four options could work functionally, so compare delivery speed and ongoing maintenance. Option A requires developing custom OCR code; options B and C require significant work to train, host, and maintain ML models. Option D uses the managed Amazon Textract and Amazon Comprehend services. Therefore D.

Example question: A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on Amazon Route 53. The company has adopted a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden increases in traffic. The app has not been able to keep up with the traffic.
A solutions architect needs to implement a solution so that the app can handle the new and varying load.
Which solution will meet these requirements with the LEAST operational overhead?
A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.
B. Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.
C. Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.
D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.
Answer: A
Explanation: Key phrases: overwhelmed by large and sudden increases in traffic, LEAST operational overhead. The app must absorb bursts with minimal ongoing effort. Option D only adds an ALB and does not add any elasticity. Options B and C require more ongoing operational work than option A. Therefore A.

Example question: A video processing company has an application that downloads images from an Amazon S3 bucket, processes the images, stores a transformed image in a second S3 bucket, and updates metadata about the image in an Amazon DynamoDB table. The application is written in Node.js and runs by using an AWS Lambda function. The Lambda function is invoked when a new image is uploaded to Amazon S3.
The application ran without incident for a while. However, the size of the images has grown significantly. The Lambda function is now failing frequently with timeout errors. The function timeout is set to its maximum value. A solutions architect needs to refactor the application’s architecture to prevent invocation failures. The company does not want to manage the underlying infrastructure.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Modify the application deployment by building a Docker image that contains the application code. Publish the image to Amazon Elastic Container Registry (Amazon ECR).
B. Create a new Amazon Elastic Container Service (Amazon ECS) task definition with a compatibility type of AWS Fargate. Configure the task definition to use the new image in Amazon Elastic Container Registry (Amazon ECR). Adjust the Lambda function to invoke an ECS task by using the ECS task definition when a new file arrives in Amazon S3.
C. Create an AWS Step Functions state machine with a Parallel state to invoke the Lambda function. Increase the provisioned concurrency of the Lambda function.
D. Create a new Amazon Elastic Container Service (Amazon ECS) task definition with a compatibility type of Amazon EC2. Configure the task definition to use the new image in Amazon Elastic Container Registry (Amazon ECR). Adjust the Lambda function to invoke an ECS task by using the ECS task definition when a new file arrives in Amazon S3.
E. Modify the application to store images on Amazon Elastic File System (Amazon EFS) and to store metadata on an Amazon RDS DB instance. Adjust the Lambda function to mount the EFS file share.
Answer: AB
Explanation: Key phrases: timeout errors, and the company does not want to manage the underlying infrastructure. To get past the Lambda timeout, the processing must move into a container: build a Docker image of the code and publish it to ECR (A), then run it as an ECS task; to avoid managing infrastructure, use Fargate rather than EC2 for the tasks (B). Option C still runs the work in Lambda, so the timeout remains; option D requires managing EC2 capacity; option E swaps the storage for EFS, but S3 is not the problem, so it does not fix anything. Therefore AB.

Example question: A company is running a three-tier web application in an on-premises data center. The frontend is served by an Apache web server, the middle tier is a monolithic Java application, and the storage tier is a PostgreSQL database.
During a recent marketing promotion, customers could not place orders through the application because the application crashed. An analysis showed that all three tiers were overloaded. The application became unresponsive, and the database reached its capacity limit because of read operations. The company already has several similar promotions scheduled in the near future.
A solutions architect must develop a plan for migration to AWS to resolve these issues. The solution must maximize scalability and must minimize operational effort.
Which combination of steps will meet these requirements? (Select THREE.)
A. Refactor the frontend so that static assets can be hosted on Amazon S3. Use Amazon CloudFront to serve the frontend to customers. Connect the frontend to the Java application.
B. Rehost the Apache web server of the frontend on Amazon EC2 instances that are in an Auto Scaling group. Use a load balancer in front of the Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) to host the static assets that the Apache web server needs.
C. Rehost the Java application in an AWS Elastic Beanstalk environment that includes auto scaling
D. Refactor the Java application. Develop a Docker container to run the Java application. Use AWS Fargate to host the container
E. Use AWS Database Migration Service (AWS DMS) to replatform the PostgreSQL database to an Amazon Aurora PostgreSQL database. Use Aurora Auto Scaling for read replicas.
F. Rehost the PostgreSQL database on an Amazon EC2 instance that has twice as much memory as the on-premises server.
Answer: ACE
Explanation: A three-tier application collapses under sudden traffic spikes and must be migrated with maximum scalability and minimal operational effort. For the frontend, between A and B, option A (static assets on S3 behind CloudFront) gives better availability and scalability with lower maintenance. For the Java middle tier, between C and D, option C (Elastic Beanstalk with auto scaling) delivers scalability with far less rework than containerizing the application for Fargate. For the PostgreSQL storage tier, option F (a bigger EC2 instance) offers neither high availability nor scalability, whereas option E (Aurora PostgreSQL with auto scaling read replicas) does. Therefore ACE.

Example question: A company has registered 10 new domain names. The company uses the domains for online marketing. The company needs a solution that will redirect online visitors to a specific URL for each domain. All domains and target URLs are defined in a JSON document. All DNS records are managed by Amazon Route 53.
A solutions architect must implement a redirect service that accepts HTTP and HTTPS requests.
Which combination of steps should the solutions architect take to meet these requirements with the LEAST amount of operational effort? (Choose three.)
A. Create a dynamic webpage that runs on an Amazon EC2 instance. Configure the webpage to use the JSON document in combination with the event message to look up and respond with a redirect URL.
B. Create an Application Load Balancer that includes HTTP and HTTPS listeners.
C. Create an AWS Lambda function that uses the JSON document in combination with the event message to look up and respond with a redirect URL.
D. Use an Amazon API Gateway API with a custom domain to publish an AWS Lambda function.
E. Create an Amazon CloudFront distribution. Deploy a Lambda@Edge function.
F. Create an SSL certificate by using AWS Certificate Manager (ACM). Include the domains as Subject Alternative Names.
Answer: CEF
Explanation: The question asks for the solution with the LEAST operational effort, which usually points to a serverless design. A Lambda@Edge function behind a CloudFront distribution, with a single ACM certificate that lists all the domains as Subject Alternative Names, handles HTTP and HTTPS redirects for the 10 domains with nothing to manage. Therefore CEF. A sketch of the redirect function follows.
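A hedged sketch of what the Lambda@Edge viewer-request function in answer CEF could look like; the domain map stands in for the JSON document from the question, and the domains are made up.

```python
# Lambda@Edge viewer-request handler: 301-redirect each marketing domain to its target URL.
REDIRECTS = {
    "promo1.example.com": "https://www.example.com/campaigns/promo1",
    "promo2.example.com": "https://www.example.com/campaigns/promo2",
}

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"].lower()
    target = REDIRECTS.get(host, "https://www.example.com/")
    return {
        "status": "301",
        "statusDescription": "Moved Permanently",
        "headers": {"location": [{"key": "Location", "value": target}]},
    }
```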

Example question: A company hosts a Git repository in an on-premises data center. The company uses webhooks to invoke functionality that runs in the AWS Cloud. The company hosts the webhook logic on a set of Amazon EC2 instances in an Auto Scaling group that the company set as a target for an Application Load Balancer (ALB). The Git server calls the ALB for the configured webhooks. The company wants to move the solution to a serverless architecture.
Which solution will meet these requirements with the LEAST operational overhead?
A. For each webhook, create and configure an AWS Lambda function URL. Update the Git servers to call the individual Lambda function URLs.
B. Create an Amazon API Gateway HTTP API. Implement each webhook logic in a separate AWS Lambda function. Update the Git servers to call the API Gateway endpoint.
C. Deploy the webhook logic to AWS App Runner. Create an ALB, and set App Runner as the target. Update the Git servers to call the ALB endpoint.
D. Containerize the webhook logic. Create an Amazon Elastic Container Service (Amazon ECS) cluster, and run the webhook logic in AWS Fargate. Create an Amazon API Gateway REST API, and set Fargate as the target. Update the Git servers to call the API Gateway endpoint.
Answer: B
Explanation: The requirement is a serverless architecture with the least operational overhead. API Gateway plus Lambda is fully serverless: AWS scales and manages the underlying infrastructure automatically, and the webhook logic is easy to manage and update behind a single API Gateway endpoint. Therefore B.

Example question: A company has applications in an AWS account that is named Source. The account is in an organization in AWS Organizations. One of the applications uses AWS Lambda functions and stores inventory data in an Amazon Aurora database. The application deploys the Lambda functions by using a deployment package. The company has configured automated backups for Aurora.
The company wants to migrate the Lambda functions and the Aurora database to a new AWS account that is named Target. The application processes critical data, so the company must minimize downtime.
Which solution will meet these requirements?
A. Download the Lambda function deployment package from the Source account. Use the deployment package and create new Lambda functions in the Target account. Share the automated Aurora DB cluster snapshot with the Target account.
B. Download the Lambda function deployment package from the Source account. Use the deployment package and create new Lambda functions in the Target account. Share the Aurora DB cluster with the Target account by using AWS Resource Access Manager (AWS RAM). Grant the Target account permission to clone the Aurora DB cluster.
C. Use AWS Resource Access Manager (AWS RAM) to share the Lambda functions and the Aurora DB cluster with the Target account. Grant the Target account permission to clone the Aurora DB cluster.
D. Use AWS Resource Access Manager (AWS RAM) to share the Lambda functions with the Target account. Share the automated Aurora DB cluster snapshot with the Target account.
Answer: B
Explanation: The Lambda functions and the Aurora database must move to another account with minimal downtime. Option A shares only an automated snapshot, so the Target account would not end up with the same, current data. Option C relies on AWS RAM to share the Lambda functions, which RAM does not support, and does not explain how the data would actually be migrated. Option D has the same Lambda problem and again shares only a snapshot, leading to stale data. Option B recreates the Lambda functions from the deployment package and shares the live Aurora cluster through AWS RAM so the Target account can clone it, which minimizes downtime. Therefore B.

Example question: A large real-estate brokerage is exploring the option of adding a cost-effective, location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute count. The existing mobile app has 5 million users across the US.
Which one of the following architectural suggestions would you make to the customer?
A. The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
B. Use AWS Direct Connect or VPN to establish connectivity with mobile carriers. EC2 instances will receive the mobile application's location through the carrier connection; RDS will be used to store the relevant offers. EC2 instances will communicate with mobile carriers to push alerts back to the mobile application.
C. The mobile application will send the device location using SQS. EC2 instances will retrieve the relevant offers from DynamoDB. AWS Mobile Push will be used to send offers to the mobile application.
D. The mobile application will send the device location using AWS Mobile Push. EC2 instances will retrieve the relevant offers from DynamoDB. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
Answer: C
Explanation: The requirement is to push location-based offers to mobile devices within a few minutes. SQS keeps the location-ingestion path decoupled and scalable, and AWS Mobile Push (SNS mobile push) handles the outbound notifications; Mobile Push is for sending, not for receiving data from devices. Therefore C.

Example question: While debugging a backend application for an IoT system that supports globally distributed devices, a Solutions Architect notices that stale data is occasionally being sent to user devices. Devices often share data, and stale data does not cause issues in most cases. However, device operations are disrupted when a device reads the stale data after an update.
The global system has multiple identical application stacks deployed in different AWS Regions. If a user device travels out of its home geographic region, it will always connect to the geographically closest AWS Region to write or read data. The same data is available in all supported AWS Regions using an Amazon DynamoDB global table.
What change should be made to avoid causing disruptions in device operations?
A. Update the backend to use strongly consistent reads. Update the devices to always write to and read from their home AWS Region.
B. Enable strong consistency globally on a DynamoDB global table. Update the backend to use strongly consistent reads.
C. Switch the backend data store to Amazon Aurora MySQL with cross-region replicas. Update the backend to always write to the master endpoint.
D. Select one AWS Region as a master and perform all writes in that AWS Region only. Update the backend to use strongly consistent reads.
Answer: A
Explanation: Reading stale data right after an update disrupts the devices, so the read that follows a write must see the latest value, which requires strongly consistent reads. DynamoDB global tables do not support strongly consistent reads across Regions, so the strongly consistent reads and the writes must happen in the same (home) Region. Therefore A. A small sketch follows.
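A minimal sketch of the strongly consistent read from answer A, assuming a hypothetical DeviceState table keyed by deviceId; note that ConsistentRead=True is only honored within a single Region.

```python
# Strongly consistent read against the device's home-Region table.
import boto3

table = boto3.resource("dynamodb", region_name="us-east-1").Table("DeviceState")

def read_latest(device_id: str) -> dict:
    resp = table.get_item(Key={"deviceId": device_id}, ConsistentRead=True)
    return resp.get("Item", {})
```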

Example question: A company needs to store and process image data that will be uploaded from mobile devices using a custom mobile app. Usage peaks between 8 AM and 5 PM on weekdays, with thousands of uploads per minute. The app is rarely used at any other time. A user is notified when image processing is complete.
Which combination of actions should a solutions architect take to ensure image processing can scale to handle the load? (Choose three.)
A. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon MQ queue.
B. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon Simple Queue Service (Amazon SQS) standard queue.
C. Invoke an AWS Lambda function to perform image processing when a message is available in the queue.
D. Invoke an S3 Batch Operations job to perform image processing when a message is available in the queue.
E. Send a push notification to the mobile app by using Amazon Simple Notification Service (Amazon SNS) when processing is complete.
F. Send a push notification to the mobile app by using Amazon Simple Email Service (Amazon SES) when processing is complete.
Answer: BCE
Explanation: The system must scale for bursty photo uploads, process the images, and notify users when processing completes. Between A and B, upload directly to S3 and send the S3 event notifications to an SQS standard queue, which integrates more simply than Amazon MQ. Between C and D, trigger Lambda from the queue; S3 Batch Operations has nothing to do with event-driven processing. Between E and F, SES sends email, while an SNS push notification is what the mobile app needs. Therefore BCE. A sketch of the queue-consumer function follows.
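A hedged sketch of the queue consumer from answer BCE: a Lambda function triggered by the SQS queue that carries the S3 event notifications, with a placeholder SNS topic for the completion push.

```python
# SQS-triggered Lambda: unwrap the S3 event, "process" each image, notify via SNS.
import json
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:image-processing-done"  # hypothetical topic

def handler(event, context):
    for sqs_record in event["Records"]:                     # SQS delivers a batch of messages
        s3_event = json.loads(sqs_record["body"])
        for s3_record in s3_event.get("Records", []):       # each message wraps S3 event notifications
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            image = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            # ... image processing would happen here ...
            sns.publish(TopicArn=TOPIC_ARN, Message=f"Processed {key} ({len(image)} bytes)")
```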

Example question: A company is building a hybrid solution between its existing on-premises systems and a new backend in AWS. The company has a management application to monitor the state of its current IT infrastructure and automate responses to issues. The company wants to incorporate the status of its consumed AWS services into the application. The application uses an HTTPS endpoint to receive updates.
Which approach meets these requirements with the LEAST amount of operational overhead?
A. Configure AWS Systems Manager OpsCenter to ingest operational events from the on-premises systems. Retire the on-premises management application and adopt OpsCenter as the hub.
B. Configure Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes for AWS Health events from the AWS Personal Health Dashboard. Configure the EventBridge (CloudWatch Events) event to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the topic to the HTTPS endpoint of the management application.
C. Modify the on-premises management application to call the AWS Health API to poll for status events of AWS services.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes for AWS Health events from the AWS Service Health Dashboard. Configure the EventBridge (CloudWatch Events) event to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the topic to an HTTPS endpoint for the management application with a topic filter corresponding to the services being used.
Answer: B
Explanation: The company wants the status of both its on-premises systems and its consumed AWS services fed into the existing management application over HTTPS. Option A retires the on-premises management application, which is not wanted. Option C forces the application to poll the AWS Health API. Option D references the public AWS Service Health Dashboard, which is not what integrates with EventBridge; the account-specific AWS Health (Personal Health Dashboard) events are. Therefore B.

Example question: A media storage application uploads user photos to Amazon S3 for processing by AWS Lambda functions. Application state is stored in Amazon DynamoDB tables. Users are reporting that some uploaded photos are not being processed properly. The application developers trace the logs and find that Lambda is experiencing photo processing issues when thousands of users upload photos simultaneously. The issues are the result of Lambda concurrency limits and the performance of DynamoDB when data is saved.
Which combination of actions should a solutions architect take to increase the performance and reliability of the application? (Choose two.)
A. Evaluate and adjust the RCUs for the DynamoDB tables.
B. Evaluate and adjust the WCUs for the DynamoDB tables.
C. Add an Amazon ElastiCache layer to increase the performance of Lambda functions.
D. Add an Amazon Simple Queue Service (Amazon SQS) queue and reprocessing logic between Amazon S3 and the Lambda functions.
E. Use S3 Transfer Acceleration to provide lower latency to users.
Answer: BD
Explanation: The application hits Lambda concurrency limits and DynamoDB performance problems when data is saved. Since the bottleneck is on writes, increase the WCUs of the DynamoDB tables (B). Even with more WCUs, some invocations can still fail under a spike, so put an SQS queue with reprocessing logic between S3 and the Lambda functions to buffer the load and retry failures, which improves reliability (D). Therefore BD. A WCU-adjustment sketch follows.
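A small boto3 sketch of the WCU adjustment from answer B; the table name and capacity values are hypothetical, and switching the table to on-demand mode would be another way to absorb write spikes.

```python
# Raise the provisioned write capacity of a DynamoDB table.
import boto3

dynamodb = boto3.client("dynamodb")
dynamodb.update_table(
    TableName="PhotoAppState",                                            # hypothetical table
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 500},  # both values are required
)
```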

Example question: A company has migrated an application from on premises to AWS. The application frontend is a static website that runs on two Amazon EC2 instances behind an Application Load Balancer (ALB). The application backend is a Python application that runs on three EC2 instances behind another ALB. The EC2 instances are large, general purpose On-Demand Instances that were sized to meet the on-premises specifications for peak usage of the application.
The application averages hundreds of thousands of requests each month. However, the application is used mainly during lunchtime and receives minimal traffic during the rest of the day.
A solutions architect needs to optimize the infrastructure cost of the application without negatively affecting the application availability.
Which combination of steps will meet these requirements? (Choose two.)
A. Change all the EC2 instances to compute optimized instances that have the same number of cores as the existing EC2 instances.
B. Move the application frontend to a static website that is hosted on Amazon S3.
C. Deploy the application frontend by using AWS Elastic Beanstalk. Use the same instance type for the nodes.
D. Change all the backend EC2 instances to Spot Instances.
E. Deploy the backend Python application to general purpose burstable EC2 instances that have the same number of cores as the existing EC2 instances.
Answer: BE
Explanation: The goal is to cut infrastructure cost without hurting availability; the frontend is static, and traffic peaks only around lunchtime. Hosting the static frontend on S3 (B) is the cheapest option, and because the load is bursty rather than sustained, general purpose burstable (T-family) instances suit the backend (E). Therefore BE.

Example question: A company has deployed an application on AWS Elastic Beanstalk. The application uses Amazon Aurora for the database layer. An Amazon CloudFront distribution serves web requests and includes the Elastic Beanstalk domain name as the origin server. The distribution is configured with an alternate domain name that visitors use when they access the application.
Each week, the company takes the application out of service for routine maintenance. During the time that the application is unavailable, the company wants visitors to receive an informational message instead of a CloudFront error message.
A solutions architect creates an Amazon S3 bucket as the first step in the process.
Which combination of steps should the solutions architect take next to meet the requirements? (Choose three.)
A. Upload static informational content to the S3 bucket.
B. Create a new CloudFront distribution. Set the S3 bucket as the origin.
C. Set the S3 bucket as a second origin in the original CloudFront distribution. Configure the distribution and the S3 bucket to use an origin access identity (OAI).
D. During the weekly maintenance, edit the default cache behavior to use the S3 origin. Revert the change when the maintenance is complete.
E. During the weekly maintenance, create a cache behavior for the S3 origin on the new distribution. Set the path pattern to *. Set the precedence to 0. Delete the cache behavior when the maintenance is complete.
F. During the weekly maintenance, configure Elastic Beanstalk to serve traffic from the S3 bucket.
Answer: ACD
Explanation: During maintenance, visitors should see an informational page instead of a CloudFront error. The approach is to point CloudFront at that page during the window: upload the static content to S3 (A), add the S3 bucket as a second origin of the existing distribution with an origin access identity (C), and during the weekly maintenance switch the default cache behavior to the S3 origin, reverting afterwards (D). Therefore ACD.

Example question: An online magazine will launch its latest edition this month. This edition will be the first to be distributed globally. The magazine’s dynamic website currently uses an Application Load Balancer in front of the web tier, a fleet of Amazon EC2 instances for web and application servers, and Amazon Aurora MySQL. Portions of the website include static content and almost all traffic is read-only.
The magazine is expecting a significant spike in internet traffic when the new edition is launched. Optimal performance is a top priority for the week following the launch.
Which combination of steps should a solutions architect take to reduce system response times for a global audience? (Choose two.)
A. Use logical cross-Region replication to replicate the Aurora MySQL database to a secondary Region. Replace the web servers with Amazon S3. Deploy S3 buckets in cross-Region replication mode.
B. Ensure the web and application tiers are each in Auto Scaling groups. Introduce an AWS Direct Connect connection. Deploy the web and application tiers in Regions across the world.
C. Migrate the database from Amazon Aurora to Amazon RDS for MySQL. Ensure all three of the application tiers web, application, and database are in private subnets.
D. Use an Aurora global database for physical cross-Region replication. Use Amazon S3 with cross-Region replication for static content and resources. Deploy the web and application tiers in Regions across the world.
E. Introduce Amazon Route 53 with latency-based routing and Amazon CloudFront distributions. Ensure the web and application tiers are each in Auto Scaling groups.
Answer: DE
Explanation: The existing stack is ALB + EC2 + Aurora MySQL and must now serve a mostly read-only global traffic spike. An Aurora global database provides physical cross-Region replication (D), and Route 53 latency-based routing plus CloudFront distributes traffic to the closest deployment while the web and application tiers sit in Auto Scaling groups (E). Option A's logical replication and replacing the web servers with S3 do not fit a dynamic site; option B's Direct Connect is costly and irrelevant here; option C's move to RDS for MySQL makes a global deployment harder. Therefore DE.

Example question: A company is planning to set up a REST API application on AWS. The application team wants to set up a new identity store on AWS. The IT team does not want to maintain any infrastructure or servers for this deployment.
What is the MOST operationally efficient solution that meets these requirements?
A. Deploy the application as AWS Lambda functions. Set up Amazon API Gateway REST API endpoints for the application. Create a Lambda function, and configure a Lambda authorizer.
B. Deploy the application in AWS AppSync, and configure AWS Lambda resolvers. Set up an Amazon Cognito user pool, and configure AWS AppSync to use the user pool for authorization.
C. Deploy the application as AWS Lambda functions. Set up Amazon API Gateway REST API endpoints for the application. Set up an Amazon Cognito user pool, and configure an Amazon Cognito authorizer.
D. Deploy the application in Amazon Elastic Kubernetes Service (Amazon EKS) clusters. Set up an Application Load Balancer for the EKS pods. Set up an Amazon Cognito user pool and service pod for authentication.
Answer: C
Explanation: The requirements are a REST API, a new identity store on AWS, no infrastructure or servers to maintain, and the MOST operationally efficient option. Compared with a Cognito user pool, option A's custom Lambda authorizer is more work to build and weaker as an identity store. Option B adds AppSync (a GraphQL service) and complicates the architecture. Option D's EKS means managing cluster infrastructure. Therefore C.

Example question: A company ingests and processes streaming market data. The data rate is constant. A nightly process that calculates aggregate statistics takes 4 hours to complete. The statistical analysis is not critical to the business, and data points are processed during the next iteration if a particular run fails.
The current architecture uses a pool of Amazon EC2 Reserved Instances with 1-year reservations. These EC2 instances run full time to ingest and store the streaming data in attached Amazon Elastic Block Store (Amazon EBS) volumes. A scheduled script launches EC2 On-Demand Instances each night to perform the nightly processing. The instances access the stored data from NFS shares on the ingestion servers. The script terminates the instances when the processing is complete.
The Reserved Instance reservations are expiring. The company needs to determine whether to purchase new reservations or implement a new design.
Which solution will meet these requirements MOST cost-effectively?
A. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use a scheduled script to launch a fleet of EC2 On-Demand Instances each night to perform the batch processing of the S3 data. Configure the script to terminate the instances when the processing is complete.
B. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use AWS Batch with Spot Instances to perform nightly processing with a maximum Spot price that is 50% of the On-Demand price.
C. Update the ingestion process to use a fleet of EC2 Reserved Instances with 3-year reservations behind a Network Load Balancer. Use AWS Batch with Spot Instances to perform nightly processing with a maximum Spot price that is 50% of the On-Demand price.
D. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon Redshift. Use Amazon EventBridge to schedule an AWS Lambda function to run nightly to query Amazon Redshift to generate the daily statistics.
Answer: B
Explanation: The pipeline that ingests and processes streaming data is being redesigned, and the question asks for the MOST cost-effective option. Option A's scheduled script launching On-Demand Instances every night is more complex and more expensive. Option C keeps Reserved EC2 instances for ingestion, which is neither a streaming-oriented nor a cheap design. Option D's Lambda cannot run a 4-hour nightly job. Kinesis Data Firehose into S3 plus AWS Batch on Spot Instances (B) is the cheapest fit. Therefore B.

Example question: A company is designing a new website that hosts static content. The website will give users the ability to upload and download large files. According to company requirements, all data must be encrypted in transit and at rest. A solutions architect is building the solution by using Amazon S3 and Amazon CloudFront.
Which combination of steps will meet the encryption requirements? (Choose three.)
A. Turn on S3 server-side encryption for the S3 bucket that the web application uses.
B. Add a policy attribute of “aws:SecureTransport”: “true” for read and write operations in the S3 ACLs.
C. Create a bucket policy that denies any unencrypted operations in the S3 bucket that the web application uses.
D. Configure encryption at rest on CloudFront by using server-side encryption with AWS KMS keys (SSE-KMS).
E. Configure redirection of HTTP requests to HTTPS requests in CloudFront.
F. Use the RequireSSL option in the creation of presigned URLs for the S3 bucket that the web application uses.
Answer: ACE
Explanation: see https://aws.amazon.com/tw/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/

Example question: A company runs a microservice as an AWS Lambda function. The microservice writes data to an on-premises SQL database that supports a limited number of concurrent connections. When the number of Lambda function invocations is too high, the database crashes and causes application downtime. The company has an AWS Direct Connect connection between the company’s VPC and the on-premises data center. The company wants to protect the database from crashes.
Which solution will meet these requirements?
A. Write the data to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the Lambda function to read from the queue and write to the existing database. Set a reserved concurrency limit on the Lambda function that is less than the number of connections that the database supports.
B. Create a new Amazon Aurora Serverless DB cluster. Use AWS DataSync to migrate the data from the existing database to Aurora Serverless. Reconfigure the Lambda function to write to Aurora.
C. Create an Amazon RDS Proxy DB instance. Attach the RDS Proxy DB instance to the Amazon RDS DB instance. Reconfigure the Lambda function to write to the RDS Proxy DB instance.
D. Write the data to an Amazon Simple Notification Service (Amazon SNS) topic. Invoke the Lambda function to write to the existing database when the topic receives new messages. Configure provisioned concurrency for the Lambda function to be equal to the number of connections that the database supports.
Answer: A
Explanation: Lambda invocations open too many concurrent connections to an on-premises SQL database. The usual cure for connection exhaustion is a proxy, but RDS Proxy cannot front an on-premises database. So the load must be smoothed by decoupling: write to an SQS queue and cap the consuming Lambda with a reserved concurrency lower than the database's connection limit. Therefore A. A sketch of the concurrency cap follows.
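A minimal sketch of the concurrency cap from answer A; the function name and the limit of 90 are hypothetical and should sit below the database's real connection limit.

```python
# Cap the queue-consumer Lambda so it can never exceed the database's connection budget.
import boto3

lambda_client = boto3.client("lambda")
lambda_client.put_function_concurrency(
    FunctionName="write-inventory-to-onprem-db",   # hypothetical function name
    ReservedConcurrentExecutions=90,               # keep below the database's max concurrent connections
)
```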

Example question: A company uses a load balancer to distribute traffic to Amazon EC2 instances in a single Availability Zone. The company is concerned about security and wants a solutions architect to re-architect the solution to meet the following requirements:
– Inbound requests must be filtered for common vulnerability attacks.
– Rejected requests must be sent to a third-party auditing application.
– All resources should be highly available.
Which solution meets these requirements?
A. Configure a Multi-AZ Auto Scaling group using the application’s AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Use Amazon Inspector to monitor traffic to the ALB and EC2 instances. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB. Use an AWS Lambda function to frequently push the Amazon Inspector report to the third-party auditing application
B. Configure an Application Load Balancer (ALB) and add the EC2 instances as targets. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB name and enable logging with Amazon CloudWatch Logs. Use an AWS Lambda function to frequently push the logs to the third-party auditing application.
C. Configure an Application Load Balancer (ALB) along with a target group adding the EC2 instances as targets. Create an Amazon Kinesis Data Firehose with the destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.
D. Configure a Multi-AZ Auto Scaling group using the application’s AMI Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Create an Amazon Kinesis Data Firehose with a destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.
Answer: D
Explanation: The design must filter inbound requests for common attacks, send rejected requests to a third-party auditing application, and keep every component highly available. Options B and C keep the existing EC2 instances without an Auto Scaling group, so they are not highly available. Between A and D, Amazon Inspector is not a tool for analyzing ALB traffic; streaming the WAF logs through Kinesis Data Firehose to the auditing application is the right pattern. Therefore D.

Example question: A company has set up its entire infrastructure on AWS. The company uses Amazon EC2 instances to host its ecommerce website and uses Amazon S3 to store static data. Three engineers at the company handle the cloud administration and development through one AWS account. Occasionally, an engineer alters an EC2 security group configuration of another engineer and causes noncompliance issues in the environment.
A solutions architect must set up a system that tracks changes that the engineers make. The system must send alerts when the engineers make noncompliant changes to the security settings for the EC2 instances.
What is the FASTEST way for the solutions architect to meet these requirements?
A. Set up AWS Organizations for the company. Apply SCPs to govern and track noncompliant security group changes that are made to the AWS account.
B. Enable AWS CloudTrail to capture the changes to EC2 security groups. Enable Amazon CloudWatch rules to provide alerts when noncompliant security settings are detected.
C. Enable SCPs on the AWS account to provide alerts when noncompliant security group changes are made to the environment.
D. Enable AWS Config on the EC2 security groups to track any noncompliant changes. Send the changes as alerts through an Amazon Simple Notification Service (Amazon SNS) topic.
Answer: D
Explanation: see https://aws.amazon.com/cn/blogs/security/how-to-monitor-aws-account-configuration-changes-and-api-calls-to-amazon-ec2-security-groups/

Example question: A company wants to run a custom network analysis software package to inspect traffic as traffic leaves and enters a VPC. The company has deployed the solution by using AWS CloudFormation on three Amazon EC2 instances in an Auto Scaling group. All network routing has been established to direct traffic to the EC2 instances.
Whenever the analysis software stops working, the Auto Scaling group replaces an instance. The network routes are not updated when the instance replacement occurs.
Which combination of steps will resolve this issue? (Choose three.)
A. Create alarms based on EC2 status check metrics that will cause the Auto Scaling group to replace the failed instance.
B. Update the CloudFormation template to install the Amazon CloudWatch agent on the EC2 instances. Configure the CloudWatch agent to send process metrics for the application.
C. Update the CloudFormation template to install AWS Systems Manager Agent on the EC2 instances. Configure Systems Manager Agent to send process metrics for the application.
D. Create an alarm for the custom metric in Amazon CloudWatch for the failure scenarios. Configure the alarm to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.
E. Create an AWS Lambda function that responds to the Amazon Simple Notification Service (Amazon SNS) message to take the instance out of service. Update the network routes to point to the replacement instance.
F. In the CloudFormation template, write a condition that updates the network routes when a replacement instance is launched.
Answer: BDE
Explanation: The ASG replaces failed instances that run the analysis software, but the network routes are never updated to point at the replacement. Option A is redundant because the ASG already replaces failed instances. Option C: the Systems Manager Agent is not the component that publishes the application's process metrics (the CloudWatch agent is). Option F: a CloudFormation condition cannot update the routes, because the replacement instance's details are only known after it launches. The working combination is the CloudWatch agent sending process metrics (B), a CloudWatch alarm on those metrics publishing to SNS on failure (D), and a Lambda function that reacts to the SNS message, takes the failed instance out of service, and updates the routes (E). Therefore BDE.

Example question: A company is developing a new on-demand video application that is based on microservices. The application will have 5 million users at launch and will have 30 million users after 6 months. The company has deployed the application on Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. The company developed the application by using ECS services that use the HTTPS protocol.
A solutions architect needs to implement updates to the application by using blue/green deployments. The solution must distribute traffic to each ECS service through a load balancer. The application must automatically adjust the number of tasks in response to an Amazon CloudWatch alarm.
Which solution will meet these requirements?
A. Configure the ECS services to use the blue/green deployment type and a Network Load Balancer. Request increases to the service quota for tasks per service to meet the demand.
B. Configure the ECS services to use the blue/green deployment type and a Network Load Balancer. Implement Auto Scaling group for each ECS service by using the Cluster Autoscaler.
C. Configure the ECS services to use the blue/green deployment type and an Application Load Balancer. Implement an Auto Scaling group for each ECS service by using the Cluster Autoscaler.
D. Configure the ECS services to use the blue/green deployment type and an Application Load Balancer. Implement Service Auto Scaling for each ECS service.
Answer: D
Explanation: The requirements are ECS on Fargate with HTTPS, blue/green deployments through a load balancer, and task-count scaling driven by a CloudWatch alarm. Options A and B are ruled out because an NLB does not provide the HTTP/HTTPS (layer 7) handling needed here. Because the tasks run on Fargate, there is no Cluster Autoscaler to use; configure ECS Service Auto Scaling instead (see https://repost.aws/knowledge-center/ecs-fargate-service-auto-scaling). Therefore D. A sketch follows.
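A hedged boto3 sketch of ECS Service Auto Scaling from answer D: register the Fargate service as a scalable target and attach a target-tracking policy on CPU. The cluster and service names plus the capacity bounds are placeholders.

```python
# ECS Service Auto Scaling via Application Auto Scaling (works for Fargate services).
import boto3

aas = boto3.client("application-autoscaling")
resource_id = "service/video-cluster/video-api"   # format: service/<cluster>/<service>

aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=100,
)
aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"},
    },
)
```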

Example question: A solutions architect needs to define a reference architecture for a solution for three-tier applications with web, application, and NoSQL data layers. The reference architecture must meet the following requirements:
– High availability within an AWS Region
– Able to fail over in 1 minute to another AWS Region for disaster recovery
– Provide the most efficient solution while minimizing the impact on the user experience
Which combination of steps will meet these requirements? (Choose three.)
A. Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 1 hour.
B. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.
C. Use a global table within Amazon DynamoDB so data can be accessed in the two selected Regions.
D. Back up data from an Amazon DynamoDB table in the primary Region every 60 minutes and then write the data to Amazon S3. Use S3 cross-Region replication to copy the data from the primary Region to the disaster recovery Region. Have a script import the data into DynamoDB in a disaster recovery scenario.
E. Implement a hot standby model using Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
F. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.
Answer: BCE
Explanation: The design must be highly available within a Region, able to fail over to another Region within 1 minute, and have minimal impact on users. Between A and B, Route 53 failover routing with a 30-second TTL meets the 1-minute failover target, while A's 1-hour TTL does not (and weighted routing is for load distribution, not failover). Between C and D, a DynamoDB global table makes the data available in both Regions, while D's hourly backup-and-restore cannot meet a 1-minute RTO. Between E and F, both deploy the web and application layers in Auto Scaling groups across AZs, but F's Spot Instances are not a sound choice for the required baseline capacity, so the hot standby model in E fits. Therefore BCE.

Example question: A company has automated the nightly retraining of its machine learning models by using AWS Step Functions. The workflow consists of multiple steps that use AWS Lambda. Each step can fail for various reasons, and any failure causes a failure of the overall workflow.
A review reveals that the retraining has failed multiple nights in a row without the company noticing the failure. A solutions architect needs to improve the workflow so that notifications are sent for all types of failures in the retraining process.
Which combination of steps should the solutions architect take to meet these requirements? (Choose three.)
A. Create an Amazon Simple Notification Service (Amazon SNS) topic with a subscription of type “Email” that targets the team’s mailing list.
B. Create a task named “Email” that forwards the input arguments to the SNS topic.
C. Add a Catch field to all Task, Map, and Parallel states that have a statement of “ErrorEquals”: [ “States.ALL” ] and “Next”: “Email”.
D. Add a new email address to Amazon Simple Email Service (Amazon SES). Verify the email address.
E. Create a task named “Email” that forwards the input arguments to the SES email address.
F. Add a Catch field to all Task, Map, and Parallel states that have a statement of “ErrorEquals”: [ “States.Runtime” ] and “Next”: “Email”.
Answer: ABC
Explanation: The workflow needs a notification for every type of failure: an SNS topic with an email subscription (A), an "Email" task that forwards its input to that topic (B), and a Catch on every Task, Map, and Parallel state matching "States.ALL" with "Next": "Email" (C). "States.Runtime" (F) would only catch a subset of errors. See https://dashbird.io/blog/aws-step-functions-error-handling/ for background. A state-machine sketch follows.
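A hedged sketch of the pattern from answer ABC: every task's Catch routes States.ALL failures to an "Email" task that publishes the error to an SNS topic, whose email subscription notifies the team. All ARNs are hypothetical placeholders.

```python
# Create a state machine whose tasks catch States.ALL and hand the error to an SNS-publish task.
import json
import boto3

definition = {
    "StartAt": "Retrain",
    "States": {
        "Retrain": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:retrain-model",
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "Email"}],
            "End": True,
        },
        "Email": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",   # Step Functions SNS service integration
            "Parameters": {
                "TopicArn": "arn:aws:sns:us-east-1:111122223333:retraining-failures",
                "Message.$": "$",                          # forward the caught error as the message
            },
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="nightly-retraining",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/retraining-sfn-role",
)
```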
