AWS Associate (SAA-C03) - Practice Exam02

You work for an online cloud education company that provides hands-on labs for training students. Recently, you noticed a spike in CPU activity for one of your EC2 instances and you suspect it is being used to mine bitcoin rather than for educational purposes. Somehow, your production environment has been compromised and you need to quickly identify the root cause of this compromise. Which AWS service would be best suited to identify the root cause?

  1. Amazon Detective
  2. AWS Trusted Advisor
  3. Amazon CloudWatch
  4. AWS Artifact

1-Using Amazon Detective, you can analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities.

Amazon Detective helps you analyze, investigate, and quickly identify the root cause of security findings or suspicious activities. Detective automatically collects log data from your AWS resources, then uses machine learning, statistical analysis, and graph theory to generate visualizations that help you conduct security investigations faster and more efficiently.

You have configured a VPC with both a public and a private subnet. You need to deploy a web server and a database. You want the web server to be accessed from the Internet by customers. Which is the proper configuration for this architecture?

  1. Database outside the VPC for decoupling from web server, and web server in public subnet for internet access.
  2. Both web server and database in public subnets to facilitate internet access.
  3. Web server in public subnet, database in private subnet.
  4. Web server outside of VPC for internet access, database in private subnet.

3-The web server in the public subnet with an internet gateway will facilitate internet access. The purpose of a VPC is to create a private, secure environment, but public subnets are used within the VPC (Virtual Private Cloud) for internet access.

You work in healthcare for an IVF clinic. You host an application on AWS, which allows patients to track their medication during IVF cycles. The application also allows them to view test results, which contain sensitive medical data. You have a regulatory requirement that the application is secure and you must use a firewall managed by AWS that enables control and visibility over VPC-to-VPC traffic and prevents the VPCs hosting your sensitive application resources from accessing domains using unauthorized protocols. What AWS service would support this?

  1. AWS Network Firewall
  2. AWS WAF
  3. AWS PrivateLink
  4. AWS Firewall Manager

1-The AWS Network Firewall infrastructure is managed by AWS, so you don’t have to worry about building and maintaining your own network security infrastructure. AWS Network Firewall’s stateful firewall can incorporate context from traffic flows, like tracking connections and protocol identification, to enforce policies such as preventing your VPCs from accessing domains using an unauthorized protocol. AWS Network Firewall gives you control and visibility of VPC-to-VPC traffic to logically separate networks hosting sensitive applications or line-of-business resources.

A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of the default security group?

  1. You can't delete this group, however, you can change the group's rules.
  2. You can delete this group, however, you can’t change the group's rules.
  3. You can delete this group or you can change the group's rules.
  4. You can't delete this group, nor can you change the group's rules.

1-Your VPC includes a default security group. You can't delete this group; however, you can change the group's rules. The procedure is the same as modifying any other security group. For more information, see Adding, removing, and updating rules. Control traffic to your AWS resources using security groups - Amazon Virtual Private Cloud

A small company has nearly 200 users who already have AWS accounts in the company AWS environment. A new S3 bucket has been created which will need to allow roughly a third of all users access to sensitive information in the bucket. What is the most time efficient way to get these users access to the bucket?

  1. Create a new policy which will grant permissions to the bucket. Create a group and attach the policy to that group. Add the users to this group.
  2. Create a new bucket policy granting the appropriate permissions and attach it to the bucket.
  3. Create a new policy which will grant permissions to the bucket. Create a role and attach the policy to that role. Add the users to this role.
  4. Create a new role which will grant permissions to the bucket. Create a group and attach the role to that group. Add the users to this group.

1-Creating a managed policy that grants the bucket permissions, attaching it to a group, and adding the users to that group is the most time-efficient approach: the policy is defined once and applies to every member of the group. A bucket policy listing roughly 70 individual users would be unwieldy to write and maintain, and users cannot be "added to" an IAM role the way they can be added to a group.
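
As a rough sketch of that workflow, the boto3 calls below create the policy and group and then add users; the bucket name, policy document, and user names are placeholders rather than details from the question.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical policy granting read access to the sensitive bucket
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-sensitive-bucket",
                     "arn:aws:s3:::example-sensitive-bucket/*"],
    }],
}

policy = iam.create_policy(
    PolicyName="SensitiveBucketAccess",
    PolicyDocument=json.dumps(policy_doc),
)

iam.create_group(GroupName="sensitive-bucket-users")
iam.attach_group_policy(
    GroupName="sensitive-bucket-users",
    PolicyArn=policy["Policy"]["Arn"],
)

# Adding each existing user to the group is the only per-user step
for user in ["alice", "bob"]:  # placeholder user names
    iam.add_user_to_group(GroupName="sensitive-bucket-users", UserName=user)
```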

An international company has many clients around the world. These clients need to transfer gigabytes to terabytes of data quickly and on a regular basis to an S3 bucket. Which S3 feature will enable these long distance data transfers in a secure and fast manner?

  1. Multipart upload
  2. AWS Snowmobile
  3. Transfer Acceleration
  4. Cross-account replication

3-Transfer Acceleration is the correct answer. You might want to use Transfer Acceleration on a bucket for various reasons, including the following: you have customers that upload to a centralized bucket from all over the world; you transfer gigabytes to terabytes of data on a regular basis across continents; or you are unable to utilize all of your available bandwidth over the internet when uploading to Amazon S3. Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration - Amazon Simple Storage Service

Multipart upload, the main distractor here, allows you to upload a single object as a set of parts; after all parts are uploaded, Amazon S3 presents the data as a single object. It enables parallel uploads, pause and resume, and starting an upload before you know the total object size, but it does not by itself speed up long-distance transfers.
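
For illustration, here is a minimal boto3 sketch of enabling the feature and uploading through the accelerate endpoint; the bucket and file names are placeholders.

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket (one-time configuration)
s3.put_bucket_accelerate_configuration(
    Bucket="example-global-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload through the accelerate endpoint
s3_accel = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accel.upload_file("big-dataset.bin", "example-global-uploads", "big-dataset.bin")
```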

Your company is storing highly sensitive data in S3 Buckets. The data includes personal and financial information. An audit has determined that this data must be stored in a secured manner and any data stored in the buckets already or data coming into the buckets must be analyzed and alerts sent out flagging improperly stored data. Which AWS service can be used to meet this requirement?

  1. AWS Inspector
  2. AWS GuardDuty
  3. AWS Trusted Advisor
  4. Amazon Macie

4-Amazon Macie is a fully-managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with AWS accounts outside those you have defined in AWS Organizations. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data, such as personally identifiable information (PII). Macie’s alerts, or findings, can be searched and filtered in the AWS Management Console and sent to Amazon CloudWatch Events for easy integration with existing workflow or event management systems, or to be used in combination with AWS services, such as AWS Step Functions to take automated remediation actions. Reference - Sensitive Data Discovery and Protection - Amazon Macie - AWS

You are managing S3 buckets in your organization. One of the buckets in your organization has gotten some bizarre uploads and you would like to be aware of these types of uploads as soon as possible. Because of that, you configure event notifications for this bucket. Which of the following is NOT a supported destination for event notifications?

  1. Lambda function
  2. SQS
  3. SNS
  4. SES

4-SES is NOT a supported destination for S3 event notifications. The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. Amazon S3 can send event notification messages to the following destinations (a configuration sketch follows the list). You specify the ARN value of these destinations in the notification configuration.

  • Publish event messages to an Amazon Simple Notification Service (Amazon SNS) topic
  • Publish event messages to an Amazon Simple Queue Service (Amazon SQS) queue Note that if the destination queue or topic is SSE enabled, Amazon S3 will need access to the associated AWS Key Management Service (AWS KMS) customer master key (CMK) to enable message encryption.
  • Publish event messages to AWS Lambda by invoking a Lambda function and providing the event message as an argument Amazon S3 Event Notifications - Amazon Simple Storage Service
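
As a hedged illustration of one supported destination, the boto3 sketch below wires object-created events to an SQS queue; the bucket name and queue ARN are placeholders, and in practice the queue's access policy must also allow S3 to send messages.

```python
import boto3

s3 = boto3.client("s3")

# Route object-created events to an SQS queue for review
s3.put_bucket_notification_configuration(
    Bucket="example-upload-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:upload-events",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```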

The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS network team. One of your first assignments is to review the subnets in the main VPCs. You have recommended that the company add some private subnets and segregate databases from public traffic. What differentiates a public subnet from a private subnet?

  1. Public subnets are meant to house EC2 instances with public IP addresses.
  2. If a subnet's traffic is routed to an internet gateway, the subnet is known as a public subnet.
  3. A public subnet has a public IP address.
  4. Public subnets are associated with public Availability Zones.

2-A public subnet is a subnet that's associated with a route table that has a route to an internet gateway. Reference: VPC with public and private subnets (NAT) - Overview.

You work for an organization that has multiple AWS accounts in multiple regions and multiple applications. You have been tasked with making sure that all your firewall rules across these multiple accounts and regions are consistent. You need to do this as quickly and efficiently as possible. Which AWS service would help you achieve this?

  1. AWS Firewall Manager
  2. AWS Network Firewall
  3. Amazon Detective
  4. AWS Web Application Firewall (AWS WAF)

1-AWS Firewall Manager is a security management service that gives you a single pane of glass, allowing you to centrally set up and manage firewall rules across multiple AWS accounts and applications in AWS Organizations.

You have been evaluating the NACLs in your company. Currently, you are looking at the default network ACL. Which statement is true regarding subnets and NACLs?

  1. You have to delete the default NACL before creating a custom NACL to associate with a subnet.
  2. Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.
  3. The default NACL will always be associated with each subnet.
  4. Only public subnets can use the default NACL.

2-Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL. Control traffic to subnets using network ACLs - Amazon Virtual Private Cloud

You work for an online education company that offers a 7-day unlimited access free trial for all new users. You discover that someone has been taking advantage of this and has created a script to register a new user every time the 7-day trial ends. They also use this script to download large amounts of video files, which they then put up on popular pirate websites. You need to find a way to automate the detection of fraud like this using machine learning and artificial intelligence. Which AWS service would best suit this?

  1. Amazon Inspector
  2. Amazon Rekognition
  3. Amazon Fraud Detector
  4. Amazon Detective

3-Amazon Fraud Detector is an AWS AI service that is built to detect fraud in your data.

A small software team is creating an application which will give subscribers real-time weather updates. The application will run on EC2 and will make several requests to AWS services such as S3 and DynamoDB. What is the best way to grant permissions to these other AWS services?

  1. Create an IAM user, grant the user permissions, and pass the user credentials to the application.
  2. Embed the appropriate credentials to access AWS services in the application.
  3. Create an IAM role that you attach to the EC2 instance to give temporary security credentials to applications running on the instance.
  4. Create an IAM policy that you attach to the EC2 instance to give temporary security credentials to applications running on the instance.

3-Create an IAM role in the following situations: You're creating an application that runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance and that application makes requests to AWS. Don't create an IAM user and pass the user's credentials to the application or embed the credentials in the application. Instead, create an IAM role that you attach to the EC2 instance to give temporary security credentials to applications running on the instance. When an application uses these credentials in AWS, it can perform all of the operations that are allowed by the policies attached to the role. For details, see Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.
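
A minimal boto3 sketch of that role setup, assuming placeholder names and the AWS managed AmazonS3ReadOnlyAccess policy as an example; the instance profile is what actually attaches the role to an EC2 instance.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy letting EC2 assume the role
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="weather-app-role",
    AssumeRolePolicyDocument=json.dumps(trust),
)
iam.attach_role_policy(
    RoleName="weather-app-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",  # example managed policy
)

# An instance profile is the container that attaches the role to EC2
iam.create_instance_profile(InstanceProfileName="weather-app-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="weather-app-profile", RoleName="weather-app-role"
)
```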

You have been evaluating the NACLs in your company. Most of the NACLs are configured the same:

100 All Traffic Allow
200 All Traffic Deny
* All Traffic Deny

How can the last rule * All Traffic Deny be edited?

  1. Any number can replace the *.
  2. You can't modify or remove this rule.
  3. It’s a placeholder and can be deleted.
  4. The Deny can be changed to Allow.

2-The default network ACL is configured to allow all traffic to flow in and out of the subnets with which it is associated. Each network ACL also includes a rule whose rule number is an asterisk. This rule ensures that if a packet doesn't match any of the other numbered rules, it's denied. You can't modify or remove this rule.

A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. How many security groups can be attached to an EC2 instance?

  1. You can only assign one security group to an instance.
  2. You can assign two security groups to an instance.
  3. Instances in private subnets cannot have multiple security groups.
  4. You can assign up to five security groups to the instance.

4-A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups. If you launch an instance using the Amazon EC2 API or a command-line tool and you don't specify a security group, the instance is automatically assigned to the default security group for the VPC. If you launch an instance using the Amazon EC2 console, you have an option to create a new security group for the instance. For each security group, you add rules that control the inbound traffic to instances and a separate set of rules that control the outbound traffic. This section describes the basic things that you need to know about security groups for your VPC and their rules. Control traffic to your AWS resources using security groups - Amazon Virtual Private Cloud
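
For example, a launch call that assigns several security groups might look like the sketch below; the AMI, subnet, and group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Up to five security groups can be attached at launch
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=["sg-0aaa1111", "sg-0bbb2222", "sg-0ccc3333"],
    SubnetId="subnet-0123456789abcdef0",
)
```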

A new startup company decides to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create the EC2 instance which will host their web application. They finish the configuration by making the application accessible from the Internet. The second subnet has an instance hosting a smaller, secondary application. But this application is not currently accessible from the Internet. What could be potential problems?

  1. The second subnet does not have a route in the route table to the internet gateway.
  2. The EC2 instance does not have a public IP address.
  3. The EC2 instance is not attached to an internet gateway.
  4. The second subnet does not have a route in the route table to the virtual private gateway.
  5. The second subnet does not have a public IP address.

1-2-To enable access to or from the internet for instances in a subnet in a VPC, you must do the following (a minimal sketch follows the list):

  • Attach an internet gateway to your VPC.
  • Add a route to your subnet's route table that directs internet-bound traffic to the internet gateway. If a subnet is associated with a route table that has a route to an internet gateway, it's known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it's known as a private subnet.
  • Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
  • Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. Connect to the internet using an internet gateway - Amazon Virtual Private Cloud
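
A minimal boto3 sketch of the first two steps, using placeholder VPC and subnet IDs:

```python
import boto3

ec2 = boto3.client("ec2")

igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId="vpc-0123456789abcdef0")

# A default route to the IGW is what makes the subnet "public"
rt = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw)
ec2.associate_route_table(RouteTableId=rt, SubnetId="subnet-0123456789abcdef0")
```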

Recent worldwide events have dictated that you perform your duties as a Solutions Architect from home. You need to be able to manage several EC2 instances while working from home and have been testing the ability to SSH into these instances. One instance in particular has been a problem and you cannot SSH into this instance. What should you check first to troubleshoot this issue?

  1. Make sure that the security group for the instance allows inbound on port 22 from your home IP address
  2. Make sure that the Security Group for the instance allows inbound on port 443 from your home IP address
  3. Make sure that the security group for the instance allows inbound on port 80 from your home IP address
  4. Make sure that your VPC has a connected Virtual Private Gateway

1-A rule that allows access to TCP port 22 (SSH) from your home IP address enables you to SSH into the instances associated with the security group. AWS Documentation: Security group rules.
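
A hedged example of such a rule in boto3; the group ID and home IP are placeholders (203.0.113.0/24 is a documentation range).

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH (TCP 22) only from a single home IP
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "home IP"}],
    }],
)
```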

The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS Network team. One of your first assignments is to review the subnets in the main VPCs. What are two key concepts regarding subnets?

  1. Private subnets can only hold databases.
  2. Every subnet you create is associated with the main route table for the VPC.
  3. Each subnet is associated with one security group.
  4. Each subnet maps to a single Availability Zone.
  5. A subnet spans all the Availability Zones in a Region.

2-Each subnet must be associated with a route table, which specifies the allowed routes for outbound traffic leaving the subnet. Every subnet that you create is automatically associated with the main route table for the VPC. You can change the association, and you can change the contents of the main route table.

Reference: Subnet routing

4-When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside entirely within one Availability Zone and cannot span zones.

Reference: VPC and subnet basics

You have been evaluating the NACLs in your company. Currently, you are looking at the default network ACL. What is true about the default network ACL?

  1. You can only edit the default NACL if it is the only NACL in the VPC.
  2. You can add or remove rules from the default network ACL.
  3. The default NACL denies all traffic.
  4. You cannot edit the default NACL.

2-The default network ACL is configured to allow all traffic to flow in and out of the subnets with which it is associated. You are able to add and remove your own rules from the default network ACL. However, each network ACL also includes a rule whose rule number is an asterisk. This rule ensures that if a packet doesn't match any of the other numbered rules, it's denied. You can't modify or remove this rule. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#default-network-acl

You work for a company that needs to pursue a FedRAMP assessment and accreditation. They need to generate a FedRAMP Customer Package, which is a report designed to get accreditation. The report contains a number of sections, such as AWS East/West and GovCloud Executive Briefing, Control Implementation Summary (CIS), Customer Responsibility Matrix (CRM), and E-Authentication. You need this information as quickly as possible. Which AWS service should you use to find this information?

  1. Use AWS Trusted Advisor to generate the report.
  2. Use AWS Certificate Manager to generate the report.
  3. Call your AWS Technical Account Manager (TAM) and ask for this information.
  4. Use AWS Artifact to download the report.

4-AWS Artifact is a single source you can visit to get the compliance-related information that matters to you, such as AWS security and compliance reports or select online agreements.

A new startup company decides to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create an EC2 instance to host a web application. There is a network ACL and a security group, which both have the proper ingress and egress to and from the internet. There is a route in the route table to the internet gateway. The EC2 instances added to the subnet need to have a globally unique IP address to ensure internet access. Which is not a globally unique IP address?

  1. Elastic IP address
  2. IPv6 address
  3. Public IP address
  4. Private IP address

4-Public IPv4 addresses, Elastic IP addresses, and IPv6 addresses are globally unique. The IPv4 addresses known for not being unique are private IPs. These are found in the following ranges: from 10.0.0.0 to 10.255.255.255, from 172.16.0.0 to 172.31.255.255, and from 192.168.0.0 to 192.168.255.255. Reference: RFC 1918.

A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.

  1. AWS Global Accelerator
  2. S3
  3. AWS CloudFormation
  4. CloudFront
  5. EC2 placement group

2-4-Amazon S3 is an object storage service built to store and retrieve any amount of data from anywhere on the internet. It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low cost.

AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS, both physical locations that are directly connected to the AWS global infrastructure and other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.

You are working for a large financial institution and have been tasked with creating a relational database solution to deal with a read-heavy workload. The database needs to be highly available within the Oregon region and quickly recover if an Availability Zone goes offline. Which of the following would you select to meet these requirements?

  1. Use an Amazon Aurora global database to ensure a region failure won't break the application.
  2. Enable Multi-AZ support for the RDS database.
  3. Split your database into multiple RDS instances across different regions. In the event of a failure, point your application to the new region.
  4. Create a read replica and point your read workloads to the new endpoint RDS provides.
  5. Using RDS, create a read replica. If an AZ fails, RDS will automatically cut over to the read replica.

2-4-Multi-AZ creates a secondary database in another AZ within the region you are in. If something were to happen to the primary database, RDS would automatically fail over to the secondary copy. This allows your database to achieve high availability with minimal work on your part. Amazon RDS Multi AZ Deployments | Cloud Relational Database | Amazon Web Services

Amazon RDS uses the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server DB engines' built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Using read replicas, you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. Working with MySQL read replicas - Amazon Relational Database Service

A small development team with very limited AWS knowledge has begun the process of creating and deploying a new frontend application based on React within AWS. The application is simple and does not need any backend processing via traditional databases. The application does, however, require GraphQL interactions to complete the required processing of data. Which AWS service can the team use to complete this?

  1. Deploy a GraphQL interface via AWS AppSync.
  2. Host the application in AWS Lambda instead and perform the processing using DynamoDB.
  3. Leverage API Gateway for any GraphQL calls. It supports GraphQL and REST API.
  4. Stand up a full stack application easily via AWS Amplify.

1-This offers a simplified GraphQL interface for development teams to use within AWS. Reference: What is AWS AppSync?

You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?

  1. NetworkIn
  2. Memory utilization
  3. DiskReadOps
  4. CPU utilization

2-Memory utilization is not available as an out-of-the-box metric in CloudWatch. You can, however, collect memory metrics when you configure a custom metric for CloudWatch (a publishing sketch follows the list). Types of custom metrics that you can set up include:

  • Memory utilization
  • Disk swap utilization
  • Disk space utilization
  • Page file utilization
  • Log collection
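
A minimal sketch of publishing such a custom metric with boto3; in practice the CloudWatch agent usually does this for you, and the instance ID and value here are made up.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish memory utilization as a custom metric in a custom namespace
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Value": 63.2,
        "Unit": "Percent",
    }],
)
```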

Your application is housed on an Auto Scaling Group of EC2 instances. The application is backed by the Multi-AZ MySQL RDS database and an additional read replica. You need to simulate some failures for disaster recovery drills. Which event will not cause an RDS to perform a failover to the standby replica?

  1. Loss of network connectivity to primary
  2. Compute unit failure on primary
  3. Storage failure on primary
  4. Read replica failure

4-When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
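
For a failover drill, one approach (a sketch with a placeholder identifier) is a reboot with forced failover, which makes RDS promote the standby:

```python
import boto3

rds = boto3.client("rds")

# Simulate a primary failure: reboot with failover promotes the standby
rds.reboot_db_instance(DBInstanceIdentifier="orders-db", ForceFailover=True)
```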

You suspect that one of the AWS services your company is using has gone down. Which service can provide you proactive and transparent notifications about the status of your specific AWS environment?

  1. AWS Organizations
  2. Amazon Inspector
  3. AWS Personal Health Dashboard
  4. AWS Trusted Advisor

3-AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues. AWS Health Dashboard

You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling groups that you need to create. One requirement is that you need to reuse some software licenses and therefore need to use dedicated hosts on EC2 instances in your Auto Scaling groups. What step must you take to meet this requirement?

  1. Create your launch configuration, but manually change the instances to Dedicated Hosts in the EC2 console.
  2. Make sure your launch configurations are using Dedicated Hosts.
  3. Use a launch template with your Auto Scaling group and select the Dedicated Host option.
  4. Create the Dedicated Host EC2 instances, and then add them to an existing Auto Scaling group.

3-In addition to the features of Amazon EC2 Auto Scaling that you can configure by using launch configurations, launch templates provide more advanced Amazon EC2 configuration options. For example, you must use launch templates to use Amazon EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use. While Amazon EC2 Dedicated Instances also run on dedicated hardware, the advantage of using Dedicated Hosts over Dedicated Instances is that you can bring eligible software licenses from external vendors and use them on EC2 instances.
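
A minimal sketch of a launch template that requests Dedicated Host tenancy; the template name, AMI, and instance type are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Host tenancy is a launch-template-only capability for Auto Scaling
ec2.create_launch_template(
    LaunchTemplateName="licensed-app-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "m5.large",
        "Placement": {"Tenancy": "host"},
    },
)
```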

You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that utilize launch configurations. Many of these launch configurations are similar yet have subtle differences. You’d like to use multiple versions of these launch configurations. An ideal approach would be to have a default launch configuration and then have additional versions that add additional features. Which option best meets these requirements?

  1. Use launch templates instead.
  2. Store the launch configurations in S3 and turn on versioning.
  3. Simply create the needed versions. Launch configurations already have versioning.
  4. Create the launch configurations in CloudFormation and version the templates accordingly.

1-A launch template is similar to a launch configuration, in that it specifies instance configuration information. Included are the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances. However, defining a launch template instead of a launch configuration allows you to have multiple versions of a template. With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions. For example, you can create a default template that defines common configuration parameters and allow the other parameters to be specified as part of another version of the same template.

Launch templates - Amazon EC2 Auto Scaling
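
A hedged sketch of adding a second version that overrides a single parameter and then making it the default; the names and values are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Version 2 copies version 1 and overrides just the instance type
ec2.create_launch_template_version(
    LaunchTemplateName="base-config-template",
    SourceVersion="1",
    LaunchTemplateData={"InstanceType": "m5.xlarge"},
    VersionDescription="larger instance type",
)

# Optionally make the new version the default
ec2.modify_launch_template(
    LaunchTemplateName="base-config-template", DefaultVersion="2"
)
```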

A gaming company is creating an application which provides a leaderboard for specific games. The leaderboard will use DynamoDB for data and needs to be updated in near real-time. An EC2 instance will be configured to house the application, which will be accessed by subscribers from the internet. Which step is NOT necessary for traffic to flow to and from the internet?

  1. Add a route to your subnet's route table that directs internet-bound traffic to the internet gateway.
  2. Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
  3. Attach an internet gateway to your VPC.
  4. A route in the route table to the DynamoDB table.

4-The application needs to be able to communicate with the DynamoDB table, but this has nothing to do with the necessary steps for internet traffic flow to and from the application instance.

You have two EC2 instances running in the same VPC, but in different subnets. You are removing the secondary ENI from an EC2 instance and attaching it to another EC2 instance. You want this to be fast and with limited disruption. So you want to attach the ENI to the EC2 instance when it’s running. What is this called?

  1. synchronous attach
  2. cold attach
  3. hot attach
  4. warm attach

3-Here are some best practices for configuring network interfaces. You can attach a network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary network interfaces when the instance is running or stopped; however, you can't detach the primary network interface. You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets.

When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces. Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance. A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.

Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance. If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a secondary private IPv4 address. Elastic network interfaces - Amazon Elastic Compute Cloud
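
A minimal boto3 sketch of moving a secondary ENI between running instances (a hot attach); all IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Detach the secondary ENI from the first (running) instance...
ec2.detach_network_interface(AttachmentId="eni-attach-0123456789abcdef0")

# ...and hot attach it to the second running instance
ec2.attach_network_interface(
    NetworkInterfaceId="eni-0123456789abcdef0",
    InstanceId="i-0123456789abcdef0",
    DeviceIndex=1,  # secondary interface; index 0 is the primary
)
```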

Jamal recently joined a small company as a Site Reliability Engineer on the cloud development team. The team leverages numerous AWS Lambda functions with several backend AWS resources, as well as other backend microservices. A recent update to some of the different functions' code has begun to cause massive delays within the application workloads. The development initially turned on more detailed logging within their code base; however, this did not provide the application insights required to troubleshoot the issue. What can Jamal do to more easily gain a better understanding of the response times of the affected AWS Lambda functions, as well as all the connected downstream resources within the entire application flow?

  1. Update the code to log their response times for each function.
  2. Run a containerized version of the application and output log files with responses.
  3. This is not needed. Simply increase the resource settings for each function.
  4. Enable AWS X-Ray within each function to gain detailed information about responses.

4-AWS X-Ray collects data about requests that your application serves and helps gain insights into that data to identify issues and opportunities for optimization. AWS Lambda integrates easily with AWS X-Ray by toggling the feature on within the function configuration. Reference: Scorekeep diagram
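
Enabling active tracing is a one-line configuration change; here is a sketch with a placeholder function name.

```python
import boto3

lambda_client = boto3.client("lambda")

# Active tracing sends invocation traces for this function to X-Ray
lambda_client.update_function_configuration(
    FunctionName="order-processor",
    TracingConfig={"Mode": "Active"},
)
```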

After several issues with your application and unplanned downtime, your recommendation to migrate your application to AWS is approved. You have set up high availability on the front end with a load balancer and an Auto Scaling Group. What step can you take with your database to configure high-availability and ensure minimal downtime (under five minutes)?

  1. Take frequent snapshots of your database.
  2. Enable Multi-AZ failover on the database.
  3. Create a read replica.
  4. Create your database using CloudFormation and save the template for reuse.

2-In the event of a planned or unplanned outage of your DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled Multi-AZ. The time it takes for the failover to complete depends on the database activity and other conditions at the time the primary DB instance became unavailable. Failover times are typically 60–120 seconds. However, large transactions or a lengthy recovery process can increase failover time. When the failover is complete, it can take additional time for the RDS console to reflect the new Availability Zone. Note the above sentences. Large transactions could cause a problem in getting back up within five minutes, but this is clearly the best of the available choices to attempt to meet this requirement. We must move through our questions on the exam quickly, but always evaluate all the answers for the best possible solution.

Configuring and managing a Multi-AZ deployment - Amazon Relational Database Service
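
A sketch of enabling Multi-AZ on an existing instance via boto3; the identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Convert an existing single-AZ instance to a Multi-AZ deployment
rds.modify_db_instance(
    DBInstanceIdentifier="webapp-db",
    MultiAZ=True,
    ApplyImmediately=True,
)
```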

An accounting company has big data applications for analyzing actuary data. The company is migrating some of its services to the cloud, and for the foreseeable future, will be operating in a hybrid environment. They need a storage service that provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Which AWS service can meet these requirements?

  1. S3
  2. Glacier
  3. EBS
  4. EFS

4-Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS offers 2 storage classes: the Standard storage class and the Infrequent Access storage class (EFS IA). EFS IA provides price/performance that's cost-optimized for files not accessed every day. By simply enabling EFS Lifecycle Management on your file system, files not accessed according to the lifecycle policy you choose will be automatically and transparently moved into EFS IA. Amazon EFS

Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?

  1. Store the data on the local instance store.
  2. Send the data to S3 using S3 lifecycle rules.
  3. Store your root device data on Amazon EBS and set the DeleteOnTermination attribute to false using a block device mapping.
  4. Create a cron job to migrate the data to S3.

3-An Amazon EBS-backed instance can be stopped and later restarted without affecting data stored in the attached volumes. By default, the root volume for an AMI backed by Amazon EBS is deleted when the instance terminates. You can change the default behavior to ensure that the volume persists after the instance terminates. To change the default behavior, set the DeleteOnTermination attribute to false using a block device mapping.
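
A minimal sketch of such a block device mapping at launch; the AMI and root device name are placeholders (root device names vary by AMI).

```python
import boto3

ec2 = boto3.client("ec2")

# Keep the root EBS volume after the instance terminates
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"DeleteOnTermination": False},
    }],
)
```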

A database outage has been very costly to your organization. You have been tasked with configuring a more highly available architecture. The main requirement is that the chosen architecture needs to meet an aggressive RTO in case of disaster. You have decided to use an Amazon RDS for MySQL Multi-AZ deployment. How is the replication handled for Amazon RDS for MySQL with a Multi-AZ configuration?

  1. Amazon RDS for MySQL automatically provisions and maintains a synchronous standby replica in a different Region
  2. You can configure an Amazon RDS for MySQL standby replica in a different Availability Zone and send traffic synchronously or asynchronously depending on your cost considerations
  3. Amazon RDS for MySQL automatically provisions and maintains an asynchronous standby replica in a different Availability Zone
  4. Amazon RDS for MySQL automatically provisions and maintains a synchronous standby replica in a different Availability Zone

4-In a Multi-AZ DB instance deployment, Amazon RDS for MySQL automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance. It can also help protect your databases against DB instance failure and Availability Zone disruption. AWS Documentation: Multi-AZ DB instance deployments.

A company has a great deal of data in S3 buckets for which they want to create a database. Creating the RDS database, normalizing the data, and migrating to the RDS database will take time and is the long-term plan. But there's an immediate need to query this data to retrieve information necessary for an audit. Which AWS service will enable querying data in S3 using standard SQL commands?

  1. Amazon Athena
  2. DynamoDB
  3. There is no such service, but there are third-party tools.
  4. Amazon SQL Connector

1-Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you only pay for the queries you run.

Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets. Interactive SQL - Serverless Query Service - Amazon Athena - AWS
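
A hedged sketch of running a query with boto3; the database, table, and results bucket are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Athena writes query results to the S3 location you specify
response = athena.start_query_execution(
    QueryString="SELECT vendor, SUM(amount) FROM invoices GROUP BY vendor",
    QueryExecutionContext={"Database": "audit_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```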

Your company uses IoT devices installed in businesses to provide those business real-time data for analysis. You have decided to use AWS Kinesis Data Firehose to stream the data to multiple backend storing services for analytics. Which service listed is not a viable solution to stream the real time data to?

  1. ElasticSearch
  2. S3
  3. Athena
  4. Redshift

3-Amazon Athena is correct because Amazon Kinesis Data Firehose cannot load streaming data to Athena. Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. Amazon Kinesis Data Firehose - Streaming Data Pipeline - Amazon Web Services

You work for an online school that teaches IT by recording their screen and narrating what they are doing. The school is becoming quite popular, and you need to convert the video files into many different formats to support various laptops, tablets, and mobile devices. Which AWS service should you consider using?

  1. Amazon Kinesis Video Streams
  2. Amazon Elastic Transcoder
  3. Amazon CloudFront
  4. Amazon CloudWatch

2-Amazon Elastic Transcoder allows businesses and developers to convert media files from their original source format into versions that are optimized for various devices, such as smartphones, tablets, and PCs.

A professional baseball league has chosen to use a key-value and document database for storage, processing, and data delivery. Many of the data requirements involve high-speed processing of data such as a Doppler radar system which samples the position of the baseball 2000 times per second. Which AWS data storage can meet these requirements?

  1. S3
  2. DynamoDB
  3. Redshift
  4. RDS

2-Amazon DynamoDB is a NoSQL database that supports key-value and document data models, and enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second. DynamoDB is designed to run high-performance, internet-scale applications that would overburden traditional relational databases. Amazon DynamoDB Features | NoSQL Key-Value Database | Amazon Web Services
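
As a rough illustration of the key-value model, here is a sketch writing one radar sample per item; the table name and key schema are assumptions, and DynamoDB requires Decimal (not float) for numeric attributes.

```python
from decimal import Decimal
import boto3

table = boto3.resource("dynamodb").Table("pitch-tracking")  # placeholder table

# One item per radar sample; partition/sort keys are an assumed schema
table.put_item(Item={
    "pitch_id": "2024-06-01#P1234",   # partition key
    "sample_ts": 1717257600123,       # sort key, epoch millis
    "position": {"x": Decimal("1.02"), "y": Decimal("2.31"), "z": Decimal("0.88")},
})
```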

You have just started work at a small startup in the Seattle area. Your first job is to help containerize your company's microservices and move them to AWS. The team has selected ECS as their orchestration service of choice. You've discovered the code currently uses access keys and secret access keys in order to communicate with S3. How can you best handle this authentication for the newly containerized application?

  1. Attach a role with the appropriate permissions to the task definition in ECS.
  2. Leave the credentials where they are.
  3. Attach a role to the EC2 instances that will run your ECS tasks.
  4. Migrate the access and secret access keys to the Dockerfile.

1-It's always a good idea to use roles over hard-coded credentials. One of the best parts of using ECS is the ease of attaching roles to your containers. This allows the container to have an individual role even if it's running with other containers on the same EC2 instance. Task definition parameters - Amazon Elastic Container Service
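
A minimal sketch of a task definition that carries its own role; the role ARN, image, and names are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# The task role replaces hard-coded access keys: containers in the task
# receive temporary credentials for this role automatically
ecs.register_task_definition(
    family="s3-worker",
    taskRoleArn="arn:aws:iam::123456789012:role/s3-worker-task-role",
    containerDefinitions=[{
        "name": "worker",
        "image": "example/s3-worker:latest",
        "memory": 512,
        "essential": True,
    }],
)
```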

A large, big-box hardware chain is setting up a new inventory management system. They have developed a system using IoT sensors which captures the removal of items from the store shelves in near real-time and want to use this information to update their inventory system. The company wants to analyze this data in the hopes of being ahead of demand and properly managing logistics and delivery of in-demand items.

Which AWS service can be used to capture this data as close to real-time as possible, while being able to both transform and load the streaming data into Amazon S3 or Elasticsearch?

  1. Amazon Kinesis Data Firehose
  2. Amazon Redshift
  3. Amazon Kinesis Data Streams
  4. Amazon Aurora

1-Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near-real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully-managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. Amazon Kinesis Data Firehose - Streaming Data Pipeline - Amazon Web Services
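
A hedged sketch of a producer pushing one sensor reading into a delivery stream; the stream name and payload shape are placeholders.

```python
import json
import boto3

firehose = boto3.client("firehose")

# Each shelf-sensor reading is pushed to the delivery stream, which
# buffers, transforms, and loads it into S3 or Elasticsearch
reading = {"shelf_id": "A-12", "sku": "HMR-556", "event": "item_removed"}
firehose.put_record(
    DeliveryStreamName="inventory-events",
    Record={"Data": (json.dumps(reading) + "\n").encode("utf-8")},
)
```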

You have been tasked to review your company disaster recovery plan due to some new requirements. The driving factor is that the Recovery Time Objective has become very aggressive. Because of this, it has been decided to configure Multi-AZ deployments for the RDS MySQL databases. Unrelated to DR, it has been determined that some read traffic needs to be offloaded from the master database. What step can be taken to meet this requirement?

  1. Convert to Aurora to allow the standby to serve read traffic.
  2. Redirect some of the read traffic to the standby database.
  3. Add read replicas to offload some read traffic.
  4. Add DAX to the solution to alleviate excess read traffic.

3-Amazon RDS Read Replicas for MySQL and MariaDB now support Multi-AZ deployments. Combining Read Replicas with Multi-AZ enables you to build a resilient disaster recovery strategy and simplify your database engine upgrade process. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are then asynchronously copied to your Read Replicas. In addition to providing scalability for read-heavy workloads, Read Replicas can be promoted to become a standalone database instance when needed. Amazon RDS Read Replicas Now Support Multi-AZ Deployments.
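
A sketch of creating a read replica with boto3; the instance identifiers and class are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the Multi-AZ primary; read traffic is then
# pointed at the replica's endpoint to offload the master
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",
    SourceDBInstanceIdentifier="orders-db",
    DBInstanceClass="db.r5.large",
)
```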

You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. The application to be deployed on these instances is a life insurance application which requires path-based and host-based routing. Which type of load balancer will you need to use?

  1. Any type of load balancer will meet these requirements.
  2. Application Load Balancer
  3. Network Load Balancer
  4. Classic Load Balancer

2-Only the Application Load Balancer supports path-based and host-based routing (a listener-rule sketch follows the list below). Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:

  • Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
  • Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
  • Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
  • Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.
  • Support for redirecting requests from one URL to another.
  • Support for returning a custom HTTP response.
  • Support for registering targets by IP address, including targets outside the VPC for the load balancer.
  • Support for registering Lambda functions as targets.
  • Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.
  • Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port. This enables you to make efficient use of your clusters.
  • Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
  • Access logs contain additional information and are stored in compressed format.
  • Improved load balancer performance.

What is an Application Load Balancer? - Elastic Load Balancing
Network Traffic Distribution – Elastic Load Balancing FAQs – Amazon Web Services
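
A minimal sketch of a path-based listener rule in boto3; both ARNs are placeholders, and a host-based rule would use the "host-header" field the same way.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Forward /api/* requests to a dedicated target group
elbv2.create_rule(
    ListenerArn=("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                 "listener/app/demo-alb/1234567890abcdef/1234567890abcdef"),
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": ("arn:aws:elasticloadbalancing:us-east-1:"
                           "123456789012:targetgroup/api-tg/1234567890abcdef"),
    }],
)
```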

You have multiple EC2 instances housing applications in a VPC in a single Availability Zone. Your EC2 workloads need low-latency network performance, high network throughput, and a tightly-coupled node-to-node communication. What's the best measure you can do to ensure this throughput?

  1. Launch your instances in a cluster placement group
  2. Increase the size of the instances
  3. Use Elastic Network Interfaces
  4. Use Auto Scaling Groups

1-A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered VPCs in the same Region. Instances in the same cluster placement group enjoy a higher per-flow throughput limit for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network. Reference: Placement groups.
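
A sketch of creating a cluster placement group and launching tightly coupled nodes into it; the AMI and instance type are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the tightly coupled nodes into the placement group
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```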

You work for an oil and gas company as a lead in data analytics. The company is using IoT devices to better understand their assets in the field (for example, pumps, generators, valve assemblies, and so on). Your task is to monitor the IoT devices in real-time to provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices. What tool can you use to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks?

  1. AWS Kinesis Streams
  2. AWS RedShift
  3. AWS Lambda
  4. Kinesis Data Analytics

4-Monitoring IoT devices in real-time can provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices. You can track time series data on device connectivity and activity. This insight can help you react quickly to changing conditions and emerging situations. Amazon Web Services (AWS) offers a comprehensive set of powerful, flexible, and simple-to-use services that enable you to extract insights and actionable information in real time. Amazon Kinesis is a platform for streaming data on AWS, offering key capabilities to cost-effectively process streaming data at any scale. Kinesis capabilities include Amazon Kinesis Data Analytics, the easiest way to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks. Overview - Real-Time IoT Device Monitoring with Kinesis Data Analytics

You work for a security company that manufactures doorbells with cameras built in. They are designing an application so that when people ring the doorbell, the camera will activate and stream video from the doorbell to the user's mobile device. You need to implement an AWS service to handle the streaming of potentially millions of devices, which you will then run analytics and other processing on the streams. Which AWS service would best suit this?

  1. Amazon CloudFront
  2. Amazon Elastic Transcoder
  3. Amazon CloudWatch
  4. Amazon Kinesis Video Streams

4-Amazon Kinesis Video Streams is used to stream media content from a large number of devices to AWS and then run analytics, machine learning, playback, and other processing.
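
A minimal boto3 sketch of provisioning such a stream; the stream name, retention period, and media type are assumptions for illustration:

```python
import boto3

kvs = boto3.client("kinesisvideo")

# One video stream per doorbell device (naming scheme is hypothetical).
kvs.create_stream(
    StreamName="doorbell-device-0001",
    DataRetentionInHours=24,   # keep footage for a day before it ages out
    MediaType="video/h264",
)

# Producers (the doorbells) push media to a per-stream PutMedia endpoint:
endpoint = kvs.get_data_endpoint(
    StreamName="doorbell-device-0001",
    APIName="PUT_MEDIA",
)["DataEndpoint"]
print(endpoint)
```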

Your company has asked you to look into some latency issues with the company web app. The application is backed by an AWS RDS database. Your analysis has determined that the requests made to the application are very read-heavy, and this is where improvements can be made. Which service can you use to store frequently accessed data in-memory?

  1. DAX
  2. EBS
  3. DynamoDB
  4. ElastiCache

4-Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. There are two types of ElastiCache available: Memcached and Redis. Here is a good overview and comparison between them: Redis vs. Memcached | AWS
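
A common way to apply ElastiCache to a read-heavy workload is the cache-aside pattern. The sketch below assumes a hypothetical ElastiCache for Redis endpoint and the redis-py client; it is illustrative, not a prescribed implementation:

```python
import json
import redis  # assumes the redis-py client is installed

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_product(product_id, db_lookup):
    """Cache-aside: try the in-memory store first, fall back to the database."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit: no database round trip
    record = db_lookup(product_id)             # cache miss: read from RDS
    cache.setex(key, 300, json.dumps(record))  # cache the result for 5 minutes
    return record
```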

A team member has been tasked to configure four EC2 instances for four separate applications. These are not high-traffic apps, so there is no need for an Auto Scaling group. The instances are all in the same public subnet, each instance has an Elastic IP address, and all of the instances have the same security group. But none of the instances can send or receive internet traffic. You verify that all the instances have a public IP address. You also verify that an internet gateway has been configured. What is the most likely issue?

  1. You are using the default NACL.
  2. Each instance needs its own security group.
  3. The route table is corrupt.
  4. There is no route in the route table to the internet gateway (or it has been deleted).

4-The question details all of the configuration needed for internet access, except for a route to the IGW in the route table. This is definitely a key step in any checklist for internet connectivity. It is quite possible to have a subnet with the 'Public' attribute set but no route to the internet in the assigned route table. (Test it yourself.) This may have been a setup error, or someone may have altered the shared route table for a special case instead of creating a new route table for the special case.
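
If that diagnosis is correct, the fix is a single default route pointing at the internet gateway. A minimal boto3 sketch, where the route table and gateway IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Add the missing default route so subnet traffic can reach the internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",
)
```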

Bill is a cloud solutions architect for a small technology startup company. The company started out completely on-premises, but Bill has finally convinced them to explore shifting their application to AWS. The application is fairly complex and leverages message brokers that communicate using the AMQP 1.0 protocol to exchange data between nodes and complete workloads.

Which service should Bill use to design the new AWS cloud-based architecture?

  1. Amazon SNS
  2. Amazon SQS
  3. AWS Batch
  4. Amazon MQ

4-Amazon MQ offers a managed broker service in AWS. It is meant for applications that need a specific message broker like RabbitMQ and ActiveMQ, as well as very specific messaging protocols (AMQP, STOMP, OpenWire, WebSocket, and MQTT) and frameworks.

Reference: Amazon MQ
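
For illustration only, a broker for this workload might be provisioned with boto3 roughly as follows. The broker name, engine version, instance type, and credentials are all assumptions; ActiveMQ is chosen here because it supports AMQP 1.0:

```python
import boto3

mq = boto3.client("mq")

mq.create_broker(
    BrokerName="orders-broker",              # hypothetical name
    EngineType="ACTIVEMQ",                   # ActiveMQ speaks AMQP 1.0
    EngineVersion="5.17.6",                  # assumed version
    HostInstanceType="mq.m5.large",
    DeploymentMode="ACTIVE_STANDBY_MULTI_AZ",  # highly available pair
    PubliclyAccessible=False,
    AutoMinorVersionUpgrade=True,
    Users=[{"Username": "app", "Password": "example-password-123"}],  # placeholder
)
```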

A pharmaceutical company has begun to explore using AWS cloud services for the computation workloads that process incoming orders. Currently, they process orders on-premises using self-managed virtual machines with batch software installed. The current infrastructure design does not scale well and is cumbersome to update. In addition, each processed batch job takes roughly 30-45 minutes to complete. The processing times cannot be reduced due to the complexity of the application code, and they want to make the new solution as hands-off as possible with automatic scaling based on the number of queued orders.

Which AWS service would you recommend they use for this application design that best meets their needs and is cost optimized?

  1. An Amazon EC2 AMI with batch software installed used in an Auto Scaling group
  2. AWS Lambda with Amazon SQS
  3. Amazon EKS
  4. AWS Batch

4-AWS Batch is ideal for long-running batch computation workloads (beyond Lambda's 15-minute maximum execution time) within AWS while leveraging managed compute infrastructure. It automatically provisions compute resources and then optimizes workload distribution based on the quantity and scale of your workloads.

Reference: AWS Batch
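
Once a job queue and job definition exist, each incoming order can be submitted as a Batch job. A sketch with hypothetical names:

```python
import boto3

batch = boto3.client("batch")

# The job queue and job definition (which points at the container holding the
# batch software) are assumed to exist already.
batch.submit_job(
    jobName="order-12345",
    jobQueue="order-processing-queue",
    jobDefinition="order-processor:1",
)
```

A managed compute environment attached to that queue then provisions and scales capacity with the number of queued jobs, which is the hands-off behavior the company asked for.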

Your boss has tasked you with decoupling your existing web frontend from the backend. Both applications run on EC2 instances. After you investigate the existing architecture, you find that (on average) the backend resources are processing about 50,000 requests per second and will need something that supports this extreme level of message processing. It's also important that each request is processed only once. What can you do to decouple these resources?

  1. Use S3 to store the messages being sent between the EC2 instances.
  2. Upsize your EC2 instances to reduce the message load on the backend servers.
  3. Use SQS Standard. Include a unique ordering ID in each message, and have the backend application use this to deduplicate messages.
  4. Use SQS FIFO to decouple the applications.

3-This is the best choice, as SQS Standard supports a nearly unlimited number of API calls per second and can handle this level of extreme performance. SQS FIFO queues have much lower throughput quotas, well below 50,000 requests per second, so Standard queues with application-side deduplication are required here. If the application didn't require this level of performance, SQS FIFO (which provides exactly-once processing natively) would be the better and easier choice. Quotas related to messages - Amazon Simple Queue Service
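
A rough boto3 sketch of this pattern; the queue URL and attribute name are illustrative, and the in-memory `seen` set stands in for what would be a shared deduplication store (e.g., DynamoDB or ElastiCache) in a multi-consumer deployment:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/frontend-to-backend"  # placeholder

# Producer: tag every message with a unique ID the backend can deduplicate on.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order": 42}',
    MessageAttributes={
        "dedup-id": {"DataType": "String", "StringValue": "order-42-v1"}
    },
)

# Consumer: process each unique ID at most once.
seen = set()
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MessageAttributeNames=["dedup-id"],
    MaxNumberOfMessages=10,
)
for msg in resp.get("Messages", []):
    dedup_id = msg["MessageAttributes"]["dedup-id"]["StringValue"]
    if dedup_id not in seen:
        seen.add(dedup_id)
        # ... handle the request here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```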

A company is running a teaching application which is consumed by users all over the world. The application is translated into 5 different languages. All of these language files need to be stored somewhere that is highly durable and can be accessed frequently. As content is added to the site, the storage demands will grow by a factor of five, so the storage must be highly scalable as well. Which storage option is highly durable, cost-effective, and highly scalable?

  1. Glacier
  2. EBS Instance Store Volumes
  3. Amazon S3
  4. RDS

3-Glacier can be very cheap, but as you read a question, try to compile a complete list of the requirements given. One of those requirements is frequent access, which eliminates Glacier. Amazon S3 provides 99.999999999% (11 nines) durability, virtually unlimited scalability, and low-latency access to frequently accessed objects, meeting all three requirements.

You have been tasked with migrating an application and the servers it runs on to the company AWS cloud environment. You have created a checklist of steps necessary to perform this migration. A subsection in the checklist is security considerations. One of the things that you need to consider is the shared responsibility model. Which option does AWS handle under the shared responsibility model?

  1. Firewall configuration
  2. Client-side data encryption
  3. Physical hardware infrastructure
  4. User Authentication

3-Security and compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for, and management of, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment.

AWS responsibility “Security of the Cloud”: AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

Shared Responsibility Model - Amazon Web Services (AWS)

A software company is looking for compute capacity in the cloud for a fault-tolerant and flexible application. The application is not mission-critical, so occasional downtime is acceptable. What type of EC2 servers can be used to meet these requirements at the lowest cost?

  1. Reserved
  2. On-Demand
  3. Spot
  4. Dedicated Hosts

3-Spot Instances offer the deepest discounts (up to 90% off On-Demand pricing) in exchange for the possibility that AWS reclaims the capacity on short notice. Because the application is fault tolerant and occasional downtime is acceptable, Spot Instances meet the requirements at the lowest cost.

You are put in charge of your company’s Disaster Recovery planning. As part of this plan, you intend to create all of the company infrastructure with CloudFormation templates. The templates can then be saved in another region and used to launch a new environment in case of disaster. What determines the costs associated with CloudFormation templates?

  1. There is no cost for templates, but when deployed, the resources created may accumulate charges.
  2. The distance of the region from the home region.
  3. It depends whether the resources in the template are in the free tier.
  4. There is a cost per template and discounts for over 100 templates.

1-There is no additional charge for using AWS CloudFormation with resource providers in the following namespaces: AWS::, Alexa::, and Custom::*. In this case you pay for AWS resources (such as Amazon EC2 instances, Elastic Load Balancing load balancers, etc.) created using AWS CloudFormation as if you created them manually. You only pay for what you use, as you use it; there are no minimum fees and no required upfront commitments. When you use resource providers with AWS CloudFormation outside the namespaces mentioned above, you incur charges per handler operation. Handler operations are create, update, delete, read, or list actions on a resource.

Provision Infrastructure As Code – AWS CloudFormation Pricing – Amazon Web Services
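
For example, a saved template could be launched into a recovery Region with boto3 along these lines; the Region, template file name, and stack name are assumptions:

```python
import boto3

# Hypothetical DR scenario: launch the saved template in a recovery Region.
cfn = boto3.client("cloudformation", region_name="us-west-2")

with open("company-infra.yaml") as f:   # hypothetical saved template
    template_body = f.read()

cfn.create_stack(
    StackName="dr-environment",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed only if the template creates IAM resources
)
```

The `create_stack` call itself is free; charges accrue only for the EC2 instances, load balancers, and other resources the template provisions.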

Your company is storing stack traces for application errors in an S3 Bucket. The engineers using these stack traces review them when addressing application issues. It has been decided that the files only need to be kept for four weeks then they can be purged. How can you meet this requirement in S3?

  1. Add an S3 Lifecycle rule to archive these files to Glacier after one month.
  2. Create a bucket policy to purge the files after one month.
  3. Configure the S3 Lifecycle rules to purge the files after a month.
  4. Write a cron job to purge the files after one month.

3-To manage your objects so that they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:

Transition actions define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.

Expiration actions define when objects expire. Amazon S3 deletes expired objects on your behalf.

The lifecycle expiration costs depend on when you choose to expire objects.

Managing your storage lifecycle - Amazon Simple Storage Service
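
Applied to this question, a single expiration rule covering the whole bucket would purge the stack traces after 28 days (four weeks). A boto3 sketch with placeholder bucket and rule names:

```python
import boto3

s3 = boto3.client("s3")

# Expire stack-trace objects 28 days (four weeks) after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="stack-trace-bucket",        # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "purge-stack-traces",
                "Filter": {"Prefix": ""},   # apply to every object in the bucket
                "Status": "Enabled",
                "Expiration": {"Days": 28},
            }
        ]
    },
)
```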

Your application team has been approved to create a new machine learning application over the next two years. You intend to leverage numerous Amazon SageMaker instances and components to back your application. Your manager is worried about the cost potential of the services involved.

How could you maximize your savings opportunities for the Amazon SageMaker service?

  1. Purchase a one-year All Upfront SageMaker Savings Plan. This applies to all SageMaker instances and components within any AWS Region.
  2. Purchase a one-year All Upfront Compute Savings Plan. This applies to all SageMaker instances and components within any AWS Region.
  3. Purchase a three-year All Upfront Compute Savings Plan. This applies to all SageMaker instances and components within any AWS Region.
  4. Purchase a three-year All Upfront SageMaker Savings Plan. This applies to all SageMaker instances and components within any AWS Region.

1-SageMaker Savings Plans offer the maximum savings potential for all SageMaker components, and the one-year agreement type falls within the two-year period.

After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. The main requirements to drive this selection are overall cost considerations and the ability to reuse existing internet connections. Which technology best meets these requirements?

  1. AWS Direct Connect
  2. AWS Managed VPN
  3. VPC Peering
  4. AWS Direct Gateway

2-AWS Managed VPN lets you reuse existing VPN equipment and processes, and reuse existing internet connections.

It is an AWS-managed high availability VPN service.

It supports static routes or dynamic Border Gateway Protocol (BGP) peering and routing policies.
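
The building blocks of a Managed (Site-to-Site) VPN can be created with boto3 roughly as follows; the public IP, ASN, and VPC ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Register the on-premises router (its public IP and ASN are placeholders).
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.12", Type="ipsec.1"
)["CustomerGateway"]["CustomerGatewayId"]

# Create and attach a virtual private gateway to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0", VpnGatewayId=vgw)

# The VPN connection rides over the company's existing internet connection.
ec2.create_vpn_connection(CustomerGatewayId=cgw, Type="ipsec.1", VpnGatewayId=vgw)
```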

You have been assigned the review of the security in your company AWS cloud environment. Your final deliverable will be a report detailing potential security issues. One of the first things that you need to describe is the responsibilities of the company under the shared responsibility model. Which measure is the customer’s responsibility?

  1. Virtualization infrastructure
  2. Physical security of data centers
  3. EC2 instance OS patching
  4. Managing underlying network infrastructure

3-Security and compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for, and management of, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment. This differentiation of responsibility is commonly referred to as Security “of” the Cloud versus Security “in” the Cloud.

Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

A previous cloud engineer deployed several Amazon EC2 instances within your AWS account. You have recently taken over control of the account and have noticed a significant number of idle and underutilized instances. The account contains hundreds of instances, and you do not have time to go through each instance and check it manually.

Which AWS service allows you to kick off the collection of metrics and generate recommendations for incorrectly sized EC2 instances?

  1. AWS Compute Optimizer
  2. AWS Cost and Usage Reports
  3. Amazon CloudWatch dashboards
  4. AWS Budgets

1-AWS Compute Optimizer allows you to automate the collection of metrics for underutilized and underperforming compute instances. It can then generate recommendations for you to save money.
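
As a sketch, enabling Compute Optimizer and reading its EC2 findings might look like this with boto3 (the fields printed are a small subset of the response):

```python
import boto3

co = boto3.client("compute-optimizer")

# One-time opt-in; Compute Optimizer then starts analyzing CloudWatch metrics.
co.update_enrollment_status(status="Active")

# Once findings are available, pull rightsizing recommendations for EC2.
resp = co.get_ec2_instance_recommendations()
for rec in resp["instanceRecommendations"]:
    # 'finding' is e.g. UNDER_PROVISIONED, OVER_PROVISIONED, or OPTIMIZED
    print(rec["instanceArn"], rec["finding"])
```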

A car insurance company keeps specific details about accidents on file for a year, for quick retrieval, and then archives those files to long-term storage. The files are mainly accessed in the first 30 days. A recent audit has approved the general steps they are taking but pointed out many deficiencies in the technologies they are using. You have been hired as a consultant to come up with an automated solution. Your solution will recommend AWS storage options. What storage options could you recommend to meet the lifecycle requirements outlined, provide high availability, and offer the most savings?

  1. Store the accident files in S3 for 30 days, then have the lifecycle policy move them to S3-IA. After a year, move them to Glacier.
  2. Store the accident files in Glacier for maximum cost savings.
  3. Store the accident files in S3 for a year, then have the lifecycle policy move them to S3 IA.
  4. Store the accident files in EBS volumes for a year, then migrate them to Glacier.

1-To manage your objects so they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that defines actions that Amazon S3 applies to a group of objects. There are two types of actions: Transition actions define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them. Expiration actions define when objects expire. Amazon S3 deletes expired objects on your behalf. The lifecycle expiration costs depend on when you choose to expire objects.
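
Translating the recommended policy into a lifecycle configuration might look like the following boto3 sketch; the bucket name and key prefix are assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Transition accident files to Standard-IA after 30 days, then archive them
# to Glacier after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="accident-records",              # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-accident-files",
                "Filter": {"Prefix": "accidents/"},  # hypothetical prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```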

You are consulting for a state agency focused on the state lottery. You have been given a task to have 2 million bar codes created as quickly as possible. This will require EC2 instances running at an average CPU utilization of 70% each, so you plan to spin up 10 EC2 instances to create the bar codes. You estimate the instances will complete the job between around 11 p.m. and 1 a.m. You don’t want the instances sitting idle for up to 9 hours until the next morning. What can you do to terminate these instances when they are done?

  1. You can create a CloudWatch alarm that is triggered when the average CPU utilization percentage has been lower than 5% for 15 minutes and terminates the instance.
  2. Write a cron job that queries the instance status. If a certain status is met, have the cron job kick off CloudFormation to terminate the existing instance, and create a new instance from a template.
  3. Write a cron job that queries the instance status. Also, write a Lambda function that can be triggered upon a certain status and terminate the instance.
  4. Write a Python script that queries the instance status. Also, write a Lambda function that can be triggered upon a certain status and terminate the instance.

1-Adding Terminate Actions to Amazon CloudWatch Alarms: "You can create an alarm that terminates an EC2 instance automatically when a certain threshold has been met (as long as termination protection is not enabled for the instance). For example, you might want to terminate an instance when it has completed its work, and you don't need the instance again. If you might want to use the instance later, you should stop the instance instead of terminating it. For information about enabling and disabling termination protection for an instance, see "Enabling Termination Protection for an Instance" in the Amazon EC2 User Guide for Linux Instances."
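
A boto3 sketch of such an alarm, evaluating three 5-minute periods so the instance terminates after 15 minutes below 5% CPU; the Region and instance ID are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="terminate-when-idle-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                    # 5-minute periods
    EvaluationPeriods=3,           # 3 x 5 minutes = 15 minutes
    Threshold=5.0,
    ComparisonOperator="LessThanThreshold",
    # Built-in EC2 action ARN that terminates the instance when the alarm fires.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:terminate"],
)
```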

You work for a Defense contracting company. The company develops software applications which perform intensive calculations in the area of Mechanical Engineering related to metals for shipbuilding. The company competes for and wins contracts that typically range from one to five years. These long-term contracts mean that the duration of your need for EC2 instances can be matched to the length of these contracts, and then extended if necessary. The main requirement is consistent performance for the duration of the contract. Which EC2 purchasing option provides the best value, given these long-term contracts?

  1. Dedicated Host
  2. Reserved
  3. On-Demand
  4. Spot

2-Reserved Instances provide a significant discount compared to On-Demand pricing in exchange for a one- or three-year commitment, and they deliver consistent performance for the full term. Because the contract durations are known in advance and can be matched to the reservation term, Reserved Instances offer the best value here.

The CFO of your company approaches you and inquires about cutting costs in your AWS account. One area you are able to identify for cost cutting is in S3. There is data in S3 that is very rarely used and has only been retained for audit purposes. You decide to archive this data to a cheaper storage solution. Which AWS solution would meet this requirement?

  1. Use a lifecycle policy to archive the data to Amazon SQS.
  2. Use a lifecycle policy to archive the data to Glacier.
  3. Write a cron job to archive the data to DynamoDB.
  4. Use a lifecycle policy to archive the data to Redshift.

2-S3 Lifecycle policies can automatically transition objects to Glacier, which provides very low-cost archival storage for rarely accessed data retained for audit purposes. SQS, DynamoDB, and Redshift are not archival storage services, so a lifecycle policy archiving to Glacier is the only option that meets the requirement.

 
