AWS SAP-C02 Tutorial 10 - Other Services

The topics below come up on the SAP-C02 exam but do not fit neatly into any earlier category, so for now they are grouped here under "other services".

1 Amazon WorkSpaces

WorkSpaces lets you provision virtual, cloud-based Microsoft Windows, Amazon Linux, or Ubuntu Linux desktops for your users, called WorkSpaces. Put simply, users install a VDI client and then access their machine remotely, as with any remote desktop. A minimal provisioning sketch follows the list below.

  • Requires installing WorkSpaces Application Manager (WAM) to package and deliver applications
  • Integrates with Microsoft Active Directory
  • Pay attention to update frequency and method: AWS recommends the default automatic updates; if WorkSpaces stay in use for long periods, schedule updates for off-peak hours (for example, early morning)
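For orientation, here is a minimal boto3 sketch of provisioning a WorkSpace. It is a sketch under assumptions: the directory ID, user name, and bundle ID are hypothetical placeholders for values from your own environment.

```python
# pip install boto3
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

# DirectoryId, UserName, and BundleId are hypothetical placeholders.
response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-1234567890",    # AD Connector / AWS Managed AD
            "UserName": "alice",              # user in that directory
            "BundleId": "wsb-0123456789",     # hardware + software bundle
            "WorkspaceProperties": {
                "RunningMode": "AUTO_STOP",   # stop when idle to save cost
                "RunningModeAutoStopTimeoutInMinutes": 60,
            },
        }
    ]
)
print(response["FailedRequests"])  # an empty list means the request was accepted
```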

2 Amazon AppStream 2.0

Amazon AppStream 2.0 is a fully managed application streaming service that gives users instant access to their desktop applications from anywhere. There is no VDI client to install; users run applications through a browser such as Chrome.

2.1 WorkSpaces vs AppStream 2.0

|        | WorkSpaces                             | AppStream 2.0                                  |
|--------|----------------------------------------|------------------------------------------------|
| Scope  | Access to the entire operating system  | Access to a single application                 |
| Access | VDI remote desktop connection          | Application accessed through the browser       |
| Sizing | Provisioned on demand                  | CPU, memory, etc. configurable per application |

Example question: A company has a Windows-based desktop application that is packaged and deployed to the users’ Windows machines. The company recently acquired another company that has employees who primarily use machines with a Linux operating system. The acquiring company has decided to migrate and rehost the Windows-based desktop application to AWS.
All employees must be authenticated before they use the application. The acquiring company uses Active Directory on premises but wants a simplified way to manage access to the application on AWS for all the employees.
Which solution will rehost the application on AWS with the LEAST development effort?
A. Set up and provision an Amazon Workspaces virtual desktop for every employee. Implement authentication by using Amazon Cognito identity pools. Instruct employees to run the application from their provisioned Workspaces virtual desktops.
B. Create an Auto Scaling group of Windows-based Amazon EC2 instances. Join each EC2 instance to the company’s Active Directory domain. Implement authentication by using the Active Directory that is running on premises. Instruct employees to run the application by using a Windows remote desktop.
C. Use an Amazon AppStream 2.0 image builder to create an image that includes the application and the required configurations. Provision an AppStream 2.0 On-Demand fleet with dynamic Fleet Auto Scaling policies for running the image. Implement authentication by using AppStream 2.0 user pools. Instruct the employees to access the application by starting browser-based AppStream 2.0 streaming sessions.
D. Refactor and containerize the application to run as a web-based application. Run the application in Amazon Elastic Container Service (Amazon ECS) on AWS Fargate with step scaling policies. Implement authentication by using Amazon Cognito user pools. Instruct the employees to run the application from their browsers.
Answer: C
Explanation: The acquired company's employees use Linux machines, and the solution must let everyone keep using the Windows-based application with the LEAST development effort. AppStream 2.0 streams the application to a browser, so nothing has to be installed on either Windows or Linux clients, making it the lowest-effort way to rehost. Therefore choose C.

3 AWS Device Farm

AWS Device Farm is an application testing service that lets you test and interact with Android, iOS, and web apps on real, physical phones and tablets hosted by Amazon Web Services (AWS). It can automatically record video of failed runs, and you can also connect to a device to debug interactively.
There are two main ways to use Device Farm:

  • Automated testing of apps using a variety of test frameworks (see the sketch after this list).
  • Remote access to devices, onto which you can load and run apps and interact with them in real time.
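As a rough illustration of the automated-testing path, this hedged sketch schedules a run of Device Farm's built-in fuzz test against a previously uploaded app; all ARNs are hypothetical placeholders, and the Device Farm API is only served from us-west-2.

```python
# pip install boto3
import boto3

# The Device Farm API is only available in us-west-2.
devicefarm = boto3.client("devicefarm", region_name="us-west-2")

# The project, app upload, and device pool ARNs below are hypothetical.
run = devicefarm.schedule_run(
    projectArn="arn:aws:devicefarm:us-west-2:123456789012:project:example",
    appArn="arn:aws:devicefarm:us-west-2:123456789012:upload:example-app",
    devicePoolArn="arn:aws:devicefarm:us-west-2:123456789012:devicepool:example",
    name="nightly-fuzz",
    test={"type": "BUILTIN_FUZZ"},  # built-in fuzzing needs no test package
)
print(run["run"]["arn"], run["run"]["status"])
```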

4 AWS AppSync

Before diving into AWS AppSync, it helps to understand GraphQL.
GraphQL is a query and manipulation language for APIs. It provides a flexible, intuitive syntax for describing data requirements and interactions. It lets developers ask for exactly what they need and get predictable results, and it makes it possible to reach multiple data sources in a single request, reducing network calls and bandwidth and thereby saving battery life and CPU cycles in the consuming application. In short, it is an alternative to RESTful APIs: the client describes the data it wants and gets back a response with the same structure, so API changes do not break clients.
AppSync lets developers connect their applications and services to data and events using secure, serverless, high-performance GraphQL and Pub/Sub APIs.
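To make "ask for exactly what you need" concrete, here is a minimal sketch that POSTs a GraphQL query to an AppSync API protected by an API key. The endpoint, key, and schema fields (getPost, comments) are hypothetical; AppSync also supports IAM, Cognito, and OIDC authorization.

```python
# pip install requests
import requests

# Hypothetical AppSync GraphQL endpoint and API key.
ENDPOINT = "https://example1234.appsync-api.us-east-1.amazonaws.com/graphql"
API_KEY = "da2-examplekey"

# The client names exactly the fields it wants; the response mirrors this
# shape, so new fields added to the API later cannot break this client.
query = """
query GetPost($id: ID!) {
  getPost(id: $id) {
    title
    comments {
      author
      body
    }
  }
}
"""

resp = requests.post(
    ENDPOINT,
    json={"query": query, "variables": {"id": "post-123"}},
    headers={"x-api-key": API_KEY},
)
print(resp.json()["data"]["getPost"])
```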

  • Query data with GraphQL. (Note: if GraphQL appears in the exam, it is almost certainly about AppSync.)
  • Query data in real time over WebSockets. (Note: if the exam mentions real-time queries or updates, AppSync may also be the answer.)

Example question: A company hosts a blog post application on AWS using Amazon API Gateway, Amazon DynamoDB, and AWS Lambda. The application currently does not use
API keys to authorize requests. The API model is as follows:
GET/posts/[postid] to get post details
GET/users[userid] to get user details
GET/comments/[commentid] to get comments details
The company has noticed users are actively discussing topics in the comments section, and the company wants to increase user engagement by making the comments appear in real time.
Which design should be used to reduce comment latency and improve user experience?
A. Use edge-optimized API with Amazon CloudFront to cache API responses.
B. Modify the blog application code to request GET /comments/[commentid] every 10 seconds.
C. Use AWS AppSync and leverage WebSockets to deliver comments.
D. Change the concurrency limit of the Lambda functions to lower the API response time.
Answer: C
Explanation: The goal is for comments to appear in real time. AppSync with WebSockets (GraphQL subscriptions) is the natural fit for a real-time solution, so choose C.

  • Can be combined with Cognito to use user identity, conditions, and data injection to protect data reads and writes in a very flexible way.

Example question: A company wants to refactor its retail ordering web application that currently has a load-balanced Amazon EC2 instance fleet for web hosting, database API services, and business logic. The company needs to create a decoupled, scalable architecture with a mechanism for retaining failed orders while also minimizing operational costs.
Which solution will meet these requirements?
A. Use Amazon S3 for web hosting with Amazon API Gateway for database API services. Use Amazon Simple Queue Service (Amazon SQS) for order queuing. Use Amazon Elastic Container Service (Amazon ECS) for business logic with Amazon SQS long polling for retaining failed orders.
B. Use AWS Elastic Beanstalk for web hosting with Amazon API Gateway for database API services. Use Amazon MQ for order queuing. Use AWS Step Functions for business logic with Amazon S3 Glacier Deep Archive for retaining failed orders.
C. Use Amazon S3 for web hosting with AWS AppSync for database API services. Use Amazon Simple Queue Service (Amazon SQS) for order queuing. Use AWS Lambda for business logic with an Amazon SQS dead-letter queue for retaining failed orders.
D. Use Amazon Lightsail for web hosting with AWS AppSync for database API services. Use Amazon Simple Email Service (Amazon SES) for order queuing. Use Amazon Elastic Kubernetes Service (Amazon EKS) for business logic with Amazon Elasticsearch Service (Amazon ES) for retaining failed orders.
Answer: C
Explanation: The requirements are a decoupled, scalable architecture with minimal operational cost and a mechanism for retaining failed orders. In option A, SQS long polling is a way to consume messages, not a mechanism for retaining failed orders. In option B, S3 Glacier Deep Archive is archival storage, not suitable for active business data. In option D, SES is an email service and not suited to decoupling. Therefore choose C: a GraphQL API (AppSync) + serverless business logic + an SQS dead-letter queue to retain failed orders.

5 AWS Outposts

AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables customers to build and run applications on premises using the same programming interfaces as in AWS Regions, while using local compute and storage resources to meet lower-latency and local data processing needs. Put simply, your on-premises data center becomes something like an AWS Region of its own: you operate it from the AWS console just as you would any other AWS service.
Benefits and points to note:

  • Low latency, since workloads run in your own local data center
  • Data stays in the local data center
  • Makes it easy to migrate the local data center to the cloud later
  • You remain responsible for physical security
  • Supported services include EC2, EBS, S3, EKS, ECS, RDS, and EMR

6 AWS Wavelength

AWS Wavelength enables developers to build applications that deliver ultra-low latency to mobile devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) 5G networks. Put simply, a Wavelength Zone sits inside the carrier's CSP network and connects directly to the 5G device network.

  • Provides very low network latency
  • EC2, EBS, VPC, and more can be deployed in a Wavelength Zone
  • Can connect to AWS's other zones
  • Use cases: smart cities, smart diagnostics, connected vehicles, AR/VR, real-time gaming, interactive live video streaming, and other 5G scenarios. (Note: if 5G comes up in the exam, the answer very likely involves Wavelength.)

7 AWS Local Zones

AWS Local Zones place compute, storage, database, and other select AWS resources close to large population and industry centers. You can use Local Zones to provide users with low-latency access to applications.

  • Brings AWS services closer to users for lower latency. (Note: if the exam talks about bringing an AWS service closer to users for low latency, it generally points to Local Zones.)
  • Compatible with EC2, RDS, ECS, EBS, ElastiCache, Direct Connect, and more

8 AWS Cloud Map

AWS Cloud Map is a fully managed service that you can use to create and maintain a map of the backend services and resources your applications depend on. Put simply, it is similar to a microservice registry such as Eureka or Nacos, but broader: AWS resources of all kinds can be registered with it as services.
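As a minimal sketch of the discovery side, the call below asks Cloud Map for the healthy instances registered under a namespace and service; the namespace and service names are hypothetical placeholders.

```python
# pip install boto3
import boto3

cloudmap = boto3.client("servicediscovery", region_name="us-east-1")

# Namespace and service names are hypothetical placeholders.
resp = cloudmap.discover_instances(
    NamespaceName="prod.local",
    ServiceName="orders",
    HealthStatus="HEALTHY",  # only instances passing health checks
)

for inst in resp["Instances"]:
    attrs = inst["Attributes"]
    # Registered instances commonly carry AWS_INSTANCE_IPV4 / AWS_INSTANCE_PORT.
    print(inst["InstanceId"], attrs.get("AWS_INSTANCE_IPV4"), attrs.get("AWS_INSTANCE_PORT"))
```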

9 AWS FIS - Fault Injection Simulator

Before looking at AWS Fault Injection Simulator (AWS FIS), it is worth knowing a little about chaos engineering. In short, you inject faults into your production environment at random (not truly at random: within a bounded, controllable scope), observe how the system behaves, and use the results to find and strengthen weak points.
AWS Fault Injection Simulator (AWS FIS) is a managed service for running fault injection experiments on AWS workloads. Fault injection is based on the principles of chaos engineering. Experiments stress an application by creating disruptive events so you can observe how the application responds, then use that information to improve its performance and resilience so it behaves as expected.
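In practice you first define an experiment template (targets, fault actions, and stop conditions) and then start experiments from it. A minimal sketch, assuming a template has already been created and using a placeholder template ID:

```python
# pip install boto3
import boto3

fis = boto3.client("fis", region_name="us-east-1")

# The template (targets, actions such as aws:ec2:stop-instances, and stop
# conditions) is created beforehand; this template ID is a placeholder.
experiment = fis.start_experiment(experimentTemplateId="EXT12345example")

exp = experiment["experiment"]
print(exp["id"], exp["state"]["status"])  # e.g. "initiating"
```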

10 Amazon CodeGuru

Amazon CodeGuru Security is a static application security testing tool that uses machine learning to detect security policy violations and vulnerabilities. It provides suggestions for addressing security risks and generates metrics so you can track the security health of your applications. In short, it is a code inspection tool similar to SonarQube, with the additional capability of making application performance recommendations.

  • Findings cover issues, security, vulnerabilities, and more
  • Uses machine learning techniques
  • Integrates with GitHub, Bitbucket, and AWS CodeCommit

11 AWS IoT Core

AWS IoT provides cloud services that connect IoT devices to other devices and to AWS cloud services. AWS IoT also provides device software to help you integrate IoT devices into AWS IoT-based solutions. If your devices can connect to AWS IoT, AWS IoT can connect them to the cloud services that AWS provides.

11.1 Key characteristics

  • Uses the MQTT protocol. (Note: if the exam involves MQTT, the answer is almost always IoT Core.) A minimal publish sketch follows this list.
  • Integrates with most AWS services.
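As a concrete example of the MQTT side, this sketch publishes one telemetry message to an IoT Core endpoint, authenticating with the device's X.509 certificate over mutual TLS. The endpoint, certificate file names, and topic are hypothetical, and it uses the generic paho-mqtt client rather than the official AWS IoT Device SDK.

```python
# pip install "paho-mqtt<2"   (the 2.x API also requires a CallbackAPIVersion argument)
import json
import ssl
import paho.mqtt.client as mqtt

# Hypothetical account-specific endpoint; `aws iot describe-endpoint` returns the real one.
ENDPOINT = "example-ats.iot.us-east-1.amazonaws.com"

client = mqtt.Client(client_id="sensor-001")
# Mutual TLS: the device proves its identity with its own X.509 certificate.
client.tls_set(
    ca_certs="AmazonRootCA1.pem",
    certfile="device-certificate.pem.crt",
    keyfile="device-private.pem.key",
    tls_version=ssl.PROTOCOL_TLSv1_2,
)
client.connect(ENDPOINT, port=8883)  # MQTT over TLS
client.loop_start()

info = client.publish("factory/line1/telemetry", json.dumps({"temp_c": 21.5}), qos=1)
info.wait_for_publish()  # block until the QoS 1 publish is acknowledged

client.loop_stop()
client.disconnect()
```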

Example question: A company has an IoT platform that runs in an on-premises environment. The platform consists of a server that connects to IoT devices by using the MQTT protocol. The platform collects telemetry data from the devices at least once every 5 minutes. The platform also stores device metadata in a MongoDB cluster.
An application that is installed on an on-premises machine runs periodic jobs to aggregate and transform the telemetry and device metadata. The application creates reports that users view by using another web application that runs on the same on-premises machine. The periodic jobs take 120-600 seconds to run. However, the web application is always running.
The company is moving the platform to AWS and must reduce the operational overhead of the stack.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose three.)
A. Use AWS Lambda functions to connect to the IoT devices
B. Configure the IoT devices to publish to AWS IoT Core
C. Write the metadata to a self-managed MongoDB database on an Amazon EC2 instance
D. Write the metadata to Amazon DocumentDB (with MongoDB compatibility)
E. Use AWS Step Functions state machines with AWS Lambda tasks to prepare the reports and to write the reports to Amazon S3. Use Amazon CloudFront with an S3 origin to serve the reports
F. Use an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 instances to prepare the reports. Use an ingress controller in the EKS cluster to serve the reports
Answer: BDE
Explanation: The platform must receive device data over MQTT, store the MongoDB metadata, and minimize operational overhead. Option A fails because Lambda cannot maintain MQTT connections to the devices; IoT Core should receive them (B). Option C, running MongoDB yourself on an EC2 instance, has more operational overhead than simply using Amazon DocumentDB (D). Option F, EKS on EC2 instances, likewise costs too much to operate compared with Step Functions plus Lambda (E). Therefore BDE.

Example question: A company runs an IoT application in the AWS Cloud. The company has millions of sensors that collect data from houses in the United States. The sensors use the MQTT protocol to connect and send data to a custom MQTT broker. The MQTT broker stores the data on a single Amazon EC2 instance. The sensors connect to the broker through the domain named iot.example.com. The company uses Amazon Route 53 as its DNS service. The company stores the data in Amazon DynamoDB.
On several occasions, the amount of data has overloaded the MQTT broker and has resulted in lost sensor data. The company must improve the reliability of the solution.
Which solution will meet these requirements?
A. Create an Application Load Balancer (ALB) and an Auto Scaling group for the MQTT broker. Use the Auto Scaling group as the target for the ALB. Update the DNS record in Route 53 to an alias record. Point the alias record to the ALB. Use the MQTT broker to store the data.
B. Set up AWS IoT Core to receive the sensor data. Create and configure a custom domain to connect to AWS IoT Core. Update the DNS record in Route 53 to point to the AWS IoT Core Data-ATS endpoint. Configure an AWS IoT rule to store the data.
C. Create a Network Load Balancer (NLB). Set the MQTT broker as the target. Create an AWS Global Accelerator accelerator. Set the NLB as the endpoint for the accelerator. Update the DNS record in Route 53 to a multivalue answer record. Set the Global Accelerator IP addresses as values. Use the MQTT broker to store the data.
D. Set up AWS IoT Greengrass to receive the sensor data. Update the DNS record in Route 53 to point to the AWS IoT Greengrass endpoint. Configure an AWS IoT rule to invoke an AWS Lambda function to store the data.
Answer: B
Explanation: The sensors communicate over MQTT, which points to AWS IoT, leaving only options B and D. Greengrass is typically used for edge computing and is not the right fit for making the MQTT broker reliable and scalable in the cloud. Therefore choose B.

Example question: A company is building a solution in the AWS Cloud. Thousands of devices will connect to the solution and send data. Each device needs to be able to send and receive data in real time over the MQTT protocol. Each device must authenticate by using a unique X.509 certificate.
Which solution will meet these requirements with the LEAST operational overhead?
A. Set up AWS IoT Core. For each device, create a corresponding Amazon MQ queue and provision a certificate. Connect each device to Amazon MQ.
B. Create a Network Load Balancer (NLB) and configure it with an AWS Lambda authorizer. Run an MQTT broker on Amazon EC2 instances in an Auto Scaling group. Set the Auto Scaling group as the target for the NLB. Connect each device to the NLB.
C. Set up AWS IoT Core. For each device, create a corresponding AWS IoT thing and provision a certificate. Connect each device to AWS IoT Core.
D. Set up an Amazon API Gateway HTTP API and a Network Load Balancer (NLB). Create integration between API Gateway and the NLB. Configure a mutual TLS certificate authorizer on the HTTP API. Run an MQTT broker on an Amazon EC2 instance that the NLB targets. Connect each device to the NLB.
Answer: C
Explanation: Thousands of devices must connect over MQTT, each authenticating with a unique X.509 certificate. IoT Core, with one IoT thing and a provisioned certificate per device, meets this with the least operational overhead. Therefore choose C.

Example question: A company has purchased appliances from different vendors. The appliances all have IoT sensors. The sensors send status information in the vendors’ proprietary formats to a legacy application that parses the information into JSON. The parsing is simple, but each vendor has a unique format. Once daily, the application parses all the JSON records and stores the records in a relational database for analysis.
The company needs to design a new data analysis solution that can deliver faster and optimize costs.
Which solution will meet these requirements?
A. Connect the IoT sensors to AWS IoT Core. Set a rule to invoke an AWS Lambda function to parse the information and save a .csv file to Amazon S3. Use AWS Glue to catalog the files. Use Amazon Athena and Amazon QuickSight for analysis.
B. Migrate the application server to AWS Fargate, which will receive the information from IoT sensors and parse the information into a relational format. Save the parsed information to Amazon Redshift for analysis.
C. Create an AWS Transfer for SFTP server. Update the IoT sensor code to send the information as a .csv file through SFTP to the server. Use AWS Glue to catalog the files. Use Amazon Athena for analysis.
D. Use AWS Snowball Edge to collect data from the IoT sensors directly to perform local analysis. Periodically collect the data into Amazon Redshift to perform global analysis.
Answer: A
Explanation: The IoT sensor data needs faster analysis at optimized cost. Option B, running the parsing server on Fargate, is clearly wrong and more expensive; option C fails because IoT devices generally do not support SFTP; option D's Snowball is generally for physical data transfer. Therefore choose A.

11.2 Typical architecture

  • Integrates with the Kinesis family for pipelines from data ingestion through analytics (a similar architecture design has appeared in exam questions); see the sketch below.
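A minimal sketch of wiring IoT Core into that pipeline: the topic rule below forwards every message published on matching topics to a Kinesis Data Firehose delivery stream. The rule name, IAM role ARN, and stream name are hypothetical placeholders; the role must allow IoT to write to the stream.

```python
# pip install boto3
import boto3

iot = boto3.client("iot", region_name="us-east-1")

# Role ARN and delivery stream name are hypothetical placeholders.
iot.create_topic_rule(
    ruleName="telemetry_to_firehose",
    topicRulePayload={
        # IoT SQL: select every message on factory/<line>/telemetry topics.
        "sql": "SELECT * FROM 'factory/+/telemetry'",
        "actions": [
            {
                "firehose": {
                    "roleArn": "arn:aws:iam::123456789012:role/iot-firehose-role",
                    "deliveryStreamName": "telemetry-stream",
                    "separator": "\n",  # newline-delimit records for S3/Athena
                }
            }
        ],
        "ruleDisabled": False,
    },
)
```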

Example question: A company has more than 10,000 sensors that send data to an on-premises Apache Kafka server by using the Message Queuing Telemetry Transport (MQTT) protocol. The on-premises Kafka server transforms the data and then stores the results as objects in an Amazon S3 bucket.
Recently, the Kafka server crashed. The company lost sensor data while the server was being restored. A solutions architect must create a new design on AWS that is highly available and scalable to prevent a similar occurrence.
Which solution will meet these requirements?
A. Launch two Amazon EC2 instances to host the Kafka server in an active/standby configuration across two Availability Zones. Create a domain name in Amazon Route 53. Create a Route 53 failover policy. Route the sensors to send the data to the domain name.
B. Migrate the on-premises Kafka server to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Create a Network Load Balancer (NLB) that points to the Amazon MSK broker. Enable NLB health checks. Route the sensors to send the data to the NLB.
C. Deploy AWS IoT Core, and connect it to an Amazon Kinesis Data Firehose delivery stream. Use an AWS Lambda function to handle data transformation. Route the sensors to send the data to AWS IoT Core.
D. Deploy AWS IoT Core, and launch an Amazon EC2 instance to host the Kafka server. Configure AWS IoT Core to send the data to the EC2 instance. Route the sensors to send the data to AWS IoT Core.
Answer: C
Explanation: The company wants an AWS-native replacement that receives sensor data over MQTT, transforms it, and stores the results, with high availability and scalability. MQTT is best handled by AWS IoT Core, which rules out options A and B. For the transformation, Kinesis Data Firehose + Lambda carries far less operational burden than a self-managed Kafka setup on EC2 and is highly available and scalable out of the box. Therefore choose C.

Example question: A company manufactures smart vehicles. The company uses a custom application to collect vehicle data. The vehicles use the MQTT protocol to connect to the application. The company processes the data in 5-minute intervals. The company then copies vehicle telematics data to on-premises storage. Custom applications analyze this data to detect anomalies.
The number of vehicles that send data grows constantly. Newer vehicles generate high volumes of data. The on-premises storage solution is not able to scale for peak traffic, which results in data loss. The company must modernize the solution and migrate the solution to AWS to resolve the scaling challenges.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS IoT Greengrass to send the vehicle data to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Create an Apache Kafka application to store the data in Amazon S3. Use a pretrained model in Amazon SageMaker to detect anomalies.
B. Use AWS IoT Core to receive the vehicle data. Configure rules to route data to an Amazon Kinesis Data Firehose delivery stream that stores the data in Amazon S3. Create an Amazon Kinesis Data Analytics application that reads from the delivery stream to detect anomalies.
C. Use AWS IoT FleetWise to collect the vehicle data. Send the data to an Amazon Kinesis data stream. Use an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Use the built-in machine learning transforms in AWS Glue to detect anomalies.
D. Use Amazon MQ for RabbitMQ to collect the vehicle data. Send the data to an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Use Amazon Lookout for Metrics to detect anomalies.
Answer: B
Explanation: The solution must receive vehicle data over MQTT and then process and analyze it at scale. Use IoT Core for the MQTT ingestion, Kinesis Data Firehose to process and land the data, and Kinesis Data Analytics for the anomaly analysis. Therefore choose B.

12 AWS IoT Greengrass

AWS IoT Greengrass is an open-source IoT edge runtime and cloud service that helps you build, deploy, and manage IoT applications on your devices. You can use AWS IoT Greengrass to build software that lets your devices act locally on the data they generate, run predictions based on machine learning models, and filter and aggregate device data. AWS IoT Greengrass lets devices collect and analyze data closer to where it is generated, react autonomously to local events, and communicate securely with other devices on the local network. Put simply, devices can keep making ML-model predictions and filtering and aggregating data locally even without a network connection. (Note: exam scenarios about offline ML inference and recognition generally point to IoT Greengrass.)

Example question: A manufacturing company is building an inspection solution for its factory. The company has IP cameras at the end of each assembly line. The company has used Amazon SageMaker to train a machine learning (ML) model to identify common defects from still images.
The company wants to provide local feedback to factory workers when a defect is detected. The company must be able to provide this feedback even if the factory’s internet connectivity is down. The company has a local Linux server that hosts an API that provides local feedback to the workers.
How should the company deploy the ML model to meet these requirements?
A. Set up an Amazon Kinesis video stream from each IP camera to AWS. Use Amazon EC2 instances to take still images of the streams. Upload the images to an Amazon S3 bucket. Deploy a SageMaker endpoint with the ML model. Invoke an AWS Lambda function to call the inference endpoint when new images are uploaded. Configure the Lambda function to call the local API when a defect is detected.
B. Deploy AWS IoT Greengrass on the local server. Deploy the ML model to the Greengrass server. Create a Greengrass component to take still images from the cameras and run inference. Configure the component to call the local API when a defect is detected.
C. Order an AWS Snowball device. Deploy a SageMaker endpoint with the ML model and an Amazon EC2 instance on the Snowball device. Take still images from the cameras. Run inference from the EC2 instance. Configure the instance to call the local API when a defect is detected.
D. Deploy Amazon Monitron devices on each IP camera. Deploy an Amazon Monitron Gateway on premises. Deploy the ML model to the Amazon Monitron devices. Use Amazon Monitron health state alarms to call the local API from an AWS Lambda function when a defect is detected.
Answer: B
Explanation: Defect detection must keep working even when the factory's internet connection is down, which is exactly what AWS IoT Greengrass provides. In option A, Kinesis Video Streams is cloud stream processing and needs connectivity; in option C, Snowball is for offline data transfer, not continuous edge inference; in option D, Monitron requires a network connection.
