GCP Certification Exam: Storage Topic

Keyword: Storage
Total search results: 33

[Single choice] You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams.
Which storage infrastructure should you choose?
A
Google Cloud SQL

B
Google Cloud Bigtable

C
Google Cloud Storage

D
Google Cloud Datastore

Answer: B

Explanation:
Google Cloud Bigtable is a scalable, fully managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.
Good for:
- Low-latency read/write access
- High-throughput analytics
- Native time series support
Common workloads:
- IoT, finance, adtech
- Personalization, recommendations
- Monitoring
- Geospatial datasets
- Graphs
Incorrect answers:
C: Google Cloud Storage is a scalable, fully managed, highly reliable, and cost-efficient object/blob store. It is good for:
- Images, pictures, and videos
- Objects and blobs
- Unstructured data
D: Google Cloud Datastore is a scalable, fully managed NoSQL document database for your web and mobile applications. It is good for:
- Semi-structured application data
- Hierarchical data
- Durable key-value data
Common workloads:
- User profiles
- Product catalogs
- Game state
Reference: https://cloud.google.com/storage-options/

[Single choice] You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend.
What should you do?
A
Write a lifecycle management rule in XML and push it to the bucket with gsutil

B
Write a lifecycle management rule in JSON and push it to the bucket with gsutil

C
Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days

D
Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days, and schedule it with cron.

Answer: B

Explanation:
gsutil manages a bucket's lifecycle configuration as a JSON document: you write the rule in JSON and push it to the bucket with gsutil lifecycle set. gsutil does not accept lifecycle rules written in XML, which rules out A. A lifecycle rule is evaluated by Cloud Storage itself at no extra cost, while the cron-based approaches in C and D require you to run and maintain your own deletion scripts. So choose B.
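For illustration, a minimal lifecycle configuration that deletes objects older than 90 days could look like the following (the file name lifecycle.json and the bucket name gs://backups are placeholders):

    {
      "rule": [
        {
          "action": {"type": "Delete"},
          "condition": {"age": 90}
        }
      ]
    }

Apply it with:

    gsutil lifecycle set lifecycle.json gs://backups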

[Single choice] You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted on Cloud Storage using customer-supplied encryption keys. What should you do?
A
Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.

B
Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket.

C
Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.

D
Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.

Answer: A

Explanation:
Per the GCP documentation, the customer-supplied encryption key can be configured in the .boto configuration file. There is no documented --encryption-key flag for gsutil.
https://cloud.google.com/storage/docs/encryption/customer-supplied-keys
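As a sketch, the relevant .boto entry looks like this (the key value is a placeholder for your own base64-encoded AES-256 key):

    [GSUtil]
    encryption_key = <base64-encoded AES-256 key>

With this in place, gsutil cp encrypts uploads with the supplied key.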

[Single choice] Your operations team currently stores 10 TB of data in an object storage service from a third-party provider. They want to move this data to a Cloud Storage bucket as quickly as possible, following Google-recommended practices, and they want to minimize the cost of this data migration. Which approach should they use?
A
Use the gsutil mv command to move the data.

B
Use the Storage Transfer Service to move the data.

C
Download the data to a Transfer Appliance, and ship it to Google.

D
Download the data to the on-premises data center, and upload it to the Cloud Storage bucket.

Answer: B

[Single choice] You are building an archival solution for your data warehouse and have selected Cloud Storage to archive your data. Your users need to be able to access this archived data once a quarter for some regulatory requirements. You want to select a cost-efficient option. Which storage option should you use?
A
Cold Storage

B
Storage

C
Regional Storage

D
Multi-Regional Storage

Answer: A

Explanation:
Nearline, Coldline, and Archive offer ultra low-cost, highly durable, highly available archival storage. For data accessed less than once a year, Archive is a cost-effective option for long-term preservation. Coldline is ideal for cold storage: data your business expects to touch less than once a quarter. For warmer storage, choose Nearline: data you expect to access less than once a month, but possibly multiple times throughout the year. All storage classes are available across all GCP regions and provide sub-second access with a consistent API.
Reference:
https://cloud.google.com/storage/archival
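For illustration, a bucket intended for roughly quarterly access can be created directly in the Coldline class (the bucket name and location are placeholders):

    gsutil mb -c coldline -l us-central1 gs://archive-bucket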

[Single choice] Which of the following tasks would Nearline Storage be well suited for?
A
A mounted Linux file system

B
Image assets for a high traffic website

C
Frequently read files

D
Infrequently read data backups

Answer: D

Explanation:
Nearline Storage: “Data you do not expect to access frequently (i.e., no more than once per month). Ideal for back-up and serving long-tail multimedia content.”
Reference:
https://cloud.google.com/storage/docs/storage-classes#comparison_of_storage_classes

[Single choice] Which of these options is not a valid Cloud Storage class?
A
Glacier Storage

B
Nearline Storage

C
Coldline Storage

D
Regional Storage

Answer: A

Explanation:
Cloud Storage offers four storage classes: Multi-Regional Storage, Regional Storage, Nearline Storage, and Coldline Storage. Glacier is an AWS storage class, not a Cloud Storage one.
Reference:
https://cloud.google.com/storage/docs/storage-classes

[Single choice] Regarding Cloud Storage, which option allows any user to access a Cloud Storage resource for a limited time, using a specific URL?
A
Open Buckets

B
Temporary Resources

C
Signed URLs

D
Temporary URLs

Answer: C

[Single choice] Regarding Cloud Storage, which of the following allows for time-limited access to buckets and objects without a Google account?
A
Signed URLs

B
gsutil

C
Single sign-on

D
Temporary storage accounts

Answer: A

Explanation:
Signed URLs are a mechanism for query-string authentication for buckets and objects. Signed URLs provide a way to give time-limited read or write access to anyone in possession of the URL, regardless of whether they have a Google account.
Reference:
https://cloud.google.com/storage/docs/access-control/signed-urls
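As a sketch, generating a URL valid for 10 minutes with gsutil, assuming a service-account key file key.json and a placeholder object path:

    gsutil signurl -d 10m key.json gs://example-bucket/object.txt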

[Single choice] Which is the fastest instance storage option that will still be available when an instance is stopped?
A
Local SSD

B
Standard Persistent Disk

C
SSD Persistent Disk

D
RAM disk

Answer: C

Explanation:
Local SSDs and RAM disks disappear when you stop an instance. Standard Persistent Disks and SSD Persistent Disks both survive when you stop an instance, but SSD Persistent Disks have up to 4 times the throughput and up to 40 times the I/O operations per second of a Standard Persistent Disk.
Reference:
https://cloud.google.com/compute/docs/disks/

[Single choice] Your company runs several databases on a single MySQL instance. They need to take backups of a specific database at regular intervals. The backup activity needs to complete as quickly as possible and cannot be allowed to impact disk performance. How should you configure the storage?
A
Configure a cron job to use the gcloud tool to take regular backups using persistent disk snapshots.

B
Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the backup to Google Cloud Storage.

C
Use gcsfuse to mount a Google Cloud Storage bucket as a volume directly on the instance and write backups to the mounted location using mysqldump.

D
Mount additional persistent disk volumes onto each virtual machine (VM) instance in a RAID10 array and use LVM to create snapshots to send to Cloud Storage

Answer: B

Explanation:
B. A tool like gcsfuse writes straight to Cloud Storage, which saves intermediate storage, but “as quickly as possible” is the key requirement here: writing through gcsfuse is much slower than writing directly to an attached Local SSD, and the backup would keep issuing reads against the production database for longer. Writing the backup to a Local SSD first is therefore the recommended solution; offloading it from the SSD to Cloud Storage afterwards does not impact the running database, because the data is already separated.
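A minimal sketch of this flow, assuming the Local SSD is mounted at /mnt/disks/ssd and the database and bucket names are placeholders:

    mysqldump --single-transaction mydb > /mnt/disks/ssd/mydb.sql
    gsutil cp /mnt/disks/ssd/mydb.sql gs://backup-bucket/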

[Single choice] A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space. How can you remediate the problem with the least amount of downtime?
A
In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.

B
Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine

C
In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux

D
In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk

E
In the Cloud Platform Console, create a snapshot of the persistent disk restore the snapshot to a new larger disk, unmount the old disk, mount the new disk and restart the database service

Answer: A

Explanation:
On Linux instances, connect to your instance and manually resize your partitions and file systems to use the additional disk space that you added. Extend the file system on the disk or the partition to use the added space. If you grew a partition on your disk, specify the partition. If your disk does not have a partition table, specify only the disk ID. sudo resize2fs /dev/[DISK_ID][PARTITION_NUMBER] where [DISK_ID] is the device name and [PARTITION_NUMBER] is the partition number for the device where you are resizing the file system.
Reference:
https://cloud.google.com/compute/docs/disks/add-persistent-disk
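For illustration, assuming the data disk is the second device (/dev/sdb) with no partition table, the no-downtime flow looks roughly like this (disk name, size, and zone are placeholders):

    gcloud compute disks resize my-data-disk --size=500GB --zone=us-central1-a
    sudo resize2fs /dev/sdb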

[Single choice] Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. What is the recommended approach for sanitizing this data of personally identifiable information or payment card information before initial storage?
A
Hash all data using SHA256

B
Encrypt all data using elliptic curve cryptography

C
De-identify the data with the Cloud Data Loss Prevention API

D
Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers

Answer: C

[Single choice] Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should you do?
A
Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.

B
Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage.

C
Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage.

D
Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.

Answer: A

Explanation:
It should be A. Transfer Appliance lets you quickly and securely transfer large amounts of data to Google Cloud Platform via a high-capacity storage server that you lease from Google and ship to a Google datacenter. Transfer Appliance is recommended for data that exceeds 20 TB or would take more than a week to upload.

[Single choice] Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage. What is the Google-recommended way for your application to authenticate to the required Google Cloud services?
A
Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.

B
Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles.

C
Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM.

D
Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.

Answer: A

Explanation:
As per
https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances
the service account can only execute API methods that are allowed by both the access scope and the service account's specific IAM roles.
Accordingly, the correct answer is A.
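As a sketch, granting a VM's service account publish rights on a topic (the topic, project, and service-account names are placeholders):

    gcloud pubsub topics add-iam-policy-binding my-topic \
        --member=serviceAccount:vm-sa@my-project.iam.gserviceaccount.com \
        --role=roles/pubsub.publisher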

[Single choice] Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any logs older than 45 days should be removed. You want to optimize storage and follow Google-recommended practices. What should you do?
A
Configure the expiration time for your tables at 45 days

B
Make the tables time-partitioned, and configure the partition expiration at 45 days

C
Rely on BigQuery‘s default behavior to prune application logs older than 45 days.

D
Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days

Answer: B

Explanation:
B is correct. With option A the whole table is deleted once it expires, not just the old records: when a table's expiration time is reached, the table and all of its data are removed. With a time-partitioned table and a partition expiration of 45 days, only partitions older than 45 days are dropped, which is exactly the required behavior.
https://cloud.google.com/bigquery/docs/managing-tables#updating_a_tables_expiration_time
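For illustration, creating a day-partitioned table whose partitions expire after 45 days (3,888,000 seconds); the dataset and table names are placeholders:

    bq mk --table \
        --time_partitioning_type=DAY \
        --time_partitioning_expiration=3888000 \
        mydataset.app_logs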

[Single choice] You have an application that makes HTTP requests to Cloud Storage. Occasionally the requests fail with HTTP status codes of 5xx and 429. How should you handle these types of errors?
A
Use gRPC instead of HTTP for better performance.

B
Implement retry logic using a truncated exponential backoff strategy.

C
Make sure the Cloud Storage bucket is multi-regional for geo-redundancy.

D
Monitor https://status.cloud.google.com/feed.atom and only make requests if Cloud Storage is not reporting an incident.

Answer: B

Explanation:
The answer is B. You should use exponential backoff to retry your requests when receiving errors with 5xx or 429 response codes from Cloud Storage.
https://cloud.google.com/storage/docs/request-rate
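A minimal sketch of truncated exponential backoff in shell; do_request is a hypothetical placeholder for the actual HTTP call, and the 32-second cap is an arbitrary choice:

    delay=1
    until do_request; do
      sleep $((delay + RANDOM % 2))    # back off, plus up to 1s of jitter
      delay=$((delay * 2))
      [ $delay -gt 32 ] && delay=32    # truncate the backoff at 32 seconds
    done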

[Single choice] If you do not grant a user named Bob permission to access a Cloud Storage bucket, but then use an ACL to grant access to an object inside that bucket to Bob, what will happen?
A
Bob will be able to access all of the objects inside the bucket because he was granted access to at least one object in the bucket.

B
Bob will be able to access the object because bucket and object ACLs are independent of each other.

C
Bob will not be able to access the object because he does not have access to the bucket.

D
It is not possible to grant access to an object when it is inside a bucket for which a user does not have access.

Answer: B

Explanation:
Bucket and object ACLs are independent of each other, which means that the ACLs on a bucket do not affect the ACLs on objects inside that bucket. It is possible for a user without permissions for a bucket to have permissions for an object inside the bucket. For example, you can create a bucket such that only GroupA is granted permission to list the objects in the bucket, but then upload an object into that bucket that grants GroupB READ access to the object. GroupB will be able to read the object, but will not be able to view the contents of the bucket or perform bucket-related tasks.
Reference:
https://cloud.google.com/storage/docs/best-practices#security
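As a sketch, granting a single user read access to one object only (the user, bucket, and object names are placeholders):

    gsutil acl ch -u bob@example.com:R gs://example-bucket/report.pdf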

[Multiple choice] Which of these tools can you use to copy data from AWS S3 to Cloud Storage? (Select 2 answers.)
A
Cloud Storage Transfer Service

B
S3 Storage Transfer Service

C
Cloud Storage Console

D
gsutil

Answer: A, D

Explanation:
Cloud Storage Transfer Service transfers data from an online data source to a data sink. Your data source can be an Amazon Simple Storage Service (Amazon S3) bucket, an HTTP/HTTPS location, or a Google Cloud Storage bucket. Your data sink (the destination) is always a Google Cloud Storage bucket.
You can use Cloud Storage Transfer Service to back up data to a Google Cloud Storage bucket from other storage providers, or to move data from a Multi-Regional Storage bucket to a Nearline Storage bucket to lower your storage costs.
Reference:
https://cloud.google.com/storage/transfer/
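For illustration, once AWS credentials are configured in the .boto file, gsutil can copy straight from S3 (both bucket names are placeholders):

    gsutil -m cp -r s3://my-aws-bucket/* gs://my-gcs-bucket/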

[Multiple choice] Which two places hold information you can use to monitor the effects of a Cloud Storage lifecycle policy on specific objects? (Select 2 answers.)
A
Cloud Storage Lifecycle Monitoring

B
Expiration time metadata

C
Access logs

D
Lifecycle config file

Answer: B, C

[Single choice] The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss. Which process should you implement?
A
Append metadata to file body; compress individual files; name files with serverName-Timestamp; create a new bucket if the current bucket is older than 1 hour and save individual files to the new bucket, otherwise save files to the existing bucket.

B
Batch every 10,000 events with a single manifest file for metadata; compress event files and manifest file into a single archive file; name files using serverName-EventSequence; create a new bucket if the current bucket is older than 1 day and save the single archive file to the new bucket, otherwise save it to the existing bucket.

C
Compress individual files; name files with serverName-EventSequence; save files to one bucket; set custom metadata headers for each object after saving.

D
Append metadata to file body; compress individual files; name files with a random prefix pattern; save files to one bucket.

Answer: D

[Single choice] You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases and ensure that you don't run out of storage, keep CPU usage below 75%, and keep replication lag below 60 seconds. What are the correct steps to meet your requirements?
A

  1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.

B

  1. Enable automatic storage increase for the instance. 2. Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3. Create a Stackdriver alert for replication lag, and deploy memcache to reduce load on the master.

C

  1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Change the instance type to a 32-core machine type to reduce replication lag.

D

  1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to reduce replication lag.

Answer: A

Explanation:
A: 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.

[Single choice] Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity and Access Management (IAM) permissions to read the buckets. You want to prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do?
A
1) Create a VPC Service Controls perimeter that includes the projects with the buckets. 2) Create an access level with the CIDR of the office network.

B
1) Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range. 2) Use the Classless Inter-domain Routing (CIDR) of the office network.

C
1) Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets. 2) Schedule the Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the end of business.

D
1) Create a Cloud VPN to the office network. 2) Configure Private Google Access for on-premises hosts.

Answer: A

Explanation:
It should be A. For all Google Cloud services secured with VPC Service Controls, you can ensure that resources within a perimeter are accessed only from clients within authorized VPC networks, using Private Google Access from either Google Cloud or on-premises.
https://cloud.google.com/vpc-service-controls/docs/overview
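A rough sketch of the two steps using the Access Context Manager CLI; the policy ID, level and perimeter names, project number, and CIDR are all placeholders:

    gcloud access-context-manager levels create corp_network \
        --title="Corp network" --basic-level-spec=level.yaml --policy=POLICY_ID
    gcloud access-context-manager perimeters create bucket_perimeter \
        --title="Bucket perimeter" --resources=projects/123456789012 \
        --restricted-services=storage.googleapis.com --policy=POLICY_ID

where level.yaml lists the office network's CIDR:

    - ipSubnetworks:
        - 203.0.113.0/24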

[Single choice] You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?
A
Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.

B
Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer. Use the service account to upload new files.

C
Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years.

D
Create the bucket with fine-grained access control, and grant a service account the role of Object Writer. Use the service account to upload new files.

Answer: A

Explanation:
Answer A. If a bucket has a retention policy, objects in the bucket can only be deleted or replaced once their age exceeds the retention period. Once you lock a retention policy, you cannot remove it or reduce its retention period.
Reference:
https://cloud.google.com/storage/docs/using-bucket-lock
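For illustration, setting and then locking a 5-year retention policy (the bucket name is a placeholder; note that locking is irreversible):

    gsutil retention set 5y gs://loan-docs-bucket
    gsutil retention lock gs://loan-docs-bucket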

[Single choice] Your company wants to migrate their 10-TB on-premises database export into Cloud Storage. You want to minimize the time it takes to complete this activity, the overall cost, and database load. The bandwidth between the on-premises environment and Google Cloud is 1 Gbps. You want to follow Google-recommended practices. What should you do?
A
Develop a Dataflow job to read data directly from the database and write it into Cloud Storage.

B
Use the Data Transfer appliance to perform an offline migration.

C
Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into Cloud Storage.

D
Compress the data and upload it with gsutil -m to enable multi-threaded copy.

Answer: A

[Single choice] Your organization has stored sensitive data in a Cloud Storage bucket. For regulatory reasons, your company must be able to rotate the encryption key used to encrypt the data in the bucket. The data will be processed in Dataproc. You want to follow Google-recommended practices for security. What should you do?
A
Create a key with Cloud Key Management Service (KMS). Encrypt the data using the encrypt method of Cloud KMS.

B
Create a key with Cloud Key Management Service (KMS). Set the encryption key on the bucket to the Cloud KMS key.

C
Generate a GPG key pair. Encrypt the data using the GPG key. Upload the encrypted data to the bucket.

D
Generate an AES-256 encryption key. Encrypt the data in the bucket using the customer-supplied encryption keys feature.

Answer: B

Explanation:
B is correct.
https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-object-key
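As a sketch, pointing a bucket's default encryption at a Cloud KMS key (all resource names are placeholders):

    gsutil kms encryption \
        -k projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key \
        gs://sensitive-bucket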

[Single choice] Your company is planning to upload several important files to Cloud Storage. After the upload is completed, they want to verify that the uploaded content is identical to what they have on-premises. You want to minimize the cost and effort of performing this check. What should you do?
A
1). Use Linux shasum to compute a digest of files you want to upload. 2). Use gsutil -m to upload all the files to Cloud Storage. 3). Use gsutil cp to download the uploaded files. 4). Use Linux shasum to compute a digest of the downloaded files. 5). Compare the hashes.

B
1). Use gsutil -m to upload the files to Cloud Storage. 2). Develop a custom Java application that computes CRC32C hashes. 3). Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files. 4). Compare the hashes.

C
1). Use gsutil -m to upload all the files to Cloud Storage. 2). Use gsutil cp to download the uploaded files. 3). Use Linux diff to compare the content of the files.

D
1). Use gsutil -m to upload the files to Cloud Storage. 2). Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files.
3). Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files. 4). Compare the hashes.

Answer: D

Explanation:
It's D. The commands are simple, and this is the best method because it avoids downloading the files all over again.
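For illustration, comparing CRC32C hashes without re-downloading anything (the file and bucket names are placeholders):

    gsutil hash -c important-file.dat
    gsutil ls -L gs://upload-bucket/important-file.dat | grep -i crc32c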

[Single choice] Your company processes high volumes of IoT data that are time-stamped. The total data volume can be several petabytes. The data needs to be written and changed at a high speed. You want to use the most performant storage option for your data. Which product should you use?
A
Cloud Datastore

B
Cloud Storage

C
Cloud Bigtable

D
BigQuery

Answer: C

Explanation:
A) is not correct because Cloud Datastore is not the most performant product for frequent writes or timestamp-based queries.
B) is not correct because Cloud Storage is designed for object storage not for this type of data ingestion and collection.
C) is correct because Cloud Bigtable is the most performant storage option to work with IoT and time series data.
D) is not correct because although it can store the data, BigQuery is very slow at changing data.
Reference:
Cloud Bigtable Schema Design for Time Series Data: https://cloud.google.com/bigtable/docs/schema-design-time-series

[Single choice] Your company has an application running on App Engine that allows users to upload music files and share them with other people. You want to allow users to upload files directly into Cloud Storage from their browser session. The payload should not be passed through the backend. What should you do?
A
Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine application is an allowed origin. Use the Cloud Storage signed URL feature to generate a POST URL.

B
Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine application is an allowed origin. Assign the Cloud Storage WRITER role to users who upload files.

C
Use the Cloud Storage signed URL feature to generate a POST URL. Use App Engine default credentials to sign requests against Cloud Storage.

D
Assign the Cloud Storage WRITER role to users who upload files; use App Engine default credentials to sign requests against Cloud Storage.

Answer: A
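A minimal sketch of the CORS side, assuming a placeholder app domain and bucket; the configuration is saved as cors.json and applied with gsutil:

    [
      {
        "origin": ["https://my-app.appspot.com"],
        "method": ["POST", "PUT"],
        "responseHeader": ["Content-Type"],
        "maxAgeSeconds": 3600
      }
    ]

    gsutil cors set cors.json gs://music-uploads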

[Single choice] You want to store critical business information in Cloud Storage buckets. The information is regularly changed, but previous versions need to be referenced on a regular basis. You want to ensure that there is a record of all changes to any information in these buckets, and that accidental edits or deletions can be easily rolled back. Which feature should you enable?
A
Bucket Lock

B
Object Versioning

C
Object change notification

D
Object Lifecycle Management

Answer: B
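For illustration, enabling versioning and then listing all archived generations of the bucket's objects (the bucket name is a placeholder):

    gsutil versioning set on gs://critical-info-bucket
    gsutil ls -a gs://critical-info-bucket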

[Single choice] You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high-performance SAN requires frequent and expensive upgrades to keep up with the variety of workloads, which are identified as follows: 20 TB of log archives retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of customer session state data that allows customers to restart sessions even if offline for several days. Which of the following best reflects your recommendations for a cost-effective storage allocation?
A
Local SSD for customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.

B
Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.

C
Memcache backed by Cloud SQL for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.

D
Memcache backed by Persistent Disk SSD storage for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.

Answer: B

Explanation:
B is correct.
Why not the others:
A: wrong because Local SSD is non-persistent and therefore cannot hold the session state data (the question requires the data to survive for users who are offline for several days).
C: Local SSD cannot be used for boot volumes either (again, because it is non-persistent); it is only suitable for temporary data.
D: same reason as C.
Why B: that is the only option left. The remaining question is how to store boot/data volumes on Cloud Storage: storing the other data types is easy, and a boot volume can be stored in Cloud Storage by creating a custom image.
https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images#selecting_image_storage_location

[Single choice] Your company has an application running on Compute Engine that allows users to play their favorite music. There are a fixed number of instances. Files are stored in Cloud Storage, and data is streamed directly to users. Users are reporting that they sometimes need to attempt to play popular songs multiple times before they are successful. You need to improve the performance of the application. What should you do?
A
1). Mount the Cloud Storage bucket using gcsfuse on all backend Compute Engine instances. 2). Serve music files directly from the backend Compute Engine instance.

B
1). Create a Cloud Filestore NFS volume and attach it to the backend Compute Engine instances. 2). Download popular songs in Cloud Filestore. 3). Serve music files directly from the backend Compute Engine instance.

C
1). Copy popular songs into CloudSQL as a blob. 2). Update application code to retrieve data from CloudSQL when Cloud Storage is overloaded.

D
1). Create a managed instance group with Compute Engine instances. 2). Create a global load balancer and configure it with two backends: the managed instance group and the Cloud Storage bucket. 3). Enable Cloud CDN on the bucket backend.

Answer: D

Explanation:
This is exactly what a CDN is for: it caches content closer to the end user to optimize delivery time, among other benefits.
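As a sketch, creating the Cloud CDN-enabled backend bucket for the load balancer (the backend and bucket names are placeholders):

    gcloud compute backend-buckets create music-backend \
        --gcs-bucket-name=music-files-bucket --enable-cdn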

[Single choice] Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs. For compliance reasons, the logs need to be kept for at least two years. The logs need to be available for active query for 30 days. After that, they just need to be retained for audit purposes. You want to implement a storage solution that is compliant, minimizes costs, and follows Google-recommended practices. What should you do?
A
1). Install a Cloud Logging agent on all instances. 2). Create a sink to export logs into a regional Cloud Storage bucket. 3). Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4). Configure a retention policy at the bucket level using bucket lock.

B
1). Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket. 2). Create a sink to export logs into a regional Cloud Storage bucket. 3). Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.

C
1). Install a Cloud Logging agent on all instances. 2). Create a sink to export logs into a partitioned BigQuery table. 3). Set a time_partitioning_expiration of 30 days.

D
1). Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table. 2). Set a time_partitioning_expiration of 30 days.

Answer: A

Explanation:
The correct answer is A: option C does not address the two-year retention requirement.
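A rough sketch of the pieces with placeholder names (the sink's log filter and the grant that lets the sink write to the bucket are omitted):

    gcloud logging sinks create compliance-sink \
        storage.googleapis.com/compliance-logs-bucket
    gsutil lifecycle set lifecycle.json gs://compliance-logs-bucket
    gsutil retention set 2y gs://compliance-logs-bucket
    gsutil retention lock gs://compliance-logs-bucket

where lifecycle.json moves objects to Coldline after 30 days:

    {
      "rule": [
        {
          "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
          "condition": {"age": 30}
        }
      ]
    }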

