GCP Professional Cloud Architect Practice Exam Part 5
Notes: Hi all, this Google Professional Cloud Architect Practice Exam will familiarize you with the types of questions you may encounter on the certification exam and help you determine your readiness, or whether you need more preparation and/or experience. Successful completion of the practice exam does not guarantee you will pass the certification exam, as the actual exam is longer and covers a wider range of topics.
We highly recommend you take the Google Professional Cloud Architect Actual Exam Version because it includes actual exam questions, with highlighted answers collected and verified in our exam. It will help you pass the exam more easily.
For this question, refer to the TerramEarth case study.
https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth-rev2
For this question, refer to the Mountkirk Games case study.
https://cloud.google.com/certification/guides/cloud-architect/casestudy-mountkirkgames-rev2
1. For this question, refer to the Mountkirk Games case study.
Your company is an industry-leading ISTQB certified software testing firm, and Mountkirk Games has recently partnered with your company for designing their new testing strategy. Given the experience with scaling issues in the existing solution, Mountkirk Games is concerned about the ability of the new backend to scale based on traffic and has asked for your opinion on how to design their new test strategy to ensure scaling issues do not repeat. What should you suggest?
A. Modify the test strategy to scale tests well beyond the current approach.
B. Update the test strategy to replace unit tests with end to end integration tests.
C. Modify the test strategy to run tests directly in production after each new release.
D. Update the test strategy to test all infrastructure components in Google Cloud Platform.
10. For this question, refer to the Mountkirk Games case study.
Mountkirk Games anticipates its new game to be hugely popular and expects this to generate vast quantities of time series data. Mountkirk Games is keen on selecting a managed storage service for this time-series data. What GCP service would you recommend?
A. Cloud Bigtable.
B. Cloud Spanner.
C. Cloud Firestore.
D. Cloud Memorystore.
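Note: as an illustration of the kind of write pattern a time-series store such as Cloud Bigtable expects, here is a minimal Python sketch; the project, instance, table, column family and row-key scheme are hypothetical and not part of the case study.

    from datetime import datetime, timezone

    from google.cloud import bigtable

    # Hypothetical identifiers, for illustration only.
    client = bigtable.Client(project="mountkirk-prod", admin=False)
    instance = client.instance("game-telemetry")
    table = instance.table("player_events")

    # A common time-series row-key pattern: entity id plus a reversed
    # timestamp, so the newest events for a player sort first.
    now = datetime.now(timezone.utc)
    reverse_ts = 10**13 - int(now.timestamp() * 1000)
    row_key = f"player#42#{reverse_ts}".encode()

    row = table.direct_row(row_key)
    row.set_cell("stats", "score", b"1250", timestamp=now)
    row.commit()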
11. For this question, refer to the Mountkirk Games case study.
Mountkirk Games has redesigned parts of its game backend into multiple microservices that operate as HTTP (REST) APIs. Taking into consideration the technical requirements for the game backend platform as well as the business requirements, how should you design the game backend on Google Cloud platform?
A. Use a Layer 4 (TCP) Load Balancer and Google Compute Engine VMs in a Managed Instances Group (MIG) with instances in multiple zones in multiple regions.
B. Use a Layer 4 (TCP) Load Balancer and Google Compute Engine VMs in a Managed Instances Group (MIG) with instances restricted to a single zone in multiple regions.
C. Use a Layer 7 (HTTPS) Load Balancer and Google Compute Engine VMs in a Managed Instances Group (MIG) with instances in multiple zones in multiple regions.
D. Use a Layer 7 (HTTPS) Load Balancer and Google Compute Engine VMs in a Managed Instances Group (MIG) with instances restricted to a single zone in multiple regions.
12. For this question, refer to the Mountkirk Games case study.
Taking into consideration the technical requirements for the game backend platform as well as the game analytics platform, where should you store data in Google Cloud platform?
A. 1. For time-series data, use Cloud SQL. 2. For historical data queries, use Cloud Bigtable.
B. 1. For time-series data, use Cloud SQL. 2. For historical data queries, use Cloud Spanner.
C. 1. For time-series data, use Cloud Bigtable. 2. For historical data queries, use BigQuery.
D. 1. For time-series data, use Cloud Bigtable. 2. For historical data queries, use BigQuery. 3. For transactional data, use Cloud Spanner.
13. For this question, refer to the Mountkirk Games case study.
Your company is an industry-leading ISTQB certified software testing firm, and Mountkirk Games has recently partnered with your company for designing their new testing strategy. Mountkirk Games is concerned at the potential disruption caused by solar storms to its business. A solar storm last month resulted in downgraded mobile network coverage and slow upload speeds for a vast majority of mobile users in the Mediterranean. As a result, their analytics platform struggled to cope with the late arrival of data from these mobile devices. Mountkirk Games has asked you for your suggestions on avoiding such issues in future. What should you recommend?
A. Update the test strategy to include fault injection software and introduce latency instead of faults.
B. Update the test strategy to test from multiple mobile phone emulators from all GCP regions.
C. Update the test strategy to introduce random amounts of delay before processing the uploaded analytics files.
D. Update the test strategy to gather latency information from 1% of users and use this to simulate latency on production-like volume.
14. For this question, refer to the TerramEarth case study.
TerramEarth wants to preemptively stock replacement parts and reduce the unplanned downtime of their vehicles to less than one week. The CTO sees an AI-driven solution being the future of this prediction. Still, for the time being, the plan is to have the analysts carry out the analysis by querying all data from a central location and making predictions. Which of the below designs would give the analysts the ability to query data from a central location?
A. HTTP(s) Load Balancer, GKE on Anthos, Pub/Sub, Dataflow, BigQuery.
B. HTTP(s) Load Balancer, GKE on Anthos, Dataflow, BigQuery.
C. HTTP(s) Load Balancer, GKE on Anthos, BigQuery.
D. App Engine Flexible, Pub/Sub, Dataflow, BigQuery.
E. App Engine Flexible, Pub/Sub, Dataflow, Cloud SQL.
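Note: whichever of the designs above is chosen, the "query data from a central location" step typically ends up as standard SQL run against a single warehouse such as BigQuery. A minimal Python sketch, with a hypothetical project, dataset and table:

    from google.cloud import bigquery

    client = bigquery.Client(project="terramearth-analytics")

    # Hypothetical table holding ingested vehicle telemetry.
    query = """
        SELECT vehicle_id, AVG(hydraulic_pressure) AS avg_pressure
        FROM `terramearth-analytics.telemetry.vehicle_events`
        WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
        GROUP BY vehicle_id
        ORDER BY avg_pressure DESC
    """

    for row in client.query(query).result():
        print(row.vehicle_id, row.avg_pressure)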
15. For this question, refer to the TerramEarth case study.
You work for a consulting firm that specializes in providing next-generation digital services and has recently been contracted by TerramEarth to design and develop APIs that would enable TerramEarth to decrease unplanned downtime to less than one week. Given the short period for the project, TerramEarth wants you to focus on delivering APIs that meet their business requirements rather than spend time developing a custom framework that fits the needs of all APIs and their edge case scenarios. What should you do?
A. Expose APIs on Google App Engine through Google Cloud Endpoints for dealers and partners.
B. Expose APIs on Google App Engine to the public.
C. Expose Open API Specification compliant APIs on Google App Engine to the public.
D. Expose APIs on Google Kubernetes Engine to the public.
E. Expose Open API Specification compliant APIs on Google Kubernetes Engine to dealers and partners.
16. For this question, refer to the TerramEarth case study.
You work for a consulting firm that specializes in providing next-generation digital services and has recently been contracted by TerramEarth to help them enhance their APIs. One of their APIs, used for retrieving vehicle data, is being used successfully by analysts to predict unplanned downtime and preemptively stock replacement parts. TerramEarth has asked you to enable delegated authorization for 3rd parties so that the dealer network can use this data to better position new products and services. What should you do?
A. Use OAuth 2.0 to delegate authorization.
B. Use SAML 2.0 to delegate authorization.
C. Open up the API to IP ranges of the dealer network.
D. Enable each dealer to share their credentials with their trusted partner.
17. For this question, refer to the TerramEarth case study.
TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth has partnered with another firm to IoT-enable all vehicles in the field but is concerned that its existing data ingestion solution is not capable of handling the massive increase in ingested data. TerramEarth has asked you to design the data ingestion layer to support this requirement. What should you do?
A. Ingest data to Google Cloud Storage directly.
B. Ingest data through Google Cloud Pub/Sub.
C. Ingest data to Google BigQuery through streaming inserts.
D. Continue ingesting data via existing FTP solution.
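Note: for the Pub/Sub option above, ingestion from a vehicle (or a gateway in front of it) is a single publish call per message. A minimal Python sketch; the project id, topic name and message fields are assumptions:

    import json

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    # Hypothetical project and topic.
    topic_path = publisher.topic_path("terramearth-ingest", "vehicle-telemetry")

    payload = json.dumps({"vehicle_id": "TE-00042", "engine_temp_c": 87.5}).encode("utf-8")

    # Attributes let subscribers filter or route without parsing the body.
    future = publisher.publish(topic_path, data=payload, vehicle_id="TE-00042")
    print("Published message id:", future.result())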
18. TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth has partnered with another firm to IoT-enable all vehicles in the field. TerramEarth is concerned that its existing data ingestion solution may not satisfy all use cases. Early analysis has shown the FTP uploads are highly unreliable in areas with poor network connectivity, and this frequently causes the FTP upload to restart from the beginning. On occasion, this has resulted in analysts querying old data and failing to predict unplanned downtimes accurately. How should you design the data ingestion layer to make it more reliable while ensuring data is made available to analysts as quickly as possible?
A. 1. Replace the existing FTP server with a cluster of FTP servers on a single GKE cluster. 2. After receiving the files, push them to Multi-Regional Cloud Storage bucket. 3. Modify the ETL process to pick up files from this bucket.
B. 1. Replace the existing FTP server with multiple FTP servers running in GKE clusters in multiple regions. 2. After receiving the files, push them to a Multi-Regional Cloud Storage bucket in the same region. 3. Modify the ETL process to pick up files from this bucket.
C. 1. Use Google HTTP(s) APIs to upload files to multiple Multi-Regional Cloud Storage Buckets. 2. Modify the ETL process to pick up files from these buckets.
D. 1. Use Google HTTP(s) APIs to upload files to multiple Regional Cloud Storage Buckets. 2. Modify the ETL process to pick up files from these buckets.
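Note: the HTTP(S) upload options above rely on Cloud Storage resumable uploads, which can continue from the last acknowledged chunk after a dropped connection instead of restarting from the beginning as FTP does. A minimal Python sketch with hypothetical bucket, object and file names:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("terramearth-telemetry-eu")
    blob = bucket.blob("vehicle-00042/2024-06-01.csv.gz")

    # Setting a chunk size makes the client use a resumable upload in
    # 5 MiB chunks (the value must be a multiple of 256 KiB).
    blob.chunk_size = 5 * 1024 * 1024
    blob.upload_from_filename("/var/spool/telemetry/2024-06-01.csv.gz")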
19. For this question, refer to the TerramEarth case study.
TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth has partnered with another firm to IoT-enable all vehicles in the field. The telemetry data from vehicles is stored in the respective regional buckets in the US, Asia and Europe. The feedback from most service centres and dealer networks indicates vehicle hydraulics fail after 69000 miles, and this has knock-on effects such as disabling the dynamic adjustment in the height of the vehicle. The vehicle design team has approached you to provide them with all raw telemetry data to analyze and determine the cause of this failure. You need to run this job on all the data. How should you do this while minimizing costs?
A. Transfer telemetry data from all Regional Cloud Storage buckets to another bucket in a single zone. Launch a Dataproc job in the same zone.
B. Transfer telemetry data from all Regional Cloud Storage buckets to another bucket in a single region. Launch a Dataproc job in the same region.
C. Run a Dataproc job in each region to extract, pre-process and tar (compress) the data. Transfer this data to a Multi-Regional Cloud Storage bucket. Launch a Dataproc job.
D. Run a Dataproc job in each region to extract, pre-process and tar (compress) the data. Transfer this data to a Regional Cloud Storage bucket. Launch a Dataproc job.
2. For this question, refer to the Mountkirk Games case study.
Your company is an industry-leading ISTQB certified software testing firm, and Mountkirk Games has recently partnered with your company to design their new testing strategy. Mountkirk Games has recently migrated their backend to GCP and uses continuous deployment to automate releases. A few of their releases have recently caused a loss of functionality within the application, and a few other releases have had unintended performance issues. You have been asked to come up with a testing strategy that lets you properly test all new releases while also giving you the ability to test a particular new release against scaled-up production-like traffic to detect performance issues. Mountkirk Games wants their test environments to scale cost-effectively. How should you design the test environments?
A. Design the test environments to scale based on simulated production traffic.
B. Make use of the existing on-premises infrastructure to scale based on simulated production traffic.
C. Stress test every single GCP service used by the application individually.
D. Create multiple static test environments to handle different levels of traffic, e.g. small, medium, big.
20. TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth has partnered with another firm to IoT-enable all vehicles in the field. The CTO sees an AI-driven solution being the future of this prediction and wants to store all telemetry data in a cost-efficient way while the team works on building a blueprint for a machine learning model in a year. The CTO has asked you to facilitate cost-efficient storage of the telemetry data. Where should you store this data?
A. Compress the telemetry data in half-hourly snapshots on the vehicle IoT device and push to a Nearline Google Cloud Storage bucket.
B. Use a real-time (streaming) Dataflow job to compress the incoming data and store in BigQuery.
C. Use a real-time (streaming) Dataflow job to compress the incoming data and store in Cloud Bigtable.
D. Compress the telemetry data in half-hourly snapshots on the vehicle IoT device and push to a Coldline Google Cloud Storage bucket.
21. For this question, refer to the TerramEarth case study.
The feedback from all TerramEarth service centres and dealer networks indicates vehicle hydraulics fail after 69000 miles, and this has knock-on effects such as disabling the dynamic adjustment in the height of the vehicle. The vehicle design team wants the raw data to be analyzed, and operational parameters tweaked in response to various factors to prevent such failures. How can you facilitate this feedback loop to all the connected and unconnected vehicles while minimizing costs?
A. Engineers from vehicle design team analyze the raw telemetry data and determine patterns that can be used by algorithms to identify operational adjustments and tweak the drive train parameters automatically.
B. Use a custom machine learning solution in on-premises to identify operational adjustments and tweak the drive train parameters automatically.
C. Run a real-time (streaming) Dataflow job to identify operational adjustments and use Firebase Cloud Messaging to push the optimisations automatically.
D. Use Machine learning in Google AI Platform to identify operational adjustments and tweak the drive train parameters automatically.
22. For this question, refer to the TerramEarth case study.
TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth has partnered with another firm to IoT-enable all vehicles in the field. The vehicle telemetry data is saved in Cloud Storage for long term storage and is also pushed to BigQuery to enable analytics and train ML models. A recent automotive industry regulation in the EU prohibits TerramEarth from holding this data for longer than 3 years. What should you do?
A. Enable a bucket lifecycle management rule to delete objects older than 36 months. Update the default table expiration for BigQuery Datasets to 36 months.
B. Enable a bucket lifecycle management rule to set the Storage Class to NONE for objects older than 36 months. Set BigQuery table expiration time to 36 months.
C. Enable a bucket lifecycle management rule to delete objects older than 36 months. Use partitioned tables in BigQuery and set the partition expiration period to 36 months.
D. Enable a bucket lifecycle management rule to set the Storage Class to NONE for objects older than 36 months. Use partitioned tables in BigQuery and set the partition expiration period to 36 months.
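Note: the BigQuery retention controls referred to in these options are ordinary dataset/table properties. A minimal Python sketch setting a dataset-wide default table expiration and a per-table partition expiration of roughly 36 months; the project, dataset and table names are assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()
    THIRTY_SIX_MONTHS_MS = 36 * 30 * 24 * 60 * 60 * 1000  # approximate

    # Default expiration applied to new tables created in the dataset.
    dataset = client.get_dataset("terramearth-analytics.telemetry")
    dataset.default_table_expiration_ms = THIRTY_SIX_MONTHS_MS
    client.update_dataset(dataset, ["default_table_expiration_ms"])

    # Partition expiration on an existing date-partitioned table.
    table = client.get_table("terramearth-analytics.telemetry.vehicle_events")
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        expiration_ms=THIRTY_SIX_MONTHS_MS,
    )
    client.update_table(table, ["time_partitioning"])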
23. TerramEarth has recently partnered with another firm to IoT-enable all vehicles in the field. Connecting all vehicles has resulted in a massive surge in ingested telemetry data, and TerramEarth is concerned at the spiralling storage costs of storing this data in Cloud Storage for long term needs. The Machine Learning & Predictions team at TerramEarth has suggested data older than 1 year is of no use and can be purged. Data older than 30 days is only used in exceptional circumstances for training models but needs to be retained for audit purposes. What should you do?
A. Implement Google Cloud Storage lifecycle management rule to transition objects older than 30 days from Standard to Coldline Storage class. Implement another rule to Delete objects older than 1 year in Coldline Storage class.
B. Implement Google Cloud Storage lifecycle management rules to transition objects older than 30 days from Coldline to Nearline Storage class. Implement another rule to transition objects older than 90 days from Coldline to Nearline Storage class.
C. Implement Google Cloud Storage lifecycle management rules to transition objects older than 90 days from Standard to Nearline Storage class. Implement another rule to transition objects older than 180 days from Nearline to Coldline Storage class.
D. Implement Google Cloud Storage lifecycle management rule to transition objects older than 30 days from Standard to Coldline Storage class. Implement another rule to Delete objects older than 1 year in Nearline Storage class.
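Note: the storage-class transitions and deletions described in these options are Cloud Storage lifecycle rules applied to the bucket. A minimal Python sketch matching the 30-day/1-year pattern in the question; the bucket name is an assumption:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("terramearth-telemetry-archive")

    # Move objects older than 30 days from Standard to Coldline ...
    bucket.add_lifecycle_set_storage_class_rule(
        "COLDLINE", age=30, matches_storage_class=["STANDARD"]
    )
    # ... and delete anything older than a year.
    bucket.add_lifecycle_delete_rule(age=365)

    bucket.patch()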
24. For this question, refer to the TerramEarth case study.
You work for a consulting firm that specializes in providing next-generation digital services and has recently been contracted by TerramEarth to help redesign their data warehousing platform. Taking into consideration its business requirements, technical requirements and executive statement, what replacement would you recommend for their data warehousing needs?
A. Use BigQuery and enable table partitioning.
B. Use a single Compute Engine instance with machine type n1-standard-96 (96 CPUs, 360 GB memory).
C. Use BigQuery with federated data sources.
D. Use two Compute Engine instances – a non-preemptible instance with machine type n1-standard-96 (96 CPUs, 360 GB memory) and a preemptible instance with machine type n1-standard-32 (32 CPUs, 120 GB memory).
25. You work for a consulting firm that specializes in providing next-generation digital services and has recently been contracted by TerramEarth to help redesign their data warehousing platform. Your redesigned solution includes Cloud Pub/Sub, Cloud Dataflow and BigQuery and is expected to satisfy both their business and technical requirements. But the service centres and maintenance departments have expressed concerns at the quality of data being ingested. You have been asked to modify the design to provide an ability to clean and prepare data for analysis and machine learning before saving data to BigQuery. You want to minimize cost. What should you do?
A. Sanitize the data during the ingestion process in a real-time (streaming) Dataflow job before inserting into BigQuery.
B. Use Cloud Scheduler to trigger Cloud Function that reads data from BigQuery, cleans it and updates the tables.
C. Run a query to export the required data from existing BigQuery tables and save the data to new BigQuery tables.
D. Run a daily job in Dataprep to sanitize data in BigQuery tables.
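Note: cleaning data in-flight usually means an extra transform step inside the streaming Dataflow pipeline before the BigQuery write. A minimal Apache Beam (Python) sketch; the topic, table and field names, and the cleaning rules, are hypothetical:

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_and_clean(message):
        record = json.loads(message.decode("utf-8"))
        # Clamp obviously bad sensor readings before they reach the warehouse.
        record["engine_temp_c"] = max(-50.0, min(200.0, float(record["engine_temp_c"])))
        return record

    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadTelemetry" >> beam.io.ReadFromPubSub(
                topic="projects/terramearth-ingest/topics/vehicle-telemetry")
            | "Clean" >> beam.Map(parse_and_clean)
            | "DropIncomplete" >> beam.Filter(lambda r: r.get("vehicle_id"))
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "terramearth-analytics:telemetry.vehicle_events",
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
        )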
26. For this question, refer to the TerramEarth case study.
You work for a consulting firm that specializes in providing next-generation digital services and has recently been contracted by TerramEarth to help them enhance their data warehousing solution. TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth wants to give all its analysts the ability to query vehicle telemetry data in real-time and visualize this data in dashboards. What should you do?
A. 1. Stream telemetry data from vehicles to Cloud Pub/Sub. Use Dataflow and BigQuery streaming inserts to store data in BigQuery. 2. Develop dashboards in Google Data Studio.
B. 1. Upload telemetry data from vehicles to Cloud Storage Bucket. Use Dataflow and BigQuery streaming inserts to store data in BigQuery. 2. Develop dashboards in Google Data Studio.
C. 1. Upload telemetry data from vehicles to Cloud Storage Bucket. Use Cloud Functions to transfer this data to partitioned tables in Cloud Dataproc Hive Cluster. 2. Develop dashboards in Google Data Studio.
D. 1. Stream telemetry data from vehicles to partitioned tables in Cloud Dataproc Hive Cluster. 2. Use Pig scripts to chart data.
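Note: "BigQuery streaming inserts" in these options corresponds to the tabledata.insertAll API, which makes new rows queryable within seconds. From Python it is a single call; the table and row fields below are assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()
    rows = [
        {"vehicle_id": "TE-00042", "event_ts": "2024-06-01T10:15:00Z", "engine_temp_c": 87.5},
    ]
    errors = client.insert_rows_json("terramearth-analytics.telemetry.vehicle_events", rows)
    if errors:
        print("Streaming insert errors:", errors)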
27. For this question, refer to the TerramEarth case study.
TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth wants to give all its analysts the ability to query vehicle telemetry data in real-time. However, TerramEarth is concerned with the reliability of its existing ingestion mechanism. In a recent incident, vehicle telemetry data from all vehicles got mixed up, and the vehicle design team were unable to identify which data belonged to a particular vehicle. TerramEarth has asked you for your suggestion on enabling reliable ingestion of telemetry data. What should you suggest they use?
A. Use Cloud IoT with Cloud HSM keys.
B. Use Cloud IoT with per-device public/private key authentication.
C. Use Cloud IoT with project-wide SSH keys.
D. Use Cloud IoT with specific SSH keys.
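Note: "per-device public/private key authentication" in Cloud IoT means each device signs a short-lived JWT with its own private key (only the matching public key is registered with the device entry) and presents the token as its password when connecting; the audience claim is the GCP project id. A minimal sketch using the PyJWT library; the key path and project id are assumptions:

    import datetime

    import jwt  # PyJWT (RS256 also requires the cryptography package)

    # Each vehicle holds its own private key.
    with open("/etc/terramearth/device_rsa_private.pem", "r") as key_file:
        private_key = key_file.read()

    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "iat": now,
        "exp": now + datetime.timedelta(minutes=60),
        "aud": "terramearth-ingest",  # GCP project id
    }
    token = jwt.encode(claims, private_key, algorithm="RS256")
    # The device presents this token when it connects over MQTT or HTTP.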
28. For this question, refer to the TerramEarth case study.
TerramEarth needs to store all the raw telemetry data to use it as training data for machine learning models. How should TerramEarth store this data while minimizing cost and minimizing the changes to the existing processes?
A. Configure the IoT devices on vehicles to stream the data directly into BigQuery.
B. Configure the IoT devices on vehicles to stream the data to Cloud Pub/Sub and save to Cloud Dataproc HDFS on persistent disks for long term storage.
C. Continue receiving data via existing FTP process and save to Cloud Dataproc HDFS on persistent disks for long term storage.
D. Continue receiving data via existing FTP process and upload to Cloud Storage.
29. For this question, refer to the Dress4Win case study.
A recent security breach has resulted in Dress4Win engaging an external security investigations firm to investigate the incident. The security firm has suggested disabling all but essential access, including disabling external SSH access to their Google Cloud VMs while they analyze the log files, which is expected to take about 4 weeks. An external security researcher has provided a tip-off about a possible security loophole. The development team has implemented a fix to address the loophole and want this deployed as soon as possible; however, the operations team is unable to deploy as they can’t SSH to the VMs. They need to check out the new release, build new docker images, push images to GCR, update the GKE deployment to use the new image and delete public objects in a Cloud Storage bucket. You have been asked to identify a way to enable the operations team to deploy the fix immediately without enabling external SSH access. What should you do?
A. Grant the relevant IAM roles to the operations team and ask them to access services through Google Cloud Shell.
B. Ask the operations team to SSH to Google Compute Instances through VPN tunnel from a bastion host on the on-premises data centre.
C. Enable external SSH access, deploy the fix and disable it again.
D. Build an API for deployment that invokes the relevant APIs of the GCP services in use to perform the deployment, and have the operations team invoke the deployment API.
3. For this question, refer to the Mountkirk Games case study.
You work for a company which specializes in setting up resilient architectures in Cloud Platforms, and Mountkirk Games have contracted your company to help them set up their Cloud Architecture. You have been passed these requirements:
– Services should be immune to regional GCP outages and, where possible, services across all regions should be exposed through a single IP address.
– The compute layer should not be publicly reachable. Instead, the requests to compute workloads should be directed through well-defined frontend services.
– Mountkirk Games has already decomposed existing complex interfaces into multiple microservices. Where possible, Mountkirk Games prefers to maintain the immutable nature of these microservice deployments.
– Mountkirk Games places a high value on being agile and reacting to change by deploying changes quickly and reliably, and rolling back changes at short notice.
– Enable Caching for Static Content.
Taking into consideration these requirements, which GCP services would you recommend?
A. Google Cloud Dataflow, Google Compute Engine, Google Cloud Storage.
B. Google App Engine, Google Cloud Storage, Google Network Load Balancer.
C. Cloud CDN, Google Kubernetes Engine, Google Container Registry, Google HTTP(S) Load Balancer.
D. Cloud CDN, Google Cloud Pub/Sub, Google Cloud Functions, Google Cloud Deployment Manager.
30. For this question, refer to the Dress4Win case study.
Dress4Win has accumulated 2 TB of database backups, images and logs files in their on-premises data centre and wants to transfer this data to Google Cloud. What should you do?
A. Use a custom gsutil script to copy the files to a Nearline Storage bucket.
B. Use a custom gsutil script to copy the files to a Multi-Regional Storage bucket.
C. Transfer the files to Coldline Storage bucket using a Storage Transfer Service job.
D. Transfer the files to Multi-Regional Storage bucket using a Storage Transfer Service job.
31. For this question, refer to the Dress4Win case study.
You work for a company which specializes in setting up resilient and cost-efficient architectures in Cloud Platforms, and Dress4Win have contracted your company to help them set up their Cloud Architecture. Dress4Win has several VMs running Windows Server 2008 R2 and RedHat Linux and feels some of the machines were overprovisioned. You have been asked for your recommendation on what machine types they should migrate to in Google Cloud. What should you suggest?
A. Migrate to GCP machine types that are a close match to the existing physical machine in terms of the number of CPUs and Memory. Then, scale up or scale down the machine size as needed.
B. Migrate to GCP machine types that have the highest RAM to CPU ratio (highmem instance types).
C. Start with the smallest instances and scale up to a larger machine type until the performance is of the desired standard.
D. Migrate to custom machines in GCP with the same number of vCPUs and Memory as the existing virtual machines. Then, scale up or scale down the machine size as needed.
32. For this question, refer to the Dress4Win case study.
The operations team at Dress4Win have been involved in addressing numerous incidents recently. The operations team believe they could have done a better job if they had better monitoring on their systems and were notified quicker when applications experienced issues. One of the main reasons for delays in the investigation was that logs for each system were stored locally, and they had trouble combining logs from multiple systems to get a unified view of the application. Dress4Win want to avoid a repeat of these issues when they migrate their systems to Google Cloud. What GCP services should they use?
A. Cloud Monitoring, Cloud Trace, Cloud Debugger.
B. Cloud Logging, Cloud Monitoring, Cloud Trace and Cloud Debugger.
C. Error Reporting, Cloud Logging and Cloud Monitoring.
D. Cloud Logging, Cloud Debugger, Error Reporting.
33. For this question, refer to the Dress4Win case study.
The CTO of Dress4Win has signed off on the budget for migration to Google Cloud and has asked teams to get familiar with Google Cloud. The DevOps team manages the deployment of all applications but is inexperienced when it comes to Google Cloud. You are the applications architect and have been approached by the DevOps team to suggest an application they can start migrating to Google Cloud with minimal changes. Their objective is to become familiar with its features, understand the deployment methodologies and develop documentation. What should you recommend?
A. Migrate an application that has several external dependencies.
B. Migrate an application that has no dependencies or minimal internal dependencies.
C. Migrate the MySQL database used for storing user data, inventory and static data.
D. Migrate the three RabbitMQ servers.
34. For this question, refer to the Dress4Win case study.
Dress4Win is partway through the migration to Google Cloud, and their next focus is on migrating their MySQL database to Google Cloud. The operations team is concerned that this may adversely impact their production performance and cause unplanned downtime. How should you migrate the database to Google Cloud while allaying their anxiety over the impact on live traffic?
A. Shutdown MySQL server to take a full backup, export it to Cloud Storage, and create a Cloud SQL for MySQL instance from it.
B. Replicate data from on-premises MySQL database to a Cloud SQL for MySQL replica. Once replication is complete, modify all applications to write to Cloud SQL for MySQL.
C. Create a new Cloud SQL for MySQL instance in Google Cloud Platform. Update all applications to write to both on the on-premises MySQL database and Cloud SQL database. Then, delete the on-premises database.
D. Shutdown MySQL server to take a full backup and export it to Cloud Datastore. Update all applications to write to Cloud Datastore.
35. For this question, refer to the Dress4Win case study.
Dress4Win is partway through the migration to Google Cloud, and their next focus is on migrating their monitoring solution to Google Cloud. A VPN tunnel has already been configured to enable network traffic between the on-premises data centre and the GCP network. The operations team have now created several uptime checks in Cloud Monitoring to monitor the services in both Google Cloud and the on-premises data centre. All uptime checks for services in Google Cloud are healthy, while all uptime checks for services in the on-premises data centre are unhealthy. The operations team have logged into the on-premises VMs and found the applications to be healthy. They have approached you for your assistance in identifying and fixing the issue. What should you advise them?
A. Ask the operations team to install Cloud Monitoring agents on all on-premises VMs.
B. Update on-premises firewall rules to allow traffic from the IP address ranges of the uptime check servers.
C. Update all on-premises application load balancers to pass through requests when the User-Agent HTTP header is GoogleStackdriverMonitoring-UptimeChecks(https://cloud.google.com/monitoring).
D. Update all on-premises application servers to serve requests when the User-Agent HTTP header is GoogleStackdriverMonitoring-UptimeChecks(https://cloud.google.com/monitoring).
36. For this question, refer to the Dress4Win case study.
Dress4Win has partnered with a group of upmarket retailers to identify the next generation of models for their clothing lines. The new scheme allows users who have bought the retailers’ modelling samples to try them on and upload their images. Users signing up to the scheme have to agree to their images being shared with the retailer. You are an app developer at Dress4Win, and you want to ensure that images are stored securely, and users can easily retrieve, update and delete their images with minimal latency. How should you configure the system?
A. 1. Use Google Cloud Storage to save images. 2. Use Firestore (in Datastore mode) to map the customer ID and the location of their images in Google Cloud Storage.
B. 1. Use Google Cloud Storage to save images. 2. Tag each image with key as customer_id and value as the value of unique customer ID.
C. 1. Use Persistent SSDs to save images. Add monitoring to receive alerts when storage is full and add more SSDs. 2. Name the files based on customer ID and a random suffix.
D. 1. Use Persistent SSDs to save images. Add monitoring to receive alerts when storage is full and add more SSDs. 2. Map the customer ID and the location of their images on SSDs in Cloud SQL.
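Note: the "Cloud Storage plus Firestore (Datastore mode)" pattern in option A keeps the image bytes in a bucket and a small lookup record in the database. A minimal Python sketch using the Datastore-mode client; the bucket, kind and property names are assumptions:

    from google.cloud import datastore, storage

    customer_id = "cust-1024"
    image_name = "summer-dress-01.jpg"

    # Store the image itself in Cloud Storage.
    storage_client = storage.Client()
    blob = storage_client.bucket("dress4win-user-images").blob(f"{customer_id}/{image_name}")
    blob.upload_from_filename("/tmp/summer-dress-01.jpg")

    # Record the mapping from customer to object location in Datastore mode.
    ds = datastore.Client()
    key = ds.key("UserImage", f"{customer_id}:{image_name}")
    entity = datastore.Entity(key=key)
    entity.update({
        "customer_id": customer_id,
        "gcs_uri": f"gs://dress4win-user-images/{blob.name}",
    })
    ds.put(entity)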
37. For this question, refer to the Dress4Win case study.
Your company is an industry-leading ISTQB certified software testing firm, and Dress4Win has recently partnered with your company for designing their new testing strategy. Dress4Win’s existing end to end tests cover all their endpoints running in their on-premises data centre, and they have asked you for your suggestion on the changes needed in the test plan to ensure no new issues crop up when they migrate to Google Cloud. What should you suggest?
A. Update the test plan to include stress testing of GCP infrastructure.
B. Update the test plan to include additional unit tests and load tests on production-like traffic.
C. Update the test plan to modify end to end tests for GCP environment.
D. Update the test plan to add canary tests to assess the impact of new releases in the production environment.
38. For this question, refer to the Dress4Win case study.
You work for a company which specializes in setting up resilient and cost-efficient architectures in Cloud Platforms, and Dress4Win have contracted your company to help them lower their Cloud Opex costs. You identified terabytes of audit data in a Google Cloud Storage bucket, and this accounts for 22% of all Cloud costs. Although regulations require Dress4Win to retain their audit logs for 10 years, they are only used if there is an investigation into the company’s finances by the financial ombudsman. What should you do to reduce the storage costs?
A. Transition the data to Coldline Storage class.
B. Transition the data to Nearline Storage class.
C. Migrate the data to BigTable.
D. Migrate the data to BigQuery.
39. For this question, refer to the Dress4Win case study.
Dress4Win’s revenue from its Asian markets has dipped by over 50% in the previous quarter. The simulation testing from various locations in Asia has revealed that 62% of all tests have failed with timeout issues or slow responses. Dress4Win suspects this is because of the latency between its US-based data centre and the customers in Asia. Dress4Win wants to avoid such issues with its new Google Cloud backed solution. What should it do?
A. Configure the Global HTTP(s) load balancer to forward the request to managed instance groups.
B. Set up a custom regional software load balancer in each region. Configure the Global HTTP(s) load balancer to send requests to the region closest to traffic and configure the software load balancer to forward the request in a round-robin pattern to an instance in each zone.
C. Configure the Global HTTP(s) load balancer to forward the request to the nearest region. Provision a VM instance in each zone to protect from zone failures.
D. Set up a custom regional software load balancer in each region. Configure the Global HTTP(s) load balancer to send requests to the region closest to traffic and configure the software load balancer to forward the requests to a regional managed instance group.
4. For this question, refer to the Mountkirk Games case study.
You work for a company which specializes in setting up resilient architectures in Cloud Platforms, and Mountkirk Games have contracted your company to help them address a few niggling issues in their Cloud Platform. Cloud Monitoring dashboards set up by Mountkirk Games indicate that 1% of its game users are shown a Service Unavailable page when trying to log in with their credentials, and 6.7% of users take over 2 minutes to log in. You analyzed the code and found that this error page is displayed when an internal user management service throws an HTTP 503 error. You suspect the issue might be with autoscaling. What should you do next?
A. Ensure the database used for managing user profiles is not down.
B. Ensure the scale-up hasn’t hit the project quota limits.
C. Review recent releases to check for performance issues.
D. Ensure performance testing is not happening in the live environment.
41. You work for a company which specializes in setting up resilient and cost-efficient architectures in Cloud Platforms, and Dress4Win have contracted your company to help them migrate to Google Cloud. Taking into consideration the business and technical requirements, where and how should you deploy the services?
A. 1. Use Cloud Marketplace to provision Tomcat and Nginx on Google Compute Engine. 2. Replace MySQL with Cloud SQL for MySQL. 3. Use the Deployment Manager to provision Jenkins on Google Compute Engine.
B. 1. Use Cloud Marketplace to provision Tomcat and Nginx on Google Compute Engine. 2. Use Cloud Marketplace to provision MySQL server. 3. Use the Deployment Manager to provision Jenkins on Google Compute Engine.
C. 1. Migrate applications from Tomcat/Nginx to Google App Engine Standard. 2. Replace on-premises MySQL with Cloud Datastore. 3. Use Cloud Marketplace to provision Jenkins on Google Compute Engine.
D. 1. Migrate applications from Tomcat/Nginx to Google App Engine Standard. 2. Use Cloud Marketplace to provision MySQL Server. 3. Use Cloud Marketplace to provision Jenkins on Google Compute Engine.
42. For this question, refer to the Dress4Win case study.
You work for a company which specializes in setting up resilient and cost-efficient architectures in Cloud Platforms, and Dress4Win have contracted your company to help migrate to Google Cloud. The CTO at Dress4Win is keen on migrating the existing solution to Google Cloud as soon as possible. Where a “lift and shift” approach is not possible, the CTO is prepared to sign off an additional budget to redesign the required components to work in a Cloud-native way. Which of the below should you recommend Dress4Win do?
A. Migrate the Tomcat/Nginx applications to App Engine Standard service.
B. Configure RabbitMQ on a regional unmanaged instance group with an instance in each zone.
C. Replace Hadoop/Spark servers with Cloud Dataproc cluster.
D. Use custom machine types to deploy bastion hosts, security scanners and Jenkins for continuous integration.
43. For this question, refer to the Dress4Win case study.
Dress4Win failed to provide visibility into all administrative actions on the components/artefacts in its production solution that handle customer PII data. This has resulted in Dress4Win failing an audit and subsequently losing revenue. Dress4Win have contracted your company, which specializes in setting up resilient and cost-efficient architectures in Cloud Platforms, to help migrate their solution to Google Cloud and has asked you to identify what can be done in Google Cloud to satisfy the audit requirements. All modifications to the configuration and the metadata of individual components or GCP services that handle PII data are in the scope of the audit. What should you do?
A. Enable Cloud Trace on all web applications, identify the user identities and write them to logs.
B. Set up a dashboard in Cloud Monitoring based on the default metrics captured.
C. Enable Cloud Identity-Aware Proxy (IAP) on all web applications.
D. Pick up this information from Cloud Logging Console and Activity Page in GCP.
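Note: modifications to the configuration or metadata of GCP resources are recorded automatically in Admin Activity audit logs, which is the data behind the Activity page. A minimal Python sketch that lists recent admin-activity entries; the project id is an assumption, and the payload fields follow the AuditLog format:

    from google.cloud import logging

    client = logging.Client(project="dress4win-prod")

    # Admin Activity audit logs record who changed what, and when.
    log_filter = (
        'logName="projects/dress4win-prod/logs/cloudaudit.googleapis.com%2Factivity"'
    )
    for entry in client.list_entries(
        filter_=log_filter, order_by=logging.DESCENDING, max_results=20
    ):
        payload = entry.payload or {}
        print(entry.timestamp, payload.get("methodName"), payload.get("resourceName"))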
44. For this question, refer to the Dress4Win case study.
Dress4Win relies on the Active Directory structure (users and groups) to enable secure access to applications and VMs. While the current approach works, it is cumbersome and has not been maintained over the years resulting in a proliferation of groups in AD. The team that manages AD is unaware of the purpose of more than half of all AD groups, and they now assign applications directly to users instead of using AD groups. You are asked to recommend the simplest design to handle identity and access management when the solution moves to Google Cloud. What should you do?
A. Create custom IAM roles with the relevant access and grant them to the relevant Google Groups. Encrypt objects with Customer Supplied Encryption Key (CSEK) when uploading to Cloud Storage bucket.
B. Create custom IAM roles with the relevant access and grant them to the relevant Google Groups. Enable the default encryption feature in Cloud Storage to encrypt all uploads automatically.
C. Grant the predefined IAM roles to the relevant Google Groups. Rely on the encryption at rest by default feature of Google Cloud Storage to encrypt objects at rest.
D. Grant the predefined IAM roles to the relevant Google Groups. Encrypt objects with Customer Supplied Encryption Key (CSEK) when uploading to Cloud Storage bucket.
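Note: with a Customer Supplied Encryption Key (CSEK), the raw AES-256 key is passed on every object operation and Google stores only a hash of it. A minimal Python sketch; the bucket and object names are assumptions, and in practice the key would come from your own key management rather than being generated at upload time:

    import os

    from google.cloud import storage

    # 32-byte AES-256 key supplied by the customer.
    csek = os.urandom(32)

    client = storage.Client()
    bucket = client.bucket("dress4win-secure-docs")

    blob = bucket.blob("finance/audit-2014.pdf", encryption_key=csek)
    blob.upload_from_filename("/tmp/audit-2014.pdf")

    # The same key must be supplied again to read the object back.
    download = bucket.blob("finance/audit-2014.pdf", encryption_key=csek)
    download.download_to_filename("/tmp/audit-2014-copy.pdf")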
45. For this question, refer to the Dress4Win case study.
Dress4Win’s Cloud migration project manager has prepared a plan to start the migration work in 4 months. Your Team Lead is keen on driving strategic architectural changes in the existing on-premises solution to simplify the migration work to Google Cloud while aligning with the business requirements. What can you do to enable this?
A. Replace RabbitMQ servers with on-prem Google Pub/Sub.
B. Migrate MySQL to a version supported by Cloud SQL for MySQL.
C. Resize all VMs to match the sizing of predefined machine types in Google Cloud.
D. Migrate applications to GKE on-prem.
5. For this question, refer to the Mountkirk Games case study.
You have been hired as a Cloud Security Administrator at Mountkirk Games to improve the security landscape in their GCP platform. You notice that the development team and testing team work together to deliver new features, and they must have access to each other’s environments. A concerning observation is that they both also have access to staging and production environments and you are worried that they may accidentally break production applications. Further talks with the development team have revealed that one of the staging environments used for performance testing needs to import data from the production environment every night. What should you do to isolate production environments from all others?
A. Deploy development and test resources to one project. Deploy staging and production resources to another project.
B. Deploy development and test resources in one VPC. Deploy staging and production resources in another VPC.
C. Deploy development and test resources in one subnet. Deploy staging and production resources in another subnet.
D. Deploy development, test, staging and production resources in their respective projects.
6. For this question, refer to the Mountkirk Games case study.
Taking into consideration the technical requirements outlined in Mountkirk Games case study, what combination of services would you recommend for their batch and real-time analytics platform?
A. Kubernetes Engine, Container Registry, Cloud Pub/Sub, and Cloud SQL.
B. Cloud Storage, Cloud Pub/Sub, Dataflow, and BigQuery.
C. Cloud SQL for MySQL, Cloud Pub/Sub, and Dataflow.
D. Cloud Dataproc, Cloud Datalab, Cloud SQL and Dataflow.
E. Cloud Pub/Sub, Cloud Storage, and Cloud Dataproc.
7. For this question, refer to the Mountkirk Games case study.
Taking into consideration the technical requirements outlined in Mountkirk Games case study, what steps should you execute when migrating the batch and real-time game analytics solution to Google Cloud Platform? (Choose two)
A. Assess the impact of moving the current batch ETL code to Cloud Dataflow.
B. Denormalize data in BigQuery for better performance.
C. Migrate data from MySQL to Cloud SQL for MySQL.
D. Carry out performance testing in Cloud SQL with 10 TB of analytics data.
E. Implement measures to defend against DDoS & SQL injection attacks when uploading files to Cloud Storage.
8. For this question, refer to the Mountkirk Games case study.
Taking into consideration the technical requirements for the game backend platform as well as the business requirements, how should you design the game backend on Google Cloud platform?
A. Use Google Compute Engine preemptible instances with Network Load Balancer.
B. Use Google Compute Engine non-preemptible instances with Network Load Balancer.
C. Use Google Compute Engine preemptible instances in a Managed Instances Group (MIG) with autoscaling and Global HTTP(s) Load Balancer.
D. Use Google Compute Engine non-preemptible instances in a Managed Instances Group (MIG) with autoscaling and Global HTTP(s) Load Balancer.
9. For this question, refer to the Mountkirk Games case study.
The CTO of Mountkirk Games is concerned that the existing Cloud solution may lack the flexibility to embrace the next wave of transformations in cloud computing and technology advancements. He has asked you for your recommendation on implementing changes now that would help the business in the future. What should you recommend?
A. Store more data and use it as training data for machine learning.
B. Migrate to GKE for better autoscaling.
C. Enable CI/CD integration to improve deployment velocity, agility and reaction to change.
D. Restructure the tables in MySQL database with a schema versioning tool to make it easier to support new features in future.
E. Patch servers frequently and stay on the latest supported patch levels and kernel version.