GCP Professional Cloud Architect Practice Exam Part 2
Notes: This Google Professional Cloud Architect practice exam will familiarize you with the types of questions you may encounter on the certification exam and help you determine your readiness, or whether you need more preparation and/or experience. Successful completion of the practice exam does not guarantee you will pass the certification exam, as the actual exam is longer and covers a wider range of topics.
1. You are developing an application to handle banking transactions such as credits and debits. The application requirements state you need to ensure all transactions are processed and that they are processed in the same order they are received. You also need to ensure each transaction is processed exactly once. Which GCP services should you use to ensure exactly-once, first-in-first-out (FIFO) processing of transactions?
A. Use Cloud Pub/Sub for FIFO and Cloud SQL for exactly-once processing.
B. Use Cloud Pub/Sub for FIFO and Cloud Monitoring for exactly-once processing.
C. Use Cloud Pub/Sub for FIFO and exactly-once processing.
D. Use Cloud Pub/Sub for FIFO and Cloud Dataflow for exactly-once processing.
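For reference, the two properties the question asks about can be sketched in plain Python with no GCP client libraries: FIFO delivery is modelled by a queue that preserves publish order (as a Pub/Sub ordering key would), and exactly-once processing by deduplicating on message ID. The class and IDs below are purely illustrative.

```python
# Illustrative sketch only: an in-memory stand-in for an ordered topic plus
# message-ID deduplication, which together give FIFO, exactly-once processing.
from collections import deque

class InMemoryOrderedQueue:
    """Stand-in for a Pub/Sub topic with a single ordering key."""
    def __init__(self):
        self._queue = deque()

    def publish(self, message_id, payload):
        self._queue.append((message_id, payload))  # FIFO: append at the tail

    def pull(self):
        return self._queue.popleft() if self._queue else None

def process_exactly_once(queue, handler):
    """Drain the queue in order, skipping any message ID already processed."""
    seen = set()
    while (msg := queue.pull()) is not None:
        message_id, payload = msg
        if message_id in seen:   # duplicate delivery: ignore
            continue
        seen.add(message_id)
        handler(payload)

q = InMemoryOrderedQueue()
q.publish("tx-1", "credit 100")
q.publish("tx-2", "debit 40")
q.publish("tx-1", "credit 100")  # duplicate publish of tx-1

processed = []
process_exactly_once(q, processed.append)
print(processed)  # → ['credit 100', 'debit 40']
```

In the real services, Pub/Sub ordering keys provide the in-order delivery and Dataflow (or Pub/Sub exactly-once delivery) provides the deduplication.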
2. Your company data centre is running out of space, and you have been asked to identify the best way to transfer 100 TB of audit logs to Cloud Storage. You want to follow Google-recommended practices. What should you do?
A. Use a transfer appliance to migrate data and decrypt the data in Cloud Storage using a transfer appliance rehydrator.
B. Use a transfer appliance to migrate data and decrypt the data in Cloud Storage using Cloud Dataprep.
C. Use gsutil to upload data to Cloud Storage using resumable transfers.
D. Use gsutil streaming uploads to Cloud Storage.
3. Your company runs a very successful social media application and plans to migrate to Google Cloud. Your company needs to store a variety of data such as customer session state, images, VM boot volumes, VM data volumes, application logs etc. Which combination of GCP services should you use?
A. 1. Use Local SSD for storing customer session state. 2. Use Cloud Storage Bucket with Lifecycle managed rules for application logs, images, VM boot volumes and VM data volumes.
B. 1. Use Memcache backed by Cloud SQL for storing customer session state. 2. Use instances with local SSDs for storing VM boot volumes and VM data volumes. 3. Use Cloud Storage Bucket with Lifecycle managed rules for storing application logs and images.
C. 1. Use Memcache backed by Cloud Datastore for storing customer session state. 2. Use Cloud Storage Bucket with Lifecycle managed rules for application logs, images, VM boot volumes and VM data volumes.
D. 1. Use Memcache backed by Persistent Disk SSDs for storing customer session state. 2. Use instances with local SSDs for storing VM boot volumes and VM data volumes. 3. Use Cloud Storage Bucket with Lifecycle managed rules for storing application logs and images.
4. Your company would like to trial Google Cloud Platform while minimizing cost and has asked you to suggest a managed compute service that automatically scales to zero so that you do not incur costs in the absence of activity outside the regular business hours. What should you recommend?
A. App Engine flexible environment
B. Compute Engine
C. Cloud Functions
D. Kubernetes Engine
5. You are managing a secure application that runs on several VMs, autoscales based on traffic and handles customer PII data. Your security team has mandated that all but essential traffic between instances is blocked. How should you design the network taking into consideration the autoscaling nature of the application – which prevents you from explicitly using Static IPs?
A. Configure Cloud DNS to allow just the essential traffic between these VMs.
B. Update VMs' service accounts to allow traffic to and from other VMs.
C. Add network tags to VMs and set up firewall rules based on these network tags to allow just the essential traffic.
D. Move VMs to separate VPCs.
6. Your Team Lead has asked you for your suggestion on configuring a GKE cluster to scale cluster nodes up and down based on CPU utilization. What should you suggest?
A. Enable autoscaling on Managed Instance Group (MIG) for the GKE cluster. Enable Horizontal Pod Autoscaler based on CPU utilization.
B. Enable autoscaling on Managed Instance Group (MIG) for the GKE cluster. Update deployment to set appropriate values for maxUnavailable and maxSurge.
C. Enable GKE Cluster Autoscaler. Enable Horizontal Pod Autoscaler based on CPU utilization.
D. Enable GKE Cluster Autoscaler. Update deployment to set appropriate values for maxUnavailable and maxSurge.
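For reference, the Horizontal Pod Autoscaler half of this setup can be expressed as a manifest like the one below (a minimal sketch; the names `demo-hpa` and `demo-deployment` are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa               # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-deployment      # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # add pods when average CPU exceeds 60%
```

The HPA adds or removes pods based on CPU; the GKE Cluster Autoscaler then adds or removes nodes when pods cannot be scheduled or nodes sit idle.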
7. Your company stores its customer data in several Google Cloud projects and uses BigQuery as its enterprise data warehouse. Although data is stored in different projects, your finance team has requested you to consolidate all querying costs in a single project. The security team has suggested enabling query only access, but not edit access, to the datasets for analytics users. What should you do?
A. 1. In the GCP Billing Project, grant BigQuery dataViewer role to the analytics user group. 2. In projects that contain the data, grant BigQuery user role to the analytics user group.
B. 1. In the GCP Billing Project, grant BigQuery dataViewer role to the analytics user group. 2. In projects that contain the data, grant BigQuery jobUser role to the analytics user group.
C. 1. In the GCP Billing Project, grant BigQuery user role to the analytics user group. 2. In projects that contain the data, grant BigQuery dataViewer role to the analytics user group.
D. 1. In the GCP Billing Project, grant BigQuery jobUser role to the analytics user group. 2. In projects that contain the data, grant BigQuery dataViewer role to the analytics user group.
8. A business-critical application deployed to Google Kubernetes Engine (GKE) is experiencing issues connecting to a Cloud SQL database. The primary pods use a sidecar container to establish a connection to the database. You have been asked to carry out a post-mortem of the incident. What should you do?
A. Ensure the sidecar container still has Container Registry Editor role.
B. Check GKE & Cloud SQL logs in Cloud Logging console.
C. Restart all primary pods. If the issue persists, rollback Cloud SQL to the latest full backup.
D. Restart the database.
9. Your company runs an application on Compute Engine that collects users' monthly subscription fees. The application pushes each user's credit card details to Cloud Pub/Sub for subsequent payment processing. How should you configure the IAM access between Cloud Pub/Sub and the Compute Engine VMs?
A. Modify VM access scopes to enable Cloud Pub/Sub IAM roles.
B. Grant the required Cloud Pub/Sub IAM roles to the VM service account.
C. Modify application in Compute Engine to instead call a Cloud Function that has the appropriate Cloud Pub/Sub IAM roles.
D. Enable OAuth 2.0 for Cloud Pub/Sub and configure the Compute Engine VMs to provide a valid access token.
10. You developed bug fixes for a mission-critical application running on the App Engine standard environment. Your change management board has advised caution and asked you to validate the bug fixes with live traffic on a small set of users before replacing the current version. What should you do?
A. Using Instance Group Updater (IGU), deploy a partial rollout. After validating, deploy a full rollout.
B. Deploy the fix as a new App Engine Application in the same project and split traffic between the two applications using HTTP(s) load balancer.
C. Deploy the fix as a new App Engine Application in a new VPC and split traffic between the two applications using HTTP(s) load balancer.
D. Deploy a new version in the App Engine application and use traffic splitting to distribute traffic across the old and new versions.
11. Your company has a hybrid architecture with workloads that run primarily from the data centre and failover to GCP when needed. The failover process requires moving large files from the data centre to GCP in a short period. Your IT Director has asked you to ensure the disaster recovery is resilient, and you identified the network connection between your data centre and GCP
network as a possible single point of failure. How should you design the network connection between the data centre and the GCP network to establish a secure and redundant connection?
A. Use Dedicated Interconnect to transfer the large files to Google Cloud Platform. Configure Cloud VPN to take over if the Interconnect connection fails.
B. Use a Transfer Appliance to transfer the large files to Google Cloud Platform. Configure Cloud VPN to take over if the transfer appliance fails.
C. Use a Transfer Appliance to transfer the large files to Google Cloud Platform. Configure Direct Peering to take over if the transfer appliance fails.
D. Use Dedicated Interconnect to transfer the large files to Google Cloud Platform. Configure Direct Peering to take over if the Interconnect connection fails.
12. You are deploying a GPS tracking application on App Engine Standard. The GPS tracking application uses Cloud SQL as the backend. Some of the queries are running very slow, and your Team Lead has asked you to explore setting up a caching layer to speed up the application. What should you do?
A. Use Memorystore for Memcached and set service level to dedicated. Create a key from the hash of the query. Modify application logic to check the key in the cache before querying Cloud SQL. If the key doesn’t exist, query Cloud SQL and populate cache before returning the result to the application.
B. Use Memorystore for Memcached and set service level to shared. Use App Engine Cron Service to populate the cache with keys containing query results every minute. Modify application logic to check the key in the cache before querying Cloud SQL.
C. Use Memorystore for Memcached and set service level to shared. Create a key from the hash of the query. Modify application logic to check the key in the cache before querying Cloud SQL.
D. Use Memorystore for Memcached and set service level to dedicated. Use App Engine Cron Service to populate the cache with keys containing query results every minute. Modify application logic to check the key in the cache before querying Cloud SQL. If the key doesn’t exist, query Cloud SQL and populate cache before returning the result to the application.
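For reference, the cache-aside pattern described in these options can be sketched in plain Python. A dict stands in for Memcached, a function call stands in for a Cloud SQL round trip, and the cache key is a hash of the query text; everything here is illustrative.

```python
# Cache-aside sketch: check the cache first, and on a miss query the
# "database" and populate the cache before returning the result.
import hashlib

cache = {}      # stand-in for Memorystore/Memcached
db_calls = 0    # counts how often we actually hit the "database"

def run_query(sql):
    """Stand-in for an expensive Cloud SQL round trip."""
    global db_calls
    db_calls += 1
    return f"result-of({sql})"

def cached_query(sql):
    key = hashlib.sha256(sql.encode()).hexdigest()  # key from query hash
    if key in cache:          # cache hit: skip the database entirely
        return cache[key]
    result = run_query(sql)   # cache miss: query, then populate
    cache[key] = result
    return result

first = cached_query("SELECT * FROM positions WHERE device_id = 7")
second = cached_query("SELECT * FROM positions WHERE device_id = 7")
print(db_calls)  # → 1 (the second call was served from cache)
```

The key design point is that the miss path populates the cache, so repeated slow queries only hit Cloud SQL once per cache lifetime.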
13. A mission-critical application has scaling issues, and your company has decided to migrate from on-premises to GKE to fix this issue. The application, when deployed to GKE, must serve requests over HTTPS and scale up/down based on traffic. What should you do?
A. Use Kubernetes Ingress Resource and enable Compute Engine Managed Instance Group (MIG) autoscaling.
B. Use Kubernetes Ingress Resource and enable GKE Cluster Autoscaling as well as Horizontal Pod Autoscaling.
C. Use Kubernetes Service of type LoadBalancer and enable GKE Cluster Autoscaling as well as Horizontal Pod Autoscaling.
D. Use Kubernetes Service of type LoadBalancer and enable Compute Engine Managed Instance Group (MIG) autoscaling.
14. You are migrating an application to Google Cloud. The application relies on Microsoft SQL Server, and due to the mission-critical nature of the workload, the application should have no downtime in case of zonal outages with GCP. How should you configure the database?
A. Migrate to a regional Cloud Spanner instance.
B. Migrate the SQL Server database onto two Google Compute Engine instances in different zones and enable SQL Server Always-On-Availability- Groups with Windows failover clustering.
C. Migrate the SQL Server database onto two Google Compute Engine instances in different subnets and enable SQL Server Always-On-Availability-Groups with Windows failover clustering.
D. Migrate to a high availability enabled Cloud SQL instance.
15. A critical application recently suffered a regional outage, costing your company valuable revenue. You have been asked for a recommendation on improving the existing testing and disaster recovery processes, and on preventing such occurrences in Google Cloud. What should you recommend?
A. Automate provisioning of GCP services using custom gcloud scripts. Monitor and debug tests using Activity Logs.
B. Automate provisioning of GCP services using deployment manager templates. Monitor and debug tests using Activity Logs.
C. Automate provisioning of GCP services using custom gcloud scripts. Monitor and debug tests using Cloud Logging and Cloud Monitoring.
D. Automate provisioning of GCP services using deployment manager templates. Monitor and debug tests using Cloud Logging and Cloud Monitoring.
16. Your company runs several successful mobile games from your on-premises data centre and plans to use GCP for machine learning to identify improvements and new opportunities. The existing games generate 10 TB of analytics data each day. Your company currently stores three months' analytics data (approx. 900 TB) in a highly available NAS in your data centre and needs to transfer this data to GCP as part of the initial data load, as well as transfer the data generated daily. Your data centre is connected to the network on a 100 MBps line. What should you do?
A. Use a transfer appliance to transfer archived analytics data. Work with your telco partner and networks team to establish a Dedicated Interconnect connection to Google Cloud and upload files daily.
B. Compress all files and upload using the gsutil.
C. Use a transfer appliance to transfer archived analytics data. Set up multiple Cloud VPN tunnels and upload files daily.
D. Use a transfer appliance to transfer archived analytics data. Set up a single Cloud VPN tunnel and upload files daily.
18. Your Cloud Security team has asked you to centralize the collection of all VM system logs and all admin activity logs in your project. What should you do?
A. Admin activity logs are collected automatically by Cloud Logging for most services. To collect system logs, you need to install Cloud Logging agent on each VM.
B. Install a Cloud Logging agent on a separate VM. Direct the VMs, admin activity logs and VM system logs to send all logs to it.
C. Install a custom log forwarder on a separate VM and direct the VMs to send all logs to it.
D. Cloud Logging automatically collects the two sets of logs.
19. Your company develops portable software that is used by customers all over the world. Current and previous versions of software can be downloaded from a dedicated website running on compute engine in US-Central. Some customers have complained about high latency when downloading the software. You want to minimize latency for all your customers. You want to follow Google recommended practices. How should you store the files?
A. Save current and all previous versions of portable software files in Multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.
B. Save current and all previous versions of portable software files in multiple Regional Cloud Storage buckets, one bucket per zone per region.
C. Save current and all previous versions of portable software files in a single Regional Cloud Storage bucket, one bucket per zone of the region.
D. Save current and all previous versions of portable software files in a Multi-Regional Cloud Storage bucket.
21. Regulatory requirements mandate your company to retain PII data of customers from an acquired company for at least four years. You want to put a solution in place to securely retain this data and delete it when permitted by the regulations. What should you do?
A. Import the acquired PII data to Cloud Storage and use object lifecycle management rules to delete files when they expire.
B. Import the acquired PII data to Cloud Storage and use App Engine Cron Service with Cloud Functions to enable daily deletion of all expired data.
C. De-Identify PII data using the Cloud Data Loss Prevention API and store it forever.
D. Store PII data in Google Sheets and manually delete records daily as they expire.
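For reference, a Cloud Storage lifecycle rule that deletes objects after roughly four years could look like the fragment below (a sketch; 1461 days approximates four years including one leap day, and the exact retention period would come from the regulation):

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 1461}
    }
  ]
}
```

Saved as `lifecycle.json`, a policy like this can be applied to a bucket with `gsutil lifecycle set lifecycle.json gs://BUCKET_NAME`.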
22. You developed an application that recognizes famous landmarks from uploaded photos. You want to run a free trial for 24 hours and open up the application to all users, including users that don't have a Google account. What should you do?
A. Generate a signed URL on a Cloud Storage bucket with expiration set to 24 hours and have users upload their photos using this signed URL.
B. Deploy the application to Google Compute Engine and terminate the instances after 24 hours. Use Cloud Identity to authenticate users.
C. Deploy the application to Google Compute Engine and use Cloud Identity to authenticate users.
D. Enable users to upload their photos to a public Cloud Storage bucket and set a password on the bucket after the trial.
23. Your company recently acquired a competitor, and you have been tasked with migrating one of their legacy applications into your company's Google Cloud project. You noticed the legacy application has several OS dependencies, and scale-up is delayed due to a long startup time. You want to deploy the application on Compute Engine and make use of a managed instance group so that it can scale based on traffic. You also want to minimize the startup time so that scale-up happens more quickly. What should you do?
A. Create a startup script to install OS dependencies and automate the creation of Managed Instance Group (MIG) using terraform.
B. Use the Deployment Manager to automate the creation of Managed Instance Group (MIG). Use Ansible to install OS dependencies.
C. Create a custom GCP VM image with all OS dependencies preinstalled. Use the Deployment Manager to automate the creation of Managed Instance Group (MIG) with the custom image.
D. Use Puppet to automate the creation of Managed Instance Group (MIG) and installation of OS dependencies.
25. Your company operates a very successful mobile app that lets users superimpose stock images of their favourite pets with their uploaded images. You use a combination of Google Cloud Storage and Vision AI to achieve this. Recently, the photo uploads from mobile user devices to Google Cloud storage have started throwing HTTP errors with status codes of 429 and 5xx. What should you do to fix this issue?
A. Use Cloud Storage gRPC endpoints.
B. Enable Geo-redundancy by moving Cloud Storage bucket from Regional to Multi-regional.
C. Make requests to Cloud Storage only if its status is healthy.
D. Retry failures with exponential backoff.
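For reference, retry with exponential backoff (Google's recommended handling for 429 and 5xx responses) can be sketched in plain Python. The `request` callable and status codes below simulate the failing upload; delays are kept tiny for illustration.

```python
# Exponential backoff sketch: retry retryable errors (429 and 5xx), doubling
# the wait on each attempt and adding a small random jitter.
import random
import time

def call_with_backoff(request, max_retries=5, base_delay=0.01):
    for attempt in range(max_retries):
        status = request()
        if not (status == 429 or status >= 500):
            return status  # success or a non-retryable client error
        # Retryable error: wait base_delay * 2^attempt, plus jitter.
        time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
    return status          # gave up after max_retries

# Simulated upload that fails twice (429 then 503), then succeeds.
responses = iter([429, 503, 200])
result = call_with_backoff(lambda: next(responses))
print(result)  # → 200
```

The jitter spreads retries from many clients over time, which prevents the synchronized retry storms that make 429/5xx errors worse.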
26. Your auditors require you to supply them the number of queries run by each user in BigQuery over the last 12 months. You want to do this as efficiently as possible. What should you do?
A. In Cloud Audit Logs, apply a filter on BigQuery query operation to get the required information.
B. Use Google Data Studio BigQuery connector to access data from your BigQuery tables within Google Data Studio. Create dimensions, metrics and reports to obtain this information.
C. Execute bq show command to list all jobs and execute bq ls for each job. Aggregate the output by user_id and obtain the information.
D. Execute a query on BigQuery JOBS table to get this information.
27. An application you deployed to Google Cloud uses a single Cloud SQL for MySQL instance in us-west1-a zone. What should you do to ensure high availability?
A. Create a MySQL failover replica in us-east1 (different region).
B. Create a MySQL failover replica in us-west1-b (same region but different zone).
C. Create a MySQL read replica in us-east1 (different region).
D. Create a MySQL read replica in us-west1-b (same region but different zone).
28. Your company runs a parcel tracking application on App Engine Standard service. The application requires ACID transaction support and uses Cloud Datastore as its persistence layer. You have been asked to identify an efficient way to retrieve multiple parcels (datastore root entities) based on the relevant tracking IDs (datastore identifiers) while minimizing overhead in the calls from App Engine to Datastore. What should you do?
A. Create a Key object for each tracking ID. Perform multiple get operations – one for each tracking ID.
B. Create a Key object for each tracking ID. Perform a single batch get operation.
C. Generate a query filter for each tracking ID. Perform multiple query operations, one for each entity.
D. Generate a query filter to include all tracking IDs. Perform a single batch query operation.
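For reference, the difference between individual gets and a single batch get can be shown with a plain-Python stand-in for Datastore that counts round trips. The store contents and function names below are purely illustrative.

```python
# Sketch: N individual key lookups cost N round trips, while a batch get
# fetches every key in a single round trip.
store = {"TRK-1": "parcel-1", "TRK-2": "parcel-2", "TRK-3": "parcel-3"}
round_trips = 0

def get(key):
    """Stand-in for a single-key Datastore get (one round trip)."""
    global round_trips
    round_trips += 1
    return store[key]

def get_multi(keys):
    """Stand-in for a batch get (one round trip for the whole batch)."""
    global round_trips
    round_trips += 1
    return [store[k] for k in keys]

ids = ["TRK-1", "TRK-2", "TRK-3"]
individually = [get(i) for i in ids]  # 3 round trips
batched = get_multi(ids)              # 1 more round trip
print(round_trips)  # → 4 (3 single gets + 1 batch get)
```

With keyed root entities, the batch get is both strongly consistent and minimizes App Engine-to-Datastore call overhead.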
29. You need to install a legacy software on a compute engine instance that has no access to the internet. Your networks team have not yet created a VPN connection between Google network and the on-premises network. How can you transfer the software binary from on-premises to Google Cloud so that you can install the legacy software on the VM?
A. From the on-premises network, upload the file to Cloud Source Repositories. Provision the VM with an internal IP address in a subnet with Private Google Access enabled and run gsutil to copy the file.
B. From the on-premises network, upload the file to a bucket in Cloud Storage. Enable a firewall rule to block all but traffic from Cloud Storage IP range and run gsutil to copy the file.
C. From the on-premises network, upload the file to a bucket in Cloud Storage. Provision the VM with an internal IP address in a subnet with Private Google Access enabled and run gsutil to copy the file.
D. From the on-premises network, upload the file to Cloud Source Repositories. Enable a firewall rule to block all but traffic from Cloud Source Repositories IP range and run gsutil to copy the file.
30. You are designing an application that handles customer PII data, and your compliance department has asked you to ensure the solution is compliant with the European Union’s GDPR. What should you do?
A. Limit your application to native GCP services & APIs that are signed off for GDPR compliance.
B. Design a robust testing strategy using Cloud Security Scanner to pick up GDPR compliance gaps in GCP services.
C. Google complies with the GDPR in relation to the processing of customer personal data across Google Cloud Platform. Your company should also make sure your web application meets GDPR data protection requirements.
D. Turn on GDPR compliance setting for each GCP service you plan to use.
31. Your company which operates care homes country-wide has decided to migrate its batch workloads to GCP. The batch workloads are not time-critical and can be restarted if interrupted. The local regulations require you to use services that are HIPAA compliant. Which GCP services should your company use while ensuring service costs are minimized?
A. Use preemptible compute instances. Stop using GCP Services/APIs that are not compliant with HIPAA and disable them altogether.
B. Use standard compute instances. Stop using GCP Services/APIs that are not compliant with HIPAA.
C. Use standard compute instances. Stop using GCP Services/APIs that are not compliant with HIPAA and disable them altogether.
D. Use preemptible compute instances. Stop using GCP Services/APIs that are not compliant with HIPAA.
32. Your company has started its Cloud migration journey. The first phase of migration involves moving over internal (staff-only) applications to Google Cloud. These internal applications depend on Active Directory (AD) for user sign-in, but the Active Directory is not scheduled to be migrated until next year. You want to minimize the effort. What should you do?
A. Set up a new replica AD domain controller in a Google Compute Engine (GCE) instance and configure Google Cloud Directory Sync (GCDS) to replicate on-prem AD to the replica in GCE.
B. Configure Google Cloud Directory Sync (GCDS) to sync AD usernames to cloud identities in GCP and configure applications to use SAML SSO with Cloud Identity as the Identity Provider (IdP).
C. Configure Admin Directory API to validate credentials against the AD domain controller.
D. Configure the identity provider in Cloud Identity-Aware Proxy to use the on-prem AD domain controller.
33. Your company runs mobile gaming servers and lets individual app developers host their games. Some of the games have been hugely popular, causing the gaming servers to go offline. Your company wants to capture vast quantities of key performance indicators (KPIs) from the gaming servers and monitor these KPIs in real-time with low latency to identify the games which are taking down the servers. What should you do?
A. Store KPIs in Google Bigtable and visualize KPIs in Google Data Studio.
B. Save KPIs in Cloud Datastore and visualize KPIs in Cloud Datalab.
C. Store KPIs as custom metrics in Cloud Monitoring, and build dashboards in Cloud Monitoring to visualize KPIs.
D. Push KPI files to Cloud Storage hourly, use BigQuery load jobs to ingest them and visualize KPIs in Google Data Studio.
34. Your company is partway through migration to Google Cloud. All compute workloads are scheduled to be migrated to Google Compute Engine this month, but they all depend on Active Directory (AD), which isn't scheduled for migration until next year. How should you configure the firewall rules so that all compute engine instances can reach your data centre to connect to the Active Directory while denying all other outbound traffic from compute engine instances?
A. 1. Create an egress rule with priority 2000 to allow AD traffic for all instances. 2. Rely on the implied deny egress rule with priority 200 to block all other traffic.
B. 1. Create an egress rule with priority 200 to allow AD traffic for all instances. 2. Rely on the implied deny egress rule with priority 2000 to block all other traffic.
C. 1. Create an egress rule with priority 2000 to allow AD traffic for all instances. 2. Create an egress rule with priority 200 to block all other traffic.
D. 1. Create an egress rule with priority 200 to allow AD traffic for all instances. 2. Create an egress rule with priority 2000 to block all other traffic.
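For reference, GCP firewall rules resolve conflicts by priority number, where the LOWEST number wins (0 is highest priority, 65535 lowest). That ordering can be sketched in plain Python; the rule set and CIDR below are hypothetical.

```python
# Sketch of firewall rule resolution: of all matching rules, the one with
# the lowest priority number determines the action.
def evaluate(rules, dest):
    matching = [r for r in rules if r["match"](dest)]
    return min(matching, key=lambda r: r["priority"])["action"]

rules = [
    {"priority": 200, "action": "allow",
     "match": lambda d: d == "10.0.0.0/24"},  # hypothetical on-prem AD subnet
    {"priority": 2000, "action": "deny",
     "match": lambda d: True},                # catch-all deny for other egress
]

print(evaluate(rules, "10.0.0.0/24"))  # → allow (AD traffic passes)
print(evaluate(rules, "0.0.0.0/0"))    # → deny  (everything else is blocked)
```

Note that in a real VPC the allow rule must carry the lower (stronger) number than the catch-all deny, which is why the relative priorities in the options matter.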
35. All internal applications in your company depend on a legacy staff Single Sign On (SSO) solution for authentication and authorization. The SSO application is deployed in a regional managed instance group (MIG), exposes public HTTPS REST endpoints and relies on a Cloud SQL instance to validate user information. How should you test the resilience of this system?
A. Work with a third-party company specializing in web scraping to compare and detect users' credentials exposed in public breaches.
B. Configure Intrusion Detection Management (IDM) and Intrusion Prevention Management (IPM) to detect and prevent unauthorized and suspicious logins.
C. Update the existing system to add Cloud SQL read replica in a different zone to make the system immune from GCP zone failures and work with your operations team to validate the failover works as expected.
D. Work with your operations team and shut down all instances in a zone to simulate a disaster scenario and check if the failover works.
36. You deployed an application on Google Compute Engine, but scaling the application is problematic because it requires a consistent set of hostnames. You plan to migrate this application to GKE to overcome the scaling issue. What GKE feature should you use to enable a consistent set of hostnames?
A. StatefulSets
B. RBAC (Role-based access control) Cluster Role and Cluster Role Binding
C. Persistent Volumes and Claims
D. Use hostname environment variable inside containers
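For reference, a StatefulSet gives each pod a stable, ordered hostname (`<name>-0`, `<name>-1`, …) backed by a headless Service. A minimal sketch, with hypothetical names and image:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-app                 # hypothetical name
spec:
  serviceName: demo-app          # headless Service that owns the pod DNS records
  replicas: 3
  selector:
    matchLabels: {app: demo-app}
  template:
    metadata:
      labels: {app: demo-app}
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/demo:1.0   # hypothetical image
```

The three pods would be named demo-app-0, demo-app-1 and demo-app-2, each reachable at a stable DNS name such as demo-app-0.demo-app.NAMESPACE.svc.cluster.local, which survives rescheduling.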
37. Your company’s stock market recommendations application has garnered good feedback among clients, and your CEO wants to use this as a launchpad to develop an even better machine learning-driven recommendations application. The CTO has asked you for your recommendation on how to improve the machine learning results over time. You want to follow Google recommended practices. What should you do?
A. Retain as much data as possible, including all historical recommendations, and use this data to train machine learning models.
B. Use Cloud Monitoring to monitor the performance of existing ML Engine, and tune as necessary.
C. Deploy Machine learning models on newer & more powerful CPU architectures as they become available.
D. Migrate to TPUs that offer better performance.
38. You are migrating an application from an on-premises network to Google Cloud. The application should be resistant to regional failures, so your team has decided to deploy the application across two regions within the same VPC and fronted by an external HTTP(s) Load balancer. The workload depends on Active Directory, which is still hosted in the on-premises network. How should you configure the VPN between GCP Network and the on-premises network?
A. Enable network peering between GCP VPC and on-prem network.
B. Deploy a regional VPN Gateway and make sure both regions in use have at least one VPN tunnel to the on-prem network.
C. Deploy a global VPN Gateway with redundant VPN tunnels from all regions in the VPC to the on-prem network.
D. Tweak IAM rules to enable VPC sharing & expose VPC to on-prem network.
39. Your team would like to start using Google Kubernetes Engine (GKE) for deploying an application. A colleague has provided you with a Kubernetes deployment file. You enabled the Kubernetes Engine API, and you now need to deploy the application. What should you do?
A. Create a Kubernetes cluster by running gcloud container clusters create. Create a Kubernetes deployment by running kubectl apply -f .
B. Create a Kubernetes cluster by running kubectl container clusters create. Automate the deployment using a deployment manager template.
C. Create a Kubernetes cluster by running gcloud container clusters create. Automate the deployment using a deployment manager template.
D. Create a Kubernetes cluster by running kubectl container clusters create. Use kubectl to create the deployment. Create a Kubernetes deployment by running kubectl apply -f .
40. Your company has accumulated hundreds of terabytes of marketing analytics data, and you have been asked to identify a database for an OLAP tool that can handle this volume of data. Which database would you recommend for this analytics data?
A. BigQuery
B. Cloud Firestore in Datastore mode
C. Cloud SQL
D. Cloud Storage
E. Cloud Spanner
41. You deployed multiple applications in a GKE cluster; however, one application is not responding to requests. All pods of the deployment that underpins the troublesome application keep restarting every 10 seconds. You have been asked to inspect the logs and identify the issue. What should you do?
A. In Cloud Logging, inspect logs of all compute engine instances from the GKE node pool.
B. Inspect Serial Port logs of all compute engine instances from the GKE node pool.
C. In Cloud Logging, inspect logs of all pods that serve the troublesome application.
D. Connect to a VM in the node pool, use kubectl exec -it
42. You deployed an application on Google Kubernetes Engine by running kubectl apply -f deployment.yaml and kubectl apply -f service.yaml. These have created a deployment called demo-deployment and a service called demo-service. You have now been asked to perform an update while minimizing the downtime to this application. You want to follow Google recommended practices. What should you do?
A. Execute a rolling update of the GKE compute cluster Managed Instance Group (MIG).
B. Update service.yaml file with the new container image. To deploy the update, delete the existing service and create a new service: kubectl delete service/demo-service and kubectl create -f service.yaml.
C. Update deployment.yaml file with the new container image. To deploy the update, delete the existing deployment and create a new deployment: kubectl delete deployment/demo-deployment and kubectl create -f deployment.yaml.
D. Deploy the update by updating image on the existing deployment: kubectl set image deployment/demo-deployment .
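For reference, the low-downtime behaviour of updating an existing deployment comes from its rolling-update strategy. A minimal sketch (hypothetical image; the strategy fields control how the rollout proceeds):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/demo:2.0   # hypothetical new image
```

With `maxUnavailable: 0`, updating the image (for example via kubectl set image or kubectl apply) replaces pods one at a time while demo-service keeps routing to healthy pods, so the application stays up throughout the rollout.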
43. Your company wants to migrate a mission-critical application from your data centre to Google Cloud. The application must be immune to regional outages. How should you deploy the application?
A. Migrate the application to two Compute Engine VMs in different regions within the same project. Use an HTTP(S) load balancer to fail over from one instance to the other.
B. Migrate the application to a single Compute Engine VM and use an HTTP(S) load balancer to fail over from one instance to another.
C. Migrate the application to two Compute Engine Managed Instance Groups (MIGs) in different regions within the same project. Use an HTTP(S) load balancer to fail over from one MIG to the other.
D. Migrate the application to two Compute Engine Managed Instance Groups (MIGs) in different regions in different projects. Use an HTTP(S) load balancer to fail over from one MIG to the other.
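The shape described in option C can be sketched with one regional MIG per region behind a single global backend service; all resource names, regions, and the instance template below are hypothetical:

```shell
gcloud compute instance-groups managed create app-mig-us \
  --region=us-central1 --template=app-template --size=3
gcloud compute instance-groups managed create app-mig-eu \
  --region=europe-west1 --template=app-template --size=3

# Attach both MIGs to one global backend service; the HTTP(S) load balancer
# then routes around a regional outage automatically:
gcloud compute backend-services add-backend app-backend --global \
  --instance-group=app-mig-us --instance-group-region=us-central1
gcloud compute backend-services add-backend app-backend --global \
  --instance-group=app-mig-eu --instance-group-region=europe-west1
```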
44. Your company is designing a new application that requires robust and reliable task scheduling. Which GCP services should you use?
A. Set up RabbitMQ on a single Compute Engine instance. Use the Cron service provided by Google Kubernetes Engine (GKE) to publish messages directly to a RabbitMQ topic. Deploy an application in Compute Engine to subscribe to the topic and process messages as they arrive.
B. Use Cron service provided by Google Kubernetes Engine (GKE) to publish messages directly to a Cloud Pub/Sub topic. Deploy an application in Compute Engine to subscribe to the topic and process messages as they arrive.
C. Set up RabbitMQ on a single Compute Engine instance. Use the App Engine cron service to publish messages directly to a RabbitMQ topic. Deploy an application in Compute Engine to subscribe to the topic and process messages as they arrive.
D. Use App Engine cron service to publish messages directly to a Cloud Pub/Sub topic. Deploy an application in Compute Engine to subscribe to the topic and process messages as they arrive.
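Option D decouples the scheduler from the workers via Cloud Pub/Sub, which retries delivery until a subscriber acknowledges. A sketch of the topic/subscription plumbing (names are hypothetical); in the real setup the App Engine cron handler, not the CLI, would do the publishing:

```shell
gcloud pubsub topics create task-schedule
gcloud pubsub subscriptions create task-workers --topic=task-schedule

# What the cron-triggered handler would do on each tick:
gcloud pubsub topics publish task-schedule --message='{"task":"nightly-report"}'
```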
45. You have configured all applications running on App Engine Flex to push their logs to separate BigQuery tables for storage. Your company wants to minimize costs and has asked you to implement a solution for purging logs older than 45 days. What should you do?
A. Set expiration time on all tables to 45 days.
B. The default expiration settings in BigQuery automatically delete logs older than 45 days.
C. Remove logs older than 45 days by running a custom bq script.
D. Update the tables to be partitioned by ingestion date and set partition expiration to 45 days.
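Option D relies on BigQuery partition expiration, which drops whole partitions as they age out with no query cost. A sketch with hypothetical dataset and table names; the expiration is given in seconds (45 days = 3,888,000 s):

```shell
# Create an ingestion-time-partitioned table whose partitions expire after 45 days:
bq mk --table \
  --time_partitioning_type=DAY \
  --time_partitioning_expiration=3888000 \
  mydataset.app_logs

# Or set the same expiration on an existing partitioned table:
bq update --time_partitioning_expiration 3888000 mydataset.app_logs
```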
46. Your company needs to migrate a compute workload to App Engine service. The workload still relies on an on-premises database, but the security team has set up firewall rules that prevent the on-premises database from being publicly accessible. What should you do?
A. Use App Engine Standard service and connect to the on-premises database through Cloud VPN tunnel.
B. Use App Engine Flexible service and connect to the on-premises database through Cloud VPN tunnel.
C. Use App Engine Flexible service and enable the App Engine firewall rules to allow access to the on-premises database.
D. Use App Engine Standard service and enable the App Engine firewall rules to allow access to the on-premises database.
47. Your company has retained 20 GB of audit logs in a NAS drive in the data centre and would like to migrate these to Cloud Storage. The compliance department requires you to encrypt the files using your customer-supplied encryption keys. How can you achieve this?
A. Create the bucket and upload files to it using gsutil and supply the encryption key using the --encryption-key parameter.
B. Upload the files using gsutil and supply the encryption key using the --encryption-key parameter.
C. Modify gcloud configuration to include the encryption key. Upload the files using gsutil.
D. Modify .boto configuration to include the encryption key. Upload the files using gsutil.
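Option D keeps the customer-supplied key in gsutil's .boto configuration rather than on the command line. A sketch (the bucket name and key are placeholders; the key must be a base64-encoded AES-256 key):

```shell
# In ~/.boto, under the [GSUtil] section:
#   encryption_key = <base64-encoded-AES-256-key>
# Equivalent one-off override without editing the file:
gsutil -o 'GSUtil:encryption_key=PLACEHOLDER_BASE64_KEY' \
  cp -r /mnt/nas/audit-logs gs://my-audit-logs/
```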
48. Your mission-critical stock market recommendations application requires end-to-end in-transit encryption and relies on URL path-based routing. The application has users all over the world. Your company wants to migrate this application to Google Cloud and has asked for your recommendation. Which GCP load balancer architecture would you recommend?
A. Use SSL Proxy Load Balancer with Global Forwarding Rules.
B. Use URL Maps with Google cross-region Load Balancer.
C. Use URL Maps with Google HTTP(S) Load Balancer.
D. Use SSL Proxy Load Balancer with Managed Instance Groups (MIG).
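A URL map is what gives the global HTTP(S) load balancer its path-based routing. A minimal sketch, assuming two pre-existing backend services with hypothetical names:

```shell
gcloud compute url-maps create web-map --default-service=web-backend

# Route /recommendations/* to a dedicated backend; everything else falls through:
gcloud compute url-maps add-path-matcher web-map \
  --path-matcher-name=reco-matcher \
  --new-hosts='*' \
  --default-service=web-backend \
  --path-rules='/recommendations/*=reco-backend'
```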
49. Your company has deployed a stateless application on a single Google Compute Engine instance. The application is used heavily by all employees during business hours, and there is minimal usage at other times. The application is experiencing slowness and performance issues during peak business hours. You have been asked by your manager to address the performance issues. What should you do?
A. 1. Use gcloud compute images create to create an image of the persistent disk. 2. Use gcloud compute instance-templates create to create an instance template from the image. 3. Create a Managed Instances Group (MIG) based on the instance template and enable autoscaling.
B. 1. Use gcloud compute disks snapshot to create a snapshot of the persistent disk. 2. Use gcloud compute instance-templates create to create an instance template from the snapshot. 3. Create a Managed Instances Group (MIG) based on the instance template and enable autoscaling.
C. 1. Use gcloud compute instance-templates create to create an instance template from the persistent disk. 2. Use gcloud compute images create to create an image of the template. 3. Create a Managed Instances Group (MIG) based on the image and enable autoscaling.
D. 1. Use gcloud compute disks snapshot to create a snapshot of the persistent disk. 2. Use gcloud compute images create to create an image from the snapshot. 3. Create a Managed Instances Group (MIG) based on the image and enable autoscaling.
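The sequence in option D can be sketched end to end; all resource names, the zone, and the autoscaling targets below are hypothetical:

```shell
# 1. Snapshot the running instance's persistent disk:
gcloud compute disks snapshot app-disk --zone=us-central1-a --snapshot-names=app-snap
# 2. Turn the snapshot into a reusable image:
gcloud compute images create app-image --source-snapshot=app-snap
# 3. Build an instance template from the image:
gcloud compute instance-templates create app-template --image=app-image
# 4. Create a MIG from the template and enable autoscaling for peak hours:
gcloud compute instance-groups managed create app-mig \
  --zone=us-central1-a --template=app-template --size=2
gcloud compute instance-groups managed set-autoscaling app-mig \
  --zone=us-central1-a --max-num-replicas=10 --target-cpu-utilization=0.6
```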
50. You host your company’s static website in the App Engine Flex service. You are leveraging Google’s Cloud CDN (Content Delivery Network) to cache content close to your users. Your company wants to optimise the CDN costs and has asked you to identify a way to improve the cache hit ratio. What should you do?
A. Update cache keys to remove the protocol (HTTP/HTTPS).
B. Move static content to a Cloud Storage Bucket, set up a load balancer for the bucket and direct requests from Google CDN to the load balancer.
C. Set cache expiration time to a lower value.
D. Set caching location header HTTP: Cache-Region to a GCP region closest to users.
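Option A can be sketched as a cache-key policy update on the backend service (the name is hypothetical); with the protocol excluded from the key, HTTP and HTTPS requests for the same path share one cache entry, raising the hit ratio:

```shell
gcloud compute backend-services update web-backend --global \
  --no-cache-key-include-protocol
```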