

Professional-Cloud-Architect Google Certified Professional - Cloud Architect (GCP) Free Practice Exam Questions (2026 Updated)

Prepare effectively for your Google Professional-Cloud-Architect Google Certified Professional - Cloud Architect (GCP) certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2026, ensuring you have the most current resources to build confidence and succeed on your first attempt.

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the database workloads for your company, Mountkirk Games. Considering the business and technical requirements, what should you do?

A.

Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.

B.

Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.

C.

Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.

D.

Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for historical data queries.

For this question, refer to the Altostrat Media case study, which concerns API management and cost control.

Altostrat is using Apigee for API management and wants to ensure their APIs are protected from overuse and abuse. You need to implement an Apigee feature to control the total number of API calls for cost management. What should you do?

A.

Set up API key validation.

B.

Integrate OAuth 2.0 authorization.

C.

Configure Quota policies.

D.

Activate XML threat protection.
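Among the options above, Apigee's Quota policy is the mechanism built specifically to cap call volume over a time interval. A minimal sketch of such a policy is shown below; the policy name and the count of 10,000 calls per month are hypothetical values chosen for illustration.

```shell
# Write a hypothetical Apigee Quota policy to a temp file and display it.
cat > /tmp/quota-policy.xml <<'EOF'
<Quota continueOnError="false" enabled="true" name="CapTotalCalls">
  <!-- Allow at most 10000 calls per calendar month -->
  <Allow count="10000"/>
  <Interval>1</Interval>
  <TimeUnit>month</TimeUnit>
</Quota>
EOF
cat /tmp/quota-policy.xml
```

In a real proxy, this policy would be attached to a flow so that calls beyond the configured count are rejected until the interval resets.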

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform.

Which two steps should be part of their migration plan? (Choose two.)

A.

Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.

B.

Write a schema migration plan to denormalize data for better performance in BigQuery.

C.

Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.

D.

Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.

E.

Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants you to design a way to test the analytics platform’s resilience to changes in mobile network latency. What should you do?

A.

Deploy failure injection software to the game analytics platform that can inject additional latency to mobile client analytics traffic.

B.

Build a test client that can be run from a mobile phone emulator on a Compute Engine virtual machine, and run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.

C.

Add the ability to introduce a random amount of delay before beginning to process analytics files uploaded from mobile devices.

D.

Create an opt-in beta of the game that runs on players' mobile devices and collects response times from analytics endpoints running in Google Cloud Platform regions all over the world.

For this question, refer to the Mountkirk Games case study. Which managed storage option meets Mountkirk’s technical requirement for storing game activity in a time series database service?

A.

Cloud Bigtable

B.

Cloud Spanner

C.

BigQuery

D.

Cloud Datastore

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?

A.

Create network load balancers. Use preemptible Compute Engine instances.

B.

Create network load balancers. Use non-preemptible Compute Engine instances.

C.

Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.

D.

Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.
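The pattern in option D (managed instance groups with autoscaling behind a global load balancer, on non-preemptible VMs) can be sketched with gcloud. All resource names, machine types, and thresholds below are hypothetical, and the commands require an authenticated gcloud session with an active project, so this is a sketch rather than a tested recipe.

```shell
# Hypothetical names throughout; requires gcloud auth and a GCP project.
gcloud compute instance-templates create game-backend-tmpl \
    --machine-type=e2-standard-4 \
    --image-family=debian-12 --image-project=debian-cloud

gcloud compute instance-groups managed create game-backend-mig \
    --template=game-backend-tmpl --size=3 --zone=us-central1-a

# Scale between 3 and 20 instances, targeting 60% CPU utilization.
gcloud compute instance-groups managed set-autoscaling game-backend-mig \
    --zone=us-central1-a --min-num-replicas=3 --max-num-replicas=20 \
    --target-cpu-utilization=0.6

# A global HTTP(S) load balancer then fronts the MIG via a health check,
# backend service, URL map, target proxy, and forwarding rule.
```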

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to design their solution for the future in order to take advantage of cloud and technology improvements as they become available. Which two steps should they take? (Choose two.)

A.

Store as much analytics and game activity data as financially feasible today so it can be used to train machine learning models to predict user behavior in the future.

B.

Begin packaging their game backend artifacts in container images and running them on Kubernetes Engine to improve their ability to scale up or down based on game activity.

C.

Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve development velocity.

D.

Adopt a schema versioning tool to reduce downtime when adding new game features that require storing additional player data in the database.

E.

Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.

For this question, refer to the Mountkirk Games case study. You are in charge of the new Game Backend Platform architecture. The game communicates with the backend over a REST API.

You want to follow Google-recommended practices. How should you design the backend?

A.

Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L4 load balancer.

B.

Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L4 load balancer.

C.

Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.

D.

Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L7 load balancer.

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?

A.

Use a private cluster with a private endpoint with master authorized networks configured.

B.

Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.

C.

Use a private cluster with a public endpoint with master authorized networks configured.

D.

Use a public cluster with master authorized networks enabled and firewall rules.

For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)

A.

Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.

B.

Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.

C.

Use Firebase Authentication for EHR's user facing applications.

D.

Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.

E.

Use GKE private clusters for all Kubernetes workloads.


For this question, refer to the EHR Healthcare case study. EHR has a single Dedicated Interconnect connection between their primary data center and Google's network. This connection satisfies EHR's network and security policies:

• On-premises servers without public IP addresses need to connect to cloud resources without public IP addresses.

• Traffic flows from production network management servers to Compute Engine virtual machines should never traverse the public internet.

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

A.

Add a new Dedicated Interconnect connection

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G

C.

Add three new Cloud VPN connections

D.

Add a new Carrier Peering connection

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for hybrid connectivity between EHR's on-premises systems and Google Cloud. You want to follow Google's recommended practices for production-level applications. Considering the EHR Healthcare business and technical requirements, what should you do?

A.

Configure two Partner Interconnect connections in one metro (City), and make sure the Interconnect connections are placed in different metro zones.

B.

Configure two VPN connections from on-premises to Google Cloud, and make sure the VPN devices on-premises are in separate racks.

C.

Configure Direct Peering between EHR Healthcare and Google Cloud, and make sure you are peering at least two Google locations.

D.

Configure two Dedicated Interconnect connections in one metro (City) and two connections in another metro, and make sure the Interconnect connections are placed in different metro zones.

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?

A.

Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.

B.

Revoke the compute.networkAdmin role from all users in the project with frontend instances.

C.

Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.

D.

Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.
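The organizational-policy approach in option A uses the compute.vmExternalIpAccess list constraint. A minimal sketch is below; the project, zone, and instance names are hypothetical, and applying the policy (shown commented out) requires appropriate org-level permissions.

```shell
# Hedged sketch: a list-constraint policy that permits external IPs only
# on named frontend instances (all names hypothetical).
cat > /tmp/external-ip-policy.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allowedValues:
    - projects/ehr-prod/zones/us-central1-a/instances/frontend-1
    - projects/ehr-prod/zones/us-central1-b/instances/frontend-2
EOF
# Apply it (requires org-policy permissions):
#   gcloud resource-manager org-policies set-policy /tmp/external-ip-policy.yaml \
#       --project=ehr-prod
cat /tmp/external-ip-policy.yaml
```

Instances not in the allowed list can then no longer be assigned external IP addresses, regardless of who configures them.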

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed using Google Cloud services. What should you do? (Choose two.)

A.

Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.

B.

Configure Jenkins to utilize Kritis to cryptographically sign a container as part of a CI/CD pipeline.

C.

Configure Container Registry to only allow trusted service accounts to create and deploy containers from the registry.

D.

Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before deploying the workload.

For this question, refer to the EHR Healthcare case study. You are a developer on the EHR customer portal team. Your team recently migrated the customer portal application to Google Cloud. The load has increased on the application servers, and now the application is logging many timeout errors. You recently incorporated Pub/Sub into the application architecture, and the application is not logging any Pub/Sub publishing errors. You want to improve publishing latency. What should you do?

A.

Increase the Pub/Sub Total Timeout retry value.

B.

Move from a Pub/Sub subscriber pull model to a push model.

C.

Turn off Pub/Sub message batching.

D.

Create a backup Pub/Sub message queue.

You are migrating a large, on-premises application to Google Cloud. The application consists of several interconnected virtual machines. You want to create a detailed migration plan to ensure a smooth migration with minimal effort. You need to understand the existing environment, identify dependencies, and estimate the total cost of ownership (TCO) in the cloud. What should you do?

A.

Use Config Connector to declare the desired state of your Google Cloud resources in Kubernetes-style manifests.

B.

Use the Google Cloud Migration Center to perform an automated discovery and assessment of the on-premises environment.

C.

Use the Google Cloud pricing calculator to input the specifications of your on-premises servers and receive a TCO estimate.

D.

Write a custom script to query the vSphere API for virtual machine information and then import it into BigQuery for analysis.

You have created several preemptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted. What should you do?

A.

Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory.

B.

Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service.

C.

Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance.

D.

Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url.
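The shutdown-script metadata mechanism referenced above can be sketched as follows. The service name "game-backend" and the instance name are hypothetical; the gcloud command that attaches the script is shown as a comment because it requires a real project.

```shell
#!/bin/bash
# Sketch of a shutdown script for a preemptible VM. Attach it at creation
# time, e.g.:
#   gcloud compute instances create game-vm --preemptible \
#       --metadata-from-file shutdown-script=shutdown.sh
echo "Preemption notice received: draining connections"
# systemctl stop game-backend    # graceful stop of the hypothetical service
status="drained"
echo "Shutdown handler finished: $status"
```

Compute Engine runs this script when the instance is preempted or stopped, giving the application a short window to shut down cleanly.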

Your company is planning to upload several important files to Cloud Storage. After the upload is completed, they want to verify that the uploaded content is identical to what they have on-premises. You want to minimize the cost and effort of performing this check. What should you do?

A.

1) Use gsutil -m to upload all the files to Cloud Storage.

2) Use gsutil cp to download the uploaded files.

3) Use Linux diff to compare the content of the files.

B.

1) Use gsutil -m to upload all the files to Cloud Storage.

2) Develop a custom Java application that computes CRC32C hashes.

3) Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files.

4) Compare the hashes.

C.

1) Use Linux shasum to compute a digest of the files you want to upload.

2) Use gsutil -m to upload all the files to Cloud Storage.

3) Use gsutil cp to download the uploaded files.

4) Use Linux shasum to compute a digest of the downloaded files.

5) Compare the hashes.

D.

1) Use gsutil -m to upload all the files to Cloud Storage.

2) Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files.

3) Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files.

4) Compare the hashes.
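The comparison workflow in option D can be simulated locally. The sketch below uses POSIX cksum as a stand-in for CRC32C (gsutil hash -c emits real CRC32C values; only the compare-two-checksums logic is the same), and a plain cp stands in for the upload/download round trip.

```shell
# Local simulation only: cksum's CRC is not CRC32C, and cp stands in for
# gsutil cp; the comparison logic is what the sketch illustrates.
printf 'important records\n' > /tmp/source.dat
local_crc=$(cksum < /tmp/source.dat | awk '{print $1}')
cp /tmp/source.dat /tmp/downloaded.dat
remote_crc=$(cksum < /tmp/downloaded.dat | awk '{print $1}')
if [ "$local_crc" = "$remote_crc" ]; then
  result="MATCH"
else
  result="MISMATCH"
fi
echo "$result"
```

Comparing checksums avoids re-downloading and byte-comparing every file, which is what makes the hash-based approach cheaper than a full diff.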

You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading. Where should you store the data?

A.

Google BigQuery

B.

Google Cloud SQL

C.

Google Cloud Bigtable

D.

Google Cloud Storage

Copyright © 2014-2026 Solution2Pass. All Rights Reserved