
Professional-Cloud-Developer Google Certified Professional - Cloud Developer Free Practice Exam Questions (2025 Updated)

Prepare effectively for your Google Professional-Cloud-Developer Google Certified Professional - Cloud Developer certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2025, ensuring you have the most current resources to build confidence and succeed on your first attempt.

You are using Cloud Build to build a Docker image. You need to modify the build to run unit and integration tests. When there is a failure, you want the build history to clearly display the stage at which the build failed.

What should you do?

A.

Add RUN commands in the Dockerfile to execute unit and integration tests.

B.

Create a Cloud Build build config file with a single build step to compile unit and integration tests.

C.

Create a Cloud Build build config file that will spawn a separate cloud build pipeline for unit and integration tests.

D.

Create a Cloud Build build config file with separate cloud builder steps to compile and execute unit and integration tests.
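
The separate-steps approach described in option D can be sketched as a build config in which each phase is its own step, so a failing phase is visible as a failed step in the build history. This is a minimal illustration only; the image name, test directories, and pytest entrypoint are assumptions, not part of the question.

```yaml
steps:
- id: build
  name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
- id: unit-tests                       # fails here -> "unit-tests" shown as the failed stage
  name: gcr.io/$PROJECT_ID/my-app
  entrypoint: pytest
  args: ['tests/unit']
- id: integration-tests
  name: gcr.io/$PROJECT_ID/my-app
  entrypoint: pytest
  args: ['tests/integration']
images: ['gcr.io/$PROJECT_ID/my-app']
```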

Your team is developing unit tests for Cloud Function code. The code is stored in a Cloud Source Repositories repository. You are responsible for implementing the tests. Only a specific service account has the necessary permissions to deploy the code to Cloud Functions. You want to ensure that the code cannot be deployed without first passing the tests. How should you configure the unit testing process?

A.

Configure Cloud Build to deploy the Cloud Function. If the code passes the tests, a deployment approval is sent to you.

B.

Configure Cloud Build to deploy the Cloud Function, using the specific service account as the build agent. Run the unit tests after successful deployment.

C.

Configure Cloud Build to run the unit tests. If the code passes the tests, the developer deploys the Cloud Function.

D.

Configure Cloud Build to run the unit tests, using the specific service account as the build agent. If the code passes the tests, Cloud Build deploys the Cloud Function.

Your application takes an input from a user and publishes it to the user's contacts. This input is stored in a table in Cloud Spanner. Your application is more sensitive to latency and less sensitive to consistency.

How should you perform reads from Cloud Spanner for this application?

A.

Perform Read-Only transactions.

B.

Perform stale reads using single-read methods.

C.

Perform strong reads using single-read methods.

D.

Perform stale reads using read-write transactions.

You are deploying a microservices application to Google Kubernetes Engine (GKE) that will broadcast livestreams. You expect unpredictable traffic patterns and large variations in the number of concurrent users. Your application must meet the following requirements:

• Scales automatically during popular events and maintains high availability

• Is resilient in the event of hardware failures

How should you configure the deployment parameters? (Choose two.)

A.

Distribute your workload evenly using a multi-zonal node pool.

B.

Distribute your workload evenly using multiple zonal node pools.

C.

Use cluster autoscaler to resize the number of nodes in the node pool, and use a Horizontal Pod Autoscaler to scale the workload.

D.

Create a managed instance group for Compute Engine with the cluster nodes. Configure autoscaling rules for the managed instance group.

E.

Create alerting policies in Cloud Monitoring based on GKE CPU and memory utilization. Ask an on-duty engineer to scale the workload by executing a script when CPU and memory usage exceed predefined thresholds.

You need to configure a Deployment on Google Kubernetes Engine (GKE). You want to include a check that verifies that the containers can connect to the database. If the Pod is failing to connect, you want a script on the container to run to complete a graceful shutdown. How should you configure the Deployment?

A.

Create two jobs: one that checks whether the container can connect to the database, and another that runs the shutdown script if the Pod is failing.

B.

Create the Deployment with a livenessProbe for the container that will fail if the container can't connect to the database. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing.

C.

Create the Deployment with a PostStart lifecycle handler that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing.

D.

Create the Deployment with an initContainer that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the Pod is failing.
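
The livenessProbe-plus-PreStop pattern from option B can be sketched as a Deployment manifest. The image name, probe command, and script paths are illustrative assumptions; the key pieces are the probe that fails when the database is unreachable and the PreStop hook that runs before the container is terminated.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-app # illustrative image
        livenessProbe:                  # fails when the DB is unreachable
          exec:
            command: ["/bin/sh", "-c", "/app/check-db.sh"]
          periodSeconds: 10
          failureThreshold: 3
        lifecycle:
          preStop:                      # graceful shutdown before the kill signal
            exec:
              command: ["/bin/sh", "-c", "/app/shutdown.sh"]
```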

Your application performs well when tested locally, but it runs significantly slower when you deploy it to App Engine standard environment. You want to diagnose the problem. What should you do?

A.

File a ticket with Cloud Support indicating that the application performs faster locally.

B.

Use Stackdriver Debugger Snapshots to look at a point-in-time execution of the application.

C.

Use Stackdriver Trace to determine which functions within the application have higher latency.

D.

Add logging commands to the application and use Stackdriver Logging to check where the latency problem occurs.

Your team develops services that run on Google Cloud. You want to process messages sent to a Pub/Sub topic, and then store them. Each message must be processed exactly once to avoid duplication of data and any data conflicts. You need to use the cheapest and most simple solution. What should you do?

A.

Process the messages with a Dataproc job, and write the output to storage.

B.

Process the messages with a Dataflow streaming pipeline using Apache Beam's PubSubIO package, and write the output to storage.

C.

Process the messages with a Cloud Function, and write the results to a BigQuery location where you can run a job to deduplicate the data.

D.

Retrieve the messages with a Dataflow streaming pipeline, store them in Cloud Bigtable, and use another Dataflow streaming pipeline to deduplicate messages.

You manage an application that runs in a Compute Engine instance. You also have multiple backend services executing in stand-alone Docker containers running in Compute Engine instances. The Compute Engine instances supporting the backend services are scaled by managed instance groups in multiple regions. You want your calling application to be loosely coupled. You need to be able to invoke distinct service implementations that are chosen based on the value of an HTTP header found in the request. Which Google Cloud feature should you use to invoke the backend services?

A.

Traffic Director

B.

Service Directory

C.

Anthos Service Mesh

D.

Internal HTTP(S) Load Balancing

You are writing a Compute Engine hosted application in project A that needs to securely authenticate to a Cloud Pub/Sub topic in project B.

What should you do?

A.

Configure the instances with a service account owned by project B. Add the service account as a Cloud Pub/Sub publisher to project A.

B.

Configure the instances with a service account owned by project A. Add the service account as a publisher on the topic.

C.

Configure Application Default Credentials to use the private key of a service account owned by project B. Add the service account as a Cloud Pub/Sub publisher to project A.

D.

Configure Application Default Credentials to use the private key of a service account owned by project A. Add the service account as a publisher on the topic.
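
Granting a project A service account publish rights on a project B topic (the pattern in option B) can be sketched with a single IAM binding on the topic. The project, topic, and service-account names below are illustrative assumptions.

```shell
# Grant the project A compute service account the publisher role
# on a topic that lives in project B (all names are hypothetical).
gcloud pubsub topics add-iam-policy-binding my-topic \
  --project=project-b \
  --member="serviceAccount:app-sa@project-a.iam.gserviceaccount.com" \
  --role="roles/pubsub.publisher"
```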

You are developing an application that will store and access sensitive unstructured data objects in a Cloud Storage bucket. To comply with regulatory requirements, you need to ensure that all data objects are available for at least 7 years after their initial creation. Objects created more than 3 years ago are accessed very infrequently (less than once a year). You need to configure object storage while ensuring that storage cost is optimized. What should you do? (Choose two.)

A.

Set a retention policy on the bucket with a period of 7 years.

B.

Use IAM Conditions to provide access to objects 7 years after the object creation date.

C.

Enable Object Versioning to prevent objects from being accidentally deleted for 7 years after object creation.

D.

Create an object lifecycle policy on the bucket that moves objects from Standard Storage to Archive Storage after 3 years.

E.

Implement a Cloud Function that checks the age of each object in the bucket and moves the objects older than 3 years to a second bucket with the Archive Storage class. Use Cloud Scheduler to trigger the Cloud Function on a daily schedule.
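
The retention-plus-lifecycle combination in options A and D can be sketched as a lifecycle config that moves objects to Archive Storage after roughly 3 years (1095 days), paired with a 7-year bucket retention policy. The bucket name is hypothetical; the config would be applied with `gsutil lifecycle set policy.json gs://my-bucket` and the retention policy with `gsutil retention set 7y gs://my-bucket`.

```json
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
      "condition": {"age": 1095}
    }
  ]
}
```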

Your service adds text to images that it reads from Cloud Storage. During busy times of the year, requests to Cloud Storage fail with an HTTP 429 "Too Many Requests" status code.

How should you handle this error?

A.

Add a cache-control header to the objects.

B.

Request a quota increase from the GCP Console.

C.

Retry the request with a truncated exponential backoff strategy.

D.

Change the storage class of the Cloud Storage bucket to Multi-regional.
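
Truncated exponential backoff (option C) can be sketched in a few lines: the wait doubles on each retry but is capped at a maximum, with a little jitter to avoid synchronized retries. The `request_fn` callable and the parameter values are illustrative assumptions, not a real client API.

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0, max_delay=32.0):
    """Retry request_fn with truncated exponential backoff.

    request_fn is an illustrative callable that returns an HTTP-like
    status code; only 429 responses are retried.
    """
    for attempt in range(max_retries):
        status = request_fn()
        if status != 429:
            return status
        # Wait time doubles each attempt but is truncated at max_delay;
        # a small random jitter avoids synchronized retries.
        delay = min(base_delay * (2 ** attempt), max_delay)
        time.sleep(delay + random.uniform(0, base_delay))
    return request_fn()  # one final attempt after the last wait
```

Real Cloud Storage client libraries implement this retry strategy internally; the sketch only shows the shape of the algorithm.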

You are in the final stage of migrating an on-premises data center to Google Cloud. You are quickly approaching your deadline, and discover that a web API is running on a server slated for decommissioning. You need to recommend a solution to modernize this API while migrating to Google Cloud. The modernized web API must meet the following requirements:

• Autoscales during high traffic periods at the end of each month

• Written in Python 3.x

• Developers must be able to rapidly deploy new versions in response to frequent code changes

You want to minimize cost, effort, and operational overhead of this migration. What should you do?

A.

Modernize and deploy the code on App Engine flexible environment.

B.

Modernize and deploy the code on App Engine standard environment.

C.

Deploy the modernized application to an n1-standard-1 Compute Engine instance.

D.

Ask the development team to re-write the application to run as a Docker container on Google Kubernetes Engine.

You are configuring a continuous integration pipeline using Cloud Build to automate the deployment of new container images to Google Kubernetes Engine (GKE). The pipeline builds the application from its source code, runs unit and integration tests in separate steps, and pushes the container to Container Registry. The application runs on a Python web server.

The Dockerfile is as follows:

FROM python:3.7-alpine
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD [ "gunicorn", "-w 4", "main:app" ]

You notice that Cloud Build runs are taking longer than expected to complete. You want to decrease the build time. What should you do? (Choose two.)

A.

Select a virtual machine (VM) size with higher CPU for Cloud Build runs.

B.

Deploy a Container Registry on a Compute Engine VM in a VPC, and use it to store the final images.

C.

Cache the Docker image for subsequent builds using the --cache-from argument in your build config file.

D.

Change the base image in the Dockerfile to ubuntu:latest, and install Python 3.7 using a package manager utility.

E.

Store application source code on Cloud Storage, and configure the pipeline to use gsutil to download the source code.
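
The --cache-from technique from option C can be sketched as a two-step build config: pull the previously pushed image (ignoring failure on the first run), then build with its layers as the cache. Image names are illustrative assumptions.

```yaml
steps:
- name: gcr.io/cloud-builders/docker
  entrypoint: bash
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/my-app:latest || exit 0']
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - '--cache-from'
  - gcr.io/$PROJECT_ID/my-app:latest
  - '-t'
  - gcr.io/$PROJECT_ID/my-app:latest
  - '.'
images: ['gcr.io/$PROJECT_ID/my-app:latest']
```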

Your operations team has asked you to create a script that lists the Cloud Bigtable, Memorystore, and Cloud SQL databases running within a project. The script should allow users to submit a filter expression to limit the results presented. How should you retrieve the data?

A.

Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Combine the results, and then apply the filter to display the results

B.

Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Filter the results individually, and then combine them to display the results

C.

Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use a filter within the application, and then display the results

D.

Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use --filter flag with each command, and then display the results
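
Option D's approach of passing the user's filter straight to each command can be sketched as below. The filter expression, Redis region, and SQL instance name are assumptions added for illustration.

```shell
# Illustrative sketch: one user-supplied --filter expression per command.
FILTER='name~prod'
gcloud bigtable instances list --filter="$FILTER"
gcloud redis instances list --region=us-central1 --filter="$FILTER"
gcloud sql databases list --instance=my-instance --filter="$FILTER"
```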

You have decided to migrate your Compute Engine application to Google Kubernetes Engine. You need to build a container image and push it to Artifact Registry using Cloud Build. What should you do? (Choose two.)

A)

Run gcloud builds submit in the directory that contains the application source code.

B)

Run gcloud run deploy app-name --image gcr.io/$PROJECT_ID/app-name in the directory that contains the application source code.

C)

Run gcloud container images add-tag gcr.io/$PROJECT_ID/app-name gcr.io/$PROJECT_ID/app-name:latest in the directory that contains the application source code.

D)

In the application source directory, create a file named cloudbuild.yaml that contains the following contents:

E)

In the application source directory, create a file named cloudbuild.yaml that contains the following contents:


You recently deployed your application in Google Kubernetes Engine, and now need to release a new version of your application. You need the ability to instantly roll back to the previous version in case there are issues with the new version. Which deployment model should you use?

A.

Perform a rolling deployment, and test your new application after the deployment is complete.

B.

Perform A/B testing, and test your application periodically after the new tests are implemented.

C.

Perform a blue/green deployment, and test your new application after the deployment is complete.

D.

Perform a canary deployment, and test your new application periodically after the new version is deployed.

Your team has created an application that is hosted on a Google Kubernetes Engine (GKE) cluster. You need to connect the application to a legacy REST service that is deployed in two GKE clusters in two different regions. You want to connect your application to the legacy service in a way that is resilient and requires the fewest number of steps. You also want to be able to run probe-based health checks on the legacy service on a separate port. How should you set up the connection?

A.

Use Traffic Director with a sidecar proxy to connect the application to the service.

B.

Use a proxyless Traffic Director configuration to connect the application to the service.

C.

Configure the legacy service's firewall to allow health checks originating from the proxy.

D.

Configure the legacy service's firewall to allow health checks originating from the application.

E.

Configure the legacy service's firewall to allow health checks originating from the Traffic Director control plane.

You are developing a web application that will be accessible over both HTTP and HTTPS and will run on Compute Engine instances. On occasion, you will need to SSH from your remote laptop into one of the Compute Engine instances to conduct maintenance on the app. How should you configure the instances while following Google-recommended best practices?

A.

Set up a backend with Compute Engine web server instances with a private IP address behind a TCP proxy load balancer.

B.

Configure the firewall rules to allow all ingress traffic to connect to the Compute Engine web servers, with each server having a unique external IP address.

C.

Configure the Cloud Identity-Aware Proxy API for SSH access. Then configure the Compute Engine servers with private IP addresses behind an HTTP(S) load balancer for the application web traffic.

D.

Set up a backend with Compute Engine web server instances with a private IP address behind an HTTP(S) load balancer. Set up a bastion host with a public IP address and open firewall ports. Connect to the web instances using the bastion host.

You are parsing a log file that contains three columns: a timestamp, an account number (a string), and a transaction amount (a number). You want to calculate the sum of all transaction amounts for each unique account number efficiently.

Which data structure should you use?

A.

A linked list

B.

A hash table

C.

A two-dimensional array

D.

A comma-delimited string
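
The hash-table approach (option B) gives average O(1) lookup and update per row, so the whole file is processed in a single O(n) pass. A minimal sketch in Python, where a dict serves as the hash table and the row format is assumed from the question:

```python
from collections import defaultdict

def sum_by_account(rows):
    """Sum transaction amounts per account number using a hash table (dict).

    Each row is assumed to be (timestamp, account_number, amount);
    lookup and update per row are O(1) on average, so the whole
    pass over n rows is O(n).
    """
    totals = defaultdict(float)
    for _timestamp, account, amount in rows:
        totals[account] += amount
    return dict(totals)
```

By contrast, a linked list or comma-delimited string would require a linear scan per row to find the matching account, making the total cost O(n^2).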

Users are complaining that your Cloud Run-hosted website responds too slowly during traffic spikes. You want to provide a better user experience during traffic peaks. What should you do?

A.

Read application configuration and static data from the database on application startup.

B.

Package application configuration and static data into the application image during build time.

C.

Perform as much work as possible in the background after the response has been returned to the user.

D.

Ensure that timeout exceptions and errors cause the Cloud Run instance to exit quickly so a replacement instance can be started.

Copyright © 2014-2025 Solution2Pass. All Rights Reserved