
Professional-Cloud-Developer Google Certified Professional - Cloud Developer Free Practice Exam Questions (2025 Updated)

Prepare effectively for your Google Professional-Cloud-Developer Google Certified Professional - Cloud Developer certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2025, ensuring you have the most current resources to build confidence and succeed on your first attempt.

You need to deploy an internet-facing microservices application to Google Kubernetes Engine (GKE). You want to validate new features using the A/B testing method. You have the following requirements for deploying new container image releases:

• There is no downtime when new container images are deployed.

• New production releases are tested and verified using a subset of production users.

What should you do?

A.

1. Configure your CI/CD pipeline to update the Deployment manifest file by replacing the container version with the latest version.

2. Recreate the Pods in your cluster by applying the Deployment manifest file.

3. Validate the application's performance by comparing its functionality with the previous release version, and roll back if an issue arises.

B.

1. Install the Anthos Service Mesh on your GKE cluster.

2. Create two Deployments on the GKE cluster and label them with different version names.

3. Create a VirtualService with a routing rule to send a small percentage of traffic to the Deployment that references the new version of the application.

C.

1. Create a second namespace on GKE for the new release version.

2. Create a Deployment configuration for the second namespace with the desired number of Pods.

3. Deploy new container versions in the second namespace.

4. Update the ingress configuration to route traffic to the namespace with the new container versions.

D.

1. Implement a rolling update pattern by replacing the Pods gradually with the new release version.

2. Validate the application's performance for the new subset of users during the rollout, and roll back if an issue arises.
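The traffic split described in option B can be sketched as an Istio/Anthos Service Mesh VirtualService. The service name, host, and subset labels below are hypothetical, and a matching DestinationRule defining the `v1`/`v2` subsets would also be required:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-vs                # hypothetical name
spec:
  hosts:
    - app.example.com         # hypothetical external host
  http:
    - route:
        - destination:
            host: app
            subset: v1        # Deployment labeled with the current version
          weight: 95
        - destination:
            host: app
            subset: v2        # Deployment labeled with the new version
          weight: 5           # small percentage of production users
```

Because both Deployments stay running while weights shift, there is no downtime, and the small `weight` limits exposure of the new release to a subset of users.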

Your API backend is running on multiple cloud providers. You want to generate reports for the network latency of your API.

Which two steps should you take? (Choose two.)

A.

Use Zipkin collector to gather data.

B.

Use Fluentd agent to gather data.

C.

Use Stackdriver Trace to generate reports.

D.

Use Stackdriver Debugger to generate reports.

E.

Use Stackdriver Profiler to generate reports.

You are a lead developer working on a new retail system that runs on Cloud Run and Firestore. A web UI requirement is for the user to be able to browse through all products. A few months after go-live, you notice that Cloud Run instances are terminated with HTTP 500: Container instances are exceeding memory limits errors during busy times.

This error coincides with spikes in the number of Firestore queries.

You need to prevent Cloud Run from crashing and decrease the number of Firestore queries. You want to use a solution that optimizes system performance. What should you do?

A.

Create a custom index over the products.

B.

Modify the query that returns the product list using cursors with limits

C.

Modify the Cloud Run configuration to increase the memory limits

D.

Modify the query that returns the product list using integer offsets
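Why option B beats option D: with integer offsets, Firestore still retrieves (and bills) every skipped document server-side, while a cursor resumes at the page boundary and reads only the requested documents. A minimal local model of that cost difference (no Firestore client involved; the page numbers are illustrative):

```python
def docs_read_with_offset(offset: int, limit: int) -> int:
    # Integer offsets: the skipped documents are still read server-side,
    # so you pay for offset + limit document reads.
    return offset + limit

def docs_read_with_cursor(limit: int) -> int:
    # Cursor (e.g. start_after on the last document of the previous page):
    # reading resumes at the boundary, so only `limit` documents are read.
    return limit

# Fetching page 50 of a product list paginated 20 items at a time:
page, page_size = 50, 20
print(docs_read_with_offset((page - 1) * page_size, page_size))  # 1000
print(docs_read_with_cursor(page_size))                          # 20
```

Combining cursors with a `limit` also caps how many documents are held in memory per request, which addresses the Cloud Run memory errors.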

Your analytics system executes queries against a BigQuery dataset. The SQL query is executed in batch and passes the contents of a SQL file to the BigQuery CLI. Then it redirects the BigQuery CLI output to another process. However, you are getting a permission error from the BigQuery CLI when the queries are executed. You want to resolve the issue. What should you do?

A.

Grant the service account BigQuery Data Viewer and BigQuery Job User roles.

B.

Grant the service account BigQuery Data Editor and BigQuery Data Viewer roles.

C.

Create a view in BigQuery from the SQL query, and SELECT * from the view in the CLI.

D.

Create a new dataset in BigQuery, and copy the source table to the new dataset. Query the new dataset and table from the CLI.

You are a developer at a large organization. You have an application written in Go running in a production Google Kubernetes Engine (GKE) cluster. You need to add a new feature that requires access to BigQuery. You want to grant BigQuery access to your GKE cluster following Google-recommended best practices. What should you do?

A.

Create a Google service account with BigQuery access. Add the JSON key to Secret Manager, and use the Go client library to access the JSON key.

B.

Create a Google service account with BigQuery access. Add the Google service account JSON key as a Kubernetes secret, and configure the application to use this secret.

C.

Create a Google service account with BigQuery access. Add the Google service account JSON key to Secret Manager, and use an init container to access the secret for the application to use.

D.

Create a Google service account and a Kubernetes service account. Configure Workload Identity on the GKE cluster, and reference the Kubernetes service account on the application Deployment.
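Option D, the Google-recommended approach, links a Kubernetes service account to a Google service account through an annotation, so no JSON key is ever exported. A sketch of the Kubernetes service account (the account names and project ID are hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bq-reader             # hypothetical KSA name
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: bq-reader@my-project.iam.gserviceaccount.com
```

The Deployment's Pod spec then sets `serviceAccountName: bq-reader`. Separately, the Google service account needs a BigQuery role, and an IAM policy binding must grant the KSA the `roles/iam.workloadIdentityUser` role on the Google service account.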

You recently deployed a Go application on Google Kubernetes Engine (GKE). The operations team has noticed that the application's CPU usage is high even when there is low production traffic. The operations team has asked you to optimize your application's CPU resource consumption. You want to determine which Go functions consume the largest amount of CPU. What should you do?

A.

Deploy a Fluent Bit daemonset on the GKE cluster to log data in Cloud Logging. Analyze the logs to get insights into your application code’s performance.

B.

Create a custom dashboard in Cloud Monitoring to evaluate the CPU performance metrics of your application.

C.

Connect to your GKE nodes using SSH. Run the top command on the shell to extract the CPU utilization of your application.

D.

Modify your Go application to capture profiling data. Analyze the CPU metrics of your application in flame graphs in Profiler.

You manage an ecommerce application that processes purchases from customers who can subsequently cancel or change those purchases. You discover that order volumes are highly variable and the backend order-processing system can only process one request at a time. You want to ensure seamless performance for customers regardless of usage volume. It is crucial that customers’ order update requests are performed in the sequence in which they were generated. What should you do?

A.

Send the purchase and change requests over WebSockets to the backend.

B.

Send the purchase and change requests as REST requests to the backend.

C.

Use a Pub/Sub subscriber in pull mode and use a data store to manage ordering.

D.

Use a Pub/Sub subscriber in push mode and use a data store to manage ordering.

You are using the Cloud Client Library to upload an image in your application to Cloud Storage. Users of the application report that occasionally the upload does not complete and the client library reports an HTTP 504 Gateway Timeout error. You want to make the application more resilient to errors. What changes to the application should you make?

A.

Write an exponential backoff process around the client library call.

B.

Write a one-second wait time backoff process around the client library call.

C.

Design a retry button in the application and ask users to click if the error occurs.

D.

Create a queue for the object and inform the users that the application will try again in 10 minutes.
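The exponential backoff from option A can be sketched generically; `operation` stands in for the Cloud Storage client library call and the wrapper below is a minimal sketch, not a library API:

```python
import random
import time

def with_backoff(operation, max_attempts=5, base_delay=1.0, max_delay=32.0):
    """Retry `operation`, doubling the delay (plus jitter) after each failure."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # retries exhausted: surface the original error
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay))  # jitter avoids retry storms
```

In practice you would catch only transient errors (such as the 504 in the question) rather than bare `Exception`, and wrap the upload call, e.g. `with_backoff(lambda: blob.upload_from_filename(path))` for a hypothetical `google-cloud-storage` blob.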

You are creating a Google Kubernetes Engine (GKE) cluster and run this command:

The command fails with the error:

You want to resolve the issue. What should you do?

A.

Request additional GKE quota in the GCP Console.

B.

Request additional Compute Engine quota in the GCP Console.

C.

Open a support case to request additional GKE quota.

D.

Decouple services in the cluster, and rewrite new clusters to function with fewer cores.

You have containerized a legacy application that stores its configuration on an NFS share. You need to deploy this application to Google Kubernetes Engine (GKE) and do not want the application serving traffic until after the configuration has been retrieved. What should you do?

A.

Use the gsutil utility to copy files from within the Docker container at startup, and start the service using an ENTRYPOINT script.

B.

Create a PersistentVolumeClaim on the GKE cluster. Access the configuration files from the volume, and start the service using an ENTRYPOINT script.

C.

Use the COPY statement in the Dockerfile to load the configuration into the container image. Verify that the configuration is available, and start the service using an ENTRYPOINT script.

D.

Add a startup script to the GKE instance group to mount the NFS share at node startup. Copy the configuration files into the container, and start the service using an ENTRYPOINT script.

You are designing a deployment technique for your new applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for both new and existing applications. You need to test against the full production load prior to launch. What should you do?

A.

Use canary deployment

B.

Use blue/green deployment

C.

Use rolling updates deployment

D.

Use A/B testing with traffic mirroring during deployment
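Option D's traffic mirroring can be expressed in a service mesh such as Istio: live traffic is served by the existing version while a copy of each request is sent to the new version, whose responses are discarded. The names below are hypothetical:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-mirror            # hypothetical name
spec:
  hosts:
    - app
  http:
    - route:
        - destination:
            host: app
            subset: current   # existing version keeps serving users
          weight: 100
      mirror:
        host: app
        subset: candidate     # new version receives a copy of each request
      mirrorPercentage:
        value: 100.0          # mirror the full production load
```

Mirroring 100% of traffic lets you gather performance metrics for the new application under full production load before launch, without user-facing risk.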

You are reviewing and updating your Cloud Build steps to adhere to Google-recommended practices. Currently, your build steps include:

1. Pull the source code from a source repository.

2. Build a container image.

3. Upload the built image to Artifact Registry.

You need to add a step to perform a vulnerability scan of the built container image, and you want the results of the scan to be available to your deployment pipeline running in Google Cloud. You want to minimize changes that could disrupt other teams' processes. What should you do?

A.

Enable Binary Authorization, and configure it to attest that no vulnerabilities exist in a container image.

B.

Enable the Container Scanning API in Artifact Registry, and scan the built container images for vulnerabilities.

C.

Upload the built container images to your Docker Hub instance, and scan them for vulnerabilities.

D.

Add Artifact Registry to your Aqua Security instance, and scan the built container images for vulnerabilities.

You recently migrated an on-premises monolithic application to a microservices application on Google Kubernetes Engine (GKE). The application has dependencies on backend services on-premises, including a CRM system and a MySQL database that contains personally identifiable information (PII). The backend services must remain on-premises to meet regulatory requirements.

You established a Cloud VPN connection between your on-premises data center and Google Cloud. You notice that some requests from your microservices application on GKE to the backend services are failing due to latency issues caused by fluctuating bandwidth, which is causing the application to crash. How should you address the latency issues?

A.

Use Memorystore to cache frequently accessed PII data from the on-premises MySQL database

B.

Use Istio to create a service mesh that includes the microservices on GKE and the on-premises services

C.

Increase the number of Cloud VPN tunnels for the connection between Google Cloud and the on-premises services

D.

Decrease the network layer packet size by decreasing the Maximum Transmission Unit (MTU) value from its default value on Cloud VPN

You are designing a chat room application that will host multiple rooms and retain the message history for each room. You have selected Firestore as your database. How should you represent the data in Firestore?

A.

Create a collection for the rooms. For each room, create a document that lists the contents of the messages.

B.

Create a collection for the rooms. For each room, create a collection that contains a document for each message.

C.

Create a collection for the rooms. For each room, create a document that contains a collection of documents, each of which contains a message.

D.

Create a collection for the rooms, and create a document for each room. Create a separate collection for messages, with one document per message. Each room's document contains a list of references to the messages.
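The layout in option B, a messages subcollection under each room document, keeps every message a separately addressable document and lets history grow without hitting per-document size limits. Firestore document paths for this layout follow the pattern below (the IDs are hypothetical):

```python
def message_path(room_id: str, message_id: str) -> str:
    # Firestore path shape: collection / document / subcollection / document
    return f"rooms/{room_id}/messages/{message_id}"

print(message_path("general", "msg001"))  # rooms/general/messages/msg001
```

A client can then query a single room's history by reading the `messages` subcollection of that room's document, ordered by a timestamp field.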

Your company has a BigQuery dataset named "Master" that keeps information about employee travel and expenses. This information is organized by employee department. That means employees should only be able to view information for their department. You want to apply a security framework to enforce this requirement with the minimum number of steps.

What should you do?

A.

Create a separate dataset for each department. Create a view with an appropriate WHERE clause to select records from a particular dataset for the specific department. Authorize this view to access records from your Master dataset. Give employees the permission to this department-specific dataset.

B.

Create a separate dataset for each department. Create a data pipeline for each department to copy appropriate information from the Master dataset to the specific dataset for the department. Give employees the permission to this department-specific dataset.

C.

Create a dataset named Master dataset. Create a separate view for each department in the Master dataset. Give employees access to the specific view for their department.

D.

Create a dataset named Master dataset. Create a separate table for each department in the Master dataset. Give employees access to the specific table for their department.

You are evaluating developer tools to help drive Google Kubernetes Engine adoption and integration with your development environment, which includes VS Code and IntelliJ. What should you do?

A.

Use Cloud Code to develop applications.

B.

Use the Cloud Shell integrated Code Editor to edit code and configuration files.

C.

Use a Cloud Notebook instance to ingest and process data and deploy models.

D.

Use Cloud Shell to manage your infrastructure and applications from the command line.

You are using Cloud Build to create a new Docker image on each source code commit to a Cloud Source Repositories repository. Your application is built on every commit to the master branch. You want to release specific commits made to the master branch in an automated method. What should you do?

A.

Manually trigger the build for new releases.

B.

Create a build trigger on a Git tag pattern. Use a Git tag convention for new releases.

C.

Create a build trigger on a Git branch name pattern. Use a Git branch naming convention for new releases.

D.

Commit your source code to a second Cloud Source Repositories repository with a second Cloud Build trigger. Use this repository for new releases only.

You want to notify on-call engineers about a service degradation in production while minimizing development time.

What should you do?

A.

Use Cloud Function to monitor resources and raise alerts.

B.

Use Cloud Pub/Sub to monitor resources and raise alerts.

C.

Use Stackdriver Error Reporting to capture errors and raise alerts.

D.

Use Stackdriver Monitoring to monitor resources and raise alerts.

You are developing a JPEG image-resizing API hosted on Google Kubernetes Engine (GKE). Callers of the service will exist within the same GKE cluster. You want clients to be able to get the IP address of the service.

What should you do?

A.

Define a GKE Service. Clients should use the name of the A record in Cloud DNS to find the service's cluster IP address.

B.

Define a GKE Service. Clients should use the service name in the URL to connect to the service.

C.

Define a GKE Endpoint. Clients should get the endpoint name from the appropriate environment variable in the client container.

D.

Define a GKE Endpoint. Clients should get the endpoint name from Cloud DNS.

You are developing a new public-facing application that needs to retrieve specific properties in the metadata of users’ objects in their respective Cloud Storage buckets. Due to privacy and data residency requirements, you must retrieve only the metadata and not the object data. You want to maximize the performance of the retrieval process. How should you retrieve the metadata?

A.

Use the patch method.

B.

Use the compose method.

C.

Use the copy method.

D.

Use the fields request parameter.
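Option D's partial response: a metadata `GET` on the object resource (not a media download) with the `fields` parameter returns only the requested properties. A sketch of the request URL against the Cloud Storage JSON API (the bucket and object names are hypothetical):

```python
from urllib.parse import quote, urlencode

def metadata_url(bucket: str, obj: str, fields: str) -> str:
    # GET on the object resource; `fields` restricts the JSON response
    # to the named metadata properties, keeping the payload small.
    base = (
        "https://storage.googleapis.com/storage/v1/b/"
        f"{quote(bucket, safe='')}/o/{quote(obj, safe='')}"
    )
    return f"{base}?{urlencode({'fields': fields})}"

print(metadata_url("my-bucket", "photos/cat.jpg", "name,size,contentType"))
```

Because no `alt=media` download is involved, only metadata leaves the bucket, which satisfies both the privacy constraint and the performance goal.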

Copyright © 2014-2025 Solution2Pass. All Rights Reserved