

Professional-Cloud-Network-Engineer Google Cloud Certified - Professional Cloud Network Engineer Free Practice Exam Questions (2025 Updated)

Prepare effectively for your Google Professional-Cloud-Network-Engineer Google Cloud Certified - Professional Cloud Network Engineer certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2025, ensuring you have the most current resources to build confidence and succeed on your first attempt.

Your company acquired a new division. The new division’s network team requires complete control over their networking infrastructure. You need to extend your existing Google Cloud network infrastructure, that consists of a single VPC, to allow workloads from all divisions to communicate with each other. You want to avoid incurring extra costs and granting unnecessary permissions to the new division’s networking team. What should you do?

A.

• Create a new project for the new division's network team.

• Create a new VPC within the new project.

• Establish a VPC peering between your existing VPC and the new division’s VPC.

• Grant roles/compute.networkAdmin on the newly created project to the new division’s network team group.

B.

• Create a new project for the new division’s network team.

• Create a new VPC within the new project.

• Establish a VPC peering between your existing VPC and the new division’s VPC.

• Create a new subnet dedicated to the new division’s workloads.

• Grant roles/compute.networkUser on the new project to the new division's network team group.

C.

• Create a new project for the new division's network team.

• Create a new VPC within the new project.

• Establish a VPN connection between your existing VPC and the new division's VPC.

• Grant roles/compute.networkAdmin on the newly created project to the new division’s network team group.

D.

• Ensure that the project hosting the existing network infrastructure is enabled as a host project.

• Create a new subnet dedicated to the new division’s workloads in the existing VPC.

• Grant roles/compute.networkUser on the newly created subnet to the new division’s network team group.
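For reference, the Shared VPC approach described in option D can be sketched with gcloud. This is a minimal, hypothetical example; the project, network, subnet, region, IP range, and group names are placeholders, not values from the question:

    # Enable the existing project as a Shared VPC host project (assumed project ID)
    gcloud compute shared-vpc enable existing-net-project

    # Create a subnet dedicated to the new division's workloads in the existing VPC
    gcloud compute networks subnets create new-division-subnet \
        --project=existing-net-project --network=existing-vpc \
        --region=us-central1 --range=10.100.0.0/20

    # Grant networkUser on only that subnet to the new division's network team
    gcloud compute networks subnets add-iam-policy-binding new-division-subnet \
        --project=existing-net-project --region=us-central1 \
        --member="group:new-division-net-team@example.com" \
        --role="roles/compute.networkUser"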

You need to configure a static route to an on-premises resource behind a Cloud VPN gateway that is configured for policy-based routing using the gcloud command.

Which next hop should you choose?

A.

The default internet gateway

B.

The IP address of the Cloud VPN gateway

C.

The name and region of the Cloud VPN tunnel

D.

The IP address of the instance on the remote side of the VPN tunnel
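Because a policy-based (Classic) Cloud VPN tunnel is a named regional resource, a static route can reference it directly as the next hop. A minimal sketch of the gcloud command, with hypothetical network, destination range, tunnel, and region values:

    gcloud compute routes create route-to-onprem \
        --network=my-vpc \
        --destination-range=192.168.10.0/24 \
        --next-hop-vpn-tunnel=my-vpn-tunnel \
        --next-hop-vpn-tunnel-region=us-central1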

Question:

You reviewed the user behavior for your main application, which uses an external global Application Load Balancer, and found that the backend servers were overloaded due to erratic spikes in client requests. You need to limit concurrent sessions and return an HTTP 429 "Too Many Requests" response back to the client while following Google-recommended practices. What should you do?

A.

Create a Cloud Armor security policy, and apply the predefined Open Worldwide Application Security Project (OWASP) rules to automatically implement the rate limit per client IP address.

B.

Configure the load balancer to accept only the defined amount of requests per client IP address, increase the backend servers to support more traffic, and redirect traffic to a different backend to burst traffic.

C.

Configure a VM with Linux, implement the rate limit through iptables, and use a firewall rule to send an HTTP 429 response to the client application.

D.

Create a Cloud Armor security policy, and associate the policy with the load balancer. Configure the security policy's settings as follows: action: throttle, conform-action: allow, exceed-action: deny-429.
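Option D maps directly onto a Cloud Armor rate-limiting rule. A sketch of the gcloud configuration, assuming hypothetical policy and backend service names and an illustrative threshold of 100 requests per 60 seconds per client IP:

    gcloud compute security-policies create rate-limit-policy

    gcloud compute security-policies rules create 1000 \
        --security-policy=rate-limit-policy \
        --src-ip-ranges="*" \
        --action=throttle \
        --rate-limit-threshold-count=100 \
        --rate-limit-threshold-interval-sec=60 \
        --conform-action=allow \
        --exceed-action=deny-429 \
        --enforce-on-key=IP

    # Attach the policy to the load balancer's backend service
    gcloud compute backend-services update web-backend-service \
        --security-policy=rate-limit-policy --global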

Question:

Your organization has distributed geographic applications with significant data volumes. You need to create a design that exposes the HTTPS workloads globally and keeps traffic costs to a minimum. What should you do?

A.

Deploy a regional external Application Load Balancer with Standard Network Service Tier.

B.

Deploy a regional external Application Load Balancer with Premium Network Service Tier.

C.

Deploy a global external proxy Network Load Balancer with Standard Network Service Tier.

D.

Deploy a global external Application Load Balancer with Premium Network Service Tier.

Your company runs an enterprise platform on-premises using virtual machines (VMs). Your internet customers have created tens of thousands of DNS domains pointing to your public IP addresses, which are allocated to the VMs. Typically, your customers hard-code your IP addresses in their DNS records. You are now planning to migrate the platform to Compute Engine, and you want to use Bring Your Own IP (BYOIP). You want to minimize disruption to the platform. What should you do?

A.

Create a VPC and request static external IP addresses from Google Cloud. Assign the IP addresses to the Compute Engine instances. Notify your customers of the new IP addresses so they can update their DNS records.

B.

Verify ownership of your IP addresses. After the verification, Google Cloud advertises and provisions the IP prefix for you. Assign the IP addresses to the Compute Engine instances.

C.

Create a VPC with the same IP address range as your on-premises network. Assign the IP addresses to the Compute Engine instances.

D.

Verify ownership of your IP addresses. Use live migration to import the prefix. Assign the IP addresses to the Compute Engine instances.
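The BYOIP workflow referenced in these options starts by registering and verifying a public advertised prefix, which Google Cloud can then advertise and provision. A sketch of that first step, using a hypothetical /24 range and verification address:

    gcloud compute public-advertised-prefixes create my-byoip-prefix \
        --ipv4-range=203.0.113.0/24 \
        --dns-verification-ip=203.0.113.1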

Your company's on-premises office is connected to Google Cloud using HA VPN. The security team will soon enable VPC Service Controls. You need to create a plan with minimal configuration adjustments, so clients at the office will still be able to privately call the Google APIs and be protected by VPC Service Controls. What should you do?

A.

Create a design with a DNS configuration that resolves the Google APIs to 199.36.153.4/30; advertise 199.36.153.4/30 from Google Cloud to the on-premises routers; add an access level to authorize the on-premises network to access the APIs.

B.

Create a design with a DNS configuration that resolves the Google APIs to 199.36.153.8/30; advertise 199.36.153.8/30 from Google Cloud to the on-premises routers.

C.

Create a design with a DNS configuration that resolves the Google APIs to 199.36.153.8/30; advertise 199.36.153.8/30 from Google Cloud to the on-premises routers; add an access level to authorize the on-premises network to access the APIs.

D.

Create a design with a DNS configuration that resolves the Google APIs to 199.36.153.4/30; advertise 199.36.153.4/30 from Google Cloud to the on-premises routers.
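For context, 199.36.153.4/30 is the restricted.googleapis.com range (the one tied to VPC Service Controls), while 199.36.153.8/30 is private.googleapis.com. A sketch of the private DNS portion of such a design, assuming a hypothetical VPC name (the route advertisement over the HA VPN and any access level are configured separately):

    gcloud dns managed-zones create restricted-apis \
        --description="Maps googleapis.com to restricted.googleapis.com" \
        --dns-name=googleapis.com. \
        --visibility=private \
        --networks=my-vpc

    gcloud dns record-sets create restricted.googleapis.com. \
        --zone=restricted-apis --type=A --ttl=300 \
        --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7

    gcloud dns record-sets create "*.googleapis.com." \
        --zone=restricted-apis --type=CNAME --ttl=300 \
        --rrdatas=restricted.googleapis.com.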

You are deploying an application that runs on Compute Engine instances. You need to determine how to expose your application to a new customer. You must ensure that your application meets the following requirements:

• Maps multiple existing reserved external IP addresses to the instance

• Processes IP Encapsulating Security Payload (ESP) traffic

What should you do?

A.

Configure a target pool, and create protocol forwarding rules for each external IP address.

B.

Configure a backend service, and create an external network load balancer for each external IP address.

C.

Configure a target instance, and create a protocol forwarding rule for each external IP address to be mapped to the instance.

D.

Configure the Compute Engine instances' network interface external IP address from None to Ephemeral. Add as many external IP addresses as required.
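Protocol forwarding with a target instance allows multiple forwarding rules, one per reserved external IP address, to point at the same VM and supports non-TCP/UDP protocols such as ESP. A sketch with hypothetical instance, zone, and reserved-address names:

    gcloud compute target-instances create my-target-instance \
        --instance=my-instance --zone=us-central1-a

    # Repeat for each reserved external IP address
    gcloud compute forwarding-rules create esp-rule-1 \
        --region=us-central1 --ip-protocol=ESP \
        --address=reserved-ip-1 \
        --target-instance=my-target-instance \
        --target-instance-zone=us-central1-a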

Your company has defined a resource hierarchy that includes a parent folder with subfolders for each department. Each department defines their respective project and VPC in the assigned folder and has the appropriate permissions to create Google Cloud firewall rules. The VPCs should not allow traffic to flow between them. You need to block all traffic from any source, including other VPCs, and delegate only the intra-VPC firewall rules to the respective departments. What should you do?

A.

Create a VPC firewall rule in each VPC to block traffic from any source, with priority 0.

B.

Create a VPC firewall rule in each VPC to block traffic from any source, with priority 1000.

C.

Create two hierarchical firewall policies per department's folder with two rules in each: a high-priority rule that matches traffic from the private CIDRs assigned to the respective VPC and sets the action to allow, and another lower-priority rule that blocks traffic from any other source.

D.

Create two hierarchical firewall policies per department's folder with two rules in each: a high-priority rule that matches traffic from the private CIDRs assigned to the respective VPC and sets the action to goto_next, and another lower-priority rule that blocks traffic from any other source.
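Options C and D both describe hierarchical firewall policies attached to each department's folder; the difference is whether the higher-priority rule uses allow or goto_next (goto_next passes the matching traffic down to the next level of evaluation, ultimately the VPC firewall rules each department manages). A sketch of the gcloud steps for one folder, with hypothetical names and a placeholder CIDR:

    # Create the policy under the department's folder
    gcloud compute firewall-policies create \
        --short-name=dept-a-policy --folder=FOLDER_ID

    # Rules reference the numeric policy ID returned by the create command
    gcloud compute firewall-policies rules create 1000 \
        --firewall-policy=FIREWALL_POLICY_ID \
        --direction=INGRESS --action=goto_next \
        --src-ip-ranges=10.20.0.0/16 --layer4-configs=all

    gcloud compute firewall-policies rules create 2000 \
        --firewall-policy=FIREWALL_POLICY_ID \
        --direction=INGRESS --action=deny \
        --src-ip-ranges=0.0.0.0/0 --layer4-configs=all

    gcloud compute firewall-policies associations create \
        --firewall-policy=FIREWALL_POLICY_ID --folder=FOLDER_ID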

You decide to set up Cloud NAT. After completing the configuration, you find that one of your instances is not using the Cloud NAT for outbound NAT.

What is the most likely cause of this problem?

A.

The instance has been configured with multiple interfaces.

B.

An external IP address has been configured on the instance.

C.

You have created static routes that use RFC1918 ranges.

D.

The instance is accessible by a load balancer external IP address.
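Cloud NAT is only used for traffic from interfaces that have no external IP address of their own, so checking the instance's access configs is a quick diagnostic. A sketch with hypothetical instance and zone names:

    # Show any external IP attached to the instance's primary interface
    gcloud compute instances describe my-instance --zone=us-central1-a \
        --format="value(networkInterfaces[0].accessConfigs[0].natIP)"

    # Removing the external IP lets egress traffic use Cloud NAT instead
    gcloud compute instances delete-access-config my-instance \
        --zone=us-central1-a --access-config-name="External NAT"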

Your company recently migrated to Google Cloud in a single region. You configured separate Virtual Private Cloud (VPC) networks for two departments: Department A and Department B. Department A has requested access to resources that are part of Department B's VPC. You need to configure the traffic from private IP addresses to flow between the VPCs using multi-NIC virtual machines (VMs) to meet security requirements. Your configuration must also:

• Support both TCP and UDP protocols

• Provide fully automated failover

• Include health-checks

• Require minimal manual intervention in the client VMs

Which approach should you take?

A.

Create the VMs in the same zone, and configure static routes with IP addresses as next hops.

B.

Create the VMs in different zones, and configure static routes with instance names as next hops.

C.

Create an instance template and a managed instance group. Configure a single internal load balancer, and define a custom static route with the internal TCP/UDP load balancer as the next hop.

D.

Create an instance template and a managed instance group. Configure two separate internal TCP/UDP load balancers, one for each protocol (TCP and UDP), and configure the client VMs to use the internal load balancers' virtual IP addresses.
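Option C depends on the ability to use an internal TCP/UDP (passthrough) load balancer as the next hop of a custom static route, which carries all protocols and provides automated, health-checked failover across the backend VMs. A sketch of the route, assuming hypothetical network, destination, forwarding-rule, and region values:

    gcloud compute routes create route-to-dept-b \
        --network=dept-a-vpc \
        --destination-range=10.30.0.0/16 \
        --next-hop-ilb=dept-gateway-fwd-rule \
        --next-hop-ilb-region=us-central1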

Your organization has resources in two different VPCs, each in different Google Cloud projects, and requires connectivity between the resources in the two VPCs. You have already determined that there is no IP address overlap; however, one VPC uses privately used public IP (PUPI) ranges. You would like to enable connectivity between these resources by using a lower cost and higher performance method. What should you do?

A.

Create an HA VPN between the two VPCs that includes the PUPI ranges in the custom route advertisements of the Cloud Router. Create the necessary ingress VPC firewall rules that target the specific resources by using IP ranges as the source filter.

B.

Create a VPC Network Peering connection between the two VPCs that allows the export and import of custom routes for public IP addresses. Create the necessary ingress VPC firewall rules that target the specific resources by using service accounts as the source filter.

C.

Create a VPC Network Peering connection between the two VPCs that allows the export and import of subnet routes with public IP addresses. Create the necessary ingress VPC firewall rules that target the specific resources by using IP ranges as the source filter.

D.

Create a VPC Network Peering connection between the two VPCs that allows the export and import of subnet routes with public IP addresses. Create the necessary ingress VPC firewall rules that target the specific resources by using network tags as the source filter.
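VPC Network Peering can carry privately used public IP (PUPI) subnet routes only if both sides explicitly export and import subnet routes with public IP addresses. A sketch of one side of the peering, with hypothetical project and network names (a mirrored command is required in the other project):

    gcloud compute networks peerings create peer-to-vpc-b \
        --network=vpc-a \
        --peer-project=project-b --peer-network=vpc-b \
        --export-subnet-routes-with-public-ip \
        --import-subnet-routes-with-public-ip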

In your Google Cloud organization, you have two folders: Dev and Prod. You want a scalable and consistent way to enforce the following firewall rules for all virtual machines (VMs) with minimal cost:

Port 8080 should always be open for VMs in the projects in the Dev folder.

Any traffic to port 8080 should be denied for all VMs in your projects in the Prod folder.

What should you do?

A.

Create and associate a firewall policy with the Dev folder with a rule to open port 8080. Create and associate a firewall policy with the Prod folder with a rule to deny traffic to port 8080.

B.

Create a Shared VPC for the Dev projects and a Shared VPC for the Prod projects. Create a VPC firewall rule to open port 8080 in the Shared VPC for Dev. Create a firewall rule to deny traffic to port 8080 in the Shared VPC for Prod. Deploy VMs to those Shared VPCs.

C.

In all VPCs for the Dev projects, create a VPC firewall rule to open port 8080. In all VPCs for the Prod projects, create a VPC firewall rule to deny traffic to port 8080.

D.

Use Anthos Config Connector to enforce a security policy to open port 8080 on the Dev VMs and deny traffic to port 8080 on the Prod VMs.
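If folder-level firewall policies are used (as in option A), the Dev and Prod behaviors become one rule in each of two policies, each policy associated with its folder as in the earlier hierarchical-policy sketch. A minimal illustration with placeholder policy IDs:

    # Dev folder policy: always allow TCP 8080
    gcloud compute firewall-policies rules create 1000 \
        --firewall-policy=DEV_POLICY_ID \
        --direction=INGRESS --action=allow \
        --layer4-configs=tcp:8080 --src-ip-ranges=0.0.0.0/0

    # Prod folder policy: always deny TCP 8080
    gcloud compute firewall-policies rules create 1000 \
        --firewall-policy=PROD_POLICY_ID \
        --direction=INGRESS --action=deny \
        --layer4-configs=tcp:8080 --src-ip-ranges=0.0.0.0/0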

You recently deployed your application in Google Cloud. You need to verify your Google Cloud network configuration before deploying your on-premises workloads. You want to confirm that your Google Cloud network configuration allows traffic to flow from your cloud resources to your on-premises network. This validation should also analyze and diagnose potential failure points in your Google Cloud network configurations without sending any data plane test traffic. What should you do?

A.

Use Network Intelligence Center's Connectivity Tests.

B.

Enable Packet Mirroring on your application and send test traffic.

C.

Use Network Intelligence Center's Network Topology visualizations.

D.

Enable VPC Flow Logs and send test traffic.
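Connectivity Tests in Network Intelligence Center analyze the configuration (routes, firewall rules, hybrid connectivity) without sending data-plane traffic, and they can be created from gcloud as well as the console. A sketch with hypothetical project, VM, and on-premises destination values:

    gcloud network-management connectivity-tests create gcp-to-onprem \
        --source-instance=projects/my-project/zones/us-central1-a/instances/app-vm \
        --destination-ip-address=192.168.10.5 \
        --destination-port=443 \
        --protocol=TCP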

You are configuring an Application Load Balancer. The backend resides in your on-premises data center and is connected by Dedicated Interconnect. You need to ensure the load balancer can reference these on-premises resources. You do not want the traffic to traverse the internet at all. What should you do?

A.

Configure a Private Service Connect network endpoint group (NEG) as a backend service as part of the load balancer. Ensure firewalls are opened for the client source IPs.

B.

Configure a zonal network endpoint group (NEG) as a backend service as part of the load balancer. Ensure firewalls are opened for the client source IPs.

C.

Configure an internet network endpoint group (NEG) as a backend service as part of the load balancer. Ensure firewalls are opened for the proxy-only subnet.

D.

Configure a hybrid network endpoint group (NEG) as a backend service as part of the load balancer. Ensure firewalls are opened for the proxy-only subnet.
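A hybrid connectivity NEG (endpoint type NON_GCP_PRIVATE_IP_PORT) lets the load balancer reference IP:port endpoints that are reachable over Cloud Interconnect or VPN rather than over the internet. A sketch with hypothetical names and an assumed on-premises endpoint:

    gcloud compute network-endpoint-groups create onprem-neg \
        --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
        --zone=us-central1-a --network=my-vpc

    gcloud compute network-endpoint-groups update onprem-neg \
        --zone=us-central1-a \
        --add-endpoint="ip=192.168.10.5,port=443"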

Question:

Your organization is deploying a mission-critical application with components in different regions due to strict compliance requirements. There are latency issues between different applications that reside in us-central1 and us-east4. The application team suspects the Google Cloud network as the source of the excessive latency despite using the Premium Network Service Tier. You need to use Google-recommended practices with the least amount of effort to verify the inter-region latency by investigating network performance. What should you do?

A.

Set up the Performance Dashboard in Network Intelligence Center. Select the traffic type (cross-zonal), the metric (latency - RTT), the time period, the desired regions (us-central1 and us-east4), and the network tier.

B.

Enable VPC Flow Logs for the VPC. Identify major bottlenecks from the application level using Flow Analyzer.

C.

Configure two Linux VMs in each zone for each region. Install the application, and run a load test using each zone from different regions.

D.

Configure a VM with a probe in Network Intelligence Center in each zone for each region. Choose the traffic type (cross-zonal), metric (latency - RTT), desired regions (us-central1 and us-east4), and the network tier.

You are configuring an HA VPN connection between your Virtual Private Cloud (VPC) and on-premises network. The VPN gateway is named VPN_GATEWAY_1. You need to restrict VPN tunnels created in the project to only connect to your on-premises VPN public IP address: 203.0.113.1/32. What should you do?

A.

Configure a firewall rule accepting 203.0.113.1/32, and set a target tag equal to VPN_GATEWAY_1.

B.

Configure the Resource Manager constraint constraints/compute.restrictVpnPeerIPs to use an allowList consisting of only the 203.0.113.1/32 address.

C.

Configure a Google Cloud Armor security policy, and create a policy rule to allow 203.0.113.1/32.

D.

Configure an access control list on the peer VPN gateway to deny all traffic except 203.0.113.1/32, and attach it to the primary external interface.
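Option B refers to the list constraint constraints/compute.restrictVpnPeerIPs. One way to set an allowed-values policy from gcloud is sketched below, assuming a hypothetical project ID (the same constraint can also be set with an org-policy YAML file):

    # Allow only the on-premises peer IP for VPN tunnels created in this project
    gcloud resource-manager org-policies allow \
        compute.restrictVpnPeerIPs 203.0.113.1 \
        --project=my-project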

You are adding steps to a working automation that uses a service account to authenticate. You need to give the automation the ability to retrieve files from a Cloud Storage bucket. Your organization requires using the least privilege possible.

What should you do?

A.

Grant the compute.instanceAdmin role to your user account.

B.

Grant the iam.serviceAccountUser role to your user account.

C.

Grant the read-only privilege to the service account for the Cloud Storage bucket.

D.

Grant the cloud-platform privilege to the service account for the Cloud Storage bucket.
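A bucket-scoped, read-only grant to the service account (as option C describes) can be expressed with roles/storage.objectViewer. A sketch with hypothetical bucket, project, and service account names:

    gcloud storage buckets add-iam-policy-binding gs://my-automation-bucket \
        --member="serviceAccount:automation-sa@my-project.iam.gserviceaccount.com" \
        --role="roles/storage.objectViewer"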

Your organization has a single project that contains multiple Virtual Private Clouds (VPCs). You need to secure API access to your Cloud Storage buckets and BigQuery datasets by allowing API access only from resources in your corporate public networks. What should you do?

A.

Create an access context policy that allows your VPC and corporate public network IP ranges, and then attach the policy to Cloud Storage and BigQuery.

B.

Create a VPC Service Controls perimeter for your project with an access context policy that allows your corporate public network IP ranges.

C.

Create a firewall rule to block API access to Cloud Storage and BigQuery from unauthorized networks.

D.

Create a VPC Service Controls perimeter for each VPC with an access context policy that allows your corporate public network IP ranges.
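A VPC Service Controls perimeter is defined per project (not per VPC) and can be combined with an access level that lists the corporate public IP ranges. A sketch with hypothetical names, a placeholder access policy ID, and an assumed corporate range; corp-ips.yaml would contain a basic access-level spec such as a single ipSubnetworks condition:

    # corp-ips.yaml (hypothetical contents):
    # - ipSubnetworks:
    #   - 203.0.113.0/24

    gcloud access-context-manager levels create corp_network \
        --policy=ACCESS_POLICY_ID --title="Corporate network" \
        --basic-level-spec=corp-ips.yaml

    gcloud access-context-manager perimeters create storage_bq_perimeter \
        --policy=ACCESS_POLICY_ID --title="Storage and BigQuery" \
        --resources=projects/PROJECT_NUMBER \
        --restricted-services=storage.googleapis.com,bigquery.googleapis.com \
        --access-levels=corp_network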

All the instances in your project are configured with the custom metadata enable-oslogin value set to FALSE and to block project-wide SSH keys. None of the instances are set with any SSH key, and no project-wide SSH keys have been configured. Firewall rules are set up to allow SSH sessions from any IP address range. You want to SSH into one instance.

What should you do?

A.

Open Cloud Shell and SSH into the instance using gcloud compute ssh.

B.

Set the custom metadata enable-oslogin to TRUE, and SSH into the instance using a third-party tool like putty or ssh.

C.

Generate a new SSH key pair. Verify the format of the private key and add it to the instance. SSH into the instance using a third-party tool like putty or ssh.

D.

Generate a new SSH key pair. Verify the format of the public key and add it to the project. SSH into the instance using a third-party tool like putty or ssh.

You have several microservices running in a private subnet in an existing Virtual Private Cloud (VPC). You need to create additional serverless services that use Cloud Run and Cloud Functions to access the microservices. The network traffic volume between your serverless services and private microservices is low. However, each serverless service must be able to communicate with any of your microservices. You want to implement a solution that minimizes cost. What should you do?

A.

Deploy your serverless services to the serverless VPC. Peer the serverless service VPC to the existing VPC. Configure firewall rules to allow traffic between the serverless services and your existing microservices.

B.

Create a serverless VPC access connector for each serverless service. Configure the connectors to allow traffic between the serverless services and your existing microservices.

C.

Deploy your serverless services to the existing VPC. Configure firewall rules to allow traffic between the serverless services and your existing microservices.

D.

Create a serverless VPC access connector. Configure the serverless service to use the connector for communication to the microservices.
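A single Serverless VPC Access connector in the existing VPC can be shared by multiple Cloud Run services and Cloud Functions, which keeps cost down for low traffic volumes. A sketch with hypothetical names, an assumed region, and an unused /28 range:

    gcloud compute networks vpc-access connectors create svc-connector \
        --region=us-central1 --network=existing-vpc \
        --range=10.8.0.0/28

    # Point a Cloud Run service at the connector (image URL is a placeholder)
    gcloud run deploy my-service \
        --image=us-docker.pkg.dev/my-project/repo/my-image \
        --region=us-central1 \
        --vpc-connector=svc-connector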
