
Google Professional Data Engineer (Professional-Data-Engineer) Free Practice Exam Questions (2025 Updated)

Prepare effectively for your Google Professional Data Engineer (Professional-Data-Engineer) certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2025, ensuring you have the most current resources to build confidence and succeed on your first attempt.


Your company’s on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?

A.

Put the data into Google Cloud Storage.

B.

Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.

C.

Tune the Cloud Dataproc cluster so that there is just enough disk for all data.

D.

Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.

You want to use a database of information about tissue samples to classify future tissue samples as either normal or mutated. You are evaluating an unsupervised anomaly detection method for classifying the tissue samples. Which two characteristics support this method? (Choose two.)

A.

There are very few occurrences of mutations relative to normal samples.

B.

There are roughly equal occurrences of both normal and mutated samples in the database.

C.

You expect future mutations to have different features from the mutated samples in the database.

D.

You expect future mutations to have similar features to the mutated samples in the database.

E.

You already have labels for which samples are mutated and which are normal in the database.
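For illustration, here is a minimal unsupervised anomaly detection sketch (assuming scikit-learn and a synthetic feature matrix; none of these names come from the exam). It relies on the property in option A: anomalies are rare relative to normal samples, so an outlier detector can flag them without any labels.

# Minimal sketch: rare mutated samples flagged as outliers, no labels used.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(990, 4))   # abundant normal samples
mutated = rng.normal(5.0, 1.0, size=(10, 4))   # very few anomalies
X = np.vstack([normal, mutated])

# contamination reflects the expected (small) fraction of anomalies
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
pred = model.predict(X)  # +1 = normal, -1 = anomaly
print((pred == -1).sum(), "samples flagged as anomalous")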

Your company is streaming real-time sensor data from their factory floor into Bigtable and they have noticed extremely poor performance. How should the row key be redesigned to improve Bigtable performance on queries that populate real-time dashboards?

A.

Use a row key of the form <timestamp>.

B.

Use a row key of the form <sensorid>.

C.

Use a row key of the form <timestamp>#<sensorid>.

D.

Use a row key of the form <sensorid>#<timestamp>.
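For illustration, a minimal sketch of writing with a <sensorid>#<timestamp>-style key (assuming the google-cloud-bigtable Python client; the project, instance, table, and column family names are placeholders). Leading with the sensor ID spreads writes across tablets instead of hotspotting on the current time range, and keeps each sensor's readings contiguous for dashboard scans.

# Sketch: sensor ID first, then a reversed timestamp so the newest reading
# for each sensor sorts first (suits latest-value dashboard queries).
import time
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("sensor-data")

def write_reading(sensor_id, value):
    reverse_ts = (2**63 - 1) - time.time_ns()
    row_key = f"{sensor_id}#{reverse_ts}".encode()
    row = table.direct_row(row_key)
    row.set_cell("metrics", b"value", str(value).encode())
    row.commit()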

Your company is running their first dynamic campaign, serving different offers by analyzing real-time data during the holiday season. The data scientists are collecting terabytes of data that rapidly grows every hour during their 30-day campaign. They are using Google Cloud Dataflow to preprocess the data and collect the feature (signals) data that is needed for the machine learning model in Google Cloud Bigtable. The team is observing suboptimal performance with reads and writes of their initial load of 10 TB of data. They want to improve this performance while minimizing cost. What should they do?

A.

Redefine the schema by evenly distributing reads and writes across the row space of the table.

B.

The performance issue should be resolved over time as the size of the Bigtable cluster is increased.

C.

Redesign the schema to use a single row key to identify values that need to be updated frequently in the cluster.

D.

Redesign the schema to use row keys based on numeric IDs that increase sequentially per user viewing the offers.
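One common way to implement option A is to "salt" the row key with a hash-derived prefix, sketched below (NUM_BUCKETS, user_id, and timestamp_ms are illustrative names, not from the exam). The trade-off is that readers must fan a scan out across all buckets.

# Sketch: a hash-based prefix spreads sequential user/offer writes evenly
# across the row space instead of piling onto one tablet.
import hashlib

NUM_BUCKETS = 32  # roughly match the number of tablet servers

def salted_row_key(user_id: str, timestamp_ms: int) -> bytes:
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % NUM_BUCKETS
    return f"{bucket:02d}#{user_id}#{timestamp_ms}".encode()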

You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that visualizations are not showing data that is less than 1 hour old. What should you do?

A.

Disable caching by editing the report settings.

B.

Disable caching in BigQuery by editing table details.

C.

Refresh your browser tab showing the visualizations.

D.

Clear your browser history for the past hour, then reload the tab showing the visualizations.

Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other’s data. You want to ensure appropriate access to the data. Which three steps should you take? (Choose three.)

A.

Load data into different partitions.

B.

Load data into a different dataset for each client.

C.

Put each client’s BigQuery dataset into a different table.

D.

Restrict a client’s dataset to approved users.

E.

Only allow a service account to access the datasets.

F.

Use the appropriate identity and access management (IAM) roles for each client’s users.
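As a sketch of options B, D, and F together (assuming the google-cloud-bigquery Python client; the project, dataset, and email are placeholders): each client gets its own dataset, and only that client's approved users are granted a role on it.

# Sketch: grant one client's analyst READER access on that client's dataset.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
dataset = client.get_dataset("my-project.client_a")

entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="userByEmail",
        entity_id="analyst@client-a.example.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])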

Your company’s customer and order databases are often under heavy load. This makes performing analytics against them difficult without harming operations. The databases are in a MySQL cluster, with nightly backups taken using mysqldump. You want to perform analytics with minimal impact on operations. What should you do?

A.

Add a node to the MySQL cluster and build an OLAP cube there.

B.

Use an ETL tool to load the data from MySQL into Google BigQuery.

C.

Connect an on-premises Apache Hadoop cluster to MySQL and perform ETL.

D.

Mount the backups to Google Cloud SQL, and then process the data using Google Cloud Dataproc.
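A sketch of the load step behind option B (assuming the google-cloud-bigquery Python client; the bucket, table, and autodetected schema are placeholder choices): an ETL tool exports MySQL tables to CSV in Cloud Storage, and the load job then runs entirely in BigQuery, off the production cluster.

# Sketch: load exported CSVs from Cloud Storage into a BigQuery table.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer schema from the exported CSV
)
load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/orders-*.csv",
    "my-project.analytics.orders",
    job_config=job_config,
)
load_job.result()  # analytics queries now never touch the MySQL cluster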

Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should you do?

A.

Create a Google Cloud Dataflow job to process the data.

B.

Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.

C.

Create a Hadoop cluster on Google Compute Engine that uses persistent disks.

D.

Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.

E.

Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.
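To illustrate option D, a minimal PySpark sketch (the bucket and paths are placeholders): with the Cloud Storage connector, existing Hadoop jobs address gs:// paths just as they did hdfs:// paths, so the data persists after the cluster is deleted.

# Sketch: read and write Cloud Storage instead of cluster-local HDFS.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("gcs-example").getOrCreate()
df = spark.read.text("gs://my-bucket/input/")   # was: hdfs://.../input/
df.write.parquet("gs://my-bucket/output/")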

Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data. How should you deduplicate the data most efficiently?

A.

Assign global unique identifiers (GUID) to each data entry.

B.

Compute the hash value of each data entry, and compare it with all historical data.

C.

Store each data entry as the primary key in a separate database and apply an index.

D.

Maintain a database table to store the hash value and other metadata for each data entry.
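A sketch of option D follows (plain Python; the field names and the in-memory dict standing in for a database table are illustrative assumptions).

# Sketch: hash-based deduplication. A real system would keep the hash table
# in a database; a dict stands in here for illustration.
import hashlib
import json

seen_hashes = {}  # hash -> metadata (e.g., first-seen transmission time)

def ingest(entry: dict) -> bool:
    """Returns True if the entry is new, False if it is a re-transmission."""
    # Hash only the payload fields, not the transmission timestamp, so that
    # re-transmissions of the same payload collide with the original.
    payload = {k: v for k, v in entry.items() if k != "transmitted_at"}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes[digest] = {"first_seen": entry.get("transmitted_at")}
    return True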

Which of the following is not possible using primitive roles?

A.

Give a user viewer access to BigQuery and owner access to Google Compute Engine instances.

B.

Give UserA owner access and UserB editor access for all datasets in a project.

C.

Give a user access to view all datasets in a project, but not run queries on them.

D.

Give GroupA owner access and GroupB editor access for all datasets in a project.

To run a TensorFlow training job on your own computer using Cloud Machine Learning Engine, what would your command start with?

A.

gcloud ml-engine local train

B.

gcloud ml-engine jobs submit training

C.

gcloud ml-engine jobs submit training local

D.

You can't run a TensorFlow program on your own computer using Cloud ML Engine.

Which Cloud Dataflow / Beam feature should you use to aggregate data in an unbounded data source every hour based on the time when the data entered the pipeline?

A.

An hourly watermark

B.

An event time trigger

C.

The withAllowedLateness method

D.

A processing time trigger
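In the Beam Python SDK, a processing-time trigger (answer D) can be expressed as below; the input PCollection "events" is assumed to already exist.

# Sketch: aggregate an unbounded source every hour based on when elements
# arrive in the pipeline (processing time), not their event timestamps.
import apache_beam as beam
from apache_beam.transforms import trigger, window

hourly = (
    events
    | beam.WindowInto(
        window.GlobalWindows(),
        trigger=trigger.Repeatedly(trigger.AfterProcessingTime(60 * 60)),
        accumulation_mode=trigger.AccumulationMode.DISCARDING,
    )
    | beam.CombineGlobally(sum).without_defaults()
)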

Which of these rules apply when you add preemptible workers to a Dataproc cluster (select 2 answers)?

A.

Preemptible workers cannot use persistent disk.

B.

Preemptible workers cannot store data.

C.

If a preemptible worker is reclaimed, then a replacement worker must be added manually.

D.

A Dataproc cluster cannot have only preemptible workers.

Dataproc clusters contain many configuration files. To update these files, you will need to use the --properties option. The format for the option is: file_prefix:property=_____.

A.

details

B.

value

C.

null

D.

id

Which of the following are examples of hyperparameters? (Select 2 answers.)

A.

Number of hidden layers

B.

Number of nodes in each hidden layer

C.

Biases

D.

Weights
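A short tf.keras sketch of the distinction (the layer counts and widths are illustrative): the number of hidden layers and the nodes per layer are chosen before training, while the weights and biases inside those layers are what training learns.

# Sketch: hyperparameters configure the model; parameters are learned.
import tensorflow as tf

NUM_HIDDEN_LAYERS = 2   # hyperparameter
NODES_PER_LAYER = 64    # hyperparameter

model = tf.keras.Sequential([tf.keras.Input(shape=(10,))])
for _ in range(NUM_HIDDEN_LAYERS):
    model.add(tf.keras.layers.Dense(NODES_PER_LAYER, activation="relu"))
model.add(tf.keras.layers.Dense(1))

model.compile(optimizer="adam", loss="mse")  # fitting learns weights/biases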

The YARN ResourceManager and the HDFS NameNode interfaces are available on a Cloud Dataproc cluster ____.

A.

application node

B.

conditional node

C.

master node

D.

worker node

Which of the following job types are supported by Cloud Dataproc (select 3 answers)?

A.

Hive

B.

Pig

C.

YARN

D.

Spark

Which of the following statements is NOT true regarding Bigtable access roles?

A.

Using IAM roles, you cannot give a user access to only one table in a project, rather than all tables in a project.

B.

To give a user access to only one table in a project, grant the user the Bigtable Editor role for that table.

C.

You can configure access control only at the project level.

D.

To give a user access to only one table in a project, you must configure access through your application.

You are planning to use Google's Dataflow SDK to analyze customer data such as displayed below. Your project requirement is to extract only the customer name from the data source and then write to an output PCollection.

Tom,555 X street

Tim,553 Y street

Sam, 111 Z street

Which operation is best suited for the above data processing requirement?

A.

ParDo

B.

Sink API

C.

Source API

D.

Data extraction
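For illustration, the ParDo answer in the Beam Python SDK applied to the sample lines above:

# Sketch: a DoFn that emits only the customer name from each input line.
import apache_beam as beam

class ExtractNameFn(beam.DoFn):
    def process(self, line):
        yield line.split(",")[0].strip()

with beam.Pipeline() as p:
    names = (
        p
        | beam.Create(["Tom,555 X street", "Tim,553 Y street", "Sam, 111 Z street"])
        | beam.ParDo(ExtractNameFn())
        | beam.Map(print)
    )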

Which of these operations can you perform from the BigQuery Web UI?

A.

Upload a file in SQL format.

B.

Load data with nested and repeated fields.

C.

Upload a 20 MB file.

D.

Upload multiple files using a wildcard.
