Associate-Data-Practitioner Google Cloud Associate Data Practitioner (ADP Exam) Free Practice Exam Questions (2025 Updated)

Prepare effectively for your Google Associate-Data-Practitioner Google Cloud Associate Data Practitioner (ADP Exam) certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2025, ensuring you have the most current resources to build confidence and succeed on your first attempt.

You want to build a model to predict the likelihood of a customer clicking on an online advertisement. You have historical data in BigQuery that includes features such as user demographics, ad placement, and previous click behavior. After training the model, you want to generate predictions on new data. Which model type should you use in BigQuery ML?

A.

Linear regression

B.

Matrix factorization

C.

Logistic regression

D.

K-means clustering
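
For context on the BigQuery ML options above, here is a minimal sketch of training a binary classifier with logistic regression and then scoring new rows through the Python client. The project, dataset, table, and column names (ads.training_data, clicked, and so on) are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

# Train a binary classifier; `clicked` (0/1) is the hypothetical label column.
client.query("""
    CREATE OR REPLACE MODEL `my_project.ads.click_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['clicked']) AS
    SELECT age_bucket, ad_placement, prior_clicks, clicked
    FROM `my_project.ads.training_data`
""").result()

# Generate predictions on new data with ML.PREDICT.
predictions = client.query("""
    SELECT *
    FROM ML.PREDICT(MODEL `my_project.ads.click_model`,
                    (SELECT age_bucket, ad_placement, prior_clicks
                     FROM `my_project.ads.new_impressions`))
""").result()

for row in predictions:
    print(row["predicted_clicked"], row["predicted_clicked_probs"])
```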

Your company is setting up an enterprise business intelligence platform. You need to limit data access between many different teams while following the Google-recommended approach. What should you do first?

A.

Create a separate Looker Studio report for each team, and share each report with the individuals within each team.

B.

Create one Looker Studio report with multiple pages, and add each team's data as a separate data source to the report.

C.

Create a Looker (Google Cloud core) instance, and create a separate dashboard for each team.

D.

Create a Looker (Google Cloud core) instance, and configure different Looker groups for each team.

You work for a global financial services company that trades stocks 24/7. You have a Cloud SQL for PostgreSQL user database. You need to identify a solution that ensures that the database is continuously operational, minimizes downtime, and will not lose any data in the event of a zonal outage. What should you do?

A.

Continuously back up the Cloud SQL instance to Cloud Storage. Create a Compute Engine instance with PostgreSQL in a different region. Restore the backup in the Compute Engine instance if a failure occurs.

B.

Create a read replica in another region. Promote the replica to primary if a failure occurs.

C.

Configure and create a high-availability Cloud SQL instance with the primary instance in zone A and a secondary instance in any zone other than zone A.

D.

Create a read replica in the same region but in a different zone.
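
As background on how a high-availability (regional) Cloud SQL instance is provisioned, the sketch below uses the Cloud SQL Admin API through the Google API client library. The project ID, instance name, region, and machine tier are placeholders, and the body fields should be checked against the current Admin API reference.

```python
from googleapiclient import discovery

# Cloud SQL Admin API client (application default credentials).
sqladmin = discovery.build("sqladmin", "v1beta4")

instance_body = {
    "name": "trading-db",                  # hypothetical instance name
    "databaseVersion": "POSTGRES_15",
    "region": "us-central1",
    "settings": {
        "tier": "db-custom-4-16384",
        "availabilityType": "REGIONAL",    # primary in one zone, standby in another
        "backupConfiguration": {"enabled": True},
    },
}

operation = sqladmin.instances().insert(project="my-project", body=instance_body).execute()
print(operation["name"])  # long-running operation name
```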

Your company has developed a website that allows users to upload and share video files. These files are most frequently accessed and shared when they are initially uploaded. Over time, the files are accessed and shared less frequently, although some old video files may remain very popular. You need to design a storage system that is simple and cost-effective. What should you do?

A.

Create a single-region bucket with custom Object Lifecycle Management policies based on upload date.

B.

Create a single-region bucket with Autoclass enabled.

C.

Create a single-region bucket. Configure a Cloud Scheduler job that runs every 24 hours and changes the storage class based on upload date.

D.

Create a single-region bucket with Archive as the default storage class.
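
For reference, the lifecycle-based approach in option A can be expressed with the Cloud Storage Python client as below; Autoclass, by contrast, is a single bucket setting that moves objects between storage classes automatically. The bucket name and age thresholds are made up for illustration.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("video-uploads")  # hypothetical bucket

# Transition objects to colder classes as they age:
# 30 days -> Nearline, 365 days -> Coldline.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=365)
bucket.patch()  # persist the updated lifecycle configuration
```

Note that purely age-based rules also demote old files that are still popular, which is the trade-off the Autoclass option is designed to avoid.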

Your company currently uses an on-premises network file system (NFS) and is migrating data to Google Cloud. You want to be able to control how much bandwidth is used by the data migration while capturing detailed reporting on the migration status. What should you do?

A.

Use a Transfer Appliance.

B.

Use Cloud Storage FUSE.

C.

Use Storage Transfer Service.

D.

Use gcloud storage commands.
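
To illustrate the Storage Transfer Service option, the sketch below creates an agent-based transfer job from a POSIX (NFS-mounted) directory to a bucket using the google-cloud-storage-transfer client; bandwidth caps are applied to the referenced agent pool, and per-job status and errors are reported by the service. All names (project, agent pool, paths, bucket) are placeholders, and the field names should be verified against the client library documentation.

```python
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

transfer_job = {
    "project_id": "my-project",
    "description": "On-prem NFS to Cloud Storage",
    "status": storage_transfer.TransferJob.Status.ENABLED,
    "transfer_spec": {
        # Transfer agents run on-premises and read from the mounted NFS path;
        # bandwidth limits are configured on this agent pool.
        "source_agent_pool_name": "projects/my-project/agentPools/nfs-pool",
        "posix_data_source": {"root_directory": "/mnt/nfs/export"},
        "gcs_data_sink": {"bucket_name": "my-landing-bucket"},
    },
}

job = client.create_transfer_job({"transfer_job": transfer_job})
print(job.name)
```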

You manage a web application that stores data in a Cloud SQL database. You need to improve the read performance of the application by offloading read traffic from the primary database instance. You want to implement a solution that minimizes effort and cost. What should you do?

A.

Use Cloud CDN to cache frequently accessed data.

B.

Store frequently accessed data in a Memorystore instance.

C.

Migrate the database to a larger Cloud SQL instance.

D.

Enable automatic backups, and create a read replica of the Cloud SQL instance.
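
For background on read replicas, this sketch creates one with the Cloud SQL Admin API; the application then sends read-only queries to the replica while writes continue to go to the primary. Instance names, region, and tier are placeholders.

```python
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

replica_body = {
    "name": "webapp-db-replica",        # hypothetical replica name
    "masterInstanceName": "webapp-db",  # existing primary instance
    "region": "us-central1",
    "settings": {"tier": "db-custom-2-8192"},
}

operation = sqladmin.instances().insert(project="my-project", body=replica_body).execute()
print(operation["name"])
```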

Your organization is building a new application on Google Cloud. Several data files will need to be stored in Cloud Storage. Your organization has approved only two specific cloud regions where these data files can reside. You need to determine a Cloud Storage bucket strategy that includes automated high availability. What should you do?

A.

Create a dual-region bucket, and upload the files to this bucket.

B.

Create a single-region bucket in each of the two regions, and use the gcloud storage command to replicate the data across the buckets in both regions.

C.

Create a multi-region bucket, and upload the files to this bucket.

D.

Create a single-region bucket in each of the two regions, and use Storage Transfer Service to replicate the data across the buckets in both regions.
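
As an illustration of the dual-region option, the snippet below creates a bucket in a predefined dual-region (NAM4, which pairs us-central1 and us-east1); a custom dual-region can be specified instead if the two approved regions differ. Bucket and project names are placeholders.

```python
from google.cloud import storage

client = storage.Client(project="my-project")

# "NAM4" is the predefined dual-region pairing us-central1 and us-east1.
bucket = client.create_bucket("approved-data-files", location="NAM4")
print(bucket.location, bucket.location_type)  # NAM4, dual-region
```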

Your organization sends IoT event data to a Pub/Sub topic. Subscriber applications read and perform transformations on the messages before storing them in the data warehouse. During particularly busy times when more data is being written to the topic, you notice that the subscriber applications are not acknowledging messages within the deadline. You need to modify your pipeline to handle these activity spikes and continue to process the messages. What should you do?

A.

Retry messages until they are acknowledged.

B.

Implement flow control on the subscribers.

C.

Forward unacknowledged messages to a dead-letter topic.

D.

Seek back to the last acknowledged message.
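
For context on the flow-control option, the Pub/Sub Python client lets a subscriber cap how many outstanding messages or bytes it pulls at once, so spikes queue up in the backlog instead of overwhelming the workers. The subscription path and limits below are illustrative.

```python
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "iot-events-sub")

def callback(message):
    # Transform and store the message here (processing omitted in this sketch).
    print(f"Processed {message.message_id}")
    message.ack()

# Hold at most 100 messages / 50 MB in flight per subscriber client; the
# backlog absorbs the rest until the workers catch up.
flow_control = pubsub_v1.types.FlowControl(max_messages=100, max_bytes=50 * 1024 * 1024)

future = subscriber.subscribe(subscription_path, callback=callback, flow_control=flow_control)
future.result()  # blocks; call future.cancel() on shutdown
```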

You are a database administrator managing sales transaction data by region stored in a BigQuery table. You need to ensure that each sales representative can only see the transactions in their region. What should you do?

A.

Add a policy tag in BigQuery.

B.

Create a row-level access policy.

C.

Create a data masking rule.

D.

Grant the appropriate IAM permissions on the dataset.
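
For reference, a BigQuery row-level access policy is a DDL statement; the sketch below (with made-up table, column, and principal names) grants one sales group visibility into only its own region's rows.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Each sales group sees only the rows for its region (names are hypothetical).
client.query("""
    CREATE ROW ACCESS POLICY west_region_only
    ON `my_project.sales.transactions`
    GRANT TO ('group:sales-west@example.com')
    FILTER USING (region = 'WEST')
""").result()
```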

You work for a healthcare company. You have a daily ETL pipeline that extracts patient data from a legacy system, transforms it, and loads it into BigQuery for analysis. The pipeline currently runs manually using a shell script. You want to automate this process and add monitoring to ensure pipeline observability and troubleshooting insights. You want one centralized solution, using open-source tooling, without rewriting the ETL code. What should you do?

A.

Create a directed acyclic graph (DAG) in Cloud Composer to orchestrate the pipeline and trigger it daily. Monitor the pipeline's execution using the Apache Airflow web interface and Cloud Monitoring.

B.

Configure Cloud Dataflow to implement the ETL pipeline, and use Cloud Scheduler to trigger the Dataflow pipeline daily. Monitor the pipeline's execution using the Dataflow job monitoring interface and Cloud Monitoring.

C.

Use Cloud Scheduler to trigger a Dataproc job to execute the pipeline daily. Monitor the job's progress using the Dataproc job web interface and Cloud Monitoring.

D.

Create a Cloud Run function that runs the pipeline daily. Monitor the function's execution using Cloud Monitoring.
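
To illustrate the Cloud Composer option, a DAG can wrap the existing shell script without rewriting the ETL logic; the DAG ID, file path, and schedule below are assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_patient_etl",
    schedule_interval="@daily",
    start_date=datetime(2025, 1, 1),
    catchup=False,
) as dag:
    # Trailing space keeps Airflow from treating the .sh path as a Jinja template file.
    run_etl = BashOperator(
        task_id="run_legacy_etl_script",
        bash_command="bash /home/airflow/gcs/data/etl_pipeline.sh ",
    )
```

Task status, logs, and retries then show up in the Airflow web interface, and Composer environment metrics flow into Cloud Monitoring.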

You have an existing weekly Storage Transfer Service transfer job from Amazon S3 to a Nearline Cloud Storage bucket in Google Cloud. Each week, the job moves a large number of relatively small files. As the number of files to be transferred each week has grown over time, you are at risk of no longer completing the transfer in the allocated time frame. You need to decrease the total transfer time by replacing the process. Your solution should minimize costs where possible. What should you do?

A.

Create a transfer job using the Google Cloud CLI, and specify the Standard storage class with the --custom-storage-class flag.

B.

Create parallel transfer jobs using include and exclude prefixes.

C.

Create a batch Dataflow job that is scheduled weekly to migrate the data from Amazon S3 to Cloud Storage.

D.

Create an agent-based transfer job that utilizes multiple transfer agents on Compute Engine instances.
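
As a sketch of the prefix-based approach, multiple Storage Transfer Service jobs can each be scoped to a disjoint slice of the S3 bucket with include_prefixes, letting the transfers run in parallel. Bucket names, prefixes, and the project ID are placeholders; verify the field names against the google-cloud-storage-transfer documentation.

```python
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

# One job per prefix shard; the jobs can then run in parallel.
for prefix in ["logs/2025/01/", "logs/2025/02/", "logs/2025/03/"]:
    client.create_transfer_job({
        "transfer_job": {
            "project_id": "my-project",
            "description": f"S3 to Nearline shard {prefix}",
            "status": storage_transfer.TransferJob.Status.ENABLED,
            "transfer_spec": {
                # AWS credentials (access key or role ARN) omitted for brevity.
                "aws_s3_data_source": {"bucket_name": "my-s3-bucket"},
                "gcs_data_sink": {"bucket_name": "my-nearline-bucket"},
                "object_conditions": {"include_prefixes": [prefix]},
            },
        }
    })
```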

You are working on a project that requires analyzing daily social media data. You have 100 GB of JSON formatted data stored in Cloud Storage that keeps growing.

You need to transform and load this data into BigQuery for analysis. You want to follow the Google-recommended approach. What should you do?

A.

Manually download the data from Cloud Storage. Use a Python script to transform and upload the data into BigQuery.

B.

Use Cloud Run functions to transform and load the data into BigQuery.

C.

Use Dataflow to transform the data and write the transformed data to BigQuery.

D.

Use Cloud Data Fusion to transfer the data into BigQuery raw tables, and use SQL to transform it.
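
For context on the Dataflow option, an Apache Beam pipeline (Python SDK) that reads the JSON files from Cloud Storage, transforms each record, and writes to BigQuery might look like the sketch below. It assumes newline-delimited JSON; the bucket, table, schema, and the transform itself are hypothetical.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def transform(record):
    # Hypothetical cleanup: keep only the fields the analysis needs.
    return {"user_id": record.get("user_id"), "text": record.get("text"),
            "posted_at": record.get("posted_at")}


options = PipelineOptions(runner="DataflowRunner", project="my-project",
                          region="us-central1", temp_location="gs://my-bucket/tmp")

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read JSON lines" >> beam.io.ReadFromText("gs://my-bucket/social/*.json")
        | "Parse" >> beam.Map(json.loads)
        | "Transform" >> beam.Map(transform)
        | "Write to BigQuery" >> beam.io.WriteToBigQuery(
            "my-project:social.posts",
            schema="user_id:STRING,text:STRING,posted_at:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```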

Your retail company collects customer data from various sources:

Online transactions: Stored in a MySQL database

Customer feedback: Stored as text files on a company server

Social media activity: Streamed in real-time from social media platforms

You are designing a data pipeline to extract this data. Which Google Cloud storage system(s) should you select for further analysis and ML model training?

A.

1. Online transactions: Cloud Storage

2. Customer feedback: Cloud Storage

3. Social media activity: Cloud Storage

B.

1. Online transactions: BigQuery

2. Customer feedback: Cloud Storage

3. Social media activity: BigQuery

C.

1. Online transactions: Bigtable

2. Customer feedback: Cloud Storage

3. Social media activity: Cloud SQL for MySQL

D.

1. Online transactions: Cloud SQL for MySQL

2. Customer feedback: BigQuery

3. Social media activity: Cloud Storage

You are building a batch data pipeline to process 100 GB of structured data from multiple sources for daily reporting. You need to transform and standardize the data prior to loading the data to ensure that it is stored in a single dataset. You want to use a low-code solution that can be easily built and managed. What should you do?

A.

Use Cloud Data Fusion to ingest data and load the data into BigQuery. Use Looker Studio to perform data cleaning and transformation.

B.

Use Cloud Data Fusion to ingest the data, perform data cleaning and transformation, and load the data into BigQuery.

C.

Use Cloud Data Fusion to ingest the data, perform data cleaning and transformation, and load the data into Cloud SQL for PostgreSQL.

D.

Use Cloud Storage to store the data. Use Cloud Run functions to perform data cleaning and transformation, and load the data into BigQuery.

Your organization plans to move its on-premises environment to Google Cloud. Your organization's network bandwidth is less than 1 Gbps. You need to move over 500 TB of data to Cloud Storage securely, and only have a few days to move the data. What should you do?

A.

Request multiple Transfer Appliances, copy the data to the appliances, and ship the appliances back to Google Cloud to upload the data to Cloud Storage.

B.

Connect to Google Cloud using VPN. Use Storage Transfer Service to move the data to Cloud Storage.

C.

Connect to Google Cloud using VPN. Use the gcloud storage command to move the data to Cloud Storage.

D.

Connect to Google Cloud using Dedicated Interconnect. Use the gcloud storage command to move the data to Cloud Storage.

Your team needs to analyze large datasets stored in BigQuery to identify trends in user behavior. The analysis will involve complex statistical calculations, Python packages, and visualizations. You need to recommend a managed collaborative environment to develop and share the analysis. What should you recommend?

A.

Create a Colab Enterprise notebook and connect the notebook to BigQuery. Share the notebook with your team. Analyze the data and generate visualizations in Colab Enterprise.

B.

Create a statistical model by using BigQuery ML. Share the query with your team. Analyze the data and generate visualizations in Looker Studio.

C.

Create a Looker Studio dashboard and connect the dashboard to BigQuery. Share the dashboard with your team. Analyze the data and generate visualizations in Looker Studio.

D.

Connect Google Sheets to BigQuery by using Connected Sheets. Share the Google Sheet with your team. Analyze the data and generate visualizations in Google Sheets.
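
As an example of the notebook-based workflow, a few cells in a Colab Enterprise notebook can pull a BigQuery result into pandas, compute statistics, and plot it; the query and table name below are made up.

```python
from google.cloud import bigquery
import matplotlib.pyplot as plt

client = bigquery.Client(project="my-project")

# Pull an aggregated slice of the behavior data into a DataFrame.
df = client.query("""
    SELECT DATE(event_time) AS day, COUNT(*) AS events
    FROM `my_project.analytics.user_events`
    GROUP BY day
    ORDER BY day
""").to_dataframe()

print(df["events"].describe())           # summary statistics
df.plot(x="day", y="events", kind="line")
plt.title("Daily user events")
plt.show()
```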

You work for a home insurance company. You are frequently asked to create and save risk reports with charts for specific areas using a publicly available storm event dataset. You want to be able to quickly create and re-run risk reports when new data becomes available. What should you do?

A.

Export the storm event dataset as a CSV file. Import the file to Google Sheets, and use cell data in the worksheets to create charts.

B.

Copy the storm event dataset into your BigQuery project. Use BigQuery Studio to query and visualize the data in Looker Studio.

C.

Reference and query the storm event dataset using SQL in BigQuery Studio. Export the results to Google Sheets, and use cell data in the worksheets to create charts.

D.

Reference and query the storm event dataset using SQL in a Colab Enterprise notebook. Display the table results and document with Markdown, and use Matplotlib to create charts.

Your organization uses scheduled queries to perform transformations on data stored in BigQuery. You discover that one of your scheduled queries has failed. You need to troubleshoot the issue as quickly as possible. What should you do?

A.

Navigate to the Logs Explorer page in Cloud Logging. Use filters to find the failed job, and analyze the error details.

B.

Set up a log sink using the gcloud CLI to export BigQuery audit logs to BigQuery. Query those logs to identify the error associated with the failed job ID.

C.

Request access from your admin to the BigQuery information_schema. Query the jobs view with the failed job ID, and analyze error details.

D.

Navigate to the Scheduled queries page in the Google Cloud console. Select the failed job, and analyze the error details.
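
For context on the INFORMATION_SCHEMA option, the failed job's error can be read with a query like the following (the region qualifier and job ID are placeholders); the Scheduled queries page surfaces the same details directly in the console.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Look up the failed job's error details (job ID below is hypothetical).
rows = client.query("""
    SELECT job_id, state, error_result
    FROM `region-us`.INFORMATION_SCHEMA.JOBS
    WHERE job_id = 'scheduled_query_12345'
""").result()

for row in rows:
    print(row.job_id, row.state, row.error_result)
```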

You manage a Cloud Storage bucket that stores temporary files created during data processing. These temporary files are only needed for seven days, after which they are no longer needed. To reduce storage costs and keep your bucket organized, you want to automatically delete these files once they are older than seven days. What should you do?

A.

Set up a Cloud Scheduler job that invokes a weekly Cloud Run function to delete files older than seven days.

B.

Configure a Cloud Storage lifecycle rule that automatically deletes objects older than seven days.

C.

Develop a batch process using Dataflow that runs weekly and deletes files based on their age.

D.

Create a Cloud Run function that runs daily and deletes files older than seven days.
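
As a reference for the lifecycle-rule option, a single age-based delete rule handles this without any scheduled jobs; the bucket name is a placeholder.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("temp-processing-files")  # hypothetical bucket

# Delete any object once it is older than seven days.
bucket.add_lifecycle_delete_rule(age=7)
bucket.patch()
```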

Your organization's website uses an on-premises MySQL as a backend database. You need to migrate the on-premises MySQL database to Google Cloud while maintaining MySQL features. You want to minimize administrative overhead and downtime. What should you do?

A.

Install MySQL on a Compute Engine virtual machine. Export the database files using the mysqldump command. Upload the files to Cloud Storage, and import them into the MySQL instance on Compute Engine.

B.

Use Database Migration Service to transfer the data to Cloud SQL for MySQL, and configure the on-premises MySQL database as the source.

C.

Use a Google-provided Dataflow template to replicate the MySQL database in BigQuery.

D.

Export the database tables to CSV files, and upload the files to Cloud Storage. Convert the MySQL schema to a Spanner schema, create a JSON manifest file, and run a Google-provided Dataflow template to load the data into Spanner.
