GCP-PDE Free Practice Questions — Page 1

Professional Data Engineer • 5 questions • Answers & explanations included

Question 1

Your company built a TensorFlow neural-network model with a large number of neurons and layers. The model fits the training data well, but it performs poorly when tested against new data. What method can you employ to address this?

A. Threading
B. Serialization
C. Dropout Methods
D. Dimensionality Reduction

Correct Answer: C. Dropout Methods

The model fits training data well but fails on new data — this is overfitting. Dropout randomly disables neurons during training, forcing the network to learn more generalized patterns. Threading (A) is about concurrency, not model performance. Serialization (B) is about data formatting. Dimensionality Reduction (D) reduces features but doesn't directly fix overfitting in deep neural networks the way dropout does. Dropout is a standard regularization technique in TensorFlow via tf.keras.layers.Dropout.
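As a rough illustration, here is a minimal Keras sketch of dropout as a regularizer; the layer sizes, input shape, and 0.5 rate are illustrative assumptions, not values from the question's model:

```python
import tensorflow as tf

# Minimal sketch: Dropout layers between Dense layers act as regularization.
# All sizes and the 0.5 rate are illustrative, not taken from the question.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dropout(0.5),  # randomly zeroes 50% of activations each training step
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dropout is active only during training; model.predict() and
# model(x, training=False) use the full network at inference time.
```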

Question 2

You are building a model to make clothing recommendations. You know a user's fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available. How should you use this data to train the model?

A. Continuously retrain the model on just the new data.
B. Continuously retrain the model on a combination of existing data and the new data.
C. Train on the existing data while using the new data as your test set.
D. Train on the new data while using the existing data as your test set.

Correct Answer: B. Continuously retrain the model on a combination of existing data and the new data.

Fashion preferences change over time (concept drift), so the model needs new data. But training only on new data (A) causes catastrophic forgetting — the model loses patterns learned from historical data. Using new data as a test set (C, D) wastes valuable training signal. Combining both preserves old knowledge while adapting to new trends. This is the standard approach for online/incremental learning pipelines in production ML systems.
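A minimal sketch of option B, assuming the pipeline yields NumPy feature arrays; the synthetic data, shapes, and tiny model below are placeholders for illustration only:

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the streaming pipeline's output:
# 1,000 historical examples plus a fresh batch of 100.
rng = np.random.default_rng(0)
X_hist, y_hist = rng.normal(size=(1000, 20)), rng.integers(0, 2, size=1000)
X_new, y_new = rng.normal(size=(100, 20)), rng.integers(0, 2, size=100)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Option B: retrain on the union of historical and new data, so the model
# adapts to drift without forgetting what it learned from older examples.
X_all = np.concatenate([X_hist, X_new])
y_all = np.concatenate([y_hist, y_new])
model.fit(X_all, y_all, epochs=5, batch_size=32, verbose=0)
```

In production this retraining step would typically run on a schedule, with a held-out evaluation set to validate each retrained model before it replaces the serving one.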

Question 3

You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?

A. Add capacity (memory and disk space) to the database server by the order of 200.
B. Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.
C. Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.
D. Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.

Correct Answer: C. Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.

Self-joins on a single large table scale very poorly — they create Cartesian-like operations that explode with 100x data. Normalizing into separate tables (patient, visits, etc.) eliminates self-joins and allows efficient indexed lookups. Option A (200x hardware) is costly and doesn't fix the design flaw. Option B (date sharding) helps partially but doesn't fix the structural self-join problem. Option D (per-clinic partitioning) requires UNIONs and doesn't address the root schema issue. Proper normalization is the correct database design fix here.
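To make the schema change concrete, here is a minimal sketch using SQLite for illustration (any relational database works the same way); all table and column names are hypothetical, not from the original pilot design:

```python
import sqlite3

# Normalized design: one row per patient, one row per visit, joined by key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (
        patient_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        clinic     TEXT NOT NULL
    );
    CREATE TABLE visits (
        visit_id   INTEGER PRIMARY KEY,
        patient_id INTEGER NOT NULL REFERENCES patients(patient_id),
        visit_date TEXT NOT NULL,
        notes      TEXT
    );
    CREATE INDEX idx_visits_patient ON visits(patient_id);
""")

# The report is now an indexed one-to-many join instead of a self-join
# over one wide table, so its cost grows roughly linearly with row count.
report = conn.execute("""
    SELECT p.name, COUNT(v.visit_id) AS visit_count
    FROM patients p
    JOIN visits v ON v.patient_id = p.patient_id
    GROUP BY p.patient_id, p.name
""").fetchall()
```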

Question 4

You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that visualizations are not showing data that is less than 1 hour old. What should you do?

A. Disable caching by editing the report settings.
B. Disable caching in BigQuery by editing table details.
C. Refresh your browser tab showing the visualizations.
D. Clear your browser history for the past hour then reload the tab showing the visualizations.

Correct Answer: A. Disable caching by editing the report settings.

Google Data Studio 360 caches query results to reduce BigQuery costs and improve load speed. By default, cached data can be up to 15 minutes old, but with report-level caching it can be longer. Disabling caching in the report settings forces Data Studio to re-query BigQuery on each load. Option B is incorrect — BigQuery itself doesn't have a user-facing caching toggle on tables. Options C and D (browser refresh/history) do not affect Data Studio's server-side query cache.

Question 5

An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?

A. Use federated data sources, and check data in the SQL query.
B. Enable BigQuery monitoring in Google Stackdriver and create an alert.
C. Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.
D. Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.

Correct Answer: D. Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.

Dataflow allows you to validate, transform, and route data, sending bad rows to a dead-letter table instead of failing the entire pipeline. This is the GCP best practice for handling corrupt or malformed data in ETL pipelines. Option A (federated sources + SQL checks) is fragile and doesn't isolate errors cleanly. Option B (Stackdriver alerting) monitors but doesn't handle errors. Option C (max_bad_records=0) makes the load job fail on the first malformed row, leaving no way to analyze the bad records.
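A minimal Apache Beam (Python SDK) sketch of the dead-letter pattern in option D; the bucket path, table names, and three-column schema are assumptions for illustration:

```python
import csv
import apache_beam as beam

class ParseCsvLine(beam.DoFn):
    """Parse one CSV line; route malformed rows to a dead-letter output."""

    def process(self, line):
        try:
            fields = next(csv.reader([line]))
            if len(fields) != 3:  # assumed column count for this sketch
                raise ValueError(f"expected 3 fields, got {len(fields)}")
            yield {"id": int(fields[0]), "name": fields[1], "amount": float(fields[2])}
        except Exception as exc:
            # Bad rows go to a tagged side output instead of failing the job.
            yield beam.pvalue.TaggedOutput("dead_letter", {"raw_line": line, "error": str(exc)})

with beam.Pipeline() as pipeline:
    results = (
        pipeline
        | "ReadCSV" >> beam.io.ReadFromText("gs://my-bucket/daily-dump/*.csv")
        | "Parse" >> beam.ParDo(ParseCsvLine()).with_outputs("dead_letter", main="parsed")
    )
    results.parsed | "LoadRecords" >> beam.io.WriteToBigQuery(
        "my-project:analytics.records",
        schema="id:INTEGER,name:STRING,amount:FLOAT")
    results.dead_letter | "LoadDeadLetter" >> beam.io.WriteToBigQuery(
        "my-project:analytics.dead_letter",
        schema="raw_line:STRING,error:STRING")
```

Good rows load normally while every rejected row lands in the dead-letter table alongside its error message, so analysts can inspect failures without blocking the daily load.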
