GCP-PCDE Free Practice Questions — Page 3

Professional Cloud Database Engineer • 5 questions • Answers & explanations included

Question 11

Your organization operates in a highly regulated industry where separation of concerns (SoC) and the security principle of least privilege (PoLP) are critical. The operations team consists of Person A, a database administrator; Person B, an analyst who generates metric reports; and Application C, which is responsible for automatic backups. You need to assign Cloud Spanner roles to each of them. Which roles should you assign?

A. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseReader for Person B; roles/spanner.backupWriter for Application C
B. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseReader for Person B; roles/spanner.backupAdmin for Application C
C. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseUser for Person B; roles/spanner.databaseReader for Application C
D. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseUser for Person B; roles/spanner.backupWriter for Application C

Correct Answer: A. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseReader for Person B; roles/spanner.backupWriter for Application C

Applying least privilege (PoLP) and separation of concerns (SoC) to Cloud Spanner IAM: the DBA needs full database administration rights (roles/spanner.databaseAdmin), the analyst only reads data for reports (roles/spanner.databaseReader), and the backup application only creates backups (roles/spanner.backupWriter). A is correct because backupWriter allows creating backups without granting excessive admin rights, perfectly aligned with PoLP. B is wrong because backupAdmin grants both create and delete permissions on backups, which is too broad for an automated backup process. C is wrong because databaseReader for the backup application gives read-only data access but no backup creation rights at all. D is wrong because databaseUser for the analyst allows both reads and writes, which is too broad for a report-only role.
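
As a rough illustration of how these bindings might be applied programmatically, here is a minimal sketch using the google-cloud-spanner instance admin client. The project, instance, and member identifiers are placeholders, and granting at the instance level (so the bindings cover all databases and backups in the instance) is an assumption made for brevity, not part of the question.

```python
from google.cloud import spanner_admin_instance_v1
from google.iam.v1 import iam_policy_pb2

# Placeholder project and instance names; real values depend on your environment.
client = spanner_admin_instance_v1.InstanceAdminClient()
instance = client.instance_path("my-project", "inventory-instance")

# Read the current IAM policy for the instance.
policy = client.get_iam_policy(
    request=iam_policy_pb2.GetIamPolicyRequest(resource=instance)
)

# One binding per principal, following PoLP: admin, read-only, backup-create.
bindings = [
    ("roles/spanner.databaseAdmin", "user:person-a@example.com"),
    ("roles/spanner.databaseReader", "user:person-b@example.com"),
    ("roles/spanner.backupWriter",
     "serviceAccount:app-c@my-project.iam.gserviceaccount.com"),
]
for role, member in bindings:
    policy.bindings.add(role=role, members=[member])

# Write the updated policy back.
client.set_iam_policy(
    request=iam_policy_pb2.SetIamPolicyRequest(resource=instance, policy=policy)
)
```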

Question 12

You are designing an augmented reality game for iOS and Android devices. You plan to use Cloud Spanner as the primary backend database for game state storage and player authentication. You want to track in-game rewards that players unlock at every stage of the game. During the testing phase, you discovered that costs are much higher than anticipated, but the query response times are within the SLA. You want to follow Google-recommended practices. You need the database to be performant and highly available while you keep costs low. What should you do?

A. Manually scale down the number of nodes after the peak period has passed.
B. Use interleaving to co-locate parent and child rows.
C. Use the Cloud Spanner query optimizer to determine the most efficient way to execute the SQL query.
D. Use granular instance sizing in Cloud Spanner and Autoscaler.

Correct Answer: D. Use granular instance sizing in Cloud Spanner and Autoscaler.

The issue is that costs are higher than expected while performance is within the SLA. Cloud Spanner charges per node or per processing unit. The Google-recommended approach for cost optimization is granular instance sizing (processing units instead of full nodes) combined with the Autoscaler to adjust capacity dynamically. D is correct because processing units (a minimum of 100 PUs versus the 1,000 PUs in a full node) allow fine-grained cost control, and the Autoscaler adjusts capacity automatically based on load. A is wrong because manual scaling is error-prone, operationally heavy, and not a Google best practice. B is wrong because interleaving improves performance by co-locating data but does not directly reduce costs. C is wrong because the query optimizer improves execution efficiency but does not reduce instance costs when the instance is over-provisioned.
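
As a sketch of what granular sizing looks like in practice, the instance admin API lets you create an instance sized in processing units rather than whole nodes. The project, instance, and config names below are placeholders; the open source Cloud Spanner Autoscaler (deployed separately) would then adjust processing_units up and down based on observed load.

```python
from google.cloud import spanner_admin_instance_v1

client = spanner_admin_instance_v1.InstanceAdminClient()
parent = "projects/my-project"  # placeholder project

operation = client.create_instance(
    request=spanner_admin_instance_v1.CreateInstanceRequest(
        parent=parent,
        instance_id="game-backend",
        instance=spanner_admin_instance_v1.Instance(
            name=f"{parent}/instances/game-backend",
            config=f"{parent}/instanceConfigs/regional-us-central1",
            display_name="AR game backend",
            # Granular sizing: 500 PUs is half a node's capacity at half the cost.
            processing_units=500,
        ),
    )
)
operation.result()  # block until the long-running operation completes
```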

Question 13

You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices. What should you do?

A. Maintain a target of 23% CPU utilization by locating cluster-a in zone us-central1-a, cluster-b in zone europe-west1-d, and cluster-c in zone asia-east1-b.
B. Maintain a target of 23% CPU utilization by locating cluster-a in zone us-central1-a, cluster-b in zone us-central1-b, and cluster-c in zone us-east1-a.
C. Maintain a target of 35% CPU utilization by locating cluster-a in zone us-central1-a, cluster-b in zone australia-southeast1-a, cluster-c in zone europe-west1-d, and cluster-d in zone asia-east1-b.
D. Maintain a target of 35% CPU utilization by locating cluster-a in zone us-central1-a, cluster-b in zone us-central2-a, cluster-c in zone asia-northeast1-b, and cluster-d in zone asia-east1-b.

Correct Answer: A. Maintain a target of 23% CPU utilization by locating cluster-a in zone us-central1-a, cluster-b in zone europe-west1-d, and cluster-c in zone asia-east1-b.

For a multi-region Bigtable HA expansion into APAC, Google's guidance is to keep average CPU utilization below 70% for a single cluster and below 35% per cluster when two clusters replicate with multi-cluster routing. For a three-cluster replication group, the per-cluster target drops to roughly 23%: if one cluster fails, the remaining two each absorb 50% more traffic and land near the 35% replicated target (23% × 1.5 ≈ 35%). A is correct because it places clusters in the US, Europe, and Asia, covering global distribution including APAC, with the correct 23% CPU target for a three-cluster replication group. B is wrong because all three clusters are in the US, so there is no APAC coverage despite the expansion requirement. C is wrong because a 35% per-cluster target is the two-cluster figure and is too high for a four-cluster replication group. D is wrong because two clusters in Asia and none in Europe does not provide balanced global coverage, and 35% is again the wrong target for this replication topology.
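
A minimal sketch of option A's three-cluster layout, using the google-cloud-bigtable admin client; the project ID, instance ID, and node counts are placeholders, and an app profile with multi-cluster routing (not shown) is what actually enables automatic failover between the clusters.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)  # placeholder project
instance = client.instance("prod-traffic", instance_type=enums.Instance.Type.PRODUCTION)

# One cluster per continent, matching option A's zones.
clusters = [
    instance.cluster("cluster-a", location_id="us-central1-a", serve_nodes=3),
    instance.cluster("cluster-b", location_id="europe-west1-d", serve_nodes=3),
    instance.cluster("cluster-c", location_id="asia-east1-b", serve_nodes=3),
]
operation = instance.create(clusters=clusters)
operation.result(timeout=600)  # instance creation is a long-running operation
```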

Question 14

Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second. What should you do?

A. Write your data into Bigtable and use Dataproc and the Apache HBase libraries for analysis.
B. Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.
C. Use Memorystore to handle your low-latency requirements and for real-time analytics.
D. Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.

Correct Answer: A. Write your data into Bigtable and use Dataproc and the Apache HBase libraries for analysis.

Requirements: 8 TB of storage, millions of reads and writes per second, low latency, and big data analysis tools. A is correct because Bigtable is purpose-built for high-throughput, low-latency workloads at petabyte scale and integrates natively with Dataproc and the Apache HBase libraries for big data analysis, a perfect fit. B is wrong because Cloud SQL does not scale to millions of requests per second and is not designed for big data analytics workloads. C is wrong because Memorystore (Redis) is an in-memory cache; it cannot durably store 8 TB and is not suitable for big data analysis. D is wrong because BigQuery is optimized for batch analytics, not low-latency real-time reads and writes at millions of requests per second.
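
To make the write path concrete, here is a small sketch of streaming a clickstream event into Bigtable with the Python client (the Dataproc/HBase analysis side is Java and not shown). The project, instance, table, and column-family names are placeholders.

```python
import time
from google.cloud import bigtable

client = bigtable.Client(project="my-project")  # placeholder project
table = client.instance("clickstream").table("events")

# Key design: user ID plus a reversed timestamp spreads sequential writes
# across tablets instead of hammering one end of the key range.
row_key = f"user-123#{2**63 - time.time_ns()}".encode()

row = table.direct_row(row_key)
row.set_cell("clicks", b"page", b"/checkout")
row.set_cell("clicks", b"referrer", b"/cart")
row.commit()
```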

Question 15

Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hotspots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation. What should you do? (Choose two.)

A. Use an auto-incrementing value as the primary key.
B. Normalize the data model.
C. Promote low-cardinality attributes in multi-attribute primary keys.
D. Promote high-cardinality attributes in multi-attribute primary keys.
E. Use a bit-reversed sequential value as the primary key.

Correct Answers: D. Promote high-cardinality attributes in multi-attribute primary keys; E. Use a bit-reversed sequential value as the primary key.

Cloud Spanner hotspots occur when rows are written sequentially to the same split boundary, so Google recommends avoiding monotonically increasing keys. D is correct because high-cardinality attributes (many unique values) distribute writes evenly across splits, preventing hotspots. E is correct because bit-reversed sequential values take auto-increment IDs and reverse their bits, distributing them across the key space while preserving uniqueness, a Google-recommended pattern specifically for avoiding hotspots. A is wrong because auto-incrementing keys are the primary cause of hotspots in Cloud Spanner: all new writes go to the same split boundary. B is wrong because normalization improves data integrity but does not address key-distribution hotspots. C is wrong because low-cardinality attributes (few unique values, like status flags) concentrate writes, making hotspots worse.
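
To illustrate the bit-reversal pattern, here is a small self-contained sketch. Cloud Spanner also offers built-in bit-reversed sequences at the schema level, so hand-rolling this in application code is just one way to get the same effect.

```python
def bit_reverse_64(value: int) -> int:
    """Reverse the bit order of a 64-bit unsigned integer."""
    result = 0
    for _ in range(64):
        result = (result << 1) | (value & 1)
        value >>= 1
    return result

# Consecutive IDs land far apart in the key space, so writes spread
# across splits instead of piling up at one boundary:
for seq in (1, 2, 3):
    print(seq, "->", bit_reverse_64(seq))
# 1 -> 9223372036854775808
# 2 -> 4611686018427387904
# 3 -> 13835058055282163712
```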
