GCP GCP-PCDE Free Practice Questions — Page 1

Professional Cloud Database Engineer • 5 questions • Answers & explanations included

Question 1

You are developing a new application on a VM that is on your corporate network. The application will use Java Database Connectivity (JDBC) to connect to Cloud SQL for PostgreSQL. Your Cloud SQL instance is configured with IP address 192.168.3.48, and SSL is disabled. You want to ensure that your application can access your database instance without requiring configuration changes to your database. What should you do?

A. Define a connection string using your Google username and password to point to the external (public) IP address of your Cloud SQL instance.
B. Define a connection string using a database username and password to point to the internal (private) IP address of your Cloud SQL instance.
C. Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the internal (private) IP address of your Cloud SQL instance.
D. Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the external (public) IP address of your Cloud SQL instance.

Correct Answer: B. Define a connection string using a database username and password to point to the internal (private) IP address of your Cloud SQL instance.

The Cloud SQL instance's IP, 192.168.3.48, is a private RFC 1918 address, which means the instance is reachable directly from the corporate network via VPC peering or private services access. Since SSL is disabled and the application runs on that same network, a direct JDBC connection to the private IP using a database username and password is the simplest appropriate solution and requires no extra tooling. Option B is correct because the private IP is already reachable from the corporate VM, and a standard database username/password satisfies JDBC requirements with zero configuration changes to the database. Option A is wrong because Google account (IAM) credentials are not standard JDBC database credentials, and using a public IP introduces unnecessary exposure and may require authorized-network configuration changes. Option C is wrong because the Cloud SQL Auth Proxy listens on localhost (127.0.0.1), not on the instance's private IP, so you would not point JDBC at the private IP when using the proxy; the proxy also adds complexity that is not needed here. Option D is wrong for the same proxy reason as C, and it uses the public IP unnecessarily when a private IP is already available.
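A direct connection as described in option B can be sketched as follows. The database name `appdb`, user `appuser`, and the `DB_PASSWORD` environment variable are hypothetical, and the actual `DriverManager.getConnection` call (commented out) would require the PostgreSQL JDBC driver on the classpath:

```java
import java.util.Properties;

public class CloudSqlJdbc {
    // Build a standard PostgreSQL JDBC URL pointing at the private IP.
    static String buildUrl(String host, int port, String db) {
        return "jdbc:postgresql://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) {
        // 192.168.3.48 is the instance's private IP from the question;
        // "appdb" is a hypothetical database name.
        String url = buildUrl("192.168.3.48", 5432, "appdb");

        Properties props = new Properties();
        props.setProperty("user", "appuser"); // hypothetical DB user
        props.setProperty("password",
                System.getenv().getOrDefault("DB_PASSWORD", "change-me"));

        // With org.postgresql.Driver on the classpath:
        // java.sql.Connection conn =
        //         java.sql.DriverManager.getConnection(url, props);

        System.out.println(url);
    }
}
```

Note that no Auth Proxy, SSL settings, or instance-side changes appear anywhere in this path, which is exactly why option B meets the "no configuration changes to your database" requirement.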

Question 2

Your digital-native business runs its database workloads on Cloud SQL. Your website must be globally accessible 24/7. You need to prepare your Cloud SQL instance for high availability (HA). You want to follow Google-recommended practices. What should you do? (Choose two.)

A. Set up manual backups.
B. Create a PostgreSQL database on-premises as the HA option.
C. Configure single zone availability for automated backups.
D. Enable point-in-time recovery.
E. Schedule automated backups.

Correct Answers: D. Enable point-in-time recovery.; E. Schedule automated backups.

Google recommends enabling automated backups and point-in-time recovery (PITR) together as the standard HA and data protection strategy for Cloud SQL instances. Option E is correct because scheduled automated backups ensure you have consistent, regular recovery points without manual intervention — critical for a 24/7 globally accessible site. Option D is correct because PITR allows recovery to any specific point in time using write-ahead logs (WAL), protecting against accidental data deletion or corruption — a key Google-recommended practice for Cloud SQL HA setups. Option A is wrong because manual backups are not reliable for HA — they require human action, are inconsistent, and don't align with Google's recommended automated approach. Option B is wrong because creating an on-premises PostgreSQL instance as HA defeats the purpose of using a fully managed Cloud SQL service — Google recommends using Cloud SQL regional instances with automatic failover instead. Option C is wrong because single zone availability is the opposite of HA. Google recommends regional (multi-zone) availability so that if one zone fails, Cloud SQL automatically fails over to the standby in another zone.
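The recommended settings from options D and E, plus the regional availability the explanation mentions, can be applied with gcloud roughly as follows. The instance name `my-instance` is a placeholder, and the flag names should be checked against the current gcloud release for your database engine (PostgreSQL shown here):

```shell
# Placeholder instance name; substitute your own Cloud SQL instance.
INSTANCE=my-instance

# Option E: schedule automated daily backups (start time is UTC, HH:MM).
gcloud sql instances patch "$INSTANCE" --backup-start-time=03:00

# Option D: enable point-in-time recovery (PostgreSQL).
gcloud sql instances patch "$INSTANCE" --enable-point-in-time-recovery

# Google-recommended HA: regional (multi-zone) availability with
# automatic failover to a standby in another zone.
gcloud sql instances patch "$INSTANCE" --availability-type=REGIONAL
```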

Question 3

Your company wants to move to Google Cloud. Your current data center is closing in six months. You are running a large, highly transactional Oracle application footprint on VMware. You need to design a solution with minimal disruption to the current architecture and provide ease of migration to Google Cloud. What should you do?

A. Migrate applications and Oracle databases to Google Cloud VMware Engine (VMware Engine).
B. Migrate applications and Oracle databases to Compute Engine.
C. Migrate applications to Cloud SQL.
D. Migrate applications and Oracle databases to Google Kubernetes Engine (GKE).

Correct Answer: A. Migrate applications and Oracle databases to Google Cloud VMware Engine (VMware Engine).

Google Cloud VMware Engine (GCVE) is purpose-built to lift-and-shift existing VMware workloads — including Oracle databases — directly to Google Cloud without re-architecting, re-platforming, or changing the existing stack. Option A is correct because the question explicitly states the workload runs on VMware and requires minimal disruption. GCVE lets you migrate VMware VMs as-is, preserving Oracle licensing, configurations, and architecture — meeting the 6-month deadline with the least risk. Option B is wrong because migrating Oracle to bare Compute Engine requires manual re-configuration of the OS, storage, networking, and Oracle setup — significantly more effort and disruption than a VMware lift-and-shift. Option C is wrong because Cloud SQL does not support Oracle — it only supports MySQL, PostgreSQL, and SQL Server. Migrating a large Oracle footprint to Cloud SQL would require a full database re-platforming effort, which contradicts "minimal disruption." Option D is wrong because containerizing a large, highly transactional Oracle workload on GKE is complex, risky, and time-consuming — Oracle on Kubernetes is not a standard or recommended pattern, especially under a tight 6-month timeline.

Question 4

Your customer has a global chat application that uses a multi-regional Cloud Spanner instance. The application has recently experienced degraded performance after a new version of the application was launched. Your customer asked you for assistance. During initial troubleshooting, you observed high read latency. What should you do?

A. Use query parameters to speed up frequently executed queries.
B. Change the Cloud Spanner configuration from multi-region to single region.
C. Use SQL statements to analyze SPANNER_SYS.READ_STATS* tables.
D. Use SQL statements to analyze SPANNER_SYS.QUERY_STATS* tables.

Correct Answer: C. Use SQL statements to analyze SPANNER_SYS.READ_STATS* tables.

When troubleshooting high read latency in Cloud Spanner, the correct approach is to analyze the SPANNER_SYS.READ_STATS* tables, which capture read operation statistics including latency, bytes read, and CPU usage per read shape. Option C is correct because READ_STATS* tables are specifically designed to diagnose read performance issues, which is exactly what the symptom describes. Option D is wrong because QUERY_STATS* tables track SQL query execution stats, not raw read operations; the issue here is read latency, not query latency. Option A is wrong because query parameters help with query plan caching but don't directly address high read latency. Option B is wrong because changing from multi-region to single region would reduce availability and defeat the purpose of a global chat app, and it doesn't fix the root cause.
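As an illustration of option C, the read statistics can be queried with `gcloud spanner databases execute-sql`. The instance and database names below are placeholders, and the exact READ_STATS column names should be verified against the Spanner introspection documentation:

```shell
# Placeholder database (chat-db) and instance (chat-instance) names.
# Surfaces the most expensive read shapes over the last 10-minute window.
gcloud spanner databases execute-sql chat-db \
  --instance=chat-instance \
  --sql='SELECT read_columns,
                execution_count,
                avg_cpu_seconds,
                avg_locking_delay_seconds
         FROM SPANNER_SYS.READ_STATS_TOP_10MINUTE
         ORDER BY avg_cpu_seconds DESC
         LIMIT 5'
```

Comparing these results before and after the new application version was launched would show which read shapes the release introduced or degraded.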

Question 5

Your company has PostgreSQL databases on-premises and on Amazon Web Services (AWS). You are planning multiple database migrations to Cloud SQL in an effort to reduce costs and downtime. You want to follow Google-recommended practices and use Google native data migration tools. You also want to closely monitor the migrations as part of the cutover strategy. What should you do?

A. Use Database Migration Service to migrate all databases to Cloud SQL.
B. Use Database Migration Service for one-time migrations, and use third-party or partner tools for change data capture (CDC) style migrations.
C. Use data replication tools and CDC tools to enable migration.
D. Use a combination of Database Migration Service and partner tools to support the data migration strategy.

Correct Answer: A. Use Database Migration Service to migrate all databases to Cloud SQL.

Google's native tool for database migrations to Cloud SQL is Database Migration Service (DMS), which supports both one-time and continuous (CDC) migrations natively for MySQL and PostgreSQL, including from AWS RDS and on-premises sources. Option A is correct because DMS natively supports both CDC-style continuous migrations and one-time migrations for PostgreSQL and MySQL, covering all stated requirements with Google-native tooling. Option B is wrong because DMS supports CDC natively; there is no need to split between DMS and third-party tools. Options C and D are wrong because they introduce third-party or partner tools unnecessarily when the question explicitly asks for Google native tools.
