GCP-PCA Free Practice Questions — Page 3

Professional Cloud Architect • 5 questions • Answers & explanations included

Question 11

Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords. What authentication strategy should they use?

A. Use G Suite Password Sync to replicate passwords into Google
B. Federate authentication via SAML 2.0 to the existing Identity Provider
C. Provision users in Google using the Google Cloud Directory Sync tool
D. Ask users to set their Google password to match their corporate password

Correct Answer: B. Federate authentication via SAML 2.0 to the existing Identity Provider

Federating authentication via SAML 2.0 to the existing Identity Provider (IdP) is the best strategy. It allows users to authenticate with their existing corporate credentials without any password migration or duplication. This satisfies both minimal user disruption and strict password security requirements, since passwords never leave the corporate IdP. Option A (G Suite Password Sync) replicates passwords to Google, which is a security risk. Option C provisions users but doesn't federate authentication. Option D, asking users to manually match passwords, is insecure and operationally impractical.

Question 12

Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they run some hourly jobs and live-process some data as it comes in. Which technology should they use for this?

A. Google Cloud Dataproc
B. Google Cloud Dataflow
C. Google Container Engine with Bigtable
D. Google Compute Engine with Google BigQuery

Correct Answer: B. Google Cloud Dataflow

Google Cloud Dataflow is a fully managed, serverless service based on Apache Beam that natively supports both batch and streaming pipelines in a unified model. This directly matches the requirement of mixed batch and stream processing without writing new infrastructure code. Option A (Dataproc) runs Hadoop/Spark but is better suited for existing Spark/Hadoop workloads, not new unified pipelines. Options C and D require significant infrastructure management and don't provide a unified batch+stream model out of the box.

Question 13

Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update. What strategy should you take?

A. Work with your ISP to diagnose the problem
B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application
C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem

Correct Answer: C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment

The correct first action is to roll back to the last known good release to restore user experience immediately. Then use Stackdriver Trace to identify slow operations and Stackdriver Logging to find errors in a safe non-production environment. Option D also rolls back but then re-pushes the broken release to production, which unnecessarily risks user impact. Option A (ISP) is wrong — the problem started after an update, pointing to application code. Option B (support ticket) is too slow and reactive for a production performance issue.
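
Because App Engine keeps previously deployed versions, the rollback itself is a one-line traffic shift. A sketch, assuming a service named `default` and a known-good version ID `v1` (both illustrative names):

```shell
# Shift 100% of traffic back to the known-good version.
gcloud app services set-traffic default --splits v1=1

# Confirm which version is now serving.
gcloud app versions list --service=default
```

With users back on the good release, the slow version can be investigated with Stackdriver Trace and Logging in a non-production environment, as the explanation describes.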

Question 14

A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space. How can you remediate the problem with the least amount of downtime?

A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine
C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux
D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk
E. In the Cloud Platform Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service

Correct Answer: A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.

Google Cloud persistent disks can be resized live without detaching them or stopping the VM. After resizing in the Console, the resize2fs command extends the ext4 filesystem to use the new space — all without downtime. Option B requires a full VM shutdown, causing downtime. Option C uses fdisk, which is for partition management, not filesystem resizing on an already-partitioned disk. Option D requires moving all database files to a new disk, which is complex and risky. Option E (snapshot + new disk) also causes downtime and adds unnecessary steps.
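
The online-resize flow comes down to two commands. A sketch, assuming a disk named `data-disk` in zone `us-central1-a` attached to the VM as `/dev/sdb` (all illustrative names):

```shell
# Grow the persistent disk in place; the VM keeps running.
gcloud compute disks resize data-disk --size=500GB --zone=us-central1-a

# Then, inside the VM, extend the ext4 filesystem to fill the new space.
sudo resize2fs /dev/sdb
```

If the data disk uses a partition rather than the whole device, the partition must be grown first (for example with growpart) before running resize2fs.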

Question 15

Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating to which payment methods are used. How should you design your architecture?

A. Create a tokenizer service and store only tokenized data
B. Create separate projects that only process credit card data
C. Create separate subnetworks and isolate the components that process credit card data
D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data
E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor

Correct Answer: A. Create a tokenizer service and store only tokenized data

Tokenization replaces sensitive credit card data with a non-sensitive token. Only the tokenizer service handles raw PCI data, minimizing the PCI compliance scope to just that service. The rest of the system stores only tokens, which are safe for analytics on payment trends. This satisfies both requirements: minimal PCI scope and ability to analyze transactional data. Options B and C reduce scope but don't eliminate the need to handle raw card data broadly. Option D (labels) aids auditing but doesn't reduce scope. Option E (BigQuery export) would spread PCI data further, increasing scope.
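
As a toy sketch of the tokenization idea (not a PCI-certified implementation — the class name and in-memory store are illustrative, and a real service would use an HSM-backed vault):

```python
# Toy tokenizer sketch: raw card numbers (PANs) stay inside this
# service; the rest of the system only ever sees opaque tokens.
# A production service would use a hardened vault/HSM, not a dict.
import secrets


class Tokenizer:
    def __init__(self):
        self._vault = {}  # token -> PAN; illustrative in-memory store

    def tokenize(self, pan: str) -> str:
        # Token is random, so it reveals nothing about the PAN.
        token = "tok_" + secrets.token_hex(16)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the tokenizer service (in PCI scope) can reverse a token.
        return self._vault[token]
```

Downstream analytics systems can then join transactions on tokens and non-sensitive fields (such as payment method) without ever touching a PAN, which is what keeps them out of PCI scope.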

Ready for the Full GCP-PCA Experience?

Access all 55 pages of practice questions and simulate the real exam with timed mode.
