Question.66 Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to consolidate all costs as of tomorrow. What should you do? (A) Link the acquired company’s projects to your company’s billing account. (B) Configure the acquired company’s billing account and your company’s billing account to export the billing data into the same BigQuery dataset. (C) Migrate the acquired company’s projects into your company’s GCP organization. Link the migrated projects to your company’s billing account. (D) Create a new GCP organization and a new billing account. Migrate the acquired company’s projects and your company’s projects into the new GCP organization and link the projects to the new billing account.
Answer : A
Reference:
https://cloud.google.com/resource-manager/docs/migrating-projects-billing
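Answer (A) is the only option that can take effect by tomorrow: a Cloud Billing account can pay for projects that live in another organization, so the acquired company’s projects can be linked to your existing billing account without migrating them. A minimal sketch of that link with the gcloud CLI, using a placeholder project ID and billing account ID (on older gcloud releases the command still lives under the beta group):

    # Link one of the acquired company's projects to your existing billing account.
    # Requires Billing Account Administrator on the billing account and
    # Project Billing Manager (or Owner) on the project being linked.
    gcloud billing projects link acquired-project-id \
        --billing-account=0X0X0X-0X0X0X-0X0X0X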
Question.67 You built an application on Google Cloud that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to table data. You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices. What should you do? (A) Add the support team group to the roles/monitoring.viewer role. (B) Add the support team group to the roles/spanner.databaseUser role. (C) Add the support team group to the roles/spanner.databaseReader role. (D) Add the support team group to the roles/stackdriver.accounts.viewer role.
Answer : A
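Answer (A) follows least privilege: roles/monitoring.viewer lets the support team view metrics and dashboards, while the Spanner database roles in (B) and (C) would also expose table data. A hedged example of the binding with gcloud, using placeholder project and group names:

    # Grant the support team read-only access to Cloud Monitoring,
    # with no Spanner data access.
    gcloud projects add-iam-policy-binding my-project-id \
        --member="group:support-team@example.com" \
        --role="roles/monitoring.viewer"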
Question.68 For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Cloud Logging agent on all the instances. You want to minimize cost. What should you do? (A) 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances. 2. Update your instances’ metadata to add the following value: logs-destination: bq://platform-logs. (B) 1. In Cloud Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink. 2. Create a Cloud Function that is triggered by messages in the logs topic. 3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset. (C) 1. In Cloud Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination. (D) 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset. 2. Configure this Cloud Function to create a BigQuery Job that executes this query: INSERT INTO dataset.platform-logs (timestamp, log) SELECT timestamp, log FROM compute.logs WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) 3. Use Cloud Scheduler to trigger this Cloud Function once a day.
Answer : C
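The export in (C) can be created in the console as described, or from the command line; a minimal sketch assuming a placeholder project ID and an existing dataset (BigQuery dataset IDs use underscores, so the dataset is written as platform_logs here):

    # Route only Compute Engine instance logs to the BigQuery dataset.
    # After creating the sink, grant its writer identity the BigQuery Data Editor
    # role on the dataset so the export can write to it.
    gcloud logging sinks create compute-logs-to-bq \
        bigquery.googleapis.com/projects/my-project-id/datasets/platform_logs \
        --log-filter='resource.type="gce_instance"'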
Question.69 You are using Deployment Manager to create a Google Kubernetes Engine cluster. Using the same Deployment Manager deployment, you also want to create a DaemonSet in the kube-system namespace of the cluster. You want a solution that uses the fewest possible services. What should you do? (A) Add the cluster’s API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet. (B) Use the Deployment Manager Runtime Configurator to create a new Config resource that contains the DaemonSet definition. (C) With Deployment Manager, create a Compute Engine instance with a startup script that uses kubectl to create the DaemonSet. (D) In the cluster’s definition in Deployment Manager, add a metadata that has kube-system as key and the DaemonSet manifest as value.
Answer : A
Reference:
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl
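With answer (A), the cluster’s Kubernetes API is registered as a Deployment Manager type provider, so the DaemonSet can be declared in the same deployment without pulling in extra services such as a Compute Engine instance. A rough, hedged sketch of the registration step; the descriptor URL and options file are placeholders that would need the cluster’s actual endpoint and authentication settings (see Google’s Deployment Manager GKE examples):

    # Register the GKE cluster's Kubernetes API as a Deployment Manager type provider.
    # CLUSTER_ENDPOINT and api_options.yaml are placeholders for the cluster's
    # public endpoint and its authentication options.
    gcloud beta deployment-manager type-providers create my-cluster-type \
        --descriptor-url="https://CLUSTER_ENDPOINT/openapi/v2" \
        --api-options-file=api_options.yaml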
Question.70 You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment. What should you do? (A) Use service account credentials in your on-premises application. (B) Use gcloud to create a key file for the service account that has appropriate permissions. (C) Set up direct interconnect between your data center and Google Cloud Platform to enable authentication for your on-premises applications. (D) Go to the IAM & admin console, grant a user account permissions similar to the service account permissions, and use this user account for authentication from your data center.
Answer : B
Reference:
https://cloud.google.com/vision/automl/docs/before-you-begin
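Answer (B) maps to creating a key for the service account that already has AutoML access and pointing the on-premises application at it. A short sketch with placeholder names for the project and the service account:

    # Create a JSON key file for the existing service account.
    gcloud iam service-accounts keys create key.json \
        --iam-account=automl-sa@my-project-id.iam.gserviceaccount.com

    # On the on-premises host, point the Google client libraries at the key file.
    export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json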