Question.81 You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?
(A) Ask your ML team to add the "accelerator: gpu" annotation to their pod specification.
(B) Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
(C) Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
(D) Add a new, GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
Answer : D
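Option (D) matches the documented GKE workflow: add a GPU node pool, then have pods target it with the cloud.google.com/gke-accelerator node selector and a GPU resource limit. Below is a minimal sketch of such a pod manifest built as a plain Python dict; the pod name, container image, and project are placeholder assumptions.

```python
# A minimal sketch of the pod manifest option (D) describes, built as a Python
# dict; the pod name and container image are hypothetical placeholders.
import json

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "ml-training"},
    "spec": {
        # Schedule only onto nodes from the GPU-enabled node pool.
        "nodeSelector": {"cloud.google.com/gke-accelerator": "nvidia-tesla-p100"},
        "containers": [
            {
                "name": "trainer",
                "image": "gcr.io/my-project/trainer:latest",  # hypothetical image
                # Request one GPU so GKE exposes the device to the container.
                "resources": {"limits": {"nvidia.com/gpu": 1}},
            }
        ],
    },
}

print(json.dumps(pod_manifest, indent=2))  # kubectl accepts JSON manifests too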
Question.82 Your VMs are running in a subnet that has a subnet mask of 255.255.255.240. The current subnet has no more free IP addresses and you require an additional 10 IP addresses for new VMs. The existing and new VMs should all be able to reach each other without additional routes. What should you do?
(A) Use gcloud to expand the IP range of the current subnet.
(B) Delete the subnet, and recreate it using a wider range of IP addresses.
(C) Create a new project. Use Shared VPC to share the current network with the new project.
(D) Create a new subnet with the same starting IP but a wider range to overwrite the current subnet.
Answer : A
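Option (A) works because a subnet's primary range can be widened in place with gcloud compute networks subnets expand-ip-range, without recreating the subnet or the VMs. A quick sanity check of the address math, using Python's standard ipaddress module (the 10.0.0.0 base address is a placeholder):

```python
# Check the capacity before and after expanding the prefix by one bit;
# the 10.0.0.0 base address is a hypothetical example.
import ipaddress

current = ipaddress.ip_network("10.0.0.0/28")   # mask 255.255.255.240
expanded = ipaddress.ip_network("10.0.0.0/27")  # one bit wider

# Google Cloud reserves 4 addresses in every subnet (network, gateway,
# second-to-last, broadcast), so usable capacity is num_addresses - 4.
print(current.num_addresses - 4)    # 12 usable addresses today
print(expanded.num_addresses - 4)   # 28 usable after expansion -> room for 10 more VMs
```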
Question.83 Your organization uses G Suite for communication and collaboration. All users in your organization have a G Suite account. You want to grant some G Suite users access to your Cloud Platform project. What should you do?
(A) Enable Cloud Identity in the GCP Console for your domain.
(B) Grant them the required IAM roles using their G Suite email address.
(C) Create a CSV sheet with all users' email addresses. Use the gcloud command line tool to convert them into Google Cloud Platform accounts.
(D) In the G Suite console, add the users to a special group called cloud-console-users@yourdomain.com. Rely on the default behavior of the Cloud Platform to grant users access if they are members of this group.
Answer : B
Reference:
https://cloud.google.com/resource-manager/docs/creating-managing-organization
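Because managed G Suite accounts are already valid Google identities, granting access is just an IAM policy change on the project. A minimal sketch using the Resource Manager API through google-api-python-client, assuming Application Default Credentials; the project ID, role, and user email are placeholders:

```python
# A minimal sketch, assuming google-api-python-client is installed and
# Application Default Credentials are available; all names are placeholders.
from googleapiclient import discovery

project_id = "my-project"  # hypothetical project ID
crm = discovery.build("cloudresourcemanager", "v1")

# Read the current IAM policy, add a binding for the G Suite user, write it back.
policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
policy["bindings"].append(
    {"role": "roles/viewer", "members": ["user:alice@yourdomain.com"]}
)
crm.projects().setIamPolicy(
    resource=project_id, body={"policy": policy}
).execute()
```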
Question.84 You have a Google Cloud Platform account with access to both production and development projects. You need to create an automated process to list all compute instances in development and production projects on a daily basis. What should you do?
(A) Create two configurations using gcloud config. Write a script that sets configurations as active, individually. For each configuration, use gcloud compute instances list to get a list of compute resources.
(B) Create two configurations using gsutil config. Write a script that sets configurations as active, individually. For each configuration, use gsutil compute instances list to get a list of compute resources.
(C) Go to Cloud Shell and export this information to Cloud Storage on a daily basis.
(D) Go to GCP Console and export this information to Cloud SQL on a daily basis.
Answer : A
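A minimal sketch of the scripted approach in option (A), driving gcloud from Python so the job can be scheduled daily (for example with cron); the configuration names dev and prod are assumptions and must already exist and point at the two projects:

```python
# A minimal sketch of option (A): activate each gcloud configuration in turn
# and list its project's instances. Assumes gcloud is on PATH and that
# configurations named "dev" and "prod" (hypothetical names) already exist.
import subprocess

for config in ("dev", "prod"):
    # Switch the active configuration (and therefore the active project).
    subprocess.run(
        ["gcloud", "config", "configurations", "activate", config],
        check=True,
    )
    # List every compute instance visible to the active configuration.
    subprocess.run(
        ["gcloud", "compute", "instances", "list"],
        check=True,
    )
```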
Question.85 You have a large 5-TB AVRO file stored in a Cloud Storage bucket. Your analysts are proficient only in SQL and need access to the data stored in this file. You want to find a cost-effective way to complete their request as soon as possible. What should you do?
(A) Load data in Cloud Datastore and run a SQL query against it.
(B) Create a BigQuery table and load data in BigQuery. Run a SQL query on this table and drop this table after you complete your request.
(C) Create external tables in BigQuery that point to Cloud Storage buckets and run a SQL query on these external tables to complete your request.
(D) Create a Hadoop cluster and copy the AVRO file to HDFS by compressing it. Load the file in a Hive table and provide access to your analysts so that they can run SQL queries.
Answer : C
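Option (C) avoids both a load job and a Hadoop cluster: BigQuery queries the Avro file in place through an external table definition. A minimal sketch with the google-cloud-bigquery client library; the project, dataset, table, and bucket path are placeholders, and the dataset is assumed to already exist:

```python
# A minimal sketch of an external (federated) table over an Avro file in
# Cloud Storage; project, dataset, table, and URI are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

external_config = bigquery.ExternalConfig("AVRO")
external_config.source_uris = ["gs://my-bucket/exports/data.avro"]

table = bigquery.Table("my-project.analytics.avro_external")
table.external_data_configuration = external_config
client.create_table(table)

# Analysts can now query the file in place with standard SQL.
rows = client.query(
    "SELECT COUNT(*) AS n FROM `my-project.analytics.avro_external`"
).result()
print(list(rows))
```

Dropping the external table afterwards removes only the table definition; the Avro file itself stays in Cloud Storage.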