Question.46 You want to use a BigQuery table as a data sink. In which writing mode(s) can you use BigQuery as a sink? (A) Both batch and streaming (B) BigQuery cannot be used as a sink (C) Only batch (D) Only streaming
Answer is (A) Both batch and streaming
When you apply a BigQueryIO.Write transform in batch mode to write to a single table, Dataflow invokes a BigQuery load job. When you apply a BigQueryIO.Write transform in streaming mode or in batch mode using a function to specify the destination table, Dataflow uses BigQuery’s streaming inserts.
Reference:
https://cloud.google.com/dataflow/model/bigquery-io
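The rule quoted above can be captured in a tiny conceptual function (this is an illustration of the decision logic only, not the Beam/Dataflow API):

```python
def bigquery_write_method(streaming: bool, dynamic_destination: bool) -> str:
    """Which write path Dataflow uses for a BigQuery sink (conceptual sketch).

    Batch mode writing to a single fixed table -> BigQuery load job.
    Streaming mode, or batch with a function choosing the destination
    table per element -> BigQuery streaming inserts.
    """
    if streaming or dynamic_destination:
        return "streaming inserts"
    return "load job"

print(bigquery_write_method(streaming=False, dynamic_destination=False))  # load job
print(bigquery_write_method(streaming=True, dynamic_destination=False))   # streaming inserts
```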
Question.47 You have a job that you want to cancel. It is a streaming pipeline, and you want to ensure that any data that is in-flight is processed and written to the output. Which of the following commands can you use on the Dataflow monitoring console to stop the pipeline job? (A) Cancel (B) Drain (C) Stop (D) Finish
Answer is (B) Drain
Using the Drain option to stop your job tells the Dataflow service to finish your job in its current state. Your job will immediately stop ingesting new data from input sources, but the Dataflow service will preserve any existing resources (such as worker instances) to finish processing and writing any buffered data in your pipeline.
Reference:
https://cloud.google.com/dataflow/pipelines/stopping-a-pipeline
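A toy model makes the Drain-versus-Cancel distinction concrete (this is a conceptual sketch of the semantics, not how the Dataflow service is implemented):

```python
class StreamingJob:
    """Toy model of stopping a streaming job: Drain flushes, Cancel discards."""

    def __init__(self):
        self.ingesting = True
        self.buffered = []   # in-flight elements not yet written
        self.written = []    # elements committed to the sink

    def receive(self, element):
        if self.ingesting:
            self.buffered.append(element)

    def drain(self):
        # Drain: stop ingesting new input, but finish processing and
        # writing any buffered (in-flight) data before shutting down.
        self.ingesting = False
        self.written.extend(self.buffered)
        self.buffered = []

    def cancel(self):
        # Cancel: stop immediately; in-flight data is not written out.
        self.ingesting = False
        self.buffered = []

job = StreamingJob()
job.receive("a"); job.receive("b")
job.drain()
print(job.written)   # both in-flight elements reach the output
```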
Question.48 Which of the following statements is NOT true regarding Bigtable access roles? (A) Using IAM roles, you cannot give a user access to only one table in a project, rather than all tables in a project. (B) To give a user access to only one table in a project, grant the user the Bigtable Editor role for that table. (C) You can configure access control only at the project level. (D) To give a user access to only one table in a project, you must configure access through your application.
Answer is (B) To give a user access to only one table in a project, grant the user the Bigtable Editor role for that table.
For Cloud Bigtable, you can configure access control at the project level. For example, you can grant the ability to:
Read from, but not write to, any table within the project.
Read from and write to any table within the project, but not manage instances.
Read from and write to any table within the project, and manage instances.
Reference:
https://cloud.google.com/bigtable/docs/access-control
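The project-level scope of these grants can be sketched as follows. This is a conceptual model, not the IAM API; the user name is a placeholder, though `roles/bigtable.reader` is a real Bigtable IAM role:

```python
# Project-level grants: a role applies to every table in the project.
PROJECT_GRANTS = {"alice@example.com": "roles/bigtable.reader"}  # placeholder user

READ_ROLES = ("roles/bigtable.reader", "roles/bigtable.user", "roles/bigtable.admin")

def can_read(user: str, table: str) -> bool:
    """Access is decided by the project-level grant alone; the `table`
    argument cannot narrow the grant to a single table."""
    return PROJECT_GRANTS.get(user) in READ_ROLES

print(can_read("alice@example.com", "orders"))    # True
print(can_read("alice@example.com", "payments"))  # True for every table
```

This is why statement (B) is the false one: there is no per-table role grant, so per-table restrictions must be enforced in your application.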
Question.49 What is the general recommendation when designing your row keys for a Cloud Bigtable schema? (A) Include multiple time series values within the row key (B) Keep the row key as an 8-bit integer (C) Keep your row key reasonably short (D) Keep your row key as long as the field permits
Answer is (C) Keep your row key reasonably short
A general guideline is to keep your row keys reasonably short. Long row keys take up additional memory and storage and increase the time it takes to get responses from the Cloud Bigtable server.
Reference:
https://cloud.google.com/bigtable/docs/schema-design#row-keys
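As an illustration, a short, sortable row key can be built from an identifier plus a fixed-width timestamp. The `vehicle_id` field and key format are illustrative assumptions, not prescribed by the docs:

```python
def make_row_key(vehicle_id: str, timestamp: int) -> str:
    """Build a short, human-readable row key: "<id>#<fixed-width-timestamp>".

    A fixed-width timestamp keeps keys lexicographically sortable by time
    within each id, while the overall key stays short.
    """
    return f"{vehicle_id}#{timestamp:010d}"

key = make_row_key("NYC3425", 1532024400)
print(key)  # NYC3425#1532024400
```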
Question.50 All Google Cloud Bigtable client requests go through a front-end server ______ they are sent to a Cloud Bigtable node. (A) before (B) after (C) only if (D) once
Answer is (A) before
In the Cloud Bigtable architecture, all client requests go through a front-end server before they are sent to a Cloud Bigtable node.
The nodes are organized into a Cloud Bigtable cluster, which belongs to a Cloud Bigtable instance, which is a container for the cluster. Each node in the cluster handles a subset of the requests to the cluster.
Adding nodes to a cluster increases the number of simultaneous requests the cluster can handle, as well as the maximum throughput of the entire cluster.