Question.6 Which browsers are recommended for best use with Databricks Notebook? (A) Chrome and Firefox (B) Microsoft Edge and IE 11 (C) Safari and Microsoft Edge (D) None of These
Answer is (A) Chrome and Firefox
Chrome and Firefox are the browsers recommended by Databricks. Microsoft Edge and IE 11 are not recommended because of faulty rendering of iFrames. Safari is an acceptable browser, but Chrome and Firefox remain the recommended choice.
Question.7 How do you connect your Spark cluster to the Azure Blob? (A) By calling the .connect() function on the Spark Cluster. (B) By mounting it (C) By calling the .connect() function on the Azure Blob (D) None of These
Answer is (B) By mounting it
By mounting it. Mounting requires Azure credentials such as a SAS key and gives the cluster access to a virtually unlimited store for your data. There is no .connect() function on either the Spark cluster or the Azure Blob.
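As a minimal sketch, mounting a Blob container with a SAS token in a Databricks notebook (where dbutils and spark are predefined) typically looks like the following; the container, storage account, mount name, and token values are placeholders:

```python
# Mount an Azure Blob Storage container using a SAS token.
# All values in angle brackets are placeholders.
dbutils.fs.mount(
    source="wasbs://<container>@<storage-account>.blob.core.windows.net",
    mount_point="/mnt/<mount-name>",
    extra_configs={
        "fs.azure.sas.<container>.<storage-account>.blob.core.windows.net": "<sas-token>"
    },
)

# Once mounted, the storage is addressable like any other path:
df = spark.read.csv("/mnt/<mount-name>/data.csv", header=True)
```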
Question.8 How does Spark connect to databases like MySQL, Hive and other data stores? (A) JDBC (B) ODBC (C) Using the REST API Layer (D) None of These
Answer is (A) JDBC
JDBC. JDBC stands for Java Database Connectivity and is a Java API for connecting to databases such as MySQL, Hive, and other data stores. ODBC is not an option here, and Spark does not connect to these stores through a REST API layer.
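A minimal sketch of a JDBC read with the DataFrameReader is shown below; the URL, table name, and credentials are placeholders, and the matching JDBC driver must be available on the cluster:

```python
# Read a MySQL table over JDBC. Placeholders in angle brackets.
jdbc_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://<host>:3306/<database>")
    .option("dbtable", "<table>")
    .option("user", "<user>")
    .option("password", "<password>")
    .load()
)
```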
Question.9 How do you specify parameters when reading data? (A) Using .option() during your read allows you to pass key/value pairs specifying aspects of your read (B) Using .parameter() during your read allows you to pass key/value pairs specifying aspects of your read (C) Using .keys() during your read allows you to pass key/value pairs specifying aspects of your read (D) None of These
Answer is (A) “Using .option() during your read allows you to pass key/value pairs specifying aspects of your read”
Using .option() during your read allows you to pass key/value pairs specifying aspects of your read. For instance, options for reading CSV data include header, delimiter, and inferSchema.
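For example, a CSV read that passes these options as key/value pairs might look like this (the path is a placeholder):

```python
# Pass read options as key/value pairs when loading CSV data.
csv_df = (
    spark.read
    .option("header", "true")       # first line contains column names
    .option("delimiter", ",")       # field separator
    .option("inferSchema", "true")  # sample the data to infer column types
    .csv("/mnt/<mount-name>/data.csv")  # placeholder path
)
```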
Question.10 By default, how are corrupt records dealt with using spark.read.json()? (A) They appear in a column called “_corrupt_record” (B) They get deleted automatically (C) They throw an exception and exit the read operation (D) None of These
Answer is (A) “They appear in a column called _corrupt_record”
They appear in a column called “_corrupt_record”. They are not deleted automatically, nor does the read throw an exception and abort.
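As a small sketch, with the default PERMISSIVE mode a malformed JSON line is kept rather than dropped: its raw text is placed in _corrupt_record (the column shows up in the inferred schema only if at least one line failed to parse). The path below is a placeholder:

```python
# Read JSON with the default PERMISSIVE mode; malformed lines are preserved
# in the _corrupt_record column rather than deleted or raising an error.
json_df = spark.read.json("/mnt/<mount-name>/events.json")
json_df.printSchema()          # look for a _corrupt_record string column
json_df.show(truncate=False)   # corrupt rows have nulls plus the raw text
```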