Question.11 You need to create a data loading pattern for a Type 1 slowly changing dimension (SCD). Which two actions should you include in the process?
(A) Update rows when the non-key attributes have changed.
(B) Insert new rows when the natural key exists in the dimension table, and the non-key attribute values have changed.
(C) Update the effective end date of rows when the non-key attribute values have changed.
(D) Insert new records when the natural key is a new value in the table.
The answers are:
A. Update rows when the non-key attributes have changed.
D. Insert new records when the natural key is a new value in the table.
A and D describe a Type 1 SCD, while B and C describe a Type 2 SCD.
A Type 1 SCD does not preserve history, so there are no effective end dates to maintain: changed non-key attributes are overwritten in place, and new natural keys are inserted as new rows.
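As a concrete illustration, here is a minimal T-SQL sketch of the two actions (the dimension, staging table, and column names are hypothetical; on engines without MERGE, the same logic can be written as a separate UPDATE plus INSERT):

```
-- Hypothetical staging and dimension tables for a Type 1 load.
MERGE dbo.DimCustomer AS tgt
USING stg.Customer AS src
    ON tgt.CustomerAltKey = src.CustomerAltKey        -- match on the natural key
WHEN MATCHED AND (tgt.CustomerName <> src.CustomerName
               OR tgt.City <> src.City) THEN
    -- Action A: overwrite changed non-key attributes in place (no history kept)
    UPDATE SET tgt.CustomerName = src.CustomerName,
               tgt.City = src.City
WHEN NOT MATCHED BY TARGET THEN
    -- Action D: insert rows whose natural key is new to the dimension
    INSERT (CustomerAltKey, CustomerName, City)
    VALUES (src.CustomerAltKey, src.CustomerName, src.City);
```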
Question.12 You have an Azure Repos repository named Repo1 and a Fabric-enabled Microsoft Power BI Premium capacity. The capacity contains two workspaces named Workspace1 and Workspace2. Git integration is enabled at the workspace level. You plan to use Microsoft Power BI Desktop and Workspace1 to make version-controlled changes to a semantic model stored in Repo1. The changes will be built and deployed to Workspace2 by using Azure Pipelines. You need to ensure that report and semantic model definitions are saved as individual text files in a folder hierarchy. The solution must minimize development and maintenance effort. In which file format should you save the changes?
(A) PBIP
(B) PBIDS
(C) PBIT
(D) PBIX
Answer is (A) PBIP
PBIP (Power BI Project):
-> PBIP format is designed to work with version control systems like Azure Repos. It breaks down Power BI artifacts into individual files that can be managed and versioned separately, facilitating better collaboration and change tracking.
-> Folder Hierarchy: It saves the project in a folder hierarchy in which each component of the Power BI project (the report definition and the semantic model definition) is stored as separate files.
-> Text-Based: Being a text-based format, it integrates well with Git repositories and supports diff and merge operations.
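For example, saving a project named Sales as PBIP produces a layout roughly like the following (exact file names vary by Power BI Desktop version and by whether the TMDL and PBIR preview formats are enabled):

```
Sales.pbip                 -- small pointer file that opens the project
Sales.Report/
    definition.pbir        -- report definition (references the semantic model)
    report.json            -- report layout as text
Sales.SemanticModel/
    definition.pbism       -- semantic model metadata
    model.bim              -- model definition (or a definition/ TMDL folder)
```

Because every piece is a text file, Azure Pipelines can build and deploy the model from Repo1 without any extra conversion step.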
Reference:
https://learn.microsoft.com/en-us/power-bi/developer/projects/projects-overview
Question.13 You have a Fabric workspace named Workspace1 that contains a lakehouse named Lakehouse1. In Workspace1, you create a data pipeline named Pipeline1. You have CSV files stored in an Azure Storage account. You need to add an activity to Pipeline1 that will copy data from the CSV files to Lakehouse1. The activity must support Power Query M formula language expressions. Which type of activity should you add?
(A) Dataflow
(B) Notebook
(C) Script
(D) Copy data
Answer is (A) Dataflow
Power Query M Support: In a Fabric pipeline, the Dataflow activity runs a Dataflow Gen2, which is authored in Power Query and supports the Power Query M formula language, enabling complex transformations and data manipulations as part of the data ingestion process.
Transformations: Dataflows provide a wide range of transformation capabilities, which is especially useful for cleansing, aggregating, or reshaping CSV data before loading it into the destination.
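For illustration, here is a minimal Power Query M query such a dataflow might use to read one of the CSV files (the storage account URL, container, and file name are hypothetical):

```
let
    // Connect to a hypothetical Azure Storage container
    Source = AzureStorage.Blobs("https://contosostore.blob.core.windows.net/landing"),
    // Pick one CSV blob by name and take its content
    File = Source{[Name = "sales.csv"]}[Content],
    // Parse the CSV (UTF-8) and promote the first row to headers
    Csv = Csv.Document(File, [Delimiter = ",", Encoding = 65001]),
    Promoted = Table.PromoteHeaders(Csv, [PromoteAllScalars = true])
in
    Promoted
```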
Question.14 You are the administrator of a Fabric workspace that contains a lakehouse named Lakehouse1. Lakehouse1 contains the following tables:
- Table1: A Delta table created by using a shortcut
- Table2: An external table created by using Spark
- Table3: A managed table
You plan to connect to Lakehouse1 by using its SQL endpoint. What will you be able to do after connecting to Lakehouse1?
(A) Read Table3.
(B) Update the data in Table3.
(C) Read Table2.
(D) Update the data in Table1.
Answer is (A) Read Table3.
A is correct because managed tables are exposed through the SQL analytics endpoint and can be read from it.
B and D are out because the SQL analytics endpoint is read-only: you cannot update lakehouse tables through it and must use Spark or dataflows instead.
C is out because an external table created by using Spark is visible in the lakehouse but is not surfaced through the SQL analytics endpoint, so it cannot be read there, let alone updated.
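A quick way to verify the read-only behavior after connecting to the endpoint (the column name is hypothetical):

```
-- Reading the managed table through the SQL analytics endpoint succeeds:
SELECT TOP (10) * FROM Lakehouse1.dbo.Table3;

-- Any write is rejected, because the endpoint is read-only over lakehouse tables:
UPDATE Lakehouse1.dbo.Table3 SET SomeColumn = 0;   -- fails: data modification is not supported
```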
Reference:
https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-sql-analytics-endpoint
Question.15 You have a Fabric tenant that contains a warehouse. You use a dataflow to load a new dataset from OneLake to the warehouse. You need to add a Power Query step to identify the maximum values for the numeric columns. Which function should you include in the step?
(A) Table.MaxN
(B) Table.Max
(C) Table.Range
(D) Table.Profile
Answer is (D) Table.Profile
We should use Table.Profile to identify the maximum values for the numeric columns, because only Table.Profile returns per-column statistics, including the maximum value of each column. Table.Max instead returns the single largest row of the table, not column-level maximums.
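A minimal M sketch of such a step (the sample rows merely stand in for the dataset loaded from OneLake):

```
let
    // Sample data standing in for the new dataset
    Source = Table.FromRecords({
        [Product = "A", Qty = 3, Price = 10.5],
        [Product = "B", Qty = 7, Price = 4.2]
    }),
    // Table.Profile returns one row per column with Min, Max, Average, Count, etc.
    Profile = Table.Profile(Source),
    // Keep just each column's name and its maximum value
    Maxima = Table.SelectColumns(Profile, {"Column", "Max"})
in
    Maxima
```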
Reference:
Table.Profile – https://learn.microsoft.com/en-us/powerquery-m/table-profile
Table.Max – https://learn.microsoft.com/en-us/powerquery-m/table-max