
Commit b079d55

Merge pull request #2131 from MicrosoftDocs/main638911982604088587sync_temp
Repo sync for protected branch
2 parents 0b833d6 + 20e7675 commit b079d55

2 files changed: +10 -8 lines changed


docs/data-factory/cdc-copy-job.md

Lines changed: 1 addition & 1 deletion
@@ -120,7 +120,7 @@ Complete the following steps to create a new Copy job to ingest data from Azure

 ## Known limitations
 - When both CDC-enabled and non-CDC-enabled source tables are selected in a Copy Job, it treats all tables as watermark-based incremental copy.
-- When CDC-enabled source tables are selected, column mapping and temp DB can't be configured.
+- When CDC-enabled source tables are selected, column mapping can't be configured.
 - Custom capture instances aren't supported; only the default capture instance is supported.
 - SCD2 isn't supported for CDC-enabled source datastore yet.
 - DDL isn't supported yet in Copy job.

docs/security/experience-specific-guidance.md

Lines changed: 9 additions & 7 deletions
@@ -14,7 +14,7 @@ This document provides experience-specific guidance for recovering your Fabric d

 ## Sample scenario

-A number of the guidance sections in this document use the following sample scenario for purposes of explanation and illustration. Refer back to this scenario as necessary.
+Many guidance sections in this document use the following sample scenario for purposes of explanation and illustration. Refer back to this scenario as necessary.

 Let's say you have a capacity C1 in region A that has a workspace W1. If you've [turned on disaster recovery](./disaster-recovery-guide.md#disaster-recovery-capacity-setting) for capacity C1, OneLake data will be replicated to a backup in region B. If region A faces disruptions, the Fabric service in C1 fails over to region B.

@@ -53,7 +53,7 @@ Customers can recreate lakehouses by using a custom Scala script.

 1. Create a new notebook in the workspace C2.W2.

-1. To recover the tables and files from the original lakehouse, refer to the data with OneLake paths such as abfss (see [Connecting to Microsoft OneLake](../onelake/onelake-access-api.md)). You can use the code example below (see [Introduction to Microsoft Spark Utilities](/azure/synapse-analytics/spark/microsoft-spark-utilities?pivots=programming-language-python/)) in the notebook to get the ABFS paths of files and tables from the original lakehouse. (Replace C1.W1 with the actual workspace name)
+1. To recover the tables and files from the original lakehouse, refer to the data with OneLake paths such as abfss (see [Connecting to Microsoft OneLake](../onelake/onelake-access-api.md)). You can use the following code example (see [Introduction to Microsoft Spark Utilities](/azure/synapse-analytics/spark/microsoft-spark-utilities?pivots=programming-language-python/)) in the notebook to get the ABFS paths of files and tables from the original lakehouse. (Replace C1.W1 with the actual workspace name)

 ```
 mssparkutils.fs.ls('abfs[s]://<C1.W1>@onelake.dfs.fabric.microsoft.com/<item>.<itemtype>/<Tables>/<fileName>')
@@ -85,18 +85,18 @@ Customers can recreate lakehouses by using a custom Scala script.
 mssparkutils.fs.write(s"$destination/_delta_log/_last_checkpoint", "", true)
 ```

-1. Once you run the script, the tables will appear in the new lakehouse.
+1. Once you run the script, the tables appear in the new lakehouse.
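
The hunks above quote only fragments of the custom Scala recovery script. As a rough sketch of the technique those steps describe — listing the original lakehouse's contents over ABFS and copying them into the new lakehouse — something like the following could run in the C2.W2 notebook. The `<C1.W1>`, `<C2.W2>`, and `<item>.<itemtype>` placeholders follow the quoted example; the loop and the `mssparkutils.fs.cp` calls are an illustration, not the documented script.

```scala
// Illustrative sketch only, not the documented recovery script.
// Assumes a Fabric Scala notebook, where mssparkutils is available by default;
// replace the <...> placeholders with the real workspace and item names.
val source      = "abfss://<C1.W1>@onelake.dfs.fabric.microsoft.com/<item>.<itemtype>"
val destination = "abfss://<C2.W2>@onelake.dfs.fabric.microsoft.com/<item>.<itemtype>"

// Copy every table folder from the original lakehouse's Tables directory
// into the recreated lakehouse, recursing into each table folder.
for (table <- mssparkutils.fs.ls(s"$source/Tables")) {
  mssparkutils.fs.cp(table.path, s"$destination/Tables/${table.name}", true)
}

// Files under the lakehouse's Files section can be copied the same way.
for (file <- mssparkutils.fs.ls(s"$source/Files")) {
  mssparkutils.fs.cp(file.path, s"$destination/Files/${file.name}", true)
}
```

The quoted `mssparkutils.fs.write(s"$destination/_delta_log/_last_checkpoint", "", true)` line above indicates the documented script also touches the Delta log; that detail is left to the actual script.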

 #### Approach 2: Use Azure Storage Explorer to copy files and tables

 To recover only specific Lakehouse files or tables from the original lakehouse, use Azure Storage Explorer. Refer to [Integrate OneLake with Azure Storage Explorer](../onelake/onelake-azure-storage-explorer.md) for detailed steps. For large data sizes, use [Approach 1](#approach-1-using-custom-script-to-copy-lakehouse-delta-tables-and-files).

 > [!NOTE]
-> The two approaches described above recover both the metadata and data for Delta-formatted tables, because the metadata is co-located and stored with the data in OneLake. For non-Delta formatted tables (e.g. CSV, Parquet, etc.) that are created using Spark Data Definition Language (DDL) scripts/commands, the user is responsible for maintaining and re-running the Spark DDL scripts/commands to recover them.
+> The two approaches described above recover both the metadata and data for Delta-formatted tables, because the metadata is co-located and stored with the data in OneLake. For non-Delta formatted tables (for example, CSV, Parquet, etc.) that are created using Spark Data Definition Language (DDL) scripts/commands, the user is responsible for maintaining and re-running the Spark DDL scripts/commands to recover them.
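
On the NOTE just above: for non-Delta tables only the data files come back with the copy, so the table definitions must be re-created by re-running the original Spark DDL. A minimal, hypothetical example of the kind of DDL script a user would keep and re-run after recovery (the `sales_raw` table name and file path are made up for illustration):

```scala
// Hypothetical Spark DDL kept by the user and re-run after recovery to
// re-register a non-Delta (CSV) table over files copied into the new lakehouse.
spark.sql("""
  CREATE TABLE IF NOT EXISTS sales_raw
  USING CSV
  OPTIONS (header "true")
  LOCATION 'abfss://<C2.W2>@onelake.dfs.fabric.microsoft.com/<item>.<itemtype>/Files/sales_raw'
""")
```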

 ### Notebook

-Notebooks from the primary region remain unavailable to customers and the code in notebooks won't be replicated to the secondary region. To recover Notebook code in the new region, there are two approaches to recovering Notebook code content.
+Notebooks from the primary region remain unavailable to customers and the code in notebooks aren't replicated to the secondary region. To recover Notebook code in the new region, there are two approaches to recovering Notebook code content.

 #### Approach 1: User-managed redundancy with Git integration (in public preview)
@@ -116,7 +116,7 @@ The best way to make this easy and quick is to use Fabric Git integration, then

 :::image type="content" source="./media/experience-specific-guidance/notebook-reconnect-to-ado-repo.png" alt-text="Screenshot showing notebook reconnected to ADO repo.":::

-1. Select the Source control button. Then select the relevant branch of the repo. Then select **Update all**. The original notebook will appear.
+1. Select the Source control button. Then select the relevant branch of the repo. Then select **Update all**. The original notebook appears.

 :::image type="content" source="./media/experience-specific-guidance/notebook-source-control-update-all.png" alt-text="Screenshot showing how to update all notebooks on a branch.":::
@@ -158,7 +158,7 @@ If you don't take the Git integration approach, you can save the latest version

 ### Spark Job Definition

-Spark job definitions (SJD) from the primary region remain unavailable to customers, and the main definition file and reference file in the notebook will be replicated to the secondary region via OneLake. If you want to recover the SJD in the new region, you can follow the manual steps described below to recover the SJD. Note that historical runs of the SJD won't be recovered.
+Spark job definitions (SJD) from the primary region remain unavailable to customers, and the main definition file and reference file in the notebook will be replicated to the secondary region via OneLake. If you want to recover the SJD in the new region, you can follow the manual steps described below to recover the SJD. Historical runs of the SJD won't be recovered.

 You can recover the SJD items by copying the code from the original region by using Azure Storage Explorer and manually reconnecting Lakehouse references after the disaster.
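
The quoted SJD guidance uses Azure Storage Explorer for the copy. If you would rather script that step, a speculative variant of the earlier lakehouse sketch could copy the replicated definition files over ABFS instead; scripting it this way is an assumption, not the documented procedure, and the placeholders again follow the doc's convention.

```scala
// Speculative alternative to the documented Azure Storage Explorer copy:
// pull a Spark job definition item's replicated files from the original
// workspace into the new one over ABFS. The <...> placeholders are not real names.
val sjdSource = "abfss://<C1.W1>@onelake.dfs.fabric.microsoft.com/<item>.<itemtype>"
val sjdTarget = "abfss://<C2.W2>@onelake.dfs.fabric.microsoft.com/<item>.<itemtype>"

// Copy each replicated file or folder of the item into the new workspace.
for (entry <- mssparkutils.fs.ls(sjdSource)) {
  mssparkutils.fs.cp(entry.path, s"$sjdTarget/${entry.name}", true)
}
```

Lakehouse references inside the copied definition still need to be reconnected manually, as the quoted paragraph notes.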
@@ -262,6 +262,8 @@ If you want to recover a Dataflow Gen2 item in the new region, you need to expor

 1. The template is then imported into your new Dataflow Gen2 item.

+Dataflows Save As feature is not supported in the event of disaster recovery.
+
 ### Data Pipelines

 Customers can't access data pipelines in the event of regional disaster, and the configurations aren't replicated to the paired region. We recommend building your critical data pipelines in multiple workspaces across different regions.
