
Commit 6c57f78

JasonWHowellSyntaxC4 authored and committed

Fixing formatting on the numbered items

1 parent: 6c86671

1 file changed (+12 −10 lines)

articles/azure-databricks/howto-regional-disaster-recovery.md

@@ -37,7 +37,7 @@ To create your own regional disaster recovery topology, follow these requirement
 
 ## Detailed migration steps
 
-1. Set up the Databricks command-line interface on your computer
+1. **Set up the Databricks command-line interface on your computer**
 
 This article shows a number of code examples that use the command-line interface for most of the automated steps, since it is an easy-to-use wrapper over the Azure Databricks REST API.

@@ -50,7 +50,9 @@ To create your own regional disaster recovery topology, follow these requirement
 > [!NOTE]
 > Any Python scripts provided in this article are expected to work with Python 2.7+ < 3.x.
 
-2. Configure two profiles. One for the primary workspace, and another one for the secondary workspace:
+**2. Configure two profiles.**
+
+Configure one for the primary workspace and another for the secondary workspace:
 
 ```bash
 databricks configure --profile primary
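For illustration, the two profiles created by `databricks configure` end up side by side in `~/.databrickscfg`. The helper below is a hypothetical Python 3 sketch (the article's own scripts target Python 2.7), and the region hosts and token placeholders are invented; only the INI layout reflects what the CLI writes.

```python
# Sketch only: render a .databrickscfg-style INI holding two CLI profiles.
# Hosts and tokens below are placeholders, not values from the article.
from configparser import ConfigParser
from io import StringIO

def render_profiles(profiles):
    """Render an INI body from {profile_name: (host, token)}."""
    cfg = ConfigParser()
    for name, (host, token) in profiles.items():
        cfg[name] = {"host": host, "token": token}
    out = StringIO()
    cfg.write(out)
    return out.getvalue()

text = render_profiles({
    "primary": ("https://eastus2.azuredatabricks.net", "<primary-token>"),
    "secondary": ("https://westus.azuredatabricks.net", "<secondary-token>"),
})
print(text)
```

Each subsequent CLI call then picks one of these sections via `--profile`.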
@@ -69,11 +71,11 @@ To create your own regional disaster recovery topology, follow these requirement
 databricks workspace ls --profile secondary
 ```
 
-3. Migrate Azure Active Directory users
+**3. Migrate Azure Active Directory users**
 
 Manually add the same Azure Active Directory users to the secondary workspace that exist in the primary workspace.
 
-4. Migrate the user folders and notebooks
+**4. Migrate the user folders and notebooks**
 
 Use the following Python code to migrate the sandboxed user environments, which include the nested folder structure and the notebooks for each user.

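The article's full migration script is elided in this diff. As a rough Python 3 sketch of the per-user loop it automates (the user list and local export directory are invented for illustration), each user folder is exported from the primary workspace with `databricks workspace export_dir` and re-imported into the secondary one; the helper below only builds the command lines and does not run them.

```python
# Hypothetical sketch: build the databricks-cli commands that copy each
# user's notebook tree from the primary workspace to the secondary one.
def migration_commands(users):
    cmds = []
    for user in users:
        src = "/Users/%s" % user          # workspace path per user
        local = "./export/%s" % user      # invented local staging directory
        cmds.append(["databricks", "workspace", "export_dir", src, local,
                     "--profile", "primary"])
        cmds.append(["databricks", "workspace", "import_dir", local, src,
                     "--profile", "secondary"])
    return cmds

cmds = migration_commands(["alice@contoso.com", "bob@contoso.com"])
for c in cmds:
    print(" ".join(c))
```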
@@ -114,7 +116,7 @@ To create your own regional disaster recovery topology, follow these requirement
 print "All done"
 ```
 
-5. Migrate cluster configuration
+**5. Migrate the cluster configurations**
 
 Once the notebooks have been migrated, you can optionally migrate the cluster configurations to the new workspace. This step is almost fully automated with databricks-cli, unless you want to migrate only selected cluster configurations rather than all of them.

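One detail such a script has to handle: the JSON returned when reading an existing cluster contains server-assigned fields that cannot be sent back in a create request. The field list and sample payload below are assumptions for illustration, not taken from the article.

```python
# Hedged sketch: drop read-only fields from a cluster spec before
# re-creating it in the secondary workspace. The set of fields is an
# illustrative assumption, not an exhaustive list.
READ_ONLY = {"cluster_id", "state", "state_message", "start_time",
             "terminated_time", "creator_user_name", "default_tags"}

def creatable_config(cluster_json):
    """Return a copy of the cluster spec without read-only fields."""
    return {k: v for k, v in cluster_json.items() if k not in READ_ONLY}

old = {"cluster_id": "0123-456789-abc12", "cluster_name": "etl",
       "spark_version": "4.0.x-scala2.11", "num_workers": 4,
       "state": "TERMINATED"}
new = creatable_config(old)
print(new)
```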
@@ -167,7 +169,7 @@ To create your own regional disaster recovery topology, follow these requirement
 print "All done"
 ```
 
-6. Migrate jobs configuration
+**6. Migrate the jobs configuration**
 
 If you migrated cluster configurations in the previous step, you can opt to migrate job configurations to the new workspace. This step is fully automated with databricks-cli, unless you want to migrate only selected job configurations rather than all of them.

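A minimal sketch of the job-side transformation, under the assumption (consistent with the Jobs API) that a job read back from the primary workspace arrives as an envelope whose `settings` field holds the re-creatable spec; the sample payload is invented.

```python
# Hedged sketch: extract the `settings` blob from a jobs/get-style
# response so it can be fed to a job-create call in the new workspace.
import json

def job_create_payload(jobs_get_response):
    """Return only the re-creatable settings portion of a job."""
    return jobs_get_response["settings"]

resp = {"job_id": 42, "created_time": 1528851200000,  # invented sample
        "settings": {"name": "nightly-etl",
                     "notebook_task": {"notebook_path": "/Users/alice@contoso.com/etl"},
                     "max_retries": 1}}
payload = job_create_payload(resp)
print(json.dumps(payload))
```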
@@ -231,15 +233,15 @@ To create your own regional disaster recovery topology, follow these requirement
 print "All done"
 ```
 
-7. Migrate libraries
+**7. Migrate libraries**
 
 There's currently no straightforward way to migrate libraries from one workspace to another, so this step is mostly manual: reinstall those libraries into the new workspace. It is possible to automate this by combining the [DBFS CLI](https://github.com/databricks/databricks-cli#dbfs-cli-examples), to upload custom libraries to the workspace, with the [Libraries CLI](https://github.com/databricks/databricks-cli#libraries-cli).

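A hypothetical sketch of that automation, combining the two CLIs the article links to: upload each custom library to DBFS in the secondary workspace, then attach it to a cluster there. The jar name, DBFS path, and cluster ID are invented, and the helper only builds the command lines rather than executing them.

```python
# Hypothetical sketch: per-library upload (DBFS CLI) plus install
# (Libraries CLI) commands targeting the secondary workspace.
def library_commands(jars, cluster_id):
    cmds = []
    for jar in jars:
        dbfs_path = "dbfs:/FileStore/jars/%s" % jar  # invented staging path
        cmds.append(["dbfs", "cp", jar, dbfs_path,
                     "--profile", "secondary"])
        cmds.append(["databricks", "libraries", "install",
                     "--cluster-id", cluster_id, "--jar", dbfs_path,
                     "--profile", "secondary"])
    return cmds

lib_cmds = library_commands(["my-lib.jar"], "0123-456789-new01")
for c in lib_cmds:
    print(" ".join(c))
```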
-8. Migrate Azure blob storage and Azure Data Lake Store Mounts
+**8. Migrate Azure blob storage and Azure Data Lake Store mounts**
 
 Manually remount all [Azure Blob storage](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html) and [Azure Data Lake Store (Gen 1)](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake.html) mount points using a notebook-based solution. The storage resources were mounted in the primary workspace, and that has to be repeated in the secondary workspace. There is no external API for mounts.

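Because there is no external API for mounts, a notebook in the secondary workspace has to call `dbutils.fs.mount` again for each container. As a hedged sketch, the Python 3 helper below only generates the notebook source for one Blob storage mount; the container, account, secret scope, and key names are invented for illustration.

```python
# Hypothetical sketch: generate the dbutils.fs.mount snippet to run in a
# notebook in the secondary workspace. All names below are placeholders.
def mount_snippet(container, account, scope, key_name):
    return (
        'dbutils.fs.mount(\n'
        '  source = "wasbs://%s@%s.blob.core.windows.net",\n'
        '  mount_point = "/mnt/%s",\n'
        '  extra_configs = {"fs.azure.account.key.%s.blob.core.windows.net":\n'
        '    dbutils.secrets.get(scope = "%s", key = "%s")})'
        % (container, account, container, account, scope, key_name)
    )

snippet = mount_snippet("data", "contosostore", "my-scope", "storage-key")
print(snippet)
```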
-9. Migrate cluster init scripts
+**9. Migrate cluster init scripts**
 
 Any cluster initialization scripts can be migrated from the old workspace to the new workspace using the [DBFS CLI](https://github.com/databricks/databricks-cli#dbfs-cli-examples). First, copy the needed scripts from "dbfs:/databricks/init/.." to your local desktop or virtual machine. Next, copy those scripts into the new workspace at the same path.

@@ -251,7 +253,7 @@ To create your own regional disaster recovery topology, follow these requirement
 dbfs cp -r old-ws-init-scripts dbfs:/databricks/init --profile secondary
 ```
 
-1. Manually reconfigure and reapply access control.
+**1. Manually reconfigure and reapply access control.**
 
 If your existing primary workspace is configured to use the Premium tier (SKU), it's likely that you are also using the [Access Control feature](https://docs.azuredatabricks.net/administration-guide/admin-settings/index.html#manage-access-control).
