Commit 6c86671

JasonWHowell authored and SyntaxC4 committed

Fixing bash code blocks

1 parent 28fb5ca commit 6c86671

File tree

1 file changed: +3 -3 lines changed

articles/azure-databricks/howto-regional-disaster-recovery.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -29,7 +29,7 @@ As you notice in the preceding architecture description, there are a number of c
 
 To create your own regional disaster recovery topology, follow these requirements:
 
-1. Provision multiple Azure Databricks workspaces in a separate Azure region. For example, create the primary Azure Databricks workspace in East US2. Create the secondary disaster-recovery Azure Databricks workspace in a separate region, such as West US.
+1. Provision multiple Azure Databricks workspaces in separate Azure regions. For example, create the primary Azure Databricks workspace in East US2. Create the secondary disaster-recovery Azure Databricks workspace in a separate region, such as West US.
 
 2. Use [Geo-redundant storage](../storage/common/storage-redundancy-grs.md#read-access-geo-redundant-storage). The data associated Azure Databricks is stored by default in Azure Storage. The results from Databricks jobs are also stored in Azure Blob Storage, so that the processed data is durable and remains highly available after cluster is terminated. As the Storage and Databricks cluster are co-located, you must use Geo-redundant storage so that data can be accessed in secondary region if primary region is no longer accessible.
 
@@ -52,7 +52,7 @@ To create your own regional disaster recovery topology, follow these requirement
 
 2. Configure two profiles. One for the primary workspace, and another one for the secondary workspace:
 
-   ```python
+   ```bash
    databricks configure --profile primary
    databricks configure --profile secondary
    ```
@@ -64,7 +64,7 @@ To create your own regional disaster recovery topology, follow these requirement
    ```
 
    You can manually switch at the command line if needed:
-   ```python
+   ```bash
    databricks workspace ls --profile primary
    databricks workspace ls --profile secondary
    ```
````
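For context on the commands in the diff: the Databricks CLI stores each named profile created by `databricks configure --profile <name>` as a section in `~/.databrickscfg`, and `--profile` on later commands selects which section to use. The sketch below writes an example of that file to a temp path and lists the profile names; the host URLs and token placeholder are illustrative assumptions, not values from this commit.

```shell
# Sketch: the Databricks CLI records each --profile as an INI section
# in ~/.databrickscfg. We mimic the file that the two
# `databricks configure --profile ...` runs in the diff would produce,
# then extract the section (profile) names. Hosts and token are
# placeholder values, not real credentials.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[primary]
host = https://eastus2.azuredatabricks.net
token = <personal-access-token>

[secondary]
host = https://westus.azuredatabricks.net
token = <personal-access-token>
EOF

# List the profile names (INI section headers), one per line
profiles=$(grep -o '^\[[a-z]*\]' "$cfg")
echo "$profiles"
rm -f "$cfg"
```

With both sections present, `databricks workspace ls --profile primary` and `--profile secondary` (as in the third hunk) each resolve to their own host and token.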

0 commit comments

Comments
 (0)