Commit 9a96b24

Merge pull request #50241 from JasonWHowell/databricks
Updating article
2 parents 607936c + 35fcd79 commit 9a96b24

3 files changed (+12, -6 lines changed)

articles/azure-databricks/howto-regional-disaster-recovery.md

Lines changed: 12 additions & 6 deletions
@@ -13,13 +13,19 @@ ms.date: 08/27/2018
This article describes a disaster recovery architecture useful for Azure Databricks clusters, and the steps to accomplish that design.

- ## Control plan architecture
+ ## Azure Databricks overview

- At a high level, when you create an Azure Databricks workspace from the Azure portal, a [managed appliance](../managed-applications/overview.md) is deployed as an Azure resource in your subscription, in the chose Azure region (for example, West US). This appliance is deployed in an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with a [Network Security Group](../virtual-network/manage-network-security-group.md) and an Azure Storage account, available in your subscription. The virtual network provides perimeter level security to the Databricks workspace and is protected via network security group. Within the workspace, you can create Databricks cluster(s) by providing the worker and driver VM type and Databricks runtime version. The persisted data is available in your storage account, which can be Azure Blob Storage or Azure Data Lake Store. Once the cluster is created, you can run jobs via notebooks, REST APIs, ODBC/JDBC endpoints by attaching them to a specific cluster.
+ Azure Databricks is a fast, easy, and collaborative Apache Spark-based analytics service. For a big data pipeline, the data (raw or structured) is ingested into Azure through Azure Data Factory in batches, or streamed in near real-time using Kafka, Event Hubs, or IoT Hub. This data lands in a data lake for long-term persisted storage, in Azure Blob Storage or Azure Data Lake Storage. As part of your analytics workflow, use Azure Databricks to read data from multiple data sources such as [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), [Azure Data Lake Storage](../data-lake-store/index.md), [Azure Cosmos DB](../cosmos-db/index.yml), or [Azure SQL Data Warehouse](../sql-data-warehouse/index.md) and turn it into breakthrough insights using Spark.
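To make that last point concrete, here is a minimal PySpark sketch of reading a file that has landed in Azure Blob Storage into a Spark DataFrame. The storage account, container, access key, and path are placeholders, not values from this article; in a Databricks notebook the `spark` session already exists.

```python
# Illustrative sketch only: account, container, key, and path are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # in a Databricks notebook, `spark` is already provided

storage_account = "mystorageaccount"          # hypothetical storage account name
container = "raw-data"                        # hypothetical container name
access_key = "<storage-account-access-key>"   # in real use, prefer a Databricks secret scope

# Expose the account key to the Spark session so wasbs:// paths can be read
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.blob.core.windows.net",
    access_key,
)

# Read a CSV folder from Blob Storage into a DataFrame
df = (
    spark.read.option("header", "true")
    .csv(f"wasbs://{container}@{storage_account}.blob.core.windows.net/events/")
)
df.show(5)
```

The same pattern applies to the other sources listed above; each has its own connector and configuration keys.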

- The Databricks control plane manages and monitors the Databricks workspace environment. Any management operation such as create cluster will be initiated from the Control Plane. All metadata, such as scheduled jobs, is stored in an Azure Database with geo-replication for fault tolerance.
+ ![Databricks pipeline](media/howto-regional-disaster-recovery/databricks-pipeline.png)

- ![Databricks control plane architecture](media/howto-regional-disaster-recovery/databricks-control-plane.png)
+ ## Azure Databricks architecture
+
+ At a high level, when you create an Azure Databricks workspace from the Azure portal, a [managed appliance](../managed-applications/overview.md) is deployed as an Azure resource in your subscription, in the chosen Azure region (for example, West US). This appliance is deployed in an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with a [Network Security Group](../virtual-network/manage-network-security-group.md) and an Azure Storage account, available in your subscription. The virtual network provides perimeter-level security to the Databricks workspace and is protected via the network security group. Within the workspace, you can create Databricks clusters by providing the worker and driver VM type and Databricks runtime version. The persisted data is available in your storage account, which can be Azure Blob Storage or Azure Data Lake Store. Once the cluster is created, you can run jobs via notebooks, REST APIs, or ODBC/JDBC endpoints by attaching them to a specific cluster.
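As a sketch of the REST API route mentioned above (and not the only way to create a cluster), the following shows a cluster-create call against the Databricks Clusters REST API 2.0. The workspace URL, personal access token, runtime version, and VM size are placeholder assumptions:

```python
# Hedged sketch: create a cluster via the Clusters REST API 2.0.
# Workspace URL, token, runtime version, and node type are placeholders.
import requests

workspace_url = "https://westus.azuredatabricks.net"   # hypothetical regional workspace URL
token = "<personal-access-token>"                       # generated in the workspace UI

payload = {
    "cluster_name": "dr-example-cluster",
    "spark_version": "4.3.x-scala2.11",   # a Databricks runtime version available in the workspace
    "node_type_id": "Standard_DS3_v2",    # worker/driver VM type
    "num_workers": 2,
}

resp = requests.post(
    f"{workspace_url}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
print("Created cluster:", resp.json()["cluster_id"])
```

Equivalent calls can be made with `curl` or with the Databricks CLI's `clusters` commands.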
+
+ The Databricks control plane manages and monitors the Databricks workspace environment. Any management operation, such as creating a cluster, is initiated from the control plane. All metadata, such as scheduled jobs, is stored in an Azure Database with geo-replication for fault tolerance.
+
+ ![Databricks architecture](media/howto-regional-disaster-recovery/databricks-architecture.png)

One of the advantages of this architecture is that users can connect Azure Databricks to any storage resource in their account. A key benefit is that both compute (Azure Databricks) and storage can be scaled independently of each other.

@@ -237,15 +243,15 @@ To create your own regional disaster recovery topology, follow these requirement
7. **Migrate libraries**

- There's currently no straightforward way to migrate libraries from one workspace to another. Reinstall those libraries into the new workspace. Hence this step is mostly manual. This is possible to automate using combination of [DBFS CLI](https://github.com/databricks/databricks-cli#dbfs-cli-examples) to upload custom libraries to the workspace and [Libraries CLI](https://github.com/databricks/databricks-cli#libraries-cli).
+ There's currently no straightforward way to migrate libraries from one workspace to another. Instead, reinstall those libraries into the new workspace manually. It's possible to automate this with a combination of the [DBFS CLI](https://github.com/databricks/databricks-cli#dbfs-cli-examples), to upload custom libraries to the workspace, and the [Libraries CLI](https://github.com/databricks/databricks-cli#libraries-cli).
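The CLIs linked above are the documented route; purely as an alternative sketch, the install half of that automation can also be scripted against the Libraries REST API 2.0, as below. The workspace URL, token, cluster ID, and jar path are placeholders, and the jar is assumed to have already been copied to DBFS (for example with `dbfs cp`):

```python
# Alternative sketch (not the article's CLI-based approach): install a custom
# library on a cluster in the secondary workspace via the Libraries REST API 2.0.
# Workspace URL, token, cluster ID, and jar path are placeholders.
import requests

secondary_url = "https://eastus2.azuredatabricks.net"    # hypothetical secondary workspace
token = "<personal-access-token>"
cluster_id = "<cluster-id-in-secondary-workspace>"

resp = requests.post(
    f"{secondary_url}/api/2.0/libraries/install",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "cluster_id": cluster_id,
        "libraries": [{"jar": "dbfs:/FileStore/jars/my-custom-lib.jar"}],  # assumed DBFS path
    },
)
resp.raise_for_status()
```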

8. **Migrate Azure blob storage and Azure Data Lake Store mounts**

Manually remount all [Azure Blob storage](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html) and [Azure Data Lake Store (Gen 1)](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake.html) mount points using a notebook-based solution. The storage resources would have been mounted in the primary workspace, and that has to be repeated in the secondary workspace. There is no external API for mounts.
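A minimal notebook-based sketch for the Azure Blob storage case follows; the storage account, container, secret scope, and mount point are placeholders, and `dbutils` is only available inside a Databricks notebook:

```python
# Notebook sketch (assumed names): remount an Azure Blob Storage container in
# the secondary workspace. `dbutils` and `display` are Databricks notebook builtins.
storage_account = "mystorageaccount"   # hypothetical storage account
container = "raw-data"                 # hypothetical container
mount_point = "/mnt/raw-data"          # hypothetical mount point

dbutils.fs.mount(
    source=f"wasbs://{container}@{storage_account}.blob.core.windows.net",
    mount_point=mount_point,
    extra_configs={
        f"fs.azure.account.key.{storage_account}.blob.core.windows.net":
            dbutils.secrets.get(scope="my-scope", key="storage-key")  # assumes a secret scope exists
    },
)

display(dbutils.fs.ls(mount_point))  # quick check that the mount is readable
```

Run the same notebook, with the secondary workspace's secrets, for each mount point that existed in the primary workspace.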

9. **Migrate cluster init scripts**

- Any cluster initialization scripts can be migrated from old to new workspace using the [DBFS CLI](https://github.com/databricks/databricks-cli#dbfs-cli-examples). First, copy the needed scripts from "dbfs:/dat abricks/init/.." to your local desktop or virtual machine. Next, copy those scripts into the new workspace at the same path.
+ Any cluster initialization scripts can be migrated from the old to the new workspace using the [DBFS CLI](https://github.com/databricks/databricks-cli#dbfs-cli-examples). First, copy the needed scripts from `dbfs:/databricks/init/..` to your local desktop or virtual machine. Next, copy those scripts into the new workspace at the same path.

```bash
# Copy init scripts from the primary workspace to a local directory
# (assumes CLI profiles named "primary" and "secondary" were set up with `databricks configure`)
dbfs cp -r dbfs:/databricks/init ./old-ws-init-scripts --profile primary

# Copy the scripts from the local directory into the secondary workspace at the same path
dbfs cp -r ./old-ws-init-scripts dbfs:/databricks/init --profile secondary
```
