
Commit 599c33b

committed
Merge branch 'master' of https://github.com/MicrosoftDocs/azure-docs-pr into ipv6inplaceupgradepowershell
2 parents: a9f62c7 + f49ab39

File tree: 4 files changed, +42 −13 lines changed

articles/data-factory/concepts-data-flow-overview.md

Lines changed: 32 additions & 0 deletions
@@ -35,6 +35,38 @@ The graph displays the transformation stream. It shows the lineage of source dat
![Canvas](media/data-flow/canvas2.png "Canvas")
### Azure integration runtime data flow properties

![Debug button](media/data-flow/debugbutton.png "Debug button")

When you begin working with data flows in ADF, turn on the **Debug** switch for data flows at the top of the browser UI. This switch spins up an Azure Databricks cluster for interactive debugging, data previews, and pipeline debug executions. You can set the size of the cluster by choosing a custom [Azure Integration Runtime](concepts-integration-runtime.md). The debug session stays alive for up to 60 minutes after your last data preview or debug pipeline execution.

When you operationalize your pipelines with data flow activities, ADF uses the Azure Integration Runtime associated with the [activity](control-flow-execute-data-flow-activity.md) in the **Run On** property.

The default Azure Integration Runtime is a small cluster with a single four-core worker node, intended to let you preview data and execute debug pipelines quickly at minimal cost. Set a larger Azure IR configuration if you perform operations against large datasets.

You can instruct ADF to maintain a pool of cluster resources (VMs) by setting a time to live (TTL) in the Azure IR data flow properties. Doing so results in faster job execution on subsequent activities.
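As an illustrative sketch (the name and values are placeholders; the property shape follows the ADF v2 managed integration runtime schema as best understood), an Azure IR that sets the data flow compute size and a 10-minute TTL might be defined like this:

```json
{
    "name": "DataFlowAzureIR",
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "AutoResolve",
                "dataFlowProperties": {
                    "computeType": "General",
                    "coreCount": 8,
                    "timeToLive": 10
                }
            }
        }
    }
}
```

With a nonzero `timeToLive`, the pool of VMs is kept warm for that many minutes after a job finishes, so the next data flow activity that runs on this IR can skip cluster acquisition.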
#### Azure integration runtime and data flow strategies

##### Execute data flows in parallel

If you execute data flows in a pipeline in parallel, ADF spins up a separate Azure Databricks cluster for each activity execution, based on the settings in the Azure Integration Runtime attached to each activity. To design parallel executions in ADF pipelines, add your data flow activities without precedence constraints in the UI.

Of the three options described here, this one will likely execute in the shortest amount of time end-to-end. However, each parallel data flow executes at the same time on a separate cluster, so the ordering of events is non-deterministic.
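In pipeline JSON, "without precedence constraints" simply means the data flow activities carry no `dependsOn` entries. A minimal hedged sketch (activity and data flow names are hypothetical):

```json
{
    "name": "ParallelDataFlowsPipeline",
    "properties": {
        "activities": [
            {
                "name": "TransformSalesActivity",
                "type": "ExecuteDataFlow",
                "typeProperties": {
                    "dataFlow": { "referenceName": "TransformSales", "type": "DataFlowReference" }
                }
            },
            {
                "name": "TransformInventoryActivity",
                "type": "ExecuteDataFlow",
                "typeProperties": {
                    "dataFlow": { "referenceName": "TransformInventory", "type": "DataFlowReference" }
                }
            }
        ]
    }
}
```

Because neither activity lists the other in a `dependsOn` array, ADF is free to start both at once, each on its own cluster.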
##### Overload a single data flow

If you put all of your logic inside a single data flow, all of it executes in the same job execution context on a single Spark cluster instance.

This option can be more difficult to follow and troubleshoot because your business rules and business logic are jumbled together. It also doesn't offer much reusability.
##### Execute data flows serially

If you execute your data flow activities serially in the pipeline and you have set a TTL on the Azure IR configuration, ADF reuses the compute resources (VMs), resulting in faster subsequent execution times. You still receive a new Spark context for each execution.

Of the three options, this one will likely take the longest to execute end-to-end. But it provides a clean separation of logical operations in each data flow step.
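Serial execution is expressed by chaining the activities with `dependsOn`. A hedged fragment showing only the second activity of such a chain (names are hypothetical):

```json
{
    "name": "TransformInventoryActivity",
    "type": "ExecuteDataFlow",
    "dependsOn": [
        {
            "activity": "TransformSalesActivity",
            "dependencyConditions": [ "Succeeded" ]
        }
    ],
    "typeProperties": {
        "dataFlow": { "referenceName": "TransformInventory", "type": "DataFlowReference" }
    }
}
```

The second activity waits for the first to succeed; if both run on the same TTL-enabled Azure IR, the warm VM pool is reused even though each run gets a fresh Spark context.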
### Configuration panel

The configuration panel shows the settings specific to the currently selected transformation. If no transformation is selected, it shows the data flow. In the overall data flow configuration, you can edit the name and description under the **General** tab or add parameters via the **Parameters** tab. For more information, see [Mapping data flow parameters](parameters-data-flow.md).

articles/key-vault/quick-create-net.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Azure Key Vault helps safeguard cryptographic keys and secrets used by cloud app
- Simplify and automate tasks for SSL/TLS certificates.
- Use FIPS 140-2 Level 2 validated HSMs.

-[API reference documentation](/dotnet/api/overview/azure/key-vault?view=azure-dotnet) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/AutoRest/src/KeyVault) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.KeyVault/)
+[API reference documentation](/dotnet/api/overview/azure/key-vault?view=azure-dotnet) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.KeyVault/)

## Prerequisites

articles/sql-database/sql-database-recovery-using-backups.md

Lines changed: 6 additions & 9 deletions
@@ -14,7 +14,7 @@ ms.date: 09/26/2019
---

# Recover an Azure SQL database by using automated database backups

-By default, Azure SQL Database backups are stored in geo-replicated blob storage. The following options are available for database recovery by using [automated database backups](sql-database-automated-backups.md). You can:
+By default, Azure SQL Database backups are stored in geo-replicated blob storage (RA-GRS storage type). The following options are available for database recovery by using [automated database backups](sql-database-automated-backups.md). You can:

- Create a new database on the same SQL Database server, recovered to a specified point in time within the retention period.
- Create a database on the same SQL Database server, recovered to the deletion time for a deleted database.
@@ -28,9 +28,6 @@ If you configured [backup long-term retention](sql-database-long-term-retention.
When you're using the Standard or Premium service tiers, your database restore might incur an extra storage cost. The extra cost is incurred when the maximum size of the restored database is greater than the amount of storage included with the target database's service tier and performance level. For pricing details of extra storage, see the [SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/). If the actual amount of used space is less than the amount of storage included, you can avoid this extra cost by setting the maximum database size to the included amount.

-> [!NOTE]
-> When you create a [database copy](sql-database-copy.md), you use [automated database backups](sql-database-automated-backups.md).

## Recovery time

The recovery time to restore a database by using automated database backups is affected by several factors:
@@ -42,9 +39,9 @@ The recovery time to restore a database by using automated database backups is a
- The network bandwidth if the restore is to a different region.
- The number of concurrent restore requests being processed in the target region.

-For a large or very active database, the restore might take several hours. If there is a prolonged outage in a region, it's possible that there are large numbers of geo-restore requests being processed by other regions. When there are many requests, the recovery time can increase for databases in that region. Most database restores complete in less than 12 hours.
+For a large or very active database, the restore might take several hours. If there is a prolonged outage in a region, it's possible that a high number of geo-restore requests will be initiated for disaster recovery. When there are many requests, the recovery time for individual databases can increase. Most database restores complete in less than 12 hours.

-For a single subscription, there are limitations on the number of concurrent restore requests. These limitations apply to any combination of point-in-time restores, geo restores, and restores from long-term retention backup.
+For a single subscription, there are limitations on the number of concurrent restore requests. These limitations apply to any combination of point-in-time restores, geo-restores, and restores from long-term retention backup.

| | **Max # of concurrent requests being processed** | **Max # of concurrent requests being submitted** |
| :--- | --: | --: |
@@ -61,7 +58,7 @@ There isn't a built-in method to restore the entire server. For an example of ho
You can restore a standalone, pooled, or instance database to an earlier point in time by using the Azure portal, [PowerShell](https://docs.microsoft.com/powershell/module/az.sql/restore-azsqldatabase), or the [REST API](https://docs.microsoft.com/rest/api/sql/databases). The request can specify any service tier or compute size for the restored database. Ensure that you have sufficient resources on the server to which you are restoring the database. When complete, the restore creates a new database on the same server as the original database. The restored database is charged at normal rates, based on its service tier and compute size. You don't incur charges until the database restore is complete.
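As a hedged sketch using the `Restore-AzSqlDatabase` cmdlet linked above (resource group, server, and database names are placeholders), a point-in-time restore to a new database on the same server might look like this:

```powershell
# Illustrative only: restore "mydb" as it existed 6 hours ago into a new database.
$db = Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "mydb"

Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).AddHours(-6) `
    -ResourceGroupName $db.ResourceGroupName `
    -ServerName $db.ServerName `
    -TargetDatabaseName "mydb-restored" `
    -ResourceId $db.ResourceID `
    -Edition "Standard" `
    -ServiceObjectiveName "S2"
```

The target service tier (`-Edition`/`-ServiceObjectiveName`) need not match the original database, but the target server must have sufficient resources for it.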

-You generally restore a database to an earlier point for recovery purposes. You can treat the restored database as a replacement for the original database, or use it as source data to update the original database.
+You generally restore a database to an earlier point for recovery purposes. You can treat the restored database as a replacement for the original database, or use it as a data source to update the original database.

- **Database replacement**
@@ -149,7 +146,7 @@ To geo-restore a single SQL database from the Azure portal in the region and ser
![Screenshot of Create SQL Database options](./media/sql-database-recovery-using-backups/geo-restore-azure-sql-database-list-annotated.png)

-Complete the process of creating a new database. When you create the single Azure SQL database, it contains the restored geo-restore backup.
+Complete the process of creating a new database from the backup. When you create the single Azure SQL database, it contains the restored geo-restore backup.

#### Managed instance database

@@ -179,7 +176,7 @@ For a PowerShell script that shows how to perform geo-restore for a managed inst
You can't perform a point-in-time restore on a geo-secondary database. You can only do so on a primary database. For detailed information about using geo-restore to recover from an outage, see [Recover from an outage](sql-database-disaster-recovery.md).

> [!IMPORTANT]
-> Geo-restore is the most basic disaster recovery solution available in SQL Database. It relies on automatically created geo-replicated backups with recovery point objective (RPO) equal to 1 hour, and the estimated recovery time of up to 12 hours. It doesn't guarantee that the target region will have the capacity to restore your databases after a regional outage, because a sharp increase of demand is likely. If your application uses relatively small databases and is not critical to the business, geo-restore is an appropriate disaster recovery solution. For business-critical applications that use large databases and must ensure business continuity, you should use [Auto-failover groups](sql-database-auto-failover-group.md). It offers a much lower RPO and recovery time objective, and the capacity is always guaranteed. For more information on business continuity choices, see [Overview of business continuity](sql-database-business-continuity.md).
+> Geo-restore is the most basic disaster recovery solution available in SQL Database. It relies on automatically created geo-replicated backups with recovery point objective (RPO) equal to 1 hour, and the estimated recovery time of up to 12 hours. It doesn't guarantee that the target region will have the capacity to restore your databases after a regional outage, because a sharp increase of demand is likely. If your application uses relatively small databases and is not critical to the business, geo-restore is an appropriate disaster recovery solution. For business-critical applications that require large databases and must ensure business continuity, use [Auto-failover groups](sql-database-auto-failover-group.md). It offers a much lower RPO and recovery time objective, and the capacity is always guaranteed. For more information on business continuity choices, see [Overview of business continuity](sql-database-business-continuity.md).
## Programmatically performing recovery by using automated backups
