
Commit 09e8388

Merge branch 'master' of https://github.com/Microsoft/azure-docs-pr into oct2119
2 parents ef75638 + 476bc1e

File tree: 12 files changed, +203 additions, -175 deletions


articles/active-directory-b2c/active-directory-b2c-setup-commonaad-custom.md

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsP
       <!-- Update the Client ID below to the Application ID -->
       <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
       <Item Key="response_types">code</Item>
-      <Item Key="scope">openid</Item>
+      <Item Key="scope">openid profile</Item>
       <Item Key="response_mode">form_post</Item>
       <Item Key="HttpBinding">POST</Item>
       <Item Key="UsePolicyInRedirectUri">false</Item>

articles/app-service/overview-diagnostics.md

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@ Diagnostics Tools include more advanced diagnostic tools that help you investiga
 
 ### Proactive CPU monitoring
 
-Proactive CPU monitoring provides you an easy, proactive way to take an action when your app or child process for your app is consuming high CPU resources. You can set your own CPU threshold rules to temporarily mitigate a high CPU issue until the real cause for the unexpected issue is found. For more information, see [Mitigate your CPU problems before they happen](https://azure.github.io/AppService/2019/10/07/Mitigate-your-CPU-problems-before-they-even-happen.html).Proactive CPU monitoring provides you an easy, proactive way to take an action when your app or child process for your app is consuming high CPU resources. You can set your own CPU threshold rules to temporarily mitigate a high CPU issue until the real cause for the unexpected issue is found.
+Proactive CPU monitoring provides you an easy, proactive way to take an action when your app or child process for your app is consuming high CPU resources. You can set your own CPU threshold rules to temporarily mitigate a high CPU issue until the real cause for the unexpected issue is found. For more information, see [Mitigate your CPU problems before they happen](https://azure.github.io/AppService/2019/10/07/Mitigate-your-CPU-problems-before-they-even-happen.html).
 
 ![Proactive CPU monitoring](./media/app-service-diagnostics/proactive-cpu-monitoring-9.png)
 

articles/automation/pre-post-scripts.md

Lines changed: 43 additions & 40 deletions
Large diffs are not rendered by default.

articles/azure-monitor/app/opencensus-python.md

Lines changed: 58 additions & 57 deletions
Large diffs are not rendered by default.

articles/cognitive-services/LUIS/luis-tutorial-prebuilt-intents-entities.md

Lines changed: 38 additions & 43 deletions
@@ -9,7 +9,7 @@ ms.custom: seodec18
 ms.service: cognitive-services
 ms.subservice: language-understanding
 ms.topic: tutorial
-ms.date: 08/20/2019
+ms.date: 10/21/2019
 ms.author: diberry
 ---
 
@@ -58,12 +58,9 @@ LUIS provides several prebuilt entities for common data extraction.
 
 1. Select the following entities from the list of prebuilt entities then select **Done**:
 
-    * **[PersonName](luis-reference-prebuilt-person.md)**
     * **[GeographyV2](luis-reference-prebuilt-geographyV2.md)**
 
-    ![Screenshot of number selected in prebuilt entities dialog](./media/luis-tutorial-prebuilt-intents-and-entities/select-prebuilt-entities.png)
-
-    These entities will help you add name and place recognition to your client application.
+    This entity will help you add place recognition to your client application.
 
 ## Add example utterances to the None intent
 
@@ -81,79 +78,83 @@ LUIS provides several prebuilt entities for common data extraction.
 
 1. [!INCLUDE [LUIS How to get endpoint first step](../../../includes/cognitive-services-luis-tutorial-how-to-get-endpoint.md)]
 
-1. Go to the end of the URL in the browser address bar and enter `I want to cancel my trip to Seattle to see Bob Smith`. The last query string parameter is `q`, the utterance **query**.
+1. Go to the end of the URL in the browser address bar and enter `I want to cancel my trip to Seattle`. The last query string parameter is `q`, the utterance **query**.
 
     ```json
     {
-      "query": "I want to cancel my trip to Seattle to see Bob Smith.",
+      "query": "I want to cancel my trip to Seattle",
       "topScoringIntent": {
-        "intent": "Utilities.ReadAloud",
-        "score": 0.100361854
+        "intent": "Utilities.Cancel",
+        "score": 0.1055009
       },
       "intents": [
        {
-         "intent": "Utilities.ReadAloud",
-         "score": 0.100361854
+         "intent": "Utilities.Cancel",
+         "score": 0.1055009
        },
       {
-        "intent": "Utilities.Stop",
-        "score": 0.08102781
+        "intent": "Utilities.SelectItem",
+        "score": 0.02659072
       },
       {
-        "intent": "Utilities.SelectNone",
-        "score": 0.0398852825
+        "intent": "Utilities.Stop",
+        "score": 0.0253379084
       },
       {
-        "intent": "Utilities.Cancel",
-        "score": 0.0277276486
+        "intent": "Utilities.ReadAloud",
+        "score": 0.02528683
       },
       {
-        "intent": "Utilities.SelectItem",
-        "score": 0.0220712926
+        "intent": "Utilities.SelectNone",
+        "score": 0.02434013
       },
       {
-        "intent": "Utilities.StartOver",
-        "score": 0.0145813478
+        "intent": "Utilities.Escalate",
+        "score": 0.009161292
       },
       {
-        "intent": "None",
-        "score": 0.012434179
+        "intent": "Utilities.Help",
+        "score": 0.006861785
       },
       {
-        "intent": "Utilities.Escalate",
-        "score": 0.0122632384
+        "intent": "Utilities.StartOver",
+        "score": 0.00633448
       },
       {
        "intent": "Utilities.ShowNext",
-       "score": 0.008534077
+       "score": 0.0053827134
+      },
+      {
+       "intent": "None",
+       "score": 0.002602003
      },
      {
       "intent": "Utilities.ShowPrevious",
-      "score": 0.00547111453
+      "score": 0.001797354
      },
      {
       "intent": "Utilities.SelectAny",
-      "score": 0.00152912608
+      "score": 0.000831930141
      },
      {
       "intent": "Utilities.Repeat",
-      "score": 0.0005556819
-     },
-     {
-      "intent": "Utilities.FinishTask",
-      "score": 0.000169488427
+      "score": 0.0006924066
      },
      {
       "intent": "Utilities.Confirm",
-      "score": 0.000149565312
+      "score": 0.000606057351
      },
      {
       "intent": "Utilities.GoBack",
-      "score": 0.000141017343
+      "score": 0.000276725681
+     },
+     {
+      "intent": "Utilities.FinishTask",
+      "score": 0.000267822179
      },
      {
       "intent": "Utilities.Reject",
-      "score": 6.27324E-06
+      "score": 3.21784828E-05
      }
     ],
     "entities": [
@@ -162,18 +163,12 @@ LUIS provides several prebuilt entities for common data extraction.
       "type": "builtin.geographyV2.city",
       "startIndex": 28,
       "endIndex": 34
-     },
-     {
-      "entity": "bob smith",
-      "type": "builtin.personName",
-      "startIndex": 43,
-      "endIndex": 51
      }
     ]
    }
    ```
 
-    The result predicted the Utilities.Cancel intent with 80% confidence and extracted the city and person name data.
+    The result predicted the Utilities.Cancel intent with 80% confidence and extracted the city data.
 
 
 ## Clean up resources

articles/data-factory/concepts-data-flow-overview.md

Lines changed: 32 additions & 0 deletions
@@ -35,6 +35,38 @@ The graph displays the transformation stream. It shows the lineage of source dat
 
 ![Canvas](media/data-flow/canvas2.png "Canvas")
 
+### Azure integration runtime data flow properties
+
+![Debug button](media/data-flow/debugbutton.png "Debug button")
+
+When you begin working with data flows in ADF, you will want to turn on the "Debug" switch for data flows at the top of the browser UI. This will spin up an Azure Databricks cluster to use for interactive debugging, data previews, and pipeline debug executions. You can set the size of the cluster being utilized by choosing a custom [Azure Integration Runtime](concepts-integration-runtime.md). The debug session will stay alive for up to 60 minutes after your last data preview or last debug pipeline execution.
+
+When you operationalize your pipelines with data flow activities, ADF will use the Azure Integration Runtime associated with the [activity](control-flow-execute-data-flow-activity.md) in the "Run On" property.
+
+The default Azure Integration Runtime is a small 4-core single worker node cluster intended to allow you to preview data and quickly execute debug pipelines at minimal cost. Set a larger Azure IR configuration if you are performing operations against large datasets.
+
+You can instruct ADF to maintain a pool of cluster resources (VMs) by setting a TTL in the Azure IR data flow properties. This will result in faster job execution on subsequent activities.
+
+#### Azure integration runtime and data flow strategies
+
+##### Execute data flows in parallel
+
+If you execute data flows in a pipeline in parallel, ADF will spin up separate Azure Databricks clusters for each activity execution based on the settings in the Azure Integration Runtime attached to each activity. To design parallel executions in ADF pipelines, add your data flow activities without precedence constraints in the UI.
+
+Of these three options, this option will likely execute in the shortest amount of time. However, each parallel data flow will execute at the same time on separate clusters, so the ordering of events is non-deterministic.
+
+##### Overload a single data flow
+
+If you put all of your logic inside a single data flow, ADF will execute it all in the same job execution context on a single Spark cluster instance.
+
+This option can be more difficult to follow and troubleshoot because your business rules and business logic will be jumbled together. This option also doesn't provide much reusability.
+
+##### Execute data flows serially
+
+If you execute your data flow activities serially in the pipeline and you have set a TTL on the Azure IR configuration, then ADF will reuse the compute resources (VMs), resulting in faster subsequent execution times. You will still receive a new Spark context for each execution.
+
+Of these three options, this will likely take the longest time to execute end-to-end. But it does provide a clean separation of logical operations in each data flow step.
+
 ### Configuration panel
 
 The configuration panel shows the settings specific to the currently selected transformation. If no transformation is selected, it shows the data flow. In the overall data flow configuration, you can edit the name and description under the **General** tab or add parameters via the **Parameters** tab. For more information, see [Mapping data flow parameters](parameters-data-flow.md).
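
The Azure IR sizing and TTL behavior described in the new section above can also be expressed outside the UI. Below is a minimal sketch of a managed Azure IR in the factory's JSON format; the name `DataFlowAzureIR` and the sizing values are hypothetical, and the `dataFlowProperties` fields should be verified against the current [Azure Integration Runtime](concepts-integration-runtime.md) schema before use:

```json
{
    "name": "DataFlowAzureIR",
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "AutoResolve",
                "dataFlowProperties": {
                    "computeType": "General",
                    "coreCount": 8,
                    "timeToLive": 10
                }
            }
        }
    }
}
```

Here `coreCount` sizes the Spark cluster beyond the default 4-core single worker node, and `timeToLive` (in minutes) keeps the pool of VMs warm so that subsequent data flow activities can reuse them. To illustrate the parallel strategy, a pipeline sketch with two data flow activities and no precedence constraints follows (the pipeline and data flow names are again hypothetical):

```json
{
    "name": "RunDataFlowsInParallel",
    "properties": {
        "activities": [
            {
                "name": "TransformOrders",
                "type": "ExecuteDataFlow",
                "typeProperties": {
                    "dataFlow": { "referenceName": "OrdersDataFlow", "type": "DataFlowReference" },
                    "integrationRuntime": { "referenceName": "DataFlowAzureIR", "type": "IntegrationRuntimeReference" }
                }
            },
            {
                "name": "TransformCustomers",
                "type": "ExecuteDataFlow",
                "typeProperties": {
                    "dataFlow": { "referenceName": "CustomersDataFlow", "type": "DataFlowReference" },
                    "integrationRuntime": { "referenceName": "DataFlowAzureIR", "type": "IntegrationRuntimeReference" }
                }
            }
        ]
    }
}
```

Because neither activity declares a `dependsOn` constraint, ADF runs them at the same time on separate clusters. Adding `"dependsOn": [{ "activity": "TransformOrders", "dependencyConditions": ["Succeeded"] }]` to the second activity would serialize them, which, combined with the TTL above, lets the second run reuse the warm compute pool.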

articles/key-vault/quick-create-net.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Azure Key Vault helps safeguard cryptographic keys and secrets used by cloud app
 - Simplify and automate tasks for SSL/TLS certificates.
 - Use FIPS 140-2 Level 2 validated HSMs.
 
-[API reference documentation](/dotnet/api/overview/azure/key-vault?view=azure-dotnet) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/AutoRest/src/KeyVault) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.KeyVault/)
+[API reference documentation](/dotnet/api/overview/azure/key-vault?view=azure-dotnet) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.KeyVault/)
 
 ## Prerequisites
 

articles/marketplace/partner-center-portal/publishing-status.md

Lines changed: 1 addition & 1 deletion
@@ -101,7 +101,7 @@ During the **Preview creation** step, we create a version of your offer accessib
 
 In this step, you will be emailed with a request for you to review and approve your offer preview prior to the final publishing step.
 
-If you have selected to sell your offer through Microsoft, you will be able to test the acquisition and deployment of your offer to ensure that it meets your requirements during this preview approval stage. Your offer will not yet be available in the pubic marketplace. Once you test and approve this preview, you will need to select **Go-Live** on the [**Offer Overview**](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) dashboard.
+If you have selected to sell your offer through Microsoft, you will be able to test the acquisition and deployment of your offer to ensure that it meets your requirements during this preview approval stage. Your offer will not yet be available in the public marketplace. Once you test and approve this preview, you will need to select **Go-Live** on the [**Offer Overview**](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) dashboard.
 
 If you want to make changes to the offer during this preview stage, you may edit and resubmit to publish a new preview. See the article [Update existing marketplace offers](#update-existing-marketplace-offers) for details on more changes.

articles/sql-database/sql-database-recovery-using-backups.md

Lines changed: 6 additions & 9 deletions
@@ -14,7 +14,7 @@ ms.date: 09/26/2019
 ---
 # Recover an Azure SQL database by using automated database backups
 
-By default, Azure SQL Database backups are stored in geo-replicated blob storage. The following options are available for database recovery by using [automated database backups](sql-database-automated-backups.md). You can:
+By default, Azure SQL Database backups are stored in geo-replicated blob storage (RA-GRS storage type). The following options are available for database recovery by using [automated database backups](sql-database-automated-backups.md). You can:
 
 - Create a new database on the same SQL Database server, recovered to a specified point in time within the retention period.
 - Create a database on the same SQL Database server, recovered to the deletion time for a deleted database.
@@ -28,9 +28,6 @@ If you configured [backup long-term retention](sql-database-long-term-retention.
 
 When you're using the Standard or Premium service tiers, your database restore might incur an extra storage cost. The extra cost is incurred when the maximum size of the restored database is greater than the amount of storage included with the target database's service tier and performance level. For pricing details of extra storage, see the [SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/). If the actual amount of used space is less than the amount of storage included, you can avoid this extra cost by setting the maximum database size to the included amount.
 
-> [!NOTE]
-> When you create a [database copy](sql-database-copy.md), you use [automated database backups](sql-database-automated-backups.md).
-
 ## Recovery time
 
 The recovery time to restore a database by using automated database backups is affected by several factors:
@@ -42,9 +39,9 @@ The recovery time to restore a database by using automated database backups is a
 - The network bandwidth if the restore is to a different region.
 - The number of concurrent restore requests being processed in the target region.
 
-For a large or very active database, the restore might take several hours. If there is a prolonged outage in a region, it's possible that there are large numbers of geo-restore requests being processed by other regions. When there are many requests, the recovery time can increase for databases in that region. Most database restores complete in less than 12 hours.
+For a large or very active database, the restore might take several hours. If there is a prolonged outage in a region, it's possible that a high number of geo-restore requests will be initiated for disaster recovery. When there are many requests, the recovery time for individual databases can increase. Most database restores complete in less than 12 hours.
 
-For a single subscription, there are limitations on the number of concurrent restore requests. These limitations apply to any combination of point-in-time restores, geo restores, and restores from long-term retention backup.
+For a single subscription, there are limitations on the number of concurrent restore requests. These limitations apply to any combination of point-in-time restores, geo-restores, and restores from long-term retention backup.
 
 | | **Max # of concurrent requests being processed** | **Max # of concurrent requests being submitted** |
 | :--- | --: | --: |
@@ -61,7 +58,7 @@ There isn't a built-in method to restore the entire server. For an example of ho
 
 You can restore a standalone, pooled, or instance database to an earlier point in time by using the Azure portal, [PowerShell](https://docs.microsoft.com/powershell/module/az.sql/restore-azsqldatabase), or the [REST API](https://docs.microsoft.com/rest/api/sql/databases). The request can specify any service tier or compute size for the restored database. Ensure that you have sufficient resources on the server to which you are restoring the database. When complete, the restore creates a new database on the same server as the original database. The restored database is charged at normal rates, based on its service tier and compute size. You don't incur charges until the database restore is complete.
 
-You generally restore a database to an earlier point for recovery purposes. You can treat the restored database as a replacement for the original database, or use it as source data to update the original database.
+You generally restore a database to an earlier point for recovery purposes. You can treat the restored database as a replacement for the original database, or use it as a data source to update the original database.
 
 - **Database replacement**
 
@@ -149,7 +146,7 @@ To geo-restore a single SQL database from the Azure portal in the region and ser
 
 ![Screenshot of Create SQL Database options](./media/sql-database-recovery-using-backups/geo-restore-azure-sql-database-list-annotated.png)
 
-Complete the process of creating a new database. When you create the single Azure SQL database, it contains the restored geo-restore backup.
+Complete the process of creating a new database from the backup. When you create the single Azure SQL database, it contains the restored geo-restore backup.
 
 #### Managed instance database
 
@@ -179,7 +176,7 @@ For a PowerShell script that shows how to perform geo-restore for a managed inst
 You can't perform a point-in-time restore on a geo-secondary database. You can only do so on a primary database. For detailed information about using geo-restore to recover from an outage, see [Recover from an outage](sql-database-disaster-recovery.md).
 
 > [!IMPORTANT]
-> Geo-restore is the most basic disaster recovery solution available in SQL Database. It relies on automatically created geo-replicated backups with recovery point objective (RPO) equal to 1 hour, and the estimated recovery time of up to 12 hours. It doesn't guarantee that the target region will have the capacity to restore your databases after a regional outage, because a sharp increase of demand is likely. If your application uses relatively small databases and is not critical to the business, geo-restore is an appropriate disaster recovery solution. For business-critical applications that use large databases and must ensure business continuity, you should use [Auto-failover groups](sql-database-auto-failover-group.md). It offers a much lower RPO and recovery time objective, and the capacity is always guaranteed. For more information on business continuity choices, see [Overview of business continuity](sql-database-business-continuity.md).
+> Geo-restore is the most basic disaster recovery solution available in SQL Database. It relies on automatically created geo-replicated backups with recovery point objective (RPO) equal to 1 hour, and the estimated recovery time of up to 12 hours. It doesn't guarantee that the target region will have the capacity to restore your databases after a regional outage, because a sharp increase of demand is likely. If your application uses relatively small databases and is not critical to the business, geo-restore is an appropriate disaster recovery solution. For business-critical applications that require large databases and must ensure business continuity, use [Auto-failover groups](sql-database-auto-failover-group.md). It offers a much lower RPO and recovery time objective, and the capacity is always guaranteed. For more information on business continuity choices, see [Overview of business continuity](sql-database-business-continuity.md).
 
 ## Programmatically performing recovery by using automated backups
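
The point-in-time restore this file describes can be driven through the [REST API](https://docs.microsoft.com/rest/api/sql/databases) that the text links to. A minimal sketch of the request body follows, assuming placeholder subscription, server, and database names; verify the property names and the `api-version` value against the current reference before use:

```json
{
    "location": "westus2",
    "properties": {
        "createMode": "PointInTimeRestore",
        "sourceDatabaseId": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Sql/servers/{server-name}/databases/{source-database}",
        "restorePointInTime": "2019-10-21T05:00:00Z"
    }
}
```

This body is sent with `PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Sql/servers/{server-name}/databases/{target-database}?api-version={api-version}`, and the `restorePointInTime` value must fall within the source database's retention period. For a geo-restore, the same call is made with `"createMode": "Recovery"` and the resource ID of the geo-replicated backup as the source.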
