
Commit e15278b

Merge pull request #107187 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents 6b2f3dc + 9326f70 commit e15278b

File tree

12 files changed (+78 additions, -43 deletions)
articles/application-gateway/application-gateway-metrics.md

Lines changed: 3 additions & 7 deletions
```diff
@@ -121,10 +121,6 @@ For Application Gateway, the following metrics are available:
 
 Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.
 
-- **Web Application Firewall matched rules**
-
-- **Web Application Firewall triggered rules**
-
 ### Backend metrics
 
 For Application Gateway, the following metrics are available:
@@ -176,9 +172,9 @@ For Application Gateway, the following metrics are available:
 
 Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.
 
-- **Web Application Firewall matched rules**
-
-- **Web Application Firewall triggered rules**
+- **Web Application Firewall Blocked Requests Count**
+
+- **Web Application Firewall Blocked Requests Distribution**
+
+- **Web Application Firewall Total Rule Distribution**
 
 ### Backend metrics
```

articles/azure-functions/functions-create-first-azure-function-azure-cli.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -276,7 +276,7 @@ Use the following Azure CLI commands to create these items. Each command provide
 If you are using Python 3.6, change `--runtime-version` to `3.6`.
 
 ```azurecli
-az functionapp create --resource-group AzureFunctionsQuickstart-rg --os-type Linux --consumption-plan-location westeurope --runtime python --runtime-version 3.7 --functions_version 2 --name <APP_NAME> --storage-account <STORAGE_NAME>
+az functionapp create --resource-group AzureFunctionsQuickstart-rg --os-type Linux --consumption-plan-location westeurope --runtime python --runtime-version 3.7 --functions-version 2 --name <APP_NAME> --storage-account <STORAGE_NAME>
 ```
 ::: zone-end
@@ -285,19 +285,19 @@ Use the following Azure CLI commands to create these items. Each command provide
 
 ```azurecli
-az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime node --runtime-version 10 --functions_version 2 --name <APP_NAME> --storage-account <STORAGE_NAME>
+az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime node --runtime-version 10 --functions-version 2 --name <APP_NAME> --storage-account <STORAGE_NAME>
 ```
 ::: zone-end
 
 ::: zone pivot="programming-language-csharp"
 ```azurecli
-az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime dotnet --functions_version 2 --name <APP_NAME> --storage-account <STORAGE_NAME>
+az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime dotnet --functions-version 2 --name <APP_NAME> --storage-account <STORAGE_NAME>
 ```
 ::: zone-end
 
 ::: zone pivot="programming-language-powershell"
 ```azurecli
-az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime powershell --functions_version 2 --name <APP_NAME> --storage-account <STORAGE_NAME>
+az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime powershell --functions-version 2 --name <APP_NAME> --storage-account <STORAGE_NAME>
 ```
 ::: zone-end
````

articles/azure-monitor/insights/network-performance-monitor-faq.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -182,7 +182,7 @@ A hop may not respond to a traceroute in one or more of the below scenarios:
 * The network devices are not allowing ICMP_TTL_EXCEEDED traffic.
 * A firewall is blocking the ICMP_TTL_EXCEEDED response from the network device.
 
-When either of the endpoints lies in Azure, traceroute shows up unidentified hops as Azure ndrastructure does not reveal identity to traceroute.
+When either of the endpoints lies in Azure, traceroute shows unidentified hops, as Azure infrastructure does not reveal its identity to traceroute.
 
 ### I get alerts for unhealthy tests but I do not see the high values in NPM's loss and latency graph. How do I check what is unhealthy?
 NPM raises an alert if the end-to-end latency between source and destination crosses the threshold for any path between them. Some networks have multiple paths connecting the same source and destination. NPM raises an alert if any path is unhealthy. The loss and latency seen in the graphs is the average value for all the paths, so it may not show the exact value of a single path. To understand where the threshold has been breached, look for the "SubType" column in the alert. If the issue is caused by a path, the SubType value will be NetworkPath (for Performance Monitor tests), EndpointPath (for Service Connectivity Monitor tests), or ExpressRoutePath (for ExpressRoute Monitor tests).
```
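The SubType lookup described above can also be done directly in Log Analytics. The following is only a sketch: it assumes NPM data lands in a `NetworkMonitoring` table with `SubType`, `LossHealthState`, and `LatencyHealthState` columns, which are assumptions not confirmed by this page:

```kusto
// Hypothetical query: list recent unhealthy path records grouped by path type.
NetworkMonitoring
| where TimeGenerated > ago(1h)
| where SubType in ("NetworkPath", "EndpointPath", "ExpressRoutePath")
| where LossHealthState == "Unhealthy" or LatencyHealthState == "Unhealthy"
| project TimeGenerated, SubType, LossHealthState, LatencyHealthState
```

Verify the actual table and column names in your workspace's schema pane before relying on such a query.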

articles/azure-portal/admin-timeout.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -39,7 +39,7 @@ To confirm that the inactivity timeout policy is set correctly, select **Notific
 The setting takes effect for new sessions. It won’t apply immediately to any users who are already signed in.
 
 > [!NOTE]
-> If an admin has configured a directory-level timeout setting, users can override the policy and set their own inactive sign-out duration. However, the user must choose a time interval that is less than what is set at the directory level.
+> If a Global Administrator has configured a directory-level timeout setting, users can override the policy and set their own inactive sign-out duration. However, the user must choose a time interval that is less than what is set at the directory level by the Global Administrator.
 >
 
 ## Next steps
```

articles/azure-portal/supportability/how-to-create-azure-support-request.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -37,7 +37,7 @@ You can get to **Help + support** in the Azure portal. It's available from the A
 
 ### Role-based access control
 
-To create a support request, you must be an admin or be assigned to the [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor) role.
+To create a support request, you must be an admin or be assigned to the [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor) role at the subscription level.
 
 ### Go to Help + support from the global header
```

articles/cosmos-db/data-explorer.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -43,6 +43,8 @@ Azure Cosmos DB explorer is a standalone web-based interface that allows you to
 
 Currently the **Open Full Screen** experience that allows you to share temporary read-write or read access is not yet supported for Azure Cosmos DB Gremlin and Table API accounts. You can still view your Gremlin and Table API accounts by passing the connection string to Azure Cosmos DB Explorer.
 
+Currently, viewing documents that contain a UUID is not supported in Data Explorer. This does not affect loading collections, only viewing individual documents or queries that include these documents. To view and manage these documents, users should continue to use the tool that was originally used to create these documents.
+
 ## Next steps
 Now that you have learned how to get started with Azure Cosmos DB explorer to manage your data, next you can:
```

articles/data-explorer/ingest-data-event-grid.md

Lines changed: 21 additions & 2 deletions
```diff
@@ -115,7 +115,7 @@ Now connect to the Event Grid from Azure Data Explorer, so that data flowing int
 **Setting** | **Suggested value** | **Field description**
 |---|---|---|
 | Table | *TestTable* | The table you created in **TestDatabase**. |
-| Data format | *JSON* | Supported formats are Avro, CSV, JSON, MULTILINE JSON, PSV, SOH, SCSV, TSV, and TXT. Supported compression options: Zip and GZip |
+| Data format | *JSON* | Supported formats are Avro, CSV, JSON, MULTILINE JSON, PSV, SOH, SCSV, TSV, RAW, and TXT. Supported compression options: Zip and GZip |
 | Column mapping | *TestMapping* | The mapping you created in **TestDatabase**, which maps incoming JSON data to the column names and data types of **TestTable**.|
 | | |
```
````diff
@@ -147,14 +147,33 @@ Save the data into a file and upload it with this script:
 az storage container create --name $container_name
 
 echo "Uploading the file..."
-az storage blob upload --container-name $container_name --file $file_to_upload --name $blob_name
+az storage blob upload --container-name $container_name --file $file_to_upload --name $blob_name --metadata "rawSizeBytes=1024"
 
 echo "Listing the blobs..."
 az storage blob list --container-name $container_name --output table
 
 echo "Done"
 ```
 
+> [!NOTE]
+> To achieve the best ingestion performance, the *uncompressed* size of the compressed blobs submitted for ingestion must be communicated. Because Event Grid notifications contain only basic details, the size information must be explicitly communicated. The uncompressed size information can be provided by setting the `rawSizeBytes` property on the blob metadata with the *uncompressed* data size in bytes.
+
+### Ingestion properties
+
+You can specify the [ingestion properties](https://docs.microsoft.com/azure/kusto/management/data-ingestion/#ingestion-properties) of the blob ingestion via the blob metadata.
+
+These properties can be set:
+
+|**Property** | **Property description**|
+|---|---|
+| `rawSizeBytes` | Size of the raw (uncompressed) data. For Avro/ORC/Parquet, this is the size before format-specific compression is applied.|
+| `kustoTable` | Name of the existing target table. Overrides the `Table` set on the `Data Connection` blade. |
+| `kustoDataFormat` | Data format. Overrides the `Data format` set on the `Data Connection` blade. |
+| `kustoIngestionMappingReference` | Name of the existing ingestion mapping to be used. Overrides the `Column mapping` set on the `Data Connection` blade.|
+| `kustoIgnoreFirstRecord` | If set to `true`, Kusto ignores the first row of the blob. Use in tabular format data (CSV, TSV, or similar) to ignore headers. |
+| `kustoExtentTags` | String representing [tags](/azure/kusto/management/extents-overview#extent-tagging) that will be attached to the resulting extent. |
+| `kustoCreationTime` | Overrides [$IngestionTime](/azure/kusto/query/ingestiontimefunction?pivots=azuredataexplorer) for the blob, formatted as an ISO 8601 string. Use for backfilling. |
+
 > [!NOTE]
 > Azure Data Explorer won't delete the blobs post ingestion.
 > Retain the blobs for three to five days.
````
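The `rawSizeBytes` value hard-coded as `1024` in the script above should in practice be the real uncompressed size of the payload. A minimal sketch of computing it before compression and assembling the metadata, where the records, table name, and mapping name are illustrative assumptions:

```python
import gzip
import json

# Illustrative records to ingest as newline-delimited JSON.
records = [{"id": i, "value": f"item-{i}"} for i in range(100)]
payload = "\n".join(json.dumps(r) for r in records).encode("utf-8")

# rawSizeBytes must describe the *uncompressed* payload.
raw_size = len(payload)
compressed = gzip.compress(payload)

# Blob metadata values are strings; the keys mirror the table above.
metadata = {
    "rawSizeBytes": str(raw_size),
    "kustoTable": "TestTable",                     # assumed target table
    "kustoDataFormat": "json",
    "kustoIngestionMappingReference": "TestMapping",  # assumed mapping name
}
```

With the `azure-storage-blob` SDK, a dict like this could then be passed as the `metadata` argument to `BlobClient.upload_blob` along with the compressed bytes.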

articles/hdinsight/hdinsight-hadoop-compare-storage-options.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -6,7 +6,7 @@ ms.author: hrasheed
 ms.reviewer: jasonh
 ms.service: hdinsight
 ms.topic: conceptual
-ms.date: 02/26/2020
+ms.date: 03/10/2020
 ---
 
 # Compare storage options for use with Azure HDInsight clusters
@@ -41,14 +41,14 @@ You can create a cluster using different combinations of services for primary an
 |---|---|---|---|
 | 3.6 & 4.0 | General Purpose V1 , General Purpose V2 | General Purpose V1 , General Purpose V2, BlobStorage(Block Blobs) | Yes |
 | 3.6 & 4.0 | General Purpose V1 , General Purpose V2 | Data Lake Storage Gen2 | No |
-| 3.6 & 4.0 | General Purpose V1 , General Purpose V2 | Data Lake Storage Gen1 | Yes |
 | 3.6 & 4.0 | Data Lake Storage Gen2* | Data Lake Storage Gen2 | Yes |
 | 3.6 & 4.0 | Data Lake Storage Gen2* | General Purpose V1 , General Purpose V2, BlobStorage(Block Blobs) | Yes |
 | 3.6 & 4.0 | Data Lake Storage Gen2 | Data Lake Storage Gen1 | No |
 | 3.6 | Data Lake Storage Gen1 | Data Lake Storage Gen1 | Yes |
 | 3.6 | Data Lake Storage Gen1 | General Purpose V1 , General Purpose V2, BlobStorage(Block Blobs) | Yes |
 | 3.6 | Data Lake Storage Gen1 | Data Lake Storage Gen2 | No |
 | 4.0 | Data Lake Storage Gen1 | Any | No |
+| 4.0 | General Purpose V1 , General Purpose V2 | Data Lake Storage Gen1 | No |
 
 *=This could be one or multiple Data Lake Storage Gen2 accounts, as long as they are all set up to use the same managed identity for cluster access.
```

articles/media-services/previous/media-services-sspk.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -87,10 +87,10 @@ Interim and Final SSPK licensees can submit technical questions to [smoothpk@mic
 * EchoStar Purchasing Corporation
 * Enseo, Inc.
 * Fluendo S.A.
+* Guangzhou Shikun Electronics., Ltd.
 * HANDAN BroadInfoCom Co., Ltd.
 * Infomir GMBH
 * Irdeto USA Inc.
-* iWEDIA S.A.
 * Liberty Global Services BV
 * MediaTek Inc.
 * MStar Co, Ltd
```

articles/sql-database/sql-database-business-continuity.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -59,7 +59,7 @@ If the maximum supported backup retention period for point-in-time restore (PITR
 |:---------------------------------------------| :-------------- | :----------------|
 | Automatic failover | No | Yes |
 | Fail over multiple databases simultaneously | No | Yes |
-| Update connection string after failover | Yes | No |
+| User must update connection string after failover | Yes | No |
 | Managed instance supported | No | Yes |
 | Can be in same region as primary | Yes | No |
 | Multiple replicas | Yes | No |
```
