
Commit d52f662

Merge branch 'master' of https://github.com/MicrosoftDocs/azure-docs-pr into rolyon-availability-zones-regions-update

2 parents 7b9178c + f7a2025

File tree

11 files changed: +45 −33 lines changed

articles/api-management/api-management-faq.md

Lines changed: 1 addition & 11 deletions

@@ -20,9 +20,6 @@ Get the answers to common questions, patterns, and best practices for Azure API
 
 [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
 
-## Contact us
-
-* [How can I ask the Microsoft Azure API Management team a question?](#how-can-i-ask-the-microsoft-azure-api-management-team-a-question)
 ## Frequently asked questions
 * [What does it mean when a feature is in preview?](#what-does-it-mean-when-a-feature-is-in-preview)
 * [How can I secure the connection between the API Management gateway and my back-end services?](#how-can-i-secure-the-connection-between-the-api-management-gateway-and-my-back-end-services)

@@ -43,15 +40,8 @@ Get the answers to common questions, patterns, and best practices for Azure API
 * [Can I move an API Management service from one subscription to another?](#can-i-move-an-api-management-service-from-one-subscription-to-another)
 * [Are there restrictions on or known issues with importing my API?](#are-there-restrictions-on-or-known-issues-with-importing-my-api)
 
-### How can I ask the Microsoft Azure API Management team a question?
-You can contact us by using one of these options:
-
-* Post your questions in our [API Management MSDN forum](https://social.msdn.microsoft.com/forums/azure/home?forum=azureapimgmt).
-* Send an email to <mailto:[email protected]>.
-* Send us a feature request in the [Azure feedback forum](https://feedback.azure.com/forums/248703-api-management).
-
 ### What does it mean when a feature is in preview?
-When a feature is in preview, it means that we're actively seeking feedback on how the feature is working for you. A feature in preview is functionally complete, but it's possible that we'll make a breaking change in response to customer feedback. We recommend that you don't depend on a feature that is in preview in your production environment. If you have any feedback on preview features, please let us know through one of the contact options in [How can I ask the Microsoft Azure API Management team a question?](#how-can-i-ask-the-microsoft-azure-api-management-team-a-question).
+When a feature is in preview, it means that we're actively seeking feedback on how the feature is working for you. A feature in preview is functionally complete, but it's possible that we'll make a breaking change in response to customer feedback. We recommend that you don't depend on a feature that is in preview in your production environment.
 
 ### How can I secure the connection between the API Management gateway and my back-end services?
 You have several options to secure the connection between the API Management gateway and your back-end services. You can:

articles/azure-monitor/app/troubleshoot-availability.md

Lines changed: 5 additions & 1 deletion

@@ -4,7 +4,7 @@ description: Troubleshoot web tests in Azure Application Insights. Get alerts if
 ms.topic: conceptual
 author: lgayhardt
 ms.author: lagayhar
-ms.date: 09/19/2019
+ms.date: 04/28/2020
 
 ms.reviewer: sdash
 ---

@@ -64,6 +64,10 @@ Check the classic alerts configuration to confirm your email is directly listed,
 
 Check to ensure the application receiving the webhook notification is available, and successfully processes the webhook requests. See [this article](https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitor-alerts-unified-log-webhook) for more information.
 
+### I'm getting 403 Forbidden errors. What does this mean?
+
+This error indicates that you need to add firewall exceptions so that the availability agents can test your target URL. For a full list of agent IP addresses to allow, consult the [IP exception article](https://docs.microsoft.com/azure/azure-monitor/app/ip-addresses#availability-tests).
+
 ### Intermittent test failure with a protocol violation error?
 
 The error ("protocol violation..CR must be followed by LF") indicates an issue with the server (or its dependencies). It happens when malformed headers are set in the response, which can be caused by load balancers or CDNs. Specifically, some headers might not use CRLF to indicate the end of a line, which violates the HTTP specification and therefore fails validation at the .NET WebRequest level. Inspect the response to spot headers that might be in violation.
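The CRLF rule described above can be checked directly. Here is a hypothetical helper (not part of the article, and not how the .NET validator is implemented) that scans raw response headers for a CR byte not followed by LF, which is the malformation behind this error:

```python
# Hypothetical check (illustration only): find a CR byte that is NOT followed
# by LF in raw HTTP response bytes -- the malformation behind the
# "CR must be followed by LF" protocol-violation error.
def find_bare_cr(raw: bytes) -> int:
    """Return the index of the first CR not followed by LF, or -1 if none."""
    i = raw.find(b"\r")
    while i != -1:
        # The slice is empty at end-of-buffer, so a trailing lone CR also counts.
        if raw[i + 1:i + 2] != b"\n":
            return i
        i = raw.find(b"\r", i + 1)
    return -1

good = b"HTTP/1.1 200 OK\r\nServer: nginx\r\n\r\n"
bad = b"HTTP/1.1 200 OK\r\nX-Broken: value\rX-Next: other\r\n\r\n"
print(find_bare_cr(good))  # -1
print(find_bare_cr(bad))   # index of the malformed CR
```

Running a capture of your server's raw response through a check like this can confirm whether a load balancer or CDN is emitting a bare CR in a header.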

articles/data-factory/data-flow-exists.md

Lines changed: 10 additions & 2 deletions

@@ -37,6 +37,14 @@ To create a free-form expression that contains operators other than "and" and "e
 
 ![Exists custom settings](media/data-flow/exists1.png "exists custom")
 
+## Broadcast optimization
+
+![Broadcast Join](media/data-flow/broadcast.png "Broadcast Join")
+
+In join, lookup, and exists transformations, if one or both data streams fit into worker node memory, you can optimize performance by enabling **Broadcasting**. By default, the Spark engine automatically decides whether to broadcast one side. To manually choose which side to broadcast, select **Fixed**.
+
+Disabling broadcasting via the **Off** option isn't recommended unless your joins are running into timeout errors.
+
 ## Data flow script
 
 ### Syntax

@@ -46,7 +54,7 @@ To create a free-form expression that contains operators other than "and" and "e
 exists(
     <conditionalExpression>,
     negate: { true | false },
-    broadcast: {'none' | 'left' | 'right' | 'both'}
+    broadcast: { 'auto' | 'left' | 'right' | 'both' | 'off' }
 ) ~> <existsTransformationName>
 ```

@@ -65,7 +73,7 @@ NameNorm2, TypeConversions
 exists(
     NameNorm2@EmpID == TypeConversions@EmpID && NameNorm2@Region == DimEmployees@Region,
     negate:false,
-    broadcast: 'none'
+    broadcast: 'auto'
 ) ~> checkForChanges
 ```
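As a mental model for the exists transformation in the diff above, it behaves like a left semi-join: keep source rows that have a match in the other stream, and invert the test when `negate` is true. A minimal illustrative sketch with made-up rows (plain Python, not Data Factory code):

```python
# Illustrative sketch of exists semantics (hypothetical rows, not ADF code):
# keep rows from the source stream that have a match in the other stream.
name_norm2 = [(1, "US"), (2, "EU"), (3, "APAC")]   # (EmpID, Region)
type_conversions = {(1, "US"), (3, "APAC")}        # matching (EmpID, Region) pairs

negate = False  # negate: true would keep the NON-matching rows instead
check_for_changes = [
    row for row in name_norm2 if (row in type_conversions) != negate
]
print(check_for_changes)  # [(1, 'US'), (3, 'APAC')]
```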

articles/data-factory/data-flow-join.md

Lines changed: 4 additions & 2 deletions

@@ -64,7 +64,9 @@ Unlike merge join in tools like SSIS, the join transformation isn't a mandatory
 
 ![Join Transformation optimize](media/data-flow/joinoptimize.png "Join Optimization")
 
-If one or both of the data streams fit into worker node memory, further optimize your performance by enabling **Broadcast** in the optimize tab. You can also repartition your data on the join operation so that it fits better into memory per worker.
+In join, lookup, and exists transformations, if one or both data streams fit into worker node memory, you can optimize performance by enabling **Broadcasting**. By default, the Spark engine automatically decides whether to broadcast one side. To manually choose which side to broadcast, select **Fixed**.
+
+Disabling broadcasting via the **Off** option isn't recommended unless your joins are running into timeout errors.
 
 ## Self-Join

@@ -85,7 +87,7 @@ When testing the join transformations with data preview in debug mode, use a sma
 join(
     <conditionalExpression>,
     joinType: { 'inner' | 'outer' | 'left_outer' | 'right_outer' | 'cross' }
-    broadcast: { 'none' | 'left' | 'right' | 'both' }
+    broadcast: { 'auto' | 'left' | 'right' | 'both' | 'off' }
 ) ~> <joinTransformationName>
 ```
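The broadcast option discussed above maps to the classic broadcast hash join: replicate the small side to every worker so the large side never shuffles. A minimal single-process sketch of the idea, with hypothetical data (not the Spark implementation itself):

```python
# Minimal sketch of a broadcast hash join (hypothetical data, not Spark code):
# build a hash map from the small ("broadcast") side once, then stream the
# large side against it without shuffling.
small_side = [(1, "US"), (2, "EU")]                  # (EmpID, Region)
large_side = [(1, "Ada"), (2, "Lin"), (3, "Grace")]  # (EmpID, Name)

broadcast_map = dict(small_side)  # this map is what gets shipped to every worker

joined = [
    (emp_id, name, broadcast_map[emp_id])
    for emp_id, name in large_side
    if emp_id in broadcast_map  # inner join: unmatched large-side rows drop out
]
print(joined)  # [(1, 'Ada', 'US'), (2, 'Lin', 'EU')]
```

The trade-off the article warns about follows from the sketch: the whole small side must fit in each worker's memory, which is why broadcasting a large stream can fail or time out.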

articles/data-factory/data-flow-lookup.md

Lines changed: 5 additions & 5 deletions

@@ -50,11 +50,11 @@ When testing the lookup transformation with data preview in debug mode, use a sm
 
 ## Broadcast optimization
 
-In Azure Data Factory mapping data flows execute in scaled-out Spark environments. If your dataset can fit into worker node memory space, your lookup performance can be optimized by enabling broadcasting.
-
 ![Broadcast Join](media/data-flow/broadcast.png "Broadcast Join")
 
-Enabling broadcasting pushes the entire dataset into memory. For smaller datasets containing only a few thousand rows, broadcasting can greatly improve your lookup performance. For large datasets, this option can lead to an out of memory exception.
+In join, lookup, and exists transformations, if one or both data streams fit into worker node memory, you can optimize performance by enabling **Broadcasting**. By default, the Spark engine automatically decides whether to broadcast one side. To manually choose which side to broadcast, select **Fixed**.
+
+Disabling broadcasting via the **Off** option isn't recommended unless your joins are running into timeout errors.
 
 ## Data flow script

@@ -67,7 +67,7 @@ Enabling broadcasting pushes the entire dataset into memory. For smaller dataset
     multiple: { true | false },
     pickup: { 'first' | 'last' | 'any' }, ## Only required if false is selected for multiple
     { desc | asc }( <sortColumn>, { true | false }), ## Only required if 'first' or 'last' is selected. true/false determines whether to put nulls first
-    broadcast: { 'none' | 'left' | 'right' | 'both' }
+    broadcast: { 'auto' | 'left' | 'right' | 'both' | 'off' }
 ) ~> <lookupTransformationName>
 ```

 ### Example

@@ -81,7 +81,7 @@ SQLProducts, DimProd lookup(ProductID == ProductKey,
     multiple: false,
     pickup: 'first',
     asc(ProductKey, true),
-    broadcast: 'none')~> LookupKeys
+    broadcast: 'auto')~> LookupKeys
 ```
 
 ## Next steps

[2 binary image files changed: 14.3 KB and 29.8 KB]
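The `multiple: false` / `pickup: 'first'` combination in the lookup example above can be pictured as: sort the lookup stream, keep only the first matching row per key, and let unmatched source rows survive with empty lookup columns. An illustrative sketch with hypothetical rows (plain Python, not ADF code):

```python
# Sketch of lookup semantics with multiple: false, pickup: 'first'
# (hypothetical rows, not ADF code): keep only the first match per key;
# unmatched source rows survive with empty lookup columns.
products = [("P1", "apple"), ("P2", "pear")]                     # (ProductID, Name)
dim_prod = [("P1", "v2", 10), ("P1", "v1", 5), ("P3", "v1", 7)]  # (ProductKey, Ver, Qty)

first_match = {}
for key, ver, qty in sorted(dim_prod, key=lambda r: r[0]):  # stands in for asc(ProductKey, true)
    first_match.setdefault(key, (ver, qty))                 # keep the first match only

looked_up = [
    (pid, name, *first_match.get(pid, (None, None)))  # unmatched rows keep None columns
    for pid, name in products
]
print(looked_up)  # [('P1', 'apple', 'v2', 10), ('P2', 'pear', None, None)]
```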

articles/iot-hub/virtual-network-support.md

Lines changed: 4 additions & 4 deletions

@@ -2,11 +2,11 @@
 title: Azure IoT Hub support for virtual networks
 description: How to use virtual networks connectivity pattern with IoT Hub
 services: iot-hub
-author: rezasherafat
+author: jlian
 ms.service: iot-fundamentals
 ms.topic: conceptual
-ms.date: 03/13/2020
-ms.author: rezas
+ms.date: 04/28/2020
+ms.author: jlian
 ---
 
 # IoT Hub support for virtual networks

@@ -195,7 +195,7 @@ A managed service identity can be assigned to your hub at resource provisioning
 After substituting the values for your resource `name`, `location`, `SKU.name` and `SKU.tier`, you can use Azure CLI to deploy the resource in an existing resource group using:
 
 ```azurecli-interactive
-az group deployment create --name <deployment-name> --resource-group <resource-group-name> --template-file <template-file.json>
+az deployment group create --name <deployment-name> --resource-group <resource-group-name> --template-file <template-file.json>
 ```
 
 After the resource is created, you can retrieve the managed service identity assigned to your hub using Azure CLI:

articles/search/index-add-custom-analyzers.md

Lines changed: 3 additions & 3 deletions

@@ -8,7 +8,7 @@ author: Yahnoosh
 ms.author: jlembicz
 ms.service: cognitive-search
 ms.topic: conceptual
-ms.date: 11/04/2019
+ms.date: 04/27/2020
 translation.priority.mt:
 - "de-de"
 - "es-es"

@@ -274,7 +274,7 @@ This section provides the valid values for attributes specified in the definitio
 |**analyzer_name**|**analyzer_type** <sup>1</sup>|**Description and Options**|
 |-|-|-|
 |[keyword](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/KeywordAnalyzer.html)| (type applies only when options are available) |Treats the entire content of a field as a single token. This is useful for data like zip codes, IDs, and some product names.|
-|[pattern](https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/PatternAnalyzer.html)|PatternAnalyzer|Flexibly separates text into terms via a regular expression pattern.<br /><br /> **Options**<br /><br /> lowercase (type: bool) - Determines whether terms are lowercased. The default is true.<br /><br /> [pattern](https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html?is-external=true) (type: string) - A regular expression pattern to match token separators. The default is \w+.<br /><br /> [flags](https://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html#field_summary) (type: string) - Regular expression flags. The default is an empty string. Allowed values: CANON_EQ, CASE_INSENSITIVE, COMMENTS, DOTALL, LITERAL, MULTILINE, UNICODE_CASE, UNIX_LINES<br /><br /> stopwords (type: string array) - A list of stopwords. The default is an empty list.|
+|[pattern](https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/PatternAnalyzer.html)|PatternAnalyzer|Flexibly separates text into terms via a regular expression pattern.<br /><br /> **Options**<br /><br /> lowercase (type: bool) - Determines whether terms are lowercased. The default is true.<br /><br /> [pattern](https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html?is-external=true) (type: string) - A regular expression pattern to match token separators. The default is `\W+`, which matches non-word characters.<br /><br /> [flags](https://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html#field_summary) (type: string) - Regular expression flags. The default is an empty string. Allowed values: CANON_EQ, CASE_INSENSITIVE, COMMENTS, DOTALL, LITERAL, MULTILINE, UNICODE_CASE, UNIX_LINES<br /><br /> stopwords (type: string array) - A list of stopwords. The default is an empty list.|
 |[simple](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/SimpleAnalyzer.html)|(type applies only when options are available) |Divides text at non-letters and converts them to lower case. |
 |[standard](https://lucene.apache.org/core/6_6_1/core/org/apache/lucene/analysis/standard/StandardAnalyzer.html) <br />(Also referred to as standard.lucene)|StandardAnalyzer|Standard Lucene analyzer, composed of the standard tokenizer, lowercase filter, and stop filter.<br /><br /> **Options**<br /><br /> maxTokenLength (type: int) - The maximum token length. The default is 255. Tokens longer than the maximum length are split. Maximum token length that can be used is 300 characters.<br /><br /> stopwords (type: string array) - A list of stopwords. The default is an empty list.|
 |standardasciifolding.lucene|(type applies only when options are available) |Standard analyzer with Ascii folding filter. |

@@ -317,7 +317,7 @@ In the table below, the tokenizers that are implemented using Apache Lucene are
 | microsoft_language_stemming_tokenizer | MicrosoftLanguageStemmingTokenizer| Divides text using language-specific rules and reduces words to their base forms<br /><br /> **Options**<br /><br />maxTokenLength (type: int) - The maximum token length, default: 255, maximum: 300. Tokens longer than the maximum length are split. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the maxTokenLength set.<br /><br /> isSearchTokenizer (type: bool) - Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer.<br /><br /> language (type: string) - Language to use, default "english". Allowed values include:<br />"arabic", "bangla", "bulgarian", "catalan", "croatian", "czech", "danish", "dutch", "english", "estonian", "finnish", "french", "german", "greek", "gujarati", "hebrew", "hindi", "hungarian", "icelandic", "indonesian", "italian", "kannada", "latvian", "lithuanian", "malay", "malayalam", "marathi", "norwegianBokmaal", "polish", "portuguese", "portugueseBrazilian", "punjabi", "romanian", "russian", "serbianCyrillic", "serbianLatin", "slovak", "slovenian", "spanish", "swedish", "tamil", "telugu", "turkish", "ukrainian", "urdu" |
 |[nGram](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html)|NGramTokenizer|Tokenizes the input into n-grams of the given size(s).<br /><br /> **Options**<br /><br /> minGram (type: int) - Default: 1, maximum: 300.<br /><br /> maxGram (type: int) - Default: 2, maximum: 300. Must be greater than minGram. <br /><br /> tokenChars (type: string array) - Character classes to keep in the tokens. Allowed values: "letter", "digit", "whitespace", "punctuation", "symbol". Defaults to an empty array - keeps all characters. |
 |[path_hierarchy_v2](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html)|PathHierarchyTokenizerV2|Tokenizer for path-like hierarchies.<br /><br /> **Options**<br /><br /> delimiter (type: string) - Default: '/'.<br /><br /> replacement (type: string) - If set, replaces the delimiter character. Default same as the value of delimiter.<br /><br /> maxTokenLength (type: int) - The maximum token length. Default: 300, maximum: 300. Paths longer than maxTokenLength are ignored.<br /><br /> reverse (type: bool) - If true, generates tokens in reverse order. Default: false.<br /><br /> skip (type: int) - Number of initial tokens to skip. The default is 0.|
-|[pattern](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html)|PatternTokenizer|This tokenizer uses regex pattern matching to construct distinct tokens.<br /><br /> **Options**<br /><br /> [pattern](https://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html) (type: string) - Regular expression pattern. The default is \W+. <br /><br /> [flags](https://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html#field_summary) (type: string) - Regular expression flags. The default is an empty string. Allowed values: CANON_EQ, CASE_INSENSITIVE, COMMENTS, DOTALL, LITERAL, MULTILINE, UNICODE_CASE, UNIX_LINES<br /><br /> group (type: int) - Which group to extract into tokens. The default is -1 (split).|
+|[pattern](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html)|PatternTokenizer|This tokenizer uses regex pattern matching to construct distinct tokens.<br /><br /> **Options**<br /><br /> [pattern](https://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html) (type: string) - Regular expression pattern to match token separators. The default is `\W+`, which matches non-word characters. <br /><br /> [flags](https://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html#field_summary) (type: string) - Regular expression flags. The default is an empty string. Allowed values: CANON_EQ, CASE_INSENSITIVE, COMMENTS, DOTALL, LITERAL, MULTILINE, UNICODE_CASE, UNIX_LINES<br /><br /> group (type: int) - Which group to extract into tokens. The default is -1 (split).|
 |[standard_v2](https://lucene.apache.org/core/6_6_1/core/org/apache/lucene/analysis/standard/StandardTokenizer.html)|StandardTokenizerV2|Breaks text following the [Unicode Text Segmentation rules](https://unicode.org/reports/tr29/).<br /><br /> **Options**<br /><br /> maxTokenLength (type: int) - The maximum token length. Default: 255, maximum: 300. Tokens longer than the maximum length are split.|
 |[uax_url_email](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.html)|UaxUrlEmailTokenizer|Tokenizes urls and emails as one token.<br /><br /> **Options**<br /><br /> maxTokenLength (type: int) - The maximum token length. Default: 255, maximum: 300. Tokens longer than the maximum length are split.|
 |[whitespace](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/WhitespaceTokenizer.html)|(type applies only when options are available) |Divides text at whitespace. Tokens that are longer than 255 characters are split.|
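The `\W+` default that both corrected table rows describe splits text on runs of non-word characters. A quick illustration (Python regex here for convenience; the analyzer itself uses Java regex, where `\W` has the same meaning):

```python
import re

# Quick illustration of the `\W+` default separator used by the Lucene
# pattern analyzer/tokenizer: split on runs of non-word characters.
tokens = re.split(r"\W+", "Hello, world! zip-code: 98052")
print(tokens)  # ['Hello', 'world', 'zip', 'code', '98052']
```

This also shows why the old `\w+` wording was wrong: a separator of word characters would discard the very terms you want to index.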
