Commit 122c68e

committed
Merging changes synced from https://github.com/MicrosoftDocs/azure-docs-pr (branch live)
2 parents ee27d74 + c80dda2 commit 122c68e

27 files changed: +198 additions, −113 deletions

articles/aks/http-application-routing.md

Lines changed: 5 additions & 0 deletions
@@ -17,6 +17,11 @@ When the add-on is enabled, it creates a DNS Zone in your subscription. For more

> [!CAUTION]
> The HTTP application routing add-on is designed to let you quickly create an ingress controller and access your applications. This add-on is not currently designed for use in a production environment and is not recommended for production use. For production-ready ingress deployments that include multiple replicas and TLS support, see [Create an HTTPS ingress controller](./ingress-tls.md).

+
+## Limitations
+
+* HTTP application routing doesn't currently work with AKS versions 1.22.6+
+
## HTTP routing solution overview

The add-on deploys two components: a [Kubernetes Ingress controller][ingress] and an [External-DNS][external-dns] controller.
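The new limitation above amounts to a version comparison a caller could make before enabling the add-on. A minimal sketch (the helper name is ours, not part of AKS tooling):

```python
def http_app_routing_supported(aks_version: str) -> bool:
    """Return True if the HTTP application routing add-on is expected to work,
    i.e. the cluster version is below 1.22.6, per the limitation above."""
    # Strip any suffix, e.g. "1.22.6-hotfix" -> "1.22.6", then compare numerically.
    core = aks_version.split("-")[0]
    parts = tuple(int(p) for p in core.split("."))
    return parts < (1, 22, 6)

print(http_app_routing_supported("1.21.9"))  # True: below the affected range
print(http_app_routing_supported("1.22.6"))  # False: affected version
```

Comparing numeric tuples avoids the classic string-comparison pitfall where "1.9" sorts after "1.22".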

articles/azure-cache-for-redis/cache-high-availability.md

Lines changed: 5 additions & 5 deletions
@@ -12,7 +12,7 @@ ms.author: franlanglois

As with any cloud systems, unplanned outages can occur that result in a virtual machines (VM) instance, an Availability Zone, or a complete Azure region going down. We recommend customers have a plan in place to handle zone or regional outages.

-This article presents the information for customers to create a _business continuity and disaster recovery plan_ for their Azure Cache for Redis, or Azure Cache for Redis Enterprise implementation.
+This article presents the information for customers to create a *business continuity and disaster recovery plan* for their Azure Cache for Redis, or Azure Cache for Redis Enterprise implementation.

Various high availability options are available in the Standard, Premium, and Enterprise tiers:

@@ -72,8 +72,8 @@ A zone redundant cache provides automatic failover. When the current primary nod

A cache in either Enterprise tier runs on a Redis Enterprise *cluster*. It always requires an odd number of server nodes to form a quorum. By default, it has three nodes, each hosted on a dedicated VM.

-- An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*.
-- An Enterprise Flash cache has three same-sized data nodes.
+- An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*.
+- An Enterprise Flash cache has three same-sized data nodes.

The Enterprise cluster divides Azure Cache for Redis data into partitions internally. Each partition has a *primary* and at least one *replica*. Each data node holds one or more partitions. The Enterprise cluster ensures that the primary and replica(s) of any partition are never collocated on the same data node. Partitions replicate data asynchronously from primaries to their corresponding replicas.

@@ -103,7 +103,7 @@ Consider choosing a geo-redundant storage account to ensure high availability of

Applicable tiers: **Premium**

-[Geo-replication](cache-how-to-geo-replication.md) is a mechanism for linking two or more Azure Cache for Redis instances, typically spanning two Azure regions. Geo-replication is designed mainly for disaster recovery. Two Premium tier cache instances are connected through geo-replication in away that provides reads and writes to your primary cache, and that data is replicated to the secondary cache.
+[Geo-replication](cache-how-to-geo-replication.md) is a mechanism for linking two or more Azure Cache for Redis instances, typically spanning two Azure regions. Geo-replication is designed mainly for disaster recovery. Two Premium tier cache instances are connected through geo-replication in a way that provides reads and writes to your primary cache, and that data is replicated to the secondary cache.
For more information on how to set it up, see [Configure geo-replication for Premium Azure Cache for Redis instances](./cache-how-to-geo-replication.md).

If the region hosting the primary cache goes down, you’ll need to start the failover by: first, unlinking the secondary cache, and then, updating your application to point to the secondary cache for reads and writes.

@@ -137,4 +137,4 @@ Learn more about how to configure Azure Cache for Redis high-availability option

- [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
- [Add replicas to Azure Cache for Redis](cache-how-to-multi-replicas.md)
- [Enable zone redundancy for Azure Cache for Redis](cache-how-to-zone-redundancy.md)
-- [Set up geo-replication for Azure Cache for Redis](cache-how-to-geo-replication.md)
+- [Set up geo-replication for Azure Cache for Redis](cache-how-to-geo-replication.md)
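The manual geo-replication failover described in this article is a two-step sequence: unlink the secondary, then repoint the application. A plain-logic sketch of that runbook (function and cache names are illustrative, not an Azure API):

```python
def manual_failover_steps(primary: str, secondary: str) -> list[str]:
    """Sketch of the manual failover flow above: unlink the secondary cache
    first, then update the application to read and write from it."""
    return [
        f"unlink geo-replication between {primary} and {secondary}",
        f"point application reads and writes at {secondary}",
    ]

for step in manual_failover_steps("cache-eastus", "cache-westus"):
    print(step)
```

Keeping the steps ordered matters: the secondary stays read-only until it is unlinked from the downed primary.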

articles/azure-monitor/containers/container-insights-prometheus-integration.md

Lines changed: 3 additions & 3 deletions
@@ -277,11 +277,11 @@ For the following Kubernetes environments:

- Azure Stack or on-premises
- Azure Red Hat OpenShift and Red Hat OpenShift version 4.x

-run the command `kubectl apply -f <configmap_yaml_file.yaml`.
+run the command `kubectl apply -f <configmap_yaml_file.yaml>`.

-For an Azure Red Hat OpenShift v3.x cluster, run the command, `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in your default editor to modify and then save it.
+For example, run the command `kubectl apply -f container-azm-ms-agentconfig.yaml`.

-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" updated`.
+The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`, indicating the configmap resource was created.

## Verify configuration
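The confirmation line that `kubectl` prints after applying the configmap can be checked programmatically, for instance when scripting the rollout. A minimal sketch, assuming you capture the command's stdout (the function and regex are ours):

```python
import re

def configmap_applied(kubectl_output: str,
                      name: str = "container-azm-ms-agentconfig") -> bool:
    """Return True if kubectl reported the named configmap as
    created, configured, or updated in its output."""
    # Match both 'configmap "name" created' and 'configmap/name configured'.
    pattern = rf'configmap[ /"]+{re.escape(name)}"? (created|configured|updated)'
    return re.search(pattern, kubectl_output) is not None

print(configmap_applied('configmap "container-azm-ms-agentconfig" created'))  # True
```

A wrapper script could run `kubectl apply -f container-azm-ms-agentconfig.yaml` via `subprocess`, feed its stdout to this check, and only then start polling the omsagent rolling restart.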

articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md

Lines changed: 4 additions & 4 deletions
@@ -8,7 +8,7 @@ manager: nitinme

ms.service: cognitive-services
ms.subservice: speech-service
ms.topic: conceptual
-ms.date: 01/24/2022
+ms.date: 04/22/2022
ms.author: alexeyo
---

@@ -68,9 +68,9 @@ In the following tables, the parameters without the **Adjustable** row aren't ad

| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
|--|--|--|
-| **Max number of transactions per second (TPS) per Speech service resource** | | |
-| Real-time API. Prebuilt neural voices and custom neural voices. | 20 per 60 seconds | 200<sup>4</sup> |
-| Adjustable | No<sup>4</sup> | Yes<sup>4</sup> |
+| **Max number of transactions per certain time period per Speech service resource** | | |
+| Real-time API. Prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds | 200 transactions per second (TPS) |
+| Adjustable | No<sup>4</sup> | Yes<sup>5</sup> |
| **HTTP-specific quotas** | | |
| Max audio length produced per request | 10 min | 10 min |
| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
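The revised F0 quota above (20 transactions per 60 seconds) is a windowed limit rather than a flat TPS, so a client should pace requests against a sliding window. A minimal client-side pacing sketch (the class is ours, not part of the Speech SDK):

```python
from collections import deque

class WindowLimiter:
    """Allow at most `limit` calls per `window` seconds (sliding window)."""

    def __init__(self, limit: int = 20, window: float = 60.0):
        self.limit, self.window = limit, window
        self.calls = deque()  # timestamps of allowed calls

    def allow(self, now: float) -> bool:
        # Evict timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

lim = WindowLimiter(limit=20, window=60.0)
allowed = sum(lim.allow(t * 0.1) for t in range(30))  # 30 rapid calls
print(allowed)  # 20 — the remainder must wait for the window to slide
```

In a real client you would pass `time.monotonic()` as `now` and sleep (or back off on HTTP 429) when `allow` returns `False`.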

articles/cognitive-services/language-service/concepts/model-lifecycle.md

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@ manager: nitinme

ms.service: cognitive-services
ms.subservice: language-service
ms.topic: conceptual
-ms.date: 03/15/2022
+ms.date: 04/21/2022
ms.author: aahi
---

@@ -58,7 +58,7 @@ Use the table below to find which model versions are supported by each feature.

| Custom NER | `2021-11-01-preview` | | `2021-11-01-preview` |
| Personally Identifiable Information (PII) detection | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2020-07-01`, `2021-01-15` | `2021-01-15` | |
| Question answering | `2021-10-01` | `2021-10-01` | |
-| Text Analytics for health | `2021-05-15` | `2021-05-15` | |
+| Text Analytics for health | `2021-05-15`, `2022-03-01` | `2022-03-01` | |
| Key phrase extraction | `2019-10-01`, `2020-07-01`, `2021-06-01` | `2021-06-01` | |
| Text summarization | `2021-08-01` | `2021-08-01` | |
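The table above maps each feature to its supported and latest model versions; a caller typically pins a `model-version` request parameter or falls back to the newest one. A small lookup sketch (the dict mirrors only the rows changed here and is not the full table):

```python
# Subset of the model-lifecycle table above; dates are model versions.
SUPPORTED = {
    "Text Analytics for health": ["2021-05-15", "2022-03-01"],
    "Key phrase extraction": ["2019-10-01", "2020-07-01", "2021-06-01"],
}

def resolve_model_version(feature: str, requested: str = "latest") -> str:
    """Return the model version to send: the newest supported one, or a
    pinned version after validating it against the table."""
    versions = SUPPORTED[feature]
    if requested == "latest":
        return max(versions)  # ISO dates sort correctly as strings
    if requested not in versions:
        raise ValueError(f"{requested} not supported for {feature}")
    return requested

print(resolve_model_version("Text Analytics for health"))  # 2022-03-01
```

Validating a pinned version up front gives a clearer error than waiting for the service to reject a deprecated one.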

articles/cosmos-db/mongodb/mongodb-indexing.md

Lines changed: 3 additions & 5 deletions
@@ -241,13 +241,11 @@ In the preceding example, omitting the ```"university":1``` clause returns an er

`cannot create unique index over {student_id : 1.0} with shard key pattern { university : 1.0 }`

-#### Note
-
-Support for unique index on existing collections with data is available in preview. You can sign up for the feature “Azure Cosmos DB API for MongoDB New Unique Indexes in existing collection” through the [Preview Features blade in the portal](./../access-previews.md).
-
#### Limitations

-On accounts that have continuous backup or synapse link enabled, unique indexes will need to be created while the collection is empty.
+Unique indexes need to be created while the collection is empty.
+
+Support for unique index on existing collections with data is available in preview for accounts that do not use Synapse Link or Continuous backup. You can sign up for the feature “Azure Cosmos DB API for MongoDB New Unique Indexes in existing collection” through the [Preview Features blade in the portal](./../access-previews.md).

#### Unique partial indexes
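The revised limitation above reduces to a simple decision rule. A plain-logic sketch of when a unique index can be created (the function and flags are illustrative, not an Azure API):

```python
def can_create_unique_index(collection_empty: bool,
                            synapse_link: bool,
                            continuous_backup: bool,
                            preview_enrolled: bool) -> bool:
    """Per the limitation above: unique indexes on an empty collection are
    always fine; on a collection with data they require the preview feature,
    which is only available when Synapse Link and continuous backup are off."""
    if collection_empty:
        return True
    return preview_enrolled and not synapse_link and not continuous_backup

print(can_create_unique_index(False, False, False, True))  # True: preview path
print(can_create_unique_index(False, True, False, True))   # False: Synapse Link on
```

With an actual MongoDB driver the create call itself would be something like `collection.create_index([("student_id", 1)], unique=True)`; the sketch only captures when that call is expected to succeed.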

articles/data-factory/connector-sharepoint-online-list.md

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ ms.service: data-factory

ms.subservice: data-movement
ms.custom: synapse
ms.topic: conceptual
-ms.date: 04/18/2022
+ms.date: 04/22/2022
ms.author: jianleishen
---
# Copy data from SharePoint Online List by using Azure Data Factory or Azure Synapse Analytics

@@ -104,7 +104,7 @@ The following sections provide details about properties you can use to define en

## Linked service properties

-The following properties are supported for an SharePoint Online List linked service:
+The following properties are supported for a SharePoint Online List linked service:

| **Property** | **Description** | **Required** |
| ------------------- | ------------------------------------------------------------ | ------------ |
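A linked service of this type is ultimately a small JSON payload submitted to the Data Factory service. A hedged sketch of its shape as a Python dict — the property names (`siteUrl`, `tenantId`, `servicePrincipalId`, `servicePrincipalKey`) follow the common ADF connector pattern and should be verified against the article's property table before use:

```python
import json

# Illustrative payload; placeholder values in angle brackets must be replaced.
linked_service = {
    "name": "SharePointOnlineListLinkedService",
    "properties": {
        "type": "SharePointOnlineList",
        "typeProperties": {
            "siteUrl": "https://contoso.sharepoint.com/sites/mysite",
            "tenantId": "<tenant-guid>",
            "servicePrincipalId": "<sp-app-id>",
            "servicePrincipalKey": {"type": "SecureString", "value": "<key>"},
        },
    },
}

# Serialize the definition as it would appear in a JSON authoring view.
print(json.dumps(linked_service, indent=2))
```

In practice the secret would come from Azure Key Vault rather than an inline `SecureString`.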

articles/frontdoor/front-door-faq.yml

Lines changed: 5 additions & 2 deletions
@@ -218,9 +218,12 @@ sections:

- question: |
    How does Front Door handle ‘domain fronting’ behavior?
  answer: |
-As of April 29, 2022, Microsoft has made a change to the behavior of Azure Front Door Standard/Premium/(classic) and Azure CDN from Microsoft (classic) in alignment with its commitment to stop allowing domain fronting behavior on its platform. After this date, new Front Door and Azure CDN Standard from Microsoft resources that are created will block any HTTP request that exhibit this behavior.
+As of April 29, 2022, Microsoft has made a change to the behavior of Azure Front Door Standard/Premium/(classic) and Azure CDN from Microsoft (classic) in alignment with its commitment to stop allowing domain fronting behavior on its platform. Once blocking domain fronting is enabled, AFD and CDN resources will block any HTTP request that exhibits this behavior.
    If this behavior is enabled for your resource, requests where the Host header in HTTP/HTTPS requests doesn't match the original TLS SNI extension used during the TLS negotiation will be blocked.
+
+If you wish to block domain fronting for any existing or new Azure Front Door Standard and Premium, Azure Front Door (classic), or Azure CDN Standard from Microsoft (classic) resources, please create a support request and provide your subscription and resource information. Upon enabling blocking of domain fronting behavior, Azure Front Door and Azure CDN Standard from Microsoft (classic) resources will block any HTTP request that exhibits this behavior.

    When Front Door blocks a request due to this mismatch:
    The client will receive a HTTP “421 Misdirected Request” error code response
    Front Door will log the block in its diagnostic logs under the “Error Info” property with the value “SSLMismatchedSNI”
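The blocking rule above compares two values from the same connection: the SNI sent during the TLS handshake and the Host header of the HTTP request. A minimal sketch of the documented outcome (our function, not Front Door code):

```python
def check_domain_fronting(tls_sni: str, host_header: str) -> tuple[int, str]:
    """Mimic the documented behavior: a Host header that doesn't match the
    TLS SNI yields HTTP 421 and the 'SSLMismatchedSNI' diagnostic value."""
    if tls_sni.lower() != host_header.lower():
        return 421, "SSLMismatchedSNI"
    return 200, ""

print(check_domain_fronting("contoso.azurefd.net", "victim.example.com"))
# (421, 'SSLMismatchedSNI')
```

Host names are compared case-insensitively here since DNS names are case-insensitive; the real edge logic may apply further normalization.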

articles/purview/.openpublishing.redirection.purview.json

Lines changed: 5 additions & 0 deletions
@@ -149,6 +149,11 @@

        "source_path_from_root": "/articles/purview/tutorial-data-owner-policies-resource-group.md",
        "redirect_url": "/azure/purview/how-to-data-owner-policies-resource-group",
        "redirect_document_id": true
+      },
+      {
+        "source_path_from_root": "/articles/purview/how-to-enable-data-use-governance.md",
+        "redirect_url": "/azure/purview/how-to-enable-data-use-management",
+        "redirect_document_id": true
      }
    ]
}
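Entries in this redirection file follow a fixed three-key shape under a top-level `redirections` array, so a malformed entry is easy to catch in CI. A small validation sketch (the helper is ours; the sample mirrors the entry added above):

```python
import json

REQUIRED = {"source_path_from_root", "redirect_url", "redirect_document_id"}

def validate_redirects(raw: str) -> int:
    """Parse a redirection JSON document and verify each entry carries the
    three expected keys; return the number of entries."""
    entries = json.loads(raw)["redirections"]
    for entry in entries:
        missing = REQUIRED - entry.keys()
        if missing:
            raise ValueError(f"entry missing keys: {missing}")
    return len(entries)

sample = """{"redirections": [{
  "source_path_from_root": "/articles/purview/how-to-enable-data-use-governance.md",
  "redirect_url": "/azure/purview/how-to-enable-data-use-management",
  "redirect_document_id": true}]}"""
print(validate_redirects(sample))  # 1
```

Running such a check before merge prevents the broken-redirect class of docs build failures.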

articles/purview/concept-self-service-data-access-policy.md

Lines changed: 6 additions & 6 deletions
@@ -18,13 +18,13 @@ This article helps you understand Microsoft Purview Self-service data access pol

## Important limitations

-The self-service data access policy is only supported when the prerequisites mentioned in [data use governance](./how-to-enable-data-use-governance.md#prerequisites) are satisfied.
+The self-service data access policy is only supported when the prerequisites mentioned in [data use management](./how-to-enable-data-use-management.md#prerequisites) are satisfied.

## Overview

-Microsoft Purview Self-service data access workflow allows data consumer to request access to data when browsing or searching for data. Once the data access request is approved, a policy gets auto-generated to grant access to the requestor provided the data source is enabled for data use governance. Currently, self-service data access policy is supported for storage accounts, containers, folders, and files.
+The Microsoft Purview self-service data access workflow allows a data consumer to request access to data when browsing or searching for it. Once the data access request is approved, a policy is auto-generated to grant access to the requestor, provided the data source is enabled for data use management. Currently, self-service data access policies are supported for storage accounts, containers, folders, and files.

-A **workflow admin** will need to map a self-service data access workflow to a collection. Collection is logical grouping of data sources that are registered within Microsoft Purview. **Only data source(s) that are registered** for data use governance will have self-service policies auto-generated.
+A **workflow admin** will need to map a self-service data access workflow to a collection. A collection is a logical grouping of data sources that are registered within Microsoft Purview. **Only data source(s) that are registered** for data use management will have self-service policies auto-generated.

## Terminology

@@ -34,7 +34,7 @@ A **workflow admin** will need to map a self-service data access workflow to a c

* **Self-service data access workflow** is the workflow that is initiated when a data consumer requests access to data.

-* **Approver** is either security group or AAD users that can approve self-service access requests.
+* **Approver** is a security group or the Azure Active Directory (Azure AD) users that can approve self-service access requests.

## How to use Microsoft Purview self-service data access policy

@@ -44,12 +44,12 @@ With self-service data access workflow, data consumers can not only find data as

A default self-service data access workflow template is provided with every Microsoft Purview account. The default template can be amended to add more approvers and/or set the approver's email address. For more details, see [Create and enable self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md).

-Whenever a data consumer requests access to a dataset, the notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from Microsft Purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. Self-service data access policy gets auto-generated only if the data source is registered for **data use governance**. The pre-requisites mentioned within the [data use governance](./how-to-enable-data-use-governance.md#prerequisites) have to be satisfied.
+Whenever a data consumer requests access to a dataset, a notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from the Microsoft Purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. A self-service data access policy is auto-generated only if the data source is registered for **data use management**. The prerequisites mentioned in [data use management](./how-to-enable-data-use-management.md#prerequisites) have to be satisfied.

## Next steps

If you would like to preview these features in your environment, follow the links below.
-- [Enable data use governance](./how-to-enable-data-use-governance.md#prerequisites)
+- [Enable data use management](./how-to-enable-data-use-management.md#prerequisites)
- [create self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md)
- [working with policies at file level](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
- [working with policies at folder level](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
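The flow this article describes — request, approval, then a policy auto-generated only for sources registered for data use management — can be sketched as plain decision logic (the function and outcome strings are ours, not a Purview API):

```python
def handle_access_request(approved: bool,
                          source_registered_for_data_use_management: bool) -> str:
    """Outcome of a self-service data access request, per the workflow above."""
    if not approved:
        return "rejected"
    if not source_registered_for_data_use_management:
        return "approved, but no policy auto-generated"
    return "approved, self-service policy auto-generated"

print(handle_access_request(True, True))
# approved, self-service policy auto-generated
```

The middle branch captures the article's key caveat: approval alone is not enough; the data source must be registered for data use management before a policy is generated.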
