Commit 0f0e16b

Merge pull request #218057 from MicrosoftDocs/main
Publish to live, Friday 4 AM PST 11/11
2 parents 7bb6235 + 1a9f7b5 commit 0f0e16b

63 files changed: +476 / -76 lines changed


.whatsnew/.application-management.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -17,7 +17,7 @@
   },
   "areas": [
     {
-      "name": [ "."],
+      "names": [ "."],
       "heading": "Azure Active Directory application management"
     }
   ]
```
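
The fix renames the `name` key to `names` in the what's-new configuration. For reference, the corrected area entry reads roughly as follows — a minimal sketch assembled only from the hunk above; any surrounding keys in the file are omitted:

```json
{
  "areas": [
    {
      "names": [ "." ],
      "heading": "Azure Active Directory application management"
    }
  ]
}
```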

articles/active-directory/hybrid/how-to-connect-install-prerequisites.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -94,7 +94,7 @@ We recommend that you harden your Azure AD Connect server to decrease the securi
 * You can have only one sync engine per SQL instance. Sharing a SQL instance with FIM/MIM Sync, DirSync, or Azure AD Sync *isn't supported*.

 ### Accounts
-* You must have an Azure AD Global Administrator account for the Azure AD tenant you want to integrate with. This account must be a *school or organization account* and can't be a *Microsoft account*.
+* You must have an Azure AD Global Administrator account or Hybrid Identity Administrator account for the Azure AD tenant you want to integrate with. This account must be a *school or organization account* and can't be a *Microsoft account*.
 * If you use [express settings](reference-connect-accounts-permissions.md#express-settings-installation) or upgrade from DirSync, you must have an Enterprise Administrator account for your on-premises Active Directory.
 * If you use the custom settings installation path, you have more options. For more information, see [Custom installation settings](reference-connect-accounts-permissions.md#custom-installation-settings).
```

articles/application-gateway/private-link-configure.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -69,7 +69,7 @@ A private endpoint is a network interface that uses a private IP address from th
 > If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively utilizing the respected frontend IP configuration. Frontend IP configurations without an associated listener won't be shown as a _Target sub-resource_.

 > [!Note]
-> If you're provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID and Frontend Configuration ID as the target sub-resource. For example, if the frontend configuration of the gateway was named _PrivateFrontendIp_, the target sub-resource would be as follows: _/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname/frontendIPConfigurations/PrivateFrontendIp_.
+> If you're provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID and the _Name_ of the Frontend IP configuration as the target sub-resource. For example, if I had a private IP associated to the Application Gateway and the Name listed in Frontend IP configuration of the portal for the private IP is _PrivateFrontendIp_, the target sub-resource value would be: _PrivateFrontendIp_.

 # [Azure PowerShell](#tab/powershell)
```
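
To illustrate the updated note (this example isn't part of the commit): with Azure CLI, the frontend IP configuration _Name_ is what goes into `--group-id` when the private endpoint is created. All resource names, IDs, and network values below are placeholders:

```azurecli
# Hypothetical values; substitute your own subscription, resource group, gateway, and network names.
appgw_id="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/applicationGateways/<appgw-name>"

az network private-endpoint create \
    --name appgw-private-endpoint \
    --resource-group <endpoint-resource-group> \
    --vnet-name <endpoint-vnet> \
    --subnet <endpoint-subnet> \
    --connection-name appgw-plsc \
    --private-connection-resource-id "$appgw_id" \
    --group-id PrivateFrontendIp   # the frontend IP configuration Name, per the note above
```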

articles/azure-maps/creator-facility-ontology.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -119,7 +119,7 @@ The `unit` feature class defines a physical and non-overlapping area that can be

 | Property | Type | Required | Description |
 |-------------|--------|----------|-------------|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID.<BR>When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined.<BR>Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
 |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
 |`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
 |`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
@@ -208,7 +208,7 @@ The `level` class feature defines an area of a building at a set elevation. For
 |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
 |`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.|
 | `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
-| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1,000 characters.|
+| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. |
 | `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. |
 | `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).|
 |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
@@ -226,7 +226,7 @@ The `level` class feature defines an area of a building at a set elevation. For
 |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
 |`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.|
 | `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
-| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1,000 characters.|
+| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button.|
 | `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. |
 | `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).|
 |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
```
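
Not part of the commit, but for orientation: the properties in these tables ride on ordinary GeoJSON features in an onboarding package. A rough, hypothetical `level` feature might look like the sketch below; the property names come from the tables above, while the geometry, IDs, and values are invented for illustration:

```json
{
  "type": "Feature",
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[-122.13, 47.64], [-122.13, 47.65], [-122.12, 47.65], [-122.12, 47.64], [-122.13, 47.64]]]
  },
  "properties": {
    "originalId": "level-ground",
    "facilityId": "FCL1",
    "ordinal": 0,
    "abbreviatedName": "G",
    "name": "Ground floor",
    "heightAboveFacilityAnchor": 0.0,
    "verticalExtent": 3.5
  }
}
```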

articles/azure-video-indexer/toc.yml

Lines changed: 7 additions & 4 deletions
```diff
@@ -17,10 +17,10 @@
   href: invite-users.md
 - name: Responsible use of AI
   items:
+  - name: Limited access features
+    href: limited-access-features.md
   - name: Transparency notes
     items:
-    - name: Limited access features
-      href: limited-access-features.md
     - name: Transparency notes
       href: /legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context
     - name: Audio effects detection
@@ -146,22 +146,27 @@
 - name: View closed captions
   href: view-closed-captions.md
 - name: Customize content models
+  displayName: customizing
   items:
   - name: Animated characters
+    displayName: customizing
     href: animated-characters-recognition-how-to.md
   - name: Person
+    displayName: customizing
     items:
     - name: using the website
       href: customize-person-model-with-website.md
     - name: using the API
       href: customize-person-model-with-api.md
   - name: Brands
+    displayName: customizing
     items:
     - name: using the website
       href: customize-brands-model-with-website.md
     - name: using the API
       href: customize-brands-model-with-api.md
   - name: Language
+    displayName: customizing
     items:
     - name: using the website
       href: customize-language-model-with-website.md
@@ -222,8 +227,6 @@
   href: /answers/topics/azure-video-indexer.html
 - name: FAQ
   href: faq.yml
-- name: Compliance
-  href: https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942
 - name: Release notes
   href: release-notes.md
 - name: Stack Overflow
```

articles/batch/best-practices.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -26,7 +26,7 @@ This article discusses best practices and useful tips for using the Azure Batch

 - **Multiple compute nodes:** Individual nodes aren't guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes.

-- **Images with impending end-of-life (EOL) dates:** We strongly recommended avoiding images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow Batch support end-of-life dates for the image for which your custom image is derived or aligned with.
+- **Images with impending end-of-life (EOL) dates:** We strongly recommended avoiding images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow Batch support end-of-life dates for the image for which your custom image is derived or aligned with. An image without a specified `batchSupportEndOfLife` date indicates that such a date has not been determined yet by the Batch service. Absence of a date does not indicate that the respective image will be supported indefinitely. An EOL date may be added or updated in the future at anytime.

 - **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can create uniqueness by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
```
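
As an aside (not part of this commit), the EOL dates the added text refers to can be checked with the Azure CLI command the article already links to. A hedged sketch, assuming you're signed in to the Batch account (for example via `az batch account login`) or pass `--account-name`/`--account-endpoint` explicitly:

```azurecli
# List supported images that already have a Batch support end-of-life date set.
az batch pool supported-images list \
    --query "[?batchSupportEndOfLife != null].{offer:imageReference.offer, sku:imageReference.sku, eol:batchSupportEndOfLife}" \
    --output table
```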

```diff
@@ -44,7 +44,7 @@ For the purposes of isolation, if your scenario requires isolating jobs or tasks

 Batch node agents aren't automatically upgraded for pools that have non-zero compute nodes. To ensure your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool. It's recommended to monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand changes to new Batch node agent versions. Checking regularly for updates when they were released enables you to plan upgrades to the latest agent version.

-Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you're experiencing issues with your Batch pool or compute nodes. This is further discussed in the [Nodes](#nodes) section.
+Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you're experiencing issues with your Batch pool or compute nodes. This process is further discussed in the [Nodes](#nodes) section.

 > [!NOTE]
 > For general guidance about security in Azure Batch, see [Batch security and compliance best practices](security-best-practices.md).
```
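
For context only (again, not from the commit), the resize-to-zero step mentioned in this hunk maps to a single CLI call; the pool ID is a placeholder:

```azurecli
# Shrink the pool to zero nodes so the latest node agent is picked up when you scale back out.
az batch pool resize --pool-id <pool-id> --target-dedicated-nodes 0 --target-low-priority-nodes 0
```
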
```diff
@@ -253,7 +253,7 @@ For User Defined Routes (UDRs), it's recommended to use `BatchNodeManagement.<re

 Ensure that your systems honor DNS Time-to-Live (TTL) for your Batch account service URL. Additionally, ensure that your Batch service clients and other connectivity mechanisms to the Batch service don't rely on IP addresses.

-If your requests receive 5xx level HTTP responses and there's a "Connection: close" header in the response, your Batch service client should observe the recommendation by closing the existing connection, re-resolving DNS for the Batch account service URL, and attempt following requests on a new connection.
+Any HTTP requests with 5xx level status codes along with a "Connection: close" header in the response requires adjusting your Batch service client behavior. Your Batch service client should observe the recommendation by closing the existing connection, re-resolving DNS for the Batch account service URL, and attempt following requests on a new connection.

 ### Retry requests automatically
```
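
Not part of the commit, but the client behavior the rewritten sentence describes is easy to sketch. A minimal Python illustration using `requests`, assuming the URL and headers are supplied by the caller; a production client would layer in backoff, jitter, and the Batch SDK's own retry policy:

```python
import requests

def call_with_reconnect(url, headers, max_attempts=3):
    """Retry 5xx responses that also ask the client to close the connection."""
    response = None
    for _ in range(max_attempts):
        # A fresh Session per attempt means a fresh connection pool, so the
        # Batch account service URL is re-resolved (subject to OS DNS caching).
        with requests.Session() as session:
            response = session.get(url, headers=headers)
            is_5xx = response.status_code >= 500
            asked_to_close = response.headers.get("Connection", "").lower() == "close"
            if not (is_5xx and asked_to_close):
                return response
        # Session closed here; the next attempt opens a new connection.
    return response
```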

articles/data-factory/concepts-data-flow-performance-sources.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -8,7 +8,7 @@ ms.author: makromer
 ms.service: data-factory
 ms.subservice: data-flows
 ms.custom: synapse
-ms.date: 06/20/2022
+ms.date: 10/11/2022
 ---

 # Optimizing sources
```

articles/data-factory/connector-troubleshoot-azure-data-lake.md

Lines changed: 24 additions & 1 deletion
```diff
@@ -6,7 +6,7 @@ author: jianleishen
 ms.service: data-factory
 ms.subservice: data-movement
 ms.topic: troubleshooting
-ms.date: 08/10/2022
+ms.date: 11/08/2022
 ms.author: jianleishen
 ms.custom: has-adal-ref, synapse
 ---
@@ -108,6 +108,29 @@ This article provides suggestions to troubleshoot common problems with the Azure
 1. The file name contains `_metadata`.
 2. The file name starts with `.` (dot).

+### Error code: ADLSGen2ForbiddenError
+
+- **Message**: `ADLS Gen2 failed for forbidden: Storage operation % on % get failed with 'Operation returned an invalid status code 'Forbidden'.`
+
+- **Cause**: There are two possible causes:
+
+    1. The integration runtime is blocked by network access in Azure storage account firewall settings.
+    2. The service principal or managed identity doesn’t have enough permission to access the data.
+
+- **Recommendation**:
+
+    1. Check your Azure storage account network settings to see whether the public network access is disabled. If disabled, use a managed virtual network integration runtime and create a private endpoint to access. For more information, see [Managed virtual network](managed-virtual-network-private-endpoint.md) and [Build a copy pipeline using managed VNet and private endpoints](tutorial-copy-data-portal-private.md).
+
+    1. If you have enabled selected virtual networks and IP addresses in your Azure storage account network setting:
+
+        1. It's possible because some IP address ranges of your integration runtime are not allowed by your storage account firewall settings. Add the Azure integration runtime IP addresses or the self-hosted integration runtime IP address to your storage account firewall. For Azure integration runtime IP addresses, see [Azure Integration Runtime IP addresses](azure-integration-runtime-ip-addresses.md), and to learn how to add IP ranges in the storage account firewall, see [Managing IP network rules](../storage/common/storage-network-security.md#managing-ip-network-rules).
+
+        1. If you allow trusted Azure services to access this storage account in the firewall, you must use [managed identity authentication](connector-azure-data-lake-storage.md#managed-identity) in copy activity.
+
+        For more information about the Azure storage account firewalls settings, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
+
+    1. If you use service principal or managed identity authentication, grant service principal or managed identity appropriate permissions to do copy. For source, at least the **Storage Blob Data Reader** role. For sink, at least the **Storage Blob Data Contributor** role. For more information, see [Copy and transform data in Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#service-principal-authentication).
+
 ## Next steps

 For more troubleshooting help, try these resources:
```
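
Two of the recommendations in the new error section are one-liners with Azure CLI. The commands below are illustrative only (not from the commit), and every name, IP, and ID is a placeholder:

```azurecli
# Allow a self-hosted integration runtime's outbound IP through the storage account firewall.
az storage account network-rule add \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --ip-address <integration-runtime-public-ip>

# Grant the service principal or managed identity the minimum role the article calls out for a source.
az role assignment create \
    --assignee <principal-object-id> \
    --role "Storage Blob Data Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```

Use **Storage Blob Data Contributor** instead of **Storage Blob Data Reader** when the storage account is the sink rather than the source.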

articles/data-factory/control-flow-execute-data-flow-activity.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -8,7 +8,7 @@ ms.subservice: data-flows
 ms.custom: synapse
 ms.topic: conceptual
 ms.author: makromer
-ms.date: 07/20/2022
+ms.date: 10/27/2022
 ---

 # Data Flow activity in Azure Data Factory and Azure Synapse Analytics
@@ -25,7 +25,7 @@ To use a Data Flow activity in a pipeline, complete the following steps:
 1. Select the new Data Flow activity on the canvas if it is not already selected, and its **Settings** tab, to edit its details.

    :::image type="content" source="media/control-flow-execute-data-flow-activity/data-flow-activity.png" alt-text="Shows the UI for a Data Flow activity.":::
-1. Checkpoint key is used to set the checkpoint when data flow is used for changed data capture. You can overwrite it. Data flow activities use a guid value as checkpoint key instead of “pipelinename + activityname” so that it can always keep tracking customer’s change data capture state even there’s any renaming actions. All existing data flow activity will use the old pattern key for backward compatibility. Checkpoint key option after publishing a new data flow activity with change data capture enabled data flow resource is shown as below.
+1. Checkpoint key is used to set the checkpoint when data flow is used for changed data capture. You can overwrite it. Data flow activities use a guid value as checkpoint key instead of “pipeline name + activity name” so that it can always keep tracking customer’s change data capture state even there’s any renaming actions. All existing data flow activity will use the old pattern key for backward compatibility. Checkpoint key option after publishing a new data flow activity with change data capture enabled data flow resource is shown as below.

    :::image type="content" source="media/control-flow-execute-data-flow-activity/data-flow-activity-checkpoint.png" alt-text="Shows the UI for a Data Flow activity with checkpoint key.":::
 3. Select an existing data flow or create a new one using the New button. Select other options as required to complete your configuration.
```
