Commit c4d7609

Merge pull request #301674 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents: b447fa6 + 33ef7e6

File tree

8 files changed, +11 -8 lines changed

articles/azure-functions/functions-reference-python.md

Lines changed: 2 additions & 2 deletions
@@ -842,7 +842,7 @@ The *host.json* file must also be updated to include an HTTP `routePrefix`, as s
 "extensionBundle":
 {
     "id": "Microsoft.Azure.Functions.ExtensionBundle",
-    "version": "[3.*, 4.0.0)"
+    "version": "[4.*, 5.0.0)"
 },
 "extensions":
 {
@@ -910,7 +910,7 @@ You can use Asynchronous Server Gateway Interface (ASGI)-compatible and Web Serv
 "extensionBundle":
 {
     "id": "Microsoft.Azure.Functions.ExtensionBundle",
-    "version": "[2.*, 3.0.0)"
+    "version": "[4.*, 5.0.0)"
 },
 "extensions":
 {
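
For reference, a minimal sketch of how the updated bundle reference sits in a complete *host.json*. The top-level `"version": "2.0"` and the empty `routePrefix` value are illustrative assumptions, not part of this diff; the hunk context only indicates that an HTTP `routePrefix` entry follows. Comments are annotations only:

```jsonc
{
  // Assumed host.json schema version; not part of this commit's diff.
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    // Updated bundle range from both hunks above.
    "version": "[4.*, 5.0.0)"
  },
  "extensions": {
    "http": {
      // Assumed value for illustration.
      "routePrefix": ""
    }
  }
}
```

Both hunks now pin the same `[4.*, 5.0.0)` range, replacing the older `[3.*, 4.0.0)` and `[2.*, 3.0.0)` references, so the two examples in the article stay in sync.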

articles/iot-operations/deploy-iot-ops/overview-deploy.md

Lines changed: 3 additions & 0 deletions
@@ -29,6 +29,9 @@ To install Azure IoT Operations, have the following hardware requirements availa
 | Available memory for Azure IoT Operations (RAM) | 10-GB | Depends on usage |
 | CPU | 4 vCPUs | 8 vCPUs |
 
+>[!NOTE]
+>The minimum configuration is appropriate when running AIO only.
+
 ## Choose your features
 
 Azure IoT Operations offers two deployment modes. You can choose to deploy with *test settings*, a basic subset of features that are simpler to get started with for evaluation scenarios. Or, you can choose to deploy with *secure settings*, the full feature set.

articles/role-based-access-control/permissions/hybrid-multicloud.md

Lines changed: 1 addition & 1 deletion
@@ -228,7 +228,7 @@ Azure service: [Azure Arc](/azure/azure-arc/)
 > | Microsoft.HybridCompute/machines/UpgradeExtensions/action | Upgrades Extensions on Azure Arc machines |
 > | Microsoft.HybridCompute/machines/assessPatches/action | Assesses any Azure Arc machines to get missing software patches |
 > | Microsoft.HybridCompute/machines/installPatches/action | Installs patches on any Azure Arc machines |
-> | Microsoft.HybridCompute/machines/listAccessDetails/action | Retreives the access details for a machines resource |
+> | Microsoft.HybridCompute/machines/listAccessDetails/action | Retrieves the access details for a machines resource |
 > | Microsoft.HybridCompute/machines/addExtensions/action | Setup Extensions on Azure Arc machines |
 > | Microsoft.HybridCompute/machines/extensions/read | Reads any Azure Arc extensions |
 > | Microsoft.HybridCompute/machines/extensions/write | Installs or Updates an Azure Arc extensions |

articles/sentinel/sap/deploy-data-connector-agent-container.md

Lines changed: 1 addition & 1 deletion
@@ -348,7 +348,7 @@ At this stage, the system's **Health** status is **Pending**. If the agent is up
 - If you have the **Entra ID Application Developer** role or higher, continue to the next step.
 - If you don't have the **Entra ID Application Developer** role or higher:
     - Share the DCR ID with your Entra ID administrator or colleague with the required permissions.
-    - Ensure that the **Monitoring Metrics Publishing** role is assigned on the DCR, with the service principal assignment, using the client ID from the Entra ID app registration.
+    - Ensure that the **Monitoring Metrics Publisher** role is assigned on the DCR, with the service principal assignment, using the client ID from the Entra ID app registration.
     - Retrieve the client ID and client secret from the Entra ID app registration to use for authorization on the DCR.
 
 The SAP admin uses the client ID and client secret information to post to the DCR.

articles/storage/common/redundancy-migration.md

Lines changed: 1 addition & 1 deletion
@@ -350,7 +350,7 @@ Some storage account features aren't compatible with other features or operation
 
 Boot diagnostics doesn't support premium storage accounts or zone-redundant storage accounts. When either premium or zone-redundant storage accounts are used for boot diagnostics, users receive a `StorageAccountTypeNotSupported` error upon starting their virtual machine (VM).
 
-Any conversion attempts to add zonal redundancy, such as LRS to ZRS or GRS to GZRS, will fail. To convert your account to a zone-redundant SKU, disable boot diagnostics on your account and resubmit the request. To learn more about boot diagnostics, review the [Azure boot diagnotics](/azure/virtual-machines/boot-diagnostics#enable-managed-boot-diagnostics) article.
+Any conversion attempts to add zonal redundancy, such as LRS to ZRS or GRS to GZRS, will fail. To convert your account to a zone-redundant SKU, disable boot diagnostics on your account and resubmit the request. To learn more about boot diagnostics, review the [Azure boot diagnostics](/azure/virtual-machines/boot-diagnostics#enable-managed-boot-diagnostics) article.
 
 ### Storage account type

articles/storage/files/files-redundancy.md

Lines changed: 1 addition & 1 deletion
@@ -213,7 +213,7 @@ $token = Get-AzAccessToken
 # Invoke SRP list SKU API, and get the returned SKU list
 $result = Invoke-RestMethod -Method Get -Uri "https://management.azure.com/subscriptions/$($subscriptionID)/providers/Microsoft.Storage/skus?api-version=2024-01-01" -Headers @{"Authorization" = "Bearer $($token.Token)"}
 
-# Filter the SKU list to get the required information, customization requried here to get the best result.
+# Filter the SKU list to get the required information, customization required here to get the best result.
 $filteredResult = $result | `
     Select-Object -ExpandProperty value | `
     Where-Object {

articles/storage/files/storage-files-configure-p2s-vpn-linux.md

Lines changed: 1 addition & 1 deletion
@@ -227,7 +227,7 @@ sudo ipsec up $VIRTUAL_NETWORK_NAME
 
 ## Mount Azure file share
 
-After seting up your Point-to-Site VPN, you can mount your Azure file share. See [Mount SMB file shares to Linux](storage-how-to-use-files-linux.md) or [Mount NFS file share to Linux](storage-files-how-to-mount-nfs-shares.md).
+After setting up your Point-to-Site VPN, you can mount your Azure file share. See [Mount SMB file shares to Linux](storage-how-to-use-files-linux.md) or [Mount NFS file share to Linux](storage-files-how-to-mount-nfs-shares.md).
 
 ## See also

articles/storage/solution-integration/validated-partners/data-management/komprise-quick-start-guide.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ With a quick install of a local Komprise data Observer, in 30 minutes or less yo
 - Date set too large for destination storage service in file count or capacity
 - Data sets with an exceedingly large number of tiny files or with a large number of empty directories
 - Slow-performing shares
-- Lack of destination support for sparce files or symbolic links
+- Lack of destination support for sparse files or symbolic links
 
 Komprise knows it can be challenging to find just the right data across billions of files. Komprise Deep Analytics builds a Global File Index of all your file’s metadata, giving a unified way to search, tag and create select data sets across storage silos. You can identify orphan data, data by name, location, owner, date, application type or extension. Administrators can use these queries and tagged data sets to move, copy, confine, or feed your data pipelines. They can also set data workflow policies. This allows business to use other Azure cloud data services like personal data identification, running cloud data analytics, and culling and feeding edge data to cloud data lakes.
