Commit d15b56c

Merge pull request #263452 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 0eed899 + 892dba1

File tree

5 files changed (+17, -7 lines changed)

articles/ai-services/language-service/summarization/includes/regional-availability.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -13,9 +13,9 @@ ms.custom:
 ---
 
 > [!IMPORTANT]
-> Our preview region, Sweden Central, showcases our latest and continually evolving LLM fine tuning techniques based on GPT models. You are welcome to try them out with a Langauge resource in the Sweden Central region.
+> Our preview region, Sweden Central, showcases our latest and continually evolving LLM fine tuning techniques based on GPT models. You are welcome to try them out with a Language resource in the Sweden Central region.
 >
 > Conversation summarization is only available using:
 > - REST API
 > - Python
-> - C#
+> - C#
```

articles/aks/azure-netapp-files-smb.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -77,7 +77,7 @@ You must install a Container Storage Interface (CSI) driver to create a Kubernet
 
 ```bash
 helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
-helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.10.0 -set windows.enabled=true
+helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.13.0 --set windows.enabled=true
 ```
 
 For other methods of installing the SMB CSI Driver, see [Install SMB CSI driver master version on a Kubernetes cluster](https://github.com/kubernetes-csi/csi-driver-smb/blob/master/docs/install-csi-driver-master.md).
````

articles/azure-functions/functions-develop-vs-code.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -209,7 +209,7 @@ At this point, you can do one of these tasks:
 
 ## Add a function to your project
 
-You can add a new function to an existing project baswed on one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
+You can add a new function to an existing project based on one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
 
 ::: zone pivot="programming-language-csharp"
 The results of this action are that a new C# class library (.cs) file is added to your project.
```

articles/azure-resource-manager/management/move-resource-group-and-subscription.md

Lines changed: 5 additions & 2 deletions

````diff
@@ -179,8 +179,11 @@ $destinationResourceGroup = Get-AzResourceGroup -Name $destinationName
 $resources = Get-AzResource -ResourceGroupName $sourceName | Where-Object { $_.Name -in $resourcesToMove }
 
 Invoke-AzResourceAction -Action validateMoveResources `
-  -ResourceId $sourceResourceGroup.ResourceId `
-  -Parameters @{ resources= $resources.ResourceId;targetResourceGroup = $destinationResourceGroup.ResourceId }
+  -ResourceId $sourceResourceGroup.ResourceId `
+  -Parameters @{
+    resources = $resources.ResourceId; # Wrap in an @() array if providing a single resource ID string.
+    targetResourceGroup = $destinationResourceGroup.ResourceId
+  }
 ```
 
 If validation passes, you see no output.
````
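The hashtable passed to `-Parameters` in the hunk above mirrors the JSON body that the `validateMoveResources` action expects. A minimal Python sketch of that body shape, including the single-ID wrapping the new diff comment calls out (the resource IDs below are hypothetical placeholders, not from the source):

```python
import json

def build_validate_move_body(resource_ids, target_resource_group_id):
    """Build the request body for the validateMoveResources action."""
    # A single resource ID string must still be sent as a JSON array,
    # matching the @() wrapping noted in the PowerShell comment.
    if isinstance(resource_ids, str):
        resource_ids = [resource_ids]
    return {
        "resources": resource_ids,
        "targetResourceGroup": target_resource_group_id,
    }

# Hypothetical IDs for illustration only.
body = build_validate_move_body(
    "/subscriptions/0000/resourceGroups/src/providers/Microsoft.Web/sites/app1",
    "/subscriptions/0000/resourceGroups/dest",
)
print(json.dumps(body, indent=2))
```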

articles/healthcare-apis/dicom/dicom-data-lake.md

Lines changed: 8 additions & 1 deletion

```diff
@@ -47,19 +47,26 @@ AHDS/{workspace-name}/dicom/{dicom-service-name}/{partition-name}
 | `{dicom-service-name}` | The name of the DICOM service instance. |
 | `{partition-name}` | The name of the data partition. Note, if no partitions are specified, all DICOM data is stored in the default partition, named `Microsoft.Default`. |
 
+In addition to DICOM data, a small file to enable [health checks](#health-check) will be written to this location.
+
 > [!NOTE]
 > During public preview, the DICOM service writes data to the storage container and reads the data, but user-added data isn't read and indexed by the DICOM service. Similarly, if DICOM data written by the DICOM service is modified or removed, it may result in errors when accessing data with the DICOMweb APIs.
 
 ## Permissions
 
-The DICOM service is granted access to the data like any other service or application accessing data in a storage account. Access can be revoked at any time without affecting your organization's ability to access the data. The DICOM service needs to be granted the [Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) role by using a system-assigned or user-assigned managed identity.
+The DICOM service is granted access to the data like any other service or application accessing data in a storage account. Access can be revoked at any time without affecting your organization's ability to access the data. The DICOM service needs the ability to read, write, and delete files in the provided file system. This can be provided by granting the [Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) role to the system-assigned or user-assigned managed identity attached to the DICOM service.
 
 ## Access tiers
 
 You can manage costs for imaging data stored by the DICOM service by using Azure Storage access tiers for the data lake storage account. The DICOM service only supports online access tiers (either hot, cool, or cold), and can retrieve imaging data in those tiers immediately. The hot tier is the best choice for data that is in active use. The cool or cold tier is ideal for data that is accessed less frequently but still must be available for reading and writing.
 
 To learn more about access tiers, including cost tradeoffs and best practices, see [Azure Storage access tiers](/azure/storage/blobs/access-tiers-overview)
 
+## Health check
+
+The DICOM service writes a small file to the data lake every 30 seconds, following the [Data Contract](#data-contracts) to ensure it maintains access. Making any changes to files stored under the `healthCheck` sub-directory might result in incorrect status of the health check.
+If there is an issue with access, status and details are displayed by [Azure Resource Health](../../service-health/overview.md). Azure Resource Health specifies if any action is required to restore access, for example reinstating a role to the DICOM service's identity.
+
 ## Limitations
 
 During public preview, the DICOM service with data lake storage has these limitations:
```
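The storage path template in the hunk above, `AHDS/{workspace-name}/dicom/{dicom-service-name}/{partition-name}`, together with the default-partition behavior noted in the partition table, can be sketched as a small helper (the workspace and service names below are illustrative, not from the source):

```python
def dicom_lake_prefix(workspace_name, dicom_service_name, partition_name=None):
    """Return the data lake path prefix where a DICOM service stores its data."""
    # Per the partition table: if no partition is specified, all DICOM data
    # is stored in the default partition, named Microsoft.Default.
    partition_name = partition_name or "Microsoft.Default"
    return f"AHDS/{workspace_name}/dicom/{dicom_service_name}/{partition_name}"

# Illustrative names only.
print(dicom_lake_prefix("contoso-workspace", "imaging-service"))
# → AHDS/contoso-workspace/dicom/imaging-service/Microsoft.Default
```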
