Commit 68db497

Merge pull request #262822 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents f0e0704 + dd3824a commit 68db497

File tree: 12 files changed (+77, −46 lines)

articles/ai-services/authentication.md

Lines changed: 22 additions & 22 deletions
@@ -216,11 +216,12 @@ Now that you have a custom subdomain associated with your resource, you're going
 New-AzADServicePrincipal -ApplicationId <APPLICATION_ID>
 ```
 
->[!NOTE]
+> [!NOTE]
 > If you register an application in the Azure portal, this step is completed for you.
 
 3. The last step is to [assign the "Cognitive Services User" role](/powershell/module/az.Resources/New-azRoleAssignment) to the service principal (scoped to the resource). By assigning a role, you're granting service principal access to this resource. You can grant the same service principal access to multiple resources in your subscription.
->[!NOTE]
+
+> [!NOTE]
 > The ObjectId of the service principal is used, not the ObjectId for the application.
 > The ACCOUNT_ID will be the Azure resource Id of the Azure AI services account you created. You can find Azure resource Id from "properties" of the resource in Azure portal.

@@ -239,32 +240,31 @@ In this sample, a password is used to authenticate the service principal. The to
 ```
 
 2. Get a token:
-> [!NOTE]
-> If you're using Azure Cloud Shell, the `SecureClientSecret` class isn't available.
-
-#### [PowerShell](#tab/powershell)
 ```powershell-interactive
-$authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList "https://login.windows.net/<TENANT_ID>"
-$secureSecretObject = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.SecureClientSecret" -ArgumentList $SecureStringPassword
-$clientCredential = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential" -ArgumentList $app.ApplicationId, $secureSecretObject
-$token=$authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", $clientCredential).Result
-$token
-```
+$tenantId = $context.Tenant.Id
+$clientId = $app.ApplicationId
+$clientSecret = "<YOUR_PASSWORD>"
+$resourceUrl = "https://cognitiveservices.azure.com/"
 
-#### [Azure Cloud Shell](#tab/azure-cloud-shell)
-```Azure Cloud Shell
-$authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList "https://login.windows.net/<TENANT_ID>"
-$clientCredential = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential" -ArgumentList $app.ApplicationId, <YOUR_PASSWORD>
-$token=$authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", $clientCredential).Result
-$token
-```
-
----
+$tokenEndpoint = "https://login.microsoftonline.com/$tenantId/oauth2/token"
+$body = @{
+    grant_type = "client_credentials"
+    client_id = $clientId
+    client_secret = $clientSecret
+    resource = $resourceUrl
+}
+
+$responseToken = Invoke-RestMethod -Uri $tokenEndpoint -Method Post -Body $body
+$accessToken = $responseToken.access_token
+```
 
+> [!NOTE]
+> Anytime you use passwords in a script, the most secure option is to use the PowerShell Secrets Management module and integrate with a solution such as Azure KeyVault.
+
 3. Call the Computer Vision API:
 ```powershell-interactive
 $url = $account.Endpoint+"vision/v1.0/models"
-$result = Invoke-RestMethod -Uri $url -Method Get -Headers @{"Authorization"=$token.CreateAuthorizationHeader()} -Verbose
+$result = Invoke-RestMethod -Uri $url -Method Get -Headers @{"Authorization"="Bearer $accessToken"} -Verbose
 $result | ConvertTo-Json
 ```
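The updated PowerShell above replaces the retired ADAL classes with a plain client-credentials POST to the Microsoft Entra (v1) token endpoint. As a minimal sketch of the same request shape (a hypothetical `build_token_request` helper written in Python so it can be checked standalone; the tenant, client, and secret values are placeholders), the endpoint URL and form body can be assembled like this:

```python
def build_token_request(tenant_id, client_id, client_secret,
                        resource="https://cognitiveservices.azure.com/"):
    """Return (endpoint, form_body) for the v1 client-credentials token flow.

    Mirrors the $tokenEndpoint/$body pair in the updated PowerShell sample.
    """
    endpoint = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,  # the v1 endpoint takes 'resource', not 'scope'
    }
    return endpoint, body

endpoint, body = build_token_request("<TENANT_ID>", "<APP_ID>", "<PASSWORD>")
print(endpoint)
```

Posting `body` as a form to `endpoint` (as `Invoke-RestMethod` does) returns a JSON payload whose `access_token` field goes into the `Authorization: Bearer` header. Note the v2 endpoint would use a `scope` parameter instead of `resource`.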

articles/ai-services/openai/includes/use-your-data-dotnet.md

Lines changed: 2 additions & 2 deletions
@@ -44,7 +44,7 @@ var chatCompletionsOptions = new ChatCompletionsOptions()
 new AzureCognitiveSearchChatExtensionConfiguration()
 {
     SearchEndpoint = new Uri(searchEndpoint),
-    SearchKey = new AzureKeyCredential(searchKey),
+    Key = searchKey,
     IndexName = searchIndex,
 },
 }

@@ -153,7 +153,7 @@ var chatCompletionsOptions = new ChatCompletionsOptions()
 new AzureCognitiveSearchChatExtensionConfiguration()
 {
     SearchEndpoint = new Uri(searchEndpoint),
-    SearchKey = new AzureKeyCredential(searchKey),
+    Key = searchKey,
     IndexName = searchIndex,
 },
 }

articles/aks/csi-secrets-store-identity-access.md

Lines changed: 3 additions & 3 deletions
@@ -179,7 +179,7 @@ In this security model, you can grant access to your cluster's resources to team
 1. Access your key vault using the [`az aks show`][az-aks-show] command and the user-assigned managed identity created by the add-on.
 
 ```azurecli-interactive
-az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
+az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.objectId -o tsv
 ```
 
 Alternatively, you can create a new managed identity and assign it to your virtual machine (VM) scale set or to each VM instance in your availability set using the following commands.

@@ -193,10 +193,10 @@ In this security model, you can grant access to your cluster's resources to team
 2. Create a role assignment that grants the identity permission access to the key vault secrets, access keys, and certificates using the [`az role assignment create`][az-role-assignment-create] command.
 
 ```azurecli-interactive
-export IDENTITY_CLIENT_ID="$(az identity show -g <resource-group> --name <identity-name> --query 'clientId' -o tsv)"
+export IDENTITY_OBJECT_ID="$(az identity show -g <resource-group> --name <identity-name> --query 'principalId' -o tsv)"
 export KEYVAULT_SCOPE=$(az keyvault show --name <key-vault-name> --query id -o tsv)
 
-az role assignment create --role Key Vault Administrator --assignee <identity-client-id> --scope $KEYVAULT_SCOPE
+az role assignment create --role "Key Vault Administrator" --assignee $IDENTITY_OBJECT_ID --scope $KEYVAULT_SCOPE
 ```
 
 3. Create a `SecretProviderClass` using the following YAML. Make sure to use your own values for `userAssignedIdentityID`, `keyvaultName`, `tenantId`, and the objects to retrieve from your key vault.

articles/aks/csi-secrets-store-nginx-tls.md

Lines changed: 6 additions & 6 deletions
All six changes in this file are whitespace-only fixes to YAML list-item indentation; the visible text of each changed line is identical before and after:

@@ -229,7 +229,7 @@ `- port: 80` under `ports:` in the ClusterIP service for app `aks-helloworld-one`
@@ -278,7 +278,7 @@ `- port: 80` under `ports:` in the ClusterIP service for app `aks-helloworld-two`
@@ -334,7 +334,7 @@ `- port: 80` under `ports:` in the ClusterIP service for app `aks-helloworld-one`
@@ -372,7 +372,7 @@ `- port: 80` under `ports:` in the ClusterIP service for app `aks-helloworld-two`
@@ -400,11 +400,11 @@ `- hosts:` (under `tls:`, listing `demo.azure.com` with `secretName: ingress-tls-csi`) and `- host: demo.azure.com` (under `rules:`, with `http:` paths including `/hello-world-one(/|$)(.*)`) in the ingress resource

articles/azure-monitor/essentials/azure-monitor-workspace-manage.md

Lines changed: 2 additions & 2 deletions
@@ -133,12 +133,12 @@ Create a link between the Azure Monitor workspace and the Grafana workspace by u
 If your cluster is already configured to send data to an Azure Monitor managed service for Prometheus, you must disable it first using the following command:
 
 ```azurecli
-az aks update --disable-azuremonitormetrics -g <cluster-resource-group> -n <cluster-name>
+az aks update --disable-azure-monitor-metrics -g <cluster-resource-group> -n <cluster-name>
 ```
 
 Then, either enable or re-enable using the following command:
 ```azurecli
-az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id
+az aks update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id
 <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
 ```

articles/iot-operations/connect-to-cloud/howto-configure-data-lake.md

Lines changed: 2 additions & 2 deletions
@@ -270,7 +270,7 @@ The specification field of a DataLakeConnectorTopicMap resource contains the fol
 - `mqttSourceTopic`: The name of the MQTT topic(s) to subscribe to. Supports [MQTT topic wildcard notation](https://chat.openai.com/share/c6f86407-af73-4c18-88e5-f6053b03bc02).
 - `qos`: The quality of service level for subscribing to the MQTT topic. It can be one of 0 or 1.
 - `table`: The table field specifies the configuration and properties of the Delta table in the Data Lake Storage account. It has the following subfields:
-  - `tableName`: The name of the Delta table to create or append to in the Data Lake Storage account. This field is also known as the container name when used with Azure Data Lake Storage Gen2. It can contain any English letter, upper or lower case, and underbar `_`, with length up to 256 characters. No dashes `-` or space characters are allowed.
+  - `tableName`: The name of the Delta table to create or append to in the Data Lake Storage account. This field is also known as the container name when used with Azure Data Lake Storage Gen2. It can contain any **lower case** English letter, and underbar `_`, with length up to 256 characters. No dashes `-` or space characters are allowed.
   - `schema`: The schema of the Delta table, which should match the format and fields of the message payload. It's an array of objects, each with the following subfields:
     - `name`: The name of the column in the Delta table.
     - `format`: The data type of the column in the Delta table. It can be one of `boolean`, `int8`, `int16`, `int32`, `int64`, `uInt8`, `uInt16`, `uInt32`, `uInt64`, `float16`, `float32`, `float64`, `date32`, `timestamp`, `binary`, or `utf8`. Unsigned types, like `uInt8`, aren't fully supported, and are treated as signed types if specified here.

@@ -295,7 +295,7 @@ spec:
   mqttSourceTopic: "orders"
   qos: 1
   table:
-    tableName: "ordersTable"
+    tableName: "orders"
     schema:
     - name: "orderId"
       format: int32
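The corrected rule above restricts `tableName` to lower-case English letters and underscores, up to 256 characters, with no dashes or spaces (which is why the example changes `ordersTable` to `orders`). A small validation sketch of that rule as stated (illustrative only; the service is authoritative, and digits are assumed disallowed since the text doesn't mention them):

```python
import re

# tableName per the corrected docs: lower-case English letters and
# underscores only, 1 to 256 characters; no upper case, dashes, or spaces.
_TABLE_NAME = re.compile(r"[a-z_]{1,256}")

def is_valid_table_name(name: str) -> bool:
    return _TABLE_NAME.fullmatch(name) is not None

print(is_valid_table_name("orders"))       # the fixed example
print(is_valid_table_name("ordersTable"))  # rejected: contains upper case
```

Validating names client-side like this catches the camel-case mistake the commit fixes before the connector fails against Data Lake Storage.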

articles/machine-learning/how-to-use-automated-ml-for-ml-models.md

Lines changed: 14 additions & 3 deletions
@@ -123,15 +123,26 @@ Otherwise, you see a list of your recent automated ML experiments, including th
 Additional configurations|Description
 ------|------
 Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric).
-Debug model via the Responsible AI dashboard | Generate a Responsible AI dashboard to do a holistic assessment and debugging of the recommended best model. This includes insights such as model explanations, fairness and performance explorer, data explorer, model error analysis. [Learn more about how you can generate a Responsible AI dashboard.](./how-to-responsible-ai-insights-ui.md). RAI Dashboard can only be run if 'Serverless' compute (preview) is specified in the experiment set-up step.
+Enable ensemble stacking | Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. [Learn more about ensemble models](concept-automated-ml.md#ensemble).
 Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
-Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you don't spend more time on the training job than necessary.
-Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job won't run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
+Explain best model| Automatically shows explainability on the best model created by Automated ML.
 
 1. (Optional) View featurization settings: if you choose to enable **Automatic featurization** in the **Additional configuration settings** form, default featurization techniques are applied. In the **View featurization settings**, you can change these defaults and customize accordingly. Learn how to [customize featurizations](#customize-featurization).
 
 ![Screenshot shows the Select task type dialog box with View featurization settings called out.](media/how-to-use-automated-ml-for-ml-models/view-featurization-settings.png)
 
+1. The **[Optional] Limits** form allows you to do the following.
+
+| Option | Description |
+|---|-----|
+|**Max trials**| Maximum number of trials, each with a different combination of algorithm and hyperparameters to try during the AutoML job. Must be an integer between 1 and 1000.
+|**Max concurrent trials**| Maximum number of trial jobs that can be executed in parallel. Must be an integer between 1 and 1000.
+|**Max nodes**| Maximum number of nodes this job can use from the selected compute target.
+|**Metric score threshold**| When this threshold value is reached for an iteration metric, the training job terminates. Keep in mind that meaningful models have a correlation greater than 0; otherwise they are as good as guessing the average. The metric threshold should be between the bounds [0, 10].
+|**Experiment timeout (minutes)**| Maximum time in minutes the entire experiment is allowed to run. Once this limit is reached, the system cancels the AutoML job, including all its trials (child jobs).
+|**Iteration timeout (minutes)**| Maximum time in minutes each trial job is allowed to run. Once this limit is reached, the system cancels the trial.
+|**Enable early termination**| Select to end the job if the score is not improving in the short term.
 
 1. The **[Optional] Validate and test** form allows you to do the following.

articles/sap/workloads/dbms-guide-maxdb.md

Lines changed: 3 additions & 0 deletions
@@ -342,6 +342,9 @@ When deploying SAP MaxDB into Azure, you must review your backup methodology. Ev
 
 Backing up and restoring a database in Azure works the same way as it does for on-premises systems, so you can use standard SAP MaxDB backup/restore tools, which are described in one of the SAP MaxDB documentation documents listed in SAP Note [767598].
 
+#### <a name="01885ad6-88cf-4d5a-bdb5-6d43a6eed53e"></a>Backup and Restore with Azure Backup
+You can also integrate MaxDB backup with **Azure Backup** using the third-party backup tool **MaxBack** (https://maxback.io). MaxBack allows you to back up and restore MaxDB on Windows with VSS integration, which is also used by Azure Backup. The advantage of using Azure Backup is that backup and restore are done at the storage level. MaxBack ensures that the database is in the right state for backup and restore, and automatically handles log volume backups.
+
 #### <a name="77cd2fbb-307e-4cbf-a65f-745553f72d2c"></a>Performance Considerations for Backup and Restore
 As in bare-metal deployments, backup and restore performance are dependent on how many volumes can be read in parallel and the throughput of those volumes. Therefore, one can assume:

articles/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables.md

Lines changed: 15 additions & 1 deletion
@@ -3,7 +3,7 @@ title: Design guidance for replicated tables
 description: Recommendations for designing replicated tables in Synapse SQL pool
 author: WilliamDAssafMSFT
 ms.author: wiassaf
-ms.date: 09/27/2022
+ms.date: 01/09/2024
 ms.service: synapse-analytics
 ms.subservice: sql-dw
 ms.topic: conceptual

@@ -163,6 +163,8 @@ For example, this load pattern loads data from four sources, but only invokes on
 
 To ensure consistent query execution times, consider forcing the build of the replicated tables after a batch load. Otherwise, the first query will still use data movement to complete the query.
 
+The 'Build Replicated Table Cache' operation can execute up to two operations simultaneously. For example, if you attempt to rebuild the cache for five tables, the system uses a staticrc20 resource class (which cannot be modified) to build two tables concurrently at a time. Therefore, it is recommended to avoid large replicated tables exceeding 2 GB, as they can slow down the cache rebuild across the nodes and increase the overall time.
+
 This query uses the [sys.pdw_replicated_table_cache_state](/sql/relational-databases/system-catalog-views/sys-pdw-replicated-table-cache-state-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) DMV to list the replicated tables that have been modified, but not rebuilt.

@@ -184,6 +186,18 @@ To trigger a rebuild, run the following statement on each table in the preceding
 SELECT TOP 1 * FROM [ReplicatedTable]
 ```
 
+To monitor the rebuild process, you can use [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?view=azure-sqldw-latest&preserve-view=true), where the `command` starts with 'BuildReplicatedTableCache'. For example:
+
+```sql
+-- Monitor Build Replicated Cache
+SELECT *
+FROM sys.dm_pdw_exec_requests
+WHERE command like 'BuildReplicatedTableCache%'
+```
+
+> [!TIP]
+> [Table size queries](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-overview#table-size-queries) can be used to verify which table(s) have a replicated distribution policy and which are larger than 2 GB.
+
 ## Next steps
 
 To create a replicated table, use one of these statements:
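The guidance added in this file triggers a cache rebuild with one `SELECT TOP 1 * FROM [ReplicatedTable]` statement per stale table. As an illustrative sketch only (the helper and table names are hypothetical, not from the article), generating those trigger statements from a list of pending table names could look like:

```python
def rebuild_statements(stale_tables):
    """Emit one 'SELECT TOP 1' cache-rebuild trigger per table name.

    Escapes ']' as ']]' per T-SQL bracketed-identifier rules so that
    unusual table names still produce a valid statement.
    """
    return [
        f"SELECT TOP 1 * FROM [{name.replace(']', ']]')}]"
        for name in stale_tables
    ]

# Table names here are placeholders; in practice they would come from the
# sys.pdw_replicated_table_cache_state query shown above.
for stmt in rebuild_statements(["DimCustomer", "DimDate"]):
    print(stmt)
```

Since the doc notes only two cache builds run concurrently (staticrc20), issuing the statements sequentially rather than all at once avoids queueing many rebuilds behind large tables.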

articles/virtual-machines/automatic-extension-upgrade.md

Lines changed: 1 addition & 1 deletion
The single change is whitespace-only, on the blank line before the Service Fabric entry:

@@ -73,7 +73,7 @@ Automatic Extension Upgrade supports the following extensions (and more are adde
 - [Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-overview.md)
 - [Log Analytics Agent for Linux](../azure-monitor/agents/log-analytics-agent.md)
 - [Azure Diagnostics extension for Linux](../azure-monitor/agents/diagnostics-extension-overview.md)
-
+
 - Service Fabric – [Linux](../service-fabric/service-fabric-tutorial-create-vnet-and-linux-cluster.md#service-fabric-extension)
 
 ## Enabling Automatic Extension Upgrade