
Commit aea4e06

Merge pull request #180007 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/MicrosoftDocs/azure-docs (branch master)
2 parents ee7e827 + 56b8987 commit aea4e06

9 files changed: +34 -32 lines changed


articles/active-directory/saas-apps/benq-iam-provisioning-tutorial.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ ms.author: thwimmer
 
 # Tutorial: Configure BenQ IAM for automatic user provisioning
 
-This tutorial describes the steps you need to perform in both BenQ IAM and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [BenQ IAM](https://service-portaltest.benq.com/login) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both BenQ IAM and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [BenQ IAM](https://service-portal.benq.com/login) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
 
 
 ## Supported capabilities

articles/azure-arc/data/create-data-controller-direct-prerequisites.md

Lines changed: 2 additions & 1 deletion
@@ -30,7 +30,8 @@ To connect your kubernetes cluster to Azure, use Azure CLI `az` with the following
 
 ### Install tools
 
-- Install or upgrade to the latest version of Azure CLI ([install](/sql/azdata/install/deploy-install-azdata))
+- Helm version 3.3+ ([install](https://helm.sh/docs/intro/install/))
+- Install or upgrade to the latest version of Azure CLI ([download](https://aka.ms/installazurecliwindows))
 
 ### Add extensions for Azure CLI
 

articles/azure-sql/database/recovery-using-backups.md

Lines changed: 3 additions & 0 deletions
@@ -207,6 +207,9 @@ You can also use Azure PowerShell or the REST API for recovery. The following table
 > [!IMPORTANT]
 > The PowerShell Azure Resource Manager module is still supported by SQL Database and SQL Managed Instance, but all future development is for the Az.Sql module. For these cmdlets, see [AzureRM.Sql](/powershell/module/AzureRM.Sql/). Arguments for the commands in the Az module and in Azure Resource Manager modules are to a great extent identical.
 
+> [!NOTE]
+> Restore points cover the period between the earliest restore point and the latest log backup point. Information on the latest restore point is currently unavailable in Azure PowerShell.
+
 #### SQL Database
 
 To restore a standalone or pooled database, see [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase).
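For context on that note, a minimal Python sketch of how restore-point information can be queried programmatically, assuming the azure-mgmt-sql and azure-identity packages; the resource names and printed attributes are placeholders and assumptions rather than values from the article:

```python
# Minimal sketch, assuming azure-mgmt-sql and azure-identity are installed.
# Resource group, server, and database names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The earliest restore date is exposed on the database resource itself.
database = client.databases.get("<resource-group>", "<server-name>", "<database-name>")
print("Earliest restore date:", database.earliest_restore_date)

# Restore points for the database can also be listed; which fields are populated
# depends on whether the points are continuous or discrete.
for point in client.restore_points.list_by_database(
    "<resource-group>", "<server-name>", "<database-name>"
):
    print(point.restore_point_type, point.restore_point_creation_date)
```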

articles/cosmos-db/free-tier.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ You can have up to one free tier Azure Cosmos DB account per an Azure subscription
 In shared throughput model, when you provision throughput on a database, the throughput is shared across all the containers in the database. When using the free tier, you can provision a shared database with up to 1000 RU/s for free. All containers in the database will share the throughput.
 
 Just like the regular account, in the free tier account, a shared throughput database can have a max of 25 containers.
-Any additional databases with shared throughput or containers with dedicated throughput beyond 1000 RU/s are billed at the regular pricing. In a free tier account, you can create a max of 5 shared throughput databases.
+Any additional databases with shared throughput or containers with dedicated throughput beyond 1000 RU/s are billed at the regular pricing.
 
 ## Free tier with Azure discount
 
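For context, a minimal sketch of the shared-throughput model described above, assuming the azure-cosmos Python SDK; the endpoint, key, and names are placeholders:

```python
# Minimal sketch: provision a shared-throughput database that stays within the
# free tier's 1000 RU/s, then add a container that shares that throughput.
# Endpoint, key, and names below are placeholders, not values from the article.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://<account-name>.documents.azure.com:443/", credential="<account-key>"
)

# 1000 RU/s provisioned at the database level is covered by the free tier.
database = client.create_database_if_not_exists(id="shared-db", offer_throughput=1000)

# Containers created without their own throughput share the database's 1000 RU/s
# (a shared-throughput database is limited to 25 containers).
container = database.create_container_if_not_exists(
    id="items", partition_key=PartitionKey(path="/id")
)
```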

articles/cosmos-db/sql/index-metrics.md

Lines changed: 1 addition & 1 deletion
@@ -131,7 +131,7 @@ Index Utilization Information
 Index Impact Score: High
 ---
 ```
-These index metrics show that the query used the indexed paths `/name/?`, `/age/?`, `/town/?`, and `/timestamp/?`. The index metrics also indicate that there's a high likelihood that adding the composite indexes (`/name` ASC, `(/town ASC, /age ASC)` and `(/name ASC, /town ASC, /timestamp ASC)` will further improve performance.
+These index metrics show that the query used the indexed paths `/name/?`, `/age/?`, `/town/?`, and `/timestamp/?`. The index metrics also indicate that there's a high likelihood that adding the composite indexes `(/name ASC, /town ASC, /age ASC)` and `(/name ASC, /town ASC, /timestamp ASC)` will further improve performance.
 
 ## Next steps
 
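For context, a minimal sketch of how the two composite indexes named in the corrected line could be declared in a container's indexing policy, assuming the azure-cosmos Python SDK; account, database, and container names are placeholders:

```python
# Minimal sketch: declare the composite indexes (/name, /town, /age) and
# (/name, /town, /timestamp), all ascending, in a container's indexing policy.
from azure.cosmos import CosmosClient, PartitionKey

indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "compositeIndexes": [
        [
            {"path": "/name", "order": "ascending"},
            {"path": "/town", "order": "ascending"},
            {"path": "/age", "order": "ascending"},
        ],
        [
            {"path": "/name", "order": "ascending"},
            {"path": "/town", "order": "ascending"},
            {"path": "/timestamp", "order": "ascending"},
        ],
    ],
}

client = CosmosClient(
    "https://<account-name>.documents.azure.com:443/", credential="<account-key>"
)
database = client.get_database_client("<database-name>")
container = database.create_container_if_not_exists(
    id="<container-name>",
    partition_key=PartitionKey(path="/id"),
    indexing_policy=indexing_policy,
)
```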

articles/data-factory/author-global-parameters.md

Lines changed: 2 additions & 2 deletions
@@ -109,5 +109,5 @@ Set-AzDataFactoryV2 -InputObject $dataFactory -Force
 
 ## Next steps
 
-* Learn about Azure Data Factory's [continuous integration and deployment process](continuous-integration-delivery.md)
-* Learn how to use the [control flow expression language](control-flow-expression-language-functions.md)
+* Learn about Azure Data Factory's [continuous integration and deployment process](continuous-integration-delivery-improvements.md)
+* Learn how to use the [control flow expression language](control-flow-expression-language-functions.md)

articles/postgresql/howto-connect-with-managed-identity.md

Lines changed: 2 additions & 4 deletions
@@ -39,7 +39,7 @@ Retrieve the application ID for the system-assigned managed identity, which you'
 
 ```azurecli
 # Get the client ID (application ID) of the system-assigned managed identity
-az ad sp list --display-name obs-locdev-wus2 --query [*].appId --out tsv
+az ad sp list --display-name vm-name --query [*].appId --out tsv
 ```
 
 ## Creating a PostgreSQL user for your Managed Identity
@@ -98,12 +98,10 @@ namespace Driver
 {
 class Script
 {
-// Obtain connection string information from the portal
-//
+// Obtain connection string information from the portal for use in the following variables
 private static string Host = "HOST";
 private static string User = "USER";
 private static string Database = "DATABASE";
-//private static string ClientId = "CLIENT_ID";
 
 static async Task Main(string[] args)
 {
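For context, a minimal Python sketch of the same pattern the C# snippet implements, i.e. requesting an access token for Azure Database for PostgreSQL from the VM's IMDS endpoint and using it as the password; it assumes psycopg2 and uses placeholder host, user, and database values:

```python
# Minimal sketch, assuming psycopg2; HOST, USER, and DATABASE are placeholders.
import json
import urllib.parse
import urllib.request

import psycopg2

# Request a token for Azure Database for PostgreSQL from the VM's IMDS endpoint.
resource = urllib.parse.quote("https://ossrdbms-aad.database.windows.net", safe="")
url = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=" + resource
)
request = urllib.request.Request(url, headers={"Metadata": "true"})
with urllib.request.urlopen(request) as response:
    access_token = json.load(response)["access_token"]

# Use the token as the password for the PostgreSQL role created for the managed identity.
connection = psycopg2.connect(
    host="HOST",
    user="USER",
    dbname="DATABASE",
    password=access_token,
    sslmode="require",
)
with connection, connection.cursor() as cursor:
    cursor.execute("SELECT version();")
    print(cursor.fetchone())
```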

articles/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool.md

Lines changed: 21 additions & 21 deletions
@@ -51,7 +51,7 @@ Make sure all prerequisites are in place before following these steps for using
 
 1. **Import libraries:** Import the following libraries to use PREDICT in spark session.
 
-```PYSPARK
+```python
 #Import libraries
 from pyspark.sql.functions import col, pandas_udf,udf,lit
 from azureml.core import Workspace
@@ -65,7 +65,7 @@ Make sure all prerequisites are in place before following these steps for using
 > [!NOTE]
 > Before running this script, update it with the URI for ADLS Gen2 data file along with model output return data type and ADLS/AML URI for the model file.
 
-```PYSPARK
+```python
 #Set input data path
 DATA_FILE = "abfss://<filesystemname>@<account name>.dfs.core.windows.net/<file path>"
 
@@ -90,7 +90,7 @@ Make sure all prerequisites are in place before following these steps for using
 
 - **Through service principal:** You can use service principal client ID and secret directly to authenticate to AML workspace. Service principal must have "Contributor" access to the AML workspace.
 
-```PYSPARK
+```python
 #AML workspace authentication using service principal
 AZURE_TENANT_ID = "<tenant_id>"
 AZURE_CLIENT_ID = "<client_id>"
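For context, a minimal sketch of how the service-principal values above might be used to obtain an AML `Workspace` handle with `azureml.core`; the workspace, subscription, and resource group names are placeholders, and the exact wiring is an assumption rather than the tutorial's own code:

```python
# Minimal sketch, assuming azureml-core is available on the Spark pool.
# All IDs and names below are placeholders.
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

AZURE_TENANT_ID = "<tenant_id>"
AZURE_CLIENT_ID = "<client_id>"
AZURE_CLIENT_SECRET = "<client_secret>"

auth = ServicePrincipalAuthentication(
    tenant_id=AZURE_TENANT_ID,
    service_principal_id=AZURE_CLIENT_ID,
    service_principal_password=AZURE_CLIENT_SECRET,
)

# The service principal needs "Contributor" access to this AML workspace.
ws = Workspace.get(
    name="<aml_workspace_name>",
    subscription_id="<subscription_id>",
    resource_group="<resource_group>",
    auth=auth,
)
```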
@@ -116,15 +116,15 @@ Make sure all prerequisites are in place before following these steps for using
 
 - **Through linked service:** You can use linked service to authenticate to AML workspace. Linked service can use "service principal" or Synapse workspace's "Managed Service Identity (MSI)" for authentication. "Service principal" or "Managed Service Identity (MSI)" must have "Contributor" access to the AML workspace.
 
-```PYSPARK
+```python
 #AML workspace authentication using linked service
 from notebookutils.mssparkutils import azureML
 ws = azureML.getWorkspace("<linked_service_name>") # "<linked_service_name>" is the linked service name, not AML workspace name. Also, linked service supports MSI and service principal both
 ```
 
 4. **Enable PREDICT in spark session:** Set the spark configuration `spark.synapse.ml.predict.enabled` to `true` to enable the library.
 
-```PYSPARK
+```python
 #Enable SynapseML predict
 spark.conf.set("spark.synapse.ml.predict.enabled","true")
 ```
@@ -134,7 +134,7 @@ Make sure all prerequisites are in place before following these steps for using
 > [!NOTE]
 > Update model alias and model uri in this script before running it.
 
-```PYSPARK
+```python
 #Bind model within Spark session
 model = pcontext.bind_model(
 return_types=RETURN_TYPES,
@@ -150,7 +150,7 @@ Make sure all prerequisites are in place before following these steps for using
 > [!NOTE]
 > Update view name in this script before running it.
 
-```PYSPARK
+```python
 #Read data from ADLS
 df = spark.read \
 .format("csv") \
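For context, a minimal sketch of what a complete CSV read plus view registration could look like; the options and the view name are assumptions, not the tutorial's exact code:

```python
# Minimal sketch only; options and names below are assumptions. Reads the CSV
# pointed to by DATA_FILE and exposes it as a temporary view that PREDICT can query.
df = (
    spark.read
    .format("csv")
    .option("header", "true")
    .load(DATA_FILE)
)
df.createOrReplaceTempView("<view_name>")
```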
@@ -165,7 +165,7 @@ Make sure all prerequisites are in place before following these steps for using
 > [!NOTE]
 > Update the model alias name, view name, and comma separated model input column name in this script before running it. Comma separated model input columns are the same as those used while training the model.
 
-```PYSPARK
+```python
 #Call PREDICT using Spark SQL API
 
 predictions = spark.sql(
@@ -177,15 +177,15 @@ Make sure all prerequisites are in place before following these steps for using
 ).show()
 ```
 
-```PYSPARK
+```python
 #Call PREDICT using user defined function (UDF)
 
 df = df[<comma_separated_model_input_column_name>] # for ex. df["empid","empname"]
 
 df.withColumn("PREDICT",model.udf(lit("<random_alias_name>"),*df.columns)).show()
 ```
 
-```PYSPARK
+```python
 #Call PREDICT using Transformer API
 
 columns = [<comma_separated_model_input_column_name>] # for ex. df["empid","empname"]
@@ -199,7 +199,7 @@ Make sure all prerequisites are in place before following these steps for using
 
 1. Import libraries and read the training dataset from ADLS.
 
-```PYSPARK
+```python
 # Import libraries and read training dataset from ADLS
 
 import fsspec
@@ -217,7 +217,7 @@ Make sure all prerequisites are in place before following these steps for using
 
 1. Train model and generate mlflow artifacts.
 
-```PYSPARK
+```python
 # Train model and generate mlflow artifacts
 
 import os
@@ -309,7 +309,7 @@ Make sure all prerequisites are in place before following these steps for using
 
 1. Store model MLFLOW artifacts in ADLS or register in AML.
 
-```PYSPARK
+```python
 # Store model MLFLOW artifacts in ADLS
 
 STORAGE_PATH = 'abfs[s]://<container>/<path-to-store-folder>'
@@ -328,7 +328,7 @@ Make sure all prerequisites are in place before following these steps for using
 recursive=True, overwrite=True)
 ```
 
-```PYSPARK
+```python
 # Register model MLFLOW artifacts in AML
 
 from azureml.core import Workspace, Model
@@ -364,7 +364,7 @@ Make sure all prerequisites are in place before following these steps for using
 
 1. Set required parameters using variables.
 
-```PYSPARK
+```python
 # If using ADLS uploaded model
 
 import pandas as pd
@@ -379,7 +379,7 @@ Make sure all prerequisites are in place before following these steps for using
 RUNTIME = "mlflow"
 ```
 
-```PYSPARK
+```python
 # If using AML registered model
 
 from pyspark.sql.functions import col, pandas_udf,udf,lit
@@ -396,13 +396,13 @@ Make sure all prerequisites are in place before following these steps for using
 
 1. Enable SynapseML PREDICT functionality in spark session.
 
-```PYSPARK
+```python
 spark.conf.set("spark.synapse.ml.predict.enabled","true")
 ```
 
 1. Bind model in spark session.
 
-```PYSPARK
+```python
 # If using ADLS uploaded model
 
 model = pcontext.bind_model(
@@ -413,7 +413,7 @@ Make sure all prerequisites are in place before following these steps for using
 ).register()
 ```
 
-```PYSPARK
+```python
 # If using AML registered model
 
 model = pcontext.bind_model(
@@ -427,7 +427,7 @@ Make sure all prerequisites are in place before following these steps for using
 
 1. Load test data from ADLS.
 
-```PYSPARK
+```python
 # Load data from ADLS
 
 df = spark.read \
@@ -442,7 +442,7 @@ Make sure all prerequisites are in place before following these steps for using
 
 1. Call PREDICT to generate the score.
 
-```PYSPARK
+```python
 # Call PREDICT
 
 predictions = spark.sql(

articles/virtual-desktop/app-attach-faq.yml

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@ sections:
 - Antivirus programs
 
 - question: |
-How many MISX applications can I add to each session host?
+How many MSIX applications can I add to each session host?
 answer: |
 Each session host has different limits based on their CPU, memory, and OS. Going over these limits can affect application performance and overall user experience. However, MSIX app attach itself has no limit on how many applications it can use.
 
