articles/active-directory/saas-apps/benq-iam-provisioning-tutorial.md (1 addition & 1 deletion)
@@ -16,7 +16,7 @@ ms.author: thwimmer

 # Tutorial: Configure BenQ IAM for automatic user provisioning

-This tutorial describes the steps you need to perform in both BenQ IAM and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [BenQ IAM](https://service-portaltest.benq.com/login) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both BenQ IAM and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [BenQ IAM](https://service-portal.benq.com/login) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).

articles/azure-sql/database/recovery-using-backups.md (3 additions & 0 deletions)
@@ -207,6 +207,9 @@ You can also use Azure PowerShell or the REST API for recovery. The following ta
 > [!IMPORTANT]
 > The PowerShell Azure Resource Manager module is still supported by SQL Database and SQL Managed Instance, but all future development is for the Az.Sql module. For these cmdlets, see [AzureRM.Sql](/powershell/module/AzureRM.Sql/). Arguments for the commands in the Az module and in Azure Resource Manager modules are to a great extent identical.

+> [!NOTE]
+> Restore points represent a period between the earliest restore point and the latest log backup point. Information on the latest restore point is currently unavailable in Azure PowerShell.
+
 #### SQL Database

 To restore a standalone or pooled database, see [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase).
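The point-in-time restore that [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase) performs can also be driven through the REST API mentioned above; a hedged Python sketch using the `azure-mgmt-sql` management SDK is below. The subscription ID, resource names, and restore timestamp are placeholders, and the exact parameter shape should be checked against the installed SDK version.

```python
# Illustrative sketch only: point-in-time restore of an Azure SQL database via
# the azure-mgmt-sql SDK (mirrors the REST API the article references).
# Subscription ID, resource names, and the restore timestamp are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

credential = DefaultAzureCredential()
client = SqlManagementClient(credential, "<subscription-id>")

source_db = client.databases.get("<resource-group>", "<server-name>", "<source-db>")

# create_mode="PointInTimeRestore" restores from automated backups; the
# restore_point_in_time must fall inside the database's retention window.
poller = client.databases.begin_create_or_update(
    "<resource-group>",
    "<server-name>",
    "<restored-db-name>",
    {
        "location": source_db.location,
        "create_mode": "PointInTimeRestore",
        "source_database_id": source_db.id,
        "restore_point_in_time": "2021-10-01T12:00:00Z",
    },
)
restored_db = poller.result()
print(restored_db.status)
```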

articles/cosmos-db/free-tier.md (1 addition & 1 deletion)
@@ -25,7 +25,7 @@ You can have up to one free tier Azure Cosmos DB account per an Azure subscripti
 In shared throughput model, when you provision throughput on a database, the throughput is shared across all the containers in the database. When using the free tier, you can provision a shared database with up to 1000 RU/s for free. All containers in the database will share the throughput.

 Just like the regular account, in the free tier account, a shared throughput database can have a max of 25 containers.
-Any additional databases with shared throughput or containers with dedicated throughput beyond 1000 RU/s are billed at the regular pricing. In a free tier account, you can create a max of 5 shared throughput databases.
+Any additional databases with shared throughput or containers with dedicated throughput beyond 1000 RU/s are billed at the regular pricing.
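To make the shared-throughput model described above concrete, here is a minimal sketch using the `azure-cosmos` Python SDK; the account endpoint, key, and database/container names are placeholders.

```python
# Illustrative sketch only: provision a shared-throughput database at the
# 1000 RU/s free-tier ceiling; containers created inside it share that
# throughput instead of getting their own dedicated RU/s.
# Endpoint, key, and resource names are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-endpoint>", credential="<account-key>")

# Database-level throughput is shared across all containers in the database.
database = client.create_database_if_not_exists("SharedDb", offer_throughput=1000)

# Containers created without their own throughput draw from the shared 1000 RU/s.
for name in ["orders", "customers"]:
    database.create_container_if_not_exists(name, partition_key=PartitionKey(path="/id"))
```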

articles/cosmos-db/sql/index-metrics.md (1 addition & 1 deletion)
@@ -131,7 +131,7 @@ Index Utilization Information
 Index Impact Score: High
 ---
 ```
-These index metrics show that the query used the indexed paths `/name/?`, `/age/?`, `/town/?`, and `/timestamp/?`. The index metrics also indicate that there's a high likelihood that adding the composite indexes (`/name` ASC, `(/town ASC, /age ASC)` and `(/name ASC, /town ASC, /timestamp ASC)` will further improve performance.
+These index metrics show that the query used the indexed paths `/name/?`, `/age/?`, `/town/?`, and `/timestamp/?`. The index metrics also indicate that there's a high likelihood that adding the composite indexes `(/name ASC, /town ASC, /age ASC)` and `(/name ASC, /town ASC, /timestamp ASC)` will further improve performance.
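For readers who want to apply that recommendation, a hedged sketch of defining the two suggested composite indexes in a container's indexing policy follows, here via the `azure-cosmos` Python SDK; the account endpoint, key, and database/container names are placeholders.

```python
# Illustrative sketch: declare the two composite indexes the metrics suggest,
# (/name, /town, /age) and (/name, /town, /timestamp), in an indexing policy.
# Endpoint, key, and database/container names are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "compositeIndexes": [
        [
            {"path": "/name", "order": "ascending"},
            {"path": "/town", "order": "ascending"},
            {"path": "/age", "order": "ascending"},
        ],
        [
            {"path": "/name", "order": "ascending"},
            {"path": "/town", "order": "ascending"},
            {"path": "/timestamp", "order": "ascending"},
        ],
    ],
}

client = CosmosClient("<account-endpoint>", credential="<account-key>")
database = client.get_database_client("<database>")
database.create_container_if_not_exists(
    "<container>",
    partition_key=PartitionKey(path="/id"),
    indexing_policy=indexing_policy,
)
```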

articles/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool.md (21 additions & 21 deletions)
@@ -51,7 +51,7 @@ Make sure all prerequisites are in place before following these steps for using

 1. **Import libraries:** Import the following libraries to use PREDICT in spark session.

-```PYSPARK
+```python
 #Import libraries
 from pyspark.sql.functions import col, pandas_udf,udf,lit
 from azureml.core import Workspace
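The import cell is truncated in this hunk; a guess at how it might continue, inferred from the `pcontext.bind_model` and service principal authentication calls later in the file, is sketched below. The `azure.synapse.ml.predict` module path is an assumption, not something this diff confirms.

```python
# Hypothetical continuation of the truncated import cell; the PREDICT module
# path below is an assumption inferred from the pcontext.* calls later on.
from pyspark.sql.functions import col, pandas_udf, udf, lit
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication
import azure.synapse.ml.predict as pcontext
```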
@@ -65,7 +65,7 @@ Make sure all prerequisites are in place before following these steps for using
 > [!NOTE]
 > Before running this script, update it with the URI for ADLS Gen2 data file along with model output return data type and ADLS/AML URI for the model file.
@@ -90,7 +90,7 @@ Make sure all prerequisites are in place before following these steps for using

 - **Through service principal:** You can use service principal client ID and secret directly to authenticate to AML workspace. Service principal must have "Contributor" access to the AML workspace.

-```PYSPARK
+```python
 #AML workspace authentication using service principal
 AZURE_TENANT_ID="<tenant_id>"
 AZURE_CLIENT_ID="<client_id>"
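The service principal cell is also cut off here; a hedged sketch of how the authentication is typically completed with `azureml.core` follows. Everything beyond the two variables visible in the diff (the client secret and the workspace identifiers) is a placeholder assumption.

```python
# Hedged completion of the service-principal authentication cell; the workspace
# identifiers below are placeholders and everything beyond the variables shown
# in the diff is an assumption.
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

AZURE_TENANT_ID = "<tenant_id>"
AZURE_CLIENT_ID = "<client_id>"
AZURE_CLIENT_SECRET = "<client_secret>"

svc_pr = ServicePrincipalAuthentication(
    tenant_id=AZURE_TENANT_ID,
    service_principal_id=AZURE_CLIENT_ID,
    service_principal_password=AZURE_CLIENT_SECRET,
)

# The service principal needs Contributor access on the AML workspace (see above).
ws = Workspace(
    subscription_id="<subscription_id>",
    resource_group="<resource_group>",
    workspace_name="<aml_workspace_name>",
    auth=svc_pr,
)
```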
@@ -116,15 +116,15 @@ Make sure all prerequisites are in place before following these steps for using

 - **Through linked service:** You can use linked service to authenticate to AML workspace. Linked service can use "service principal" or Synapse workspace's "Managed Service Identity (MSI)" for authentication. "Service principal" or "Managed Service Identity (MSI)" must have "Contributor" access to the AML workspace.

-```PYSPARK
+```python
 #AML workspace authentication using linked service
 from notebookutils.mssparkutils import azureML
 ws = azureML.getWorkspace("<linked_service_name>") # "<linked_service_name>" is the linked service name, not AML workspace name. Also, linked service supports MSI and service principal both
 ```

 4. **Enable PREDICT in spark session:** Set the spark configuration `spark.synapse.ml.predict.enabled` to `true` to enable the library.
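Step 4 names only the configuration key; as a minimal sketch, enabling it from a notebook cell would look something like this.

```python
# Minimal sketch: enable the PREDICT integration for the current Spark session.
spark.conf.set("spark.synapse.ml.predict.enabled", "true")

# Optional sanity check that the flag is set.
print(spark.conf.get("spark.synapse.ml.predict.enabled"))
```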
@@ -134,7 +134,7 @@ Make sure all prerequisites are in place before following these steps for using
 > [!NOTE]
 > Update model alias and model uri in this script before running it.

-```PYSPARK
+```python
 #Bind model within Spark session
 model = pcontext.bind_model(
     return_types=RETURN_TYPES,
@@ -150,7 +150,7 @@ Make sure all prerequisites are in place before following these steps for using
 > [!NOTE]
 > Update view name in this script before running it.

-```PYSPARK
+```python
 #Read data from ADLS
 df = spark.read \
     .format("csv") \
@@ -165,7 +165,7 @@ Make sure all prerequisites are in place before following these steps for using
 > [!NOTE]
 > Update the model alias name, view name, and comma separated model input column name in this script before running it. Comma separated model input columns are the same as those used while training the model.

-```PYSPARK
+```python
 #Call PREDICT using Spark SQL API

 predictions = spark.sql(
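The Spark SQL cell is cut off before the query text, so the exact SQL signature of PREDICT isn't visible in this diff; the sketch below is only a guess at its shape (model alias followed by the input columns against the registered view), with all names as placeholders.

```python
# Guess at the shape of the truncated Spark SQL cell; the PREDICT signature is
# an assumption, not confirmed by this diff. Alias, columns, and view name are
# placeholders.
model_alias = "<model_alias>"
input_columns = "<comma_separated_model_input_column_name>"
view_name = "<view_name>"

predictions = spark.sql(
    f"SELECT PREDICT('{model_alias}', {input_columns}) AS prediction FROM {view_name}"
)
predictions.show()
```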
@@ -177,15 +177,15 @@ Make sure all prerequisites are in place before following these steps for using
 ).show()
 ```

-```PYSPARK
+```python
 #Call PREDICT using user defined function (UDF)

 df = df[<comma_separated_model_input_column_name>] # for ex. df["empid","empname"]

articles/virtual-desktop/app-attach-faq.yml (1 addition & 1 deletion)
@@ -106,7 +106,7 @@ sections:
       - Antivirus programs

   - question: |
-      How many MISX applications can I add to each session host?
+      How many MSIX applications can I add to each session host?
    answer: |
      Each session host has different limits based on their CPU, memory, and OS. Going over these limits can affect application performance and overall user experience. However, MSIX app attach itself has no limit on how many applications it can use.