**`articles/active-directory/develop/msal-client-application-configuration.md`** (+1 −1)

```diff
@@ -100,7 +100,7 @@ Currently, the only way to get an app to sign in users with only personal Micros
 ## Client ID

-The client ID is the unique **Application (client) ID** assigned to your app by Azure AD when the app was registered.
+The client ID is the unique **Application (client) ID** assigned to your app by Azure AD when the app was registered. You can find the **Application (client) ID** in the Azure portal under **Azure AD** > **Enterprise applications** > **Application ID**.
```
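Alongside the client ID, a client application is normally configured with an authority URL derived from the tenant. As a minimal sketch (the `login.microsoftonline.com` authority host is the standard Azure AD endpoint; the identifiers passed in are placeholders, and this is a plain dict rather than any particular MSAL API):

```python
def client_config(client_id: str, tenant_id: str) -> dict:
    """Assemble a minimal client configuration: the Application (client) ID
    from the app registration plus the tenant-derived authority URL."""
    return {
        "client_id": client_id,
        "authority": f"https://login.microsoftonline.com/{tenant_id}",
    }

cfg = client_config("00000000-0000-0000-0000-000000000000", "contoso.onmicrosoft.com")
```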
**`articles/active-directory/saas-apps/snowflake-provisioning-tutorial.md`** (+40 −30)
```diff
@@ -37,7 +37,7 @@ The scenario outlined in this tutorial assumes that you already have the following prerequisites:
 * [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
 * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator)
@@ … @@
-3. Create the custom role AAD_PROVISIONER. All users and roles in Snowflake created by Azure AD will be owned by the scoped-down AAD_PROVISIONER role.
+1. Create the custom role AAD_PROVISIONER. All users and roles in Snowflake created by Azure AD will be owned by the scoped-down AAD_PROVISIONER role.

-4. Let the ACCOUNTADMIN role create the security integration using the AAD_PROVISIONER custom role.
+1. Let the ACCOUNTADMIN role create the security integration using the AAD_PROVISIONER custom role.

-5. Create and copy the authorization token to the clipboard and store it securely for later use. Use this token for each SCIM REST API request and place it in the request header. The access token expires after six months, and a new access token can be generated with this statement.
+1. Create and copy the authorization token to the clipboard and store it securely for later use. Use this token for each SCIM REST API request and place it in the request header. The access token expires after six months, and a new access token can be generated with this statement.
```
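The token step above says the authorization token goes in the header of every SCIM REST API request. A hedged sketch of how that header and request might be assembled (the `Bearer` scheme follows standard SCIM/OAuth practice; `SCIM_TOKEN` is a placeholder for the token generated in Snowflake, and the request is built but not sent):

```python
import urllib.request

def build_scim_request(base_url: str, path: str, token: str) -> urllib.request.Request:
    """Prepare (but do not send) a SCIM GET request with the
    authorization token placed in the request header."""
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/{path.lstrip('/')}",
        headers={"Authorization": f"Bearer {token}"},  # token from the step above
        method="GET",
    )

req = build_scim_request(
    "https://acme.east-us-2.azure.snowflakecomputing.com/scim/v2",
    "Users",
    "SCIM_TOKEN",  # placeholder, not a real token
)
```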
```diff
@@ -103,35 +103,37 @@ To configure automatic user provisioning for Snowflake in Azure AD:
 1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications**.

-2. In the list of applications, select **Snowflake**.
+1. In the list of applications, select **Snowflake**.

-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.

-4. Set **Provisioning Mode** to **Automatic**.
+1. Set **Provisioning Mode** to **Automatic**.

-5. In the **Admin Credentials** section, enter the SCIM 2.0 base URL and authentication token that you retrieved earlier in the **Tenant URL** and **Secret Token** boxes, respectively.
+1. In the **Admin Credentials** section, enter the SCIM 2.0 base URL and authentication token that you retrieved earlier in the **Tenant URL** and **Secret Token** boxes, respectively.
+
+   >[!NOTE]
+   >The Snowflake SCIM endpoint consists of the Snowflake account URL appended with `/scim/v2/`. For example, if your Snowflake account name is `acme` and your Snowflake account is in the `east-us-2` Azure region, the **Tenant URL** value is `https://acme.east-us-2.azure.snowflakecomputing.com/scim/v2`.

    Select **Test Connection** to ensure that Azure AD can connect to Snowflake. If the connection fails, ensure that your Snowflake account has admin permissions and try again.

-6. In the **Notification Email** box, enter the email address of a person or group who should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
+1. In the **Notification Email** box, enter the email address of a person or group who should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.

-7. Select **Save**.
+1. Select **Save**.

-8. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Snowflake**.
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Snowflake**.

-9. Review the user attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Snowflake for update operations. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Snowflake for update operations. Select the **Save** button to commit any changes.

    |Attribute|Type|
    |---|---|
```
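The note's endpoint rule (account URL with `/scim/v2/` appended) can be expressed as a small sketch; the account name and region are the illustrative values from the note, and the hostname pattern applies to accounts in Azure regions:

```python
def snowflake_scim_url(account: str, region: str) -> str:
    """Build the Tenant URL for the Admin Credentials section:
    the Snowflake account URL with /scim/v2 appended."""
    return f"https://{account}.{region}.azure.snowflakecomputing.com/scim/v2"

tenant_url = snowflake_scim_url("acme", "east-us-2")
# -> "https://acme.east-us-2.azure.snowflakecomputing.com/scim/v2"
```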
```diff
@@ -141,33 +143,41 @@ To configure automatic user provisioning for Snowflake in Azure AD:
+   >[!NOTE]
+   >Snowflake supports the following custom extension user attributes during SCIM provisioning:
+   >* DEFAULT_ROLE
+   >* DEFAULT_WAREHOUSE
+   >* DEFAULT_SECONDARY_ROLES
+   >* SNOWFLAKE NAME AND LOGIN_NAME FIELDS TO BE DIFFERENT
+
+   > How to set up Snowflake custom extension attributes in Azure AD SCIM user provisioning is explained [here](https://community.snowflake.com/s/article/HowTo-How-to-Set-up-Snowflake-Custom-Attributes-in-Azure-AD-SCIM-for-Default-Roles-and-Default-Warehouses).

-10. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to Snowflake**.
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to Snowflake**.

-11. Review the group attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Snowflake for update operations. Select the **Save** button to commit any changes.
+1. Review the group attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Snowflake for update operations. Select the **Save** button to commit any changes.

    |Attribute|Type|
    |---|---|
    |displayName|String|
    |members|Reference|

-12. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).

-13. To enable the Azure AD provisioning service for Snowflake, change **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Snowflake, change **Provisioning Status** to **On** in the **Settings** section.

-14. Define the users and groups that you want to provision to Snowflake by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and groups that you want to provision to Snowflake by choosing the desired values in **Scope** in the **Settings** section.

    If this option is not available, configure the required fields under **Admin Credentials**, select **Save**, and refresh the page.

-15. When you're ready to provision, select **Save**.
+1. When you're ready to provision, select **Save**.

 This operation starts the initial synchronization of all users and groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs. Subsequent syncs occur about every 40 minutes, as long as the Azure AD provisioning service is running.
```
**`articles/aks/operator-best-practices-run-at-scale.md`** (+5 −2)
```diff
@@ -36,14 +36,16 @@ To increase the node limit beyond 1000, you must have the following pre-requisites:
 > [!NOTE]
 > You can't use NPM with clusters greater than 500 nodes.
-
 ## Node pool scaling considerations and best practices

-* For system node pools, use the *Standard_D16ds_v5* SKU or equivalent core/memory VM SKUs to provide sufficient compute resources for *kube-system* pods.
+* For system node pools, use the *Standard_D16ds_v5* SKU or equivalent core/memory VM SKUs with ephemeral OS disks to provide sufficient compute resources for *kube-system* pods.
 * Create at least five user node pools to scale up to 5,000 nodes, since there's a limit of 1,000 nodes per node pool.
 * Use the cluster autoscaler wherever possible when running at-scale AKS clusters to ensure dynamic scaling of node pools based on the demand for compute resources.
 * When scaling beyond 1,000 nodes without the cluster autoscaler, it's recommended to scale in batches of at most 500 to 700 nodes at a time. These scaling operations should also have a 2- to 5-minute sleep time between consecutive scale-ups to prevent Azure API throttling.

+> [!NOTE]
+> You can't use the [Stop and Start feature][Stop and Start feature] on clusters enabled with the greater-than-1,000-node limit.
```
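The batched scale-up guidance above (at most 500–700 nodes per batch, with a 2–5 minute pause between scale-ups) can be sketched as follows. This is an illustration only: the function just computes the batch targets and sleeps between them; the actual resize call to AKS (az CLI or SDK) is out of scope and would go where the comment indicates.

```python
import time

def batched_scale_up(current: int, target: int, batch: int = 500, pause_s: int = 180):
    """Step a node pool toward `target` in batches of at most `batch` nodes,
    sleeping between consecutive scale-ups to avoid Azure API throttling."""
    reached = []
    while current < target:
        current = min(current + batch, target)
        # ... issue the real node-pool resize to `current` here ...
        reached.append(current)
        if current < target:
            time.sleep(pause_s)  # 2-5 minutes recommended between scale-ups
    return reached
```

For example, scaling from 1,000 to 2,200 nodes in batches of 500 would resize through 1,500, 2,000, and 2,200.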
```diff
 ## Cluster upgrade best practices

 * AKS clusters have a hard limit of 5,000 nodes. This limit prevents clusters running at the limit from upgrading, since there's no more capacity to do a rolling update with the max surge property. We recommend scaling the cluster down below 3,000 nodes before doing cluster upgrades, to provide extra capacity for node churn and minimize control plane load.
```
**`articles/automation/manage-runbooks.md`** (+3 −0)
```diff
@@ -267,6 +267,9 @@ When you test a runbook, the [Draft version](#publish-a-runbook) is executed and
 Even though the Draft version is being run, the runbook still executes normally and performs any actions against resources in the environment. For this reason, you should only test runbooks on non-production resources.

+> [!NOTE]
+> All runbook execution actions are logged in the **Activity Log** of the Automation account with the operation name **Create an Azure Automation job**. However, runbook execution in the test pane, where the draft version of the runbook is executed, is logged in the activity logs with the operation name **Write an Azure Automation runbook draft**. Select the **Operation** and **JSON** tabs to see the scope ending with *../runbooks/(runbook name)/draft/testjob*.

 The procedure to test each [type of runbook](automation-runbook-types.md) is the same. There's no difference in testing between the textual editor and the graphical editor in the Azure portal.

 1. Open the Draft version of the runbook in either the [textual editor](automation-edit-textual-runbook.md) or the [graphical editor](automation-graphical-authoring-intro.md).
```
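The note's distinction between published-job and draft-test-job log entries can be checked programmatically. A hedged sketch, assuming activity-log entries are available as dicts whose field names mirror the note (this is not a verified Azure Monitor API shape):

```python
def is_draft_test_job(entry: dict) -> bool:
    """True when an activity-log entry records a test-pane (draft) runbook run,
    per the operation name and scope suffix described in the note."""
    return (
        entry.get("operationName") == "Write an Azure Automation runbook draft"
        and entry.get("scope", "").endswith("/draft/testjob")
    )

draft_run = {
    "operationName": "Write an Azure Automation runbook draft",
    "scope": ".../runbooks/MyRunbook/draft/testjob",  # placeholder scope
}
published_run = {"operationName": "Create an Azure Automation job", "scope": ".../jobs/1234"}
```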
**`articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md`** (+9 −4)
```diff
@@ -46,13 +46,18 @@ If you continue to have trouble upgrading an extension, you can [disable automat
 ## Supported extensions

-Automatic extension upgrade supports the following extensions (and more are added periodically):
+Automatic extension upgrade supports the following extensions:

-- Azure Monitor Agent - Linux and Windows
-- Azure Security agent - Linux and Windows
+- Azure Monitor agent - Linux and Windows
+- Log Analytics agent (OMS agent) - Linux only
 - Dependency agent – Linux and Windows
+- Azure Security agent - Linux and Windows
 - Key Vault Extension - Linux only
-- Log Analytics agent (OMS agent) - Linux only
+- Azure Update Management Center - Linux and Windows
+- Azure Automation Hybrid Runbook Worker - Linux and Windows
+- Azure Arc-enabled SQL Server agent - Windows only

 More extensions will be added over time. Extensions that do not support automatic extension upgrade today are still configured to enable automatic upgrades by default. This setting will have no effect until the extension publisher chooses to support automatic upgrades.
```
**`articles/azure-functions/durable/durable-functions-overview.md`** (+2 −1)
```diff
@@ -41,7 +41,8 @@ The primary use case for Durable Functions is simplifying complex, stateful coordination
 ### <a name="chaining"></a>Pattern #1: Function chaining

-In the function chaining pattern, a sequence of functions executes in a specific order. In this pattern, the output of one function is applied to the input of another function.
+In the function chaining pattern, a sequence of functions executes in a specific order. In this pattern, the output of one function is applied to the input of another function. The use of queues between each function ensures that the system stays durable and scalable, even though there is a flow of control from one function to the next.
```
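The chaining-with-queues idea added above can be sketched in plain Python. This illustrates the pattern itself, not the Durable Functions runtime: each function's output is enqueued and becomes the input of the next function, so each hop is a hand-off through a queue rather than a direct call (in Durable Functions those hops are durable, persisted queues).

```python
from queue import Queue

def run_chain(functions, initial):
    """Chain functions via a queue: the output of one function is
    enqueued and dequeued as the input of the next."""
    q = Queue()
    q.put(initial)
    for fn in functions:
        q.put(fn(q.get()))  # in a real system, each hop is a durable queue
    return q.get()

result = run_chain([lambda x: x + 1, lambda x: x * 2, lambda x: x - 3], 5)
# 5 -> 6 -> 12 -> 9
```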

```diff
@@ … @@
-[Azure Functions](./functions-overview.md) allows you to implement your system's logic into readily-available blocks of code. These code blocks are called "functions".
+[Azure Functions](./functions-overview.md) allows you to implement your system's logic as event-driven, readily-available blocks of code. These code blocks are called "functions".
```