articles/active-directory/authentication/howto-password-ban-bad-on-premises-faq.md
4 additions & 0 deletions
@@ -134,6 +134,10 @@ No.
Audit mode is only supported in the on-premises Active Directory environment. Azure AD is implicitly always in "enforce" mode when it evaluates passwords.

+**Q: My users see the traditional Windows error message when a password is rejected by Azure AD Password Protection. Is it possible to customize this error message so that users know what really happened?**
+
+No. The error message seen by users when a password is rejected by a domain controller is controlled by the client machine, not by the domain controller. This behavior happens whether a password is rejected by the default Active Directory password policies or by a password-filter-based solution such as Azure AD Password Protection.
+
## Additional content
The following links are not part of the core Azure AD Password Protection documentation but may be a useful source of additional information on the feature.
articles/active-directory/authentication/howto-password-ban-bad-on-premises-operations.md
4 additions & 1 deletion
@@ -52,9 +52,12 @@ This message is only one example of several possible outcomes. The specific erro
Affected end users may need to work with their IT staff to understand the new requirements and to be better able to choose secure passwords.

+> [!NOTE]
+> Azure AD Password Protection has no control over the specific error message displayed by the client machine when a weak password is rejected.
+

## Enable Mode
-This setting should normally be left in its default enabled (Yes) state. Configuring this setting to disabled (No) will cause all deployed Azure AD Password Protection DC agents to go into a quiescent mode where all passwords are accepted as-is, and no validation activities will be executed whatsoever (for example, not even audit events will be emitted).
+This setting should be left in its default enabled (Yes) state. Configuring this setting to disabled (No) will cause all deployed Azure AD Password Protection DC agents to go into a quiescent mode where all passwords are accepted as-is, and no validation activities will be executed whatsoever (for example, not even audit events will be emitted).
-1. Configure the sample with the application ID and key that you recorded earlier while registering your application. Change the following lines of code by replacing the values with the names of your directory and APIs:
+1. Configure the sample with the client ID and key that you recorded earlier while registering your application. Change the following lines of code by replacing the values with the names of your directory and APIs:

```javascript
// The current application coordinates were pre-registered in a B2C directory.
```

The name of the [user flow](https://docs.microsoft.com/azure/active-directory-b2c/active-directory-b2c-reference-policies) in this tutorial is **B2C_1_signupsignin1**. If you're using a different user flow name, set the **authority** value to that name.
articles/aks/acs-aks-migration.md
2 additions & 3 deletions
@@ -31,10 +31,9 @@ ACS and AKS differ in some key areas that affect migration. Before any migration
## Differences between Kubernetes versions
-If you're migrating to a newer version of Kubernetes (for example, from 1.7.x to 1.9.x), review the following resources to understand a few changes to the Kubernetes API:
+If you're migrating to a newer version of Kubernetes, review the following resources to understand the Kubernetes versioning strategies:

-* [Migrating a ThirdPartyResource to CustomResourceDefinition](https://kubernetes.io/docs/tasks/access-kubernetes-api/migrate-third-party-resource/)
-* [Workloads API changes in versions 1.8 and 1.9](https://kubernetes.io/docs/reference/workloads-18-19/)
+* [Kubernetes version and version skew support policy](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions)
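When planning a move to a newer Kubernetes version, the upgrade paths available to an existing AKS cluster can first be listed from the Azure CLI. The sketch below assumes a cluster named myAKSCluster in myResourceGroup and that the chosen target version is one the service actually offers:

```azurecli
# List the Kubernetes versions this cluster can be upgraded to.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade the cluster to a supported target version.
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version <target-version>
```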
articles/aks/cluster-container-registry-integration.md
4 additions & 19 deletions
@@ -7,39 +7,24 @@ manager: gwallace
ms.service: container-service
ms.topic: article
-ms.date: 08/15/2018
+ms.date: 09/17/2018
ms.author: mlearned
---
-# Preview - Authenticate with Azure Container Registry from Azure Kubernetes Service
+# Authenticate with Azure Container Registry from Azure Kubernetes Service
When you're using Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), an authentication mechanism needs to be established. This article details the recommended configurations for authentication between these two Azure services.
You can set up the AKS to ACR integration in a few simple commands with the Azure CLI.
-> [!IMPORTANT]
-> AKS preview features are self-service opt-in. Previews are provided "as-is" and "as available" and are excluded from the service level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features are not meant for production use. For additional information, please see the following support articles:
->
-> * [AKS Support Policies](support-policies.md)
-> * [Azure Support FAQ](faq.md)
-
## Before you begin
You must have the following:
* **Owner** or **Azure account administrator** role on the **Azure subscription**
-* You also need the Azure CLI version 2.0.70 or later and the aks-preview 0.4.8 extension
+* You also need the Azure CLI version 2.0.73 or later
* You need [Docker installed](https://docs.docker.com/install/) on your client, and you need access to [docker hub](https://hub.docker.com/)
-
-## Install latest AKS CLI preview extension
-
-You need the **aks-preview 0.4.13** extension or later.
-
-```azurecli
-az extension remove --name aks-preview
-az extension add -y --name aks-preview
-```
-
## Create a new AKS cluster with ACR integration
You can set up AKS and ACR integration during the initial creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure Active Directory **service principal** is used. The following CLI command allows you to authorize an existing ACR in your subscription and configures the appropriate **ACRPull** role for the service principal. Supply valid values for your parameters below. The parameters in brackets are optional.
@@ -48,7 +33,7 @@ az login
az acr create -n myContainerRegistry -g myContainerRegistryResourceGroup --sku basic [in case you do not have an existing ACR]
az aks create -n myAKSCluster -g myResourceGroup --attach-acr <acr-name-or-resource-id>
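The integration can also be attached to an existing cluster rather than only at creation time. A minimal sketch, assuming the cluster and registry shown above already exist and that the installed Azure CLI (2.0.73 or later, per the prerequisite) supports the `--attach-acr` and `--detach-acr` options on `az aks update`:

```azurecli
# Attach an existing ACR to an existing AKS cluster; this grants the cluster's
# service principal the ACRPull role on the registry.
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name-or-resource-id>

# Remove the integration again if it is no longer needed.
az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-name-or-resource-id>
```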
articles/aks/concepts-scale.md
9 additions & 9 deletions
@@ -12,7 +12,7 @@ ms.author: zarhoads
# Scaling options for applications in Azure Kubernetes Service (AKS)
-As you run applications in Azure Kubernetes Service (AKS), you may need to increase or decrease the amount of compute resources. As the number of application instances you need change, the number of underlying Kubernetes nodes may also need to change. You may also need to quickly provision a large number of additional application instances.
+As you run applications in Azure Kubernetes Service (AKS), you may need to increase or decrease the amount of compute resources. As the number of application instances you need changes, the number of underlying Kubernetes nodes may also need to change. You also might need to quickly provision a large number of additional application instances.
This article introduces the core concepts that help you scale applications in AKS:
@@ -23,7 +23,7 @@ This article introduces the core concepts that help you scale applications in AK
## Manually scale pods or nodes
-You can manually scale replicas (pods) and nodes to test how your application responds to a change in available resources and state. Manually scaling resources also lets you define a set amount of resources to use to maintain a fixed cost, such as the number of nodes. To manually scale, you define the replica or node count, and the Kubernetes API schedules creating additional pods or draining nodes.
+You can manually scale replicas (pods) and nodes to test how your application responds to a change in available resources and state. Manually scaling resources also lets you define a set amount of resources to use to maintain a fixed cost, such as the number of nodes. To manually scale, you define the replica or node count. The Kubernetes API then schedules creating additional pods or draining nodes based on that replica or node count.
To get started with manually scaling pods and nodes, see [Scale applications in AKS][aks-scale].
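As a quick illustration of both operations, the node count can be set with `az aks scale` and the replica count with `kubectl scale`. The sketch below assumes a cluster named myAKSCluster in myResourceGroup and a deployment named my-app; the deployment name is a hypothetical placeholder:

```azurecli
# Manually scale the cluster's node pool to three nodes.
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3

# Manually scale a deployment to five replicas (pods).
kubectl scale deployment my-app --replicas=5
```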
@@ -39,15 +39,15 @@ To get started with the horizontal pod autoscaler in AKS, see [Autoscale pods in
### Cooldown of scaling events
-As the horizontal pod autoscaler checks the Metrics API every 30 seconds, previous scale events may not have successfully completed before another check is made. This behavior could cause the horizontal pod autoscaler to change the number of replicas before the previous scale event has been able to receive application workload and the resource demands to adjust accordingly.
+As the horizontal pod autoscaler checks the Metrics API every 30 seconds, previous scale events may not have successfully completed before another check is made. This behavior could cause the horizontal pod autoscaler to change the number of replicas before the previous scale event could receive application workload and the resource demands to adjust accordingly.
-To minimize these race events, cooldown or delay values are set. These values define how long the horizontal pod autoscaler must wait after a scale event before another scale event can be triggered. This behavior allows the new replica count to take effect and the Metrics API reflect the distributed workload. By default, the delay on scale up events is 3 minutes, and the delay on scale down events is 5 minutes.
+To minimize these race events, cooldown or delay values are set. These values define how long the horizontal pod autoscaler must wait after a scale event before another scale event can be triggered. This behavior allows the new replica count to take effect and the Metrics API to reflect the distributed workload. By default, the delay on scale up events is 3 minutes, and the delay on scale down events is 5 minutes.
-Currently, you cannot tune these cooldown values from the default.
+Currently, you can't tune these cooldown values from the default.
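For reference, a horizontal pod autoscaler that these cooldown values apply to can be created with `kubectl autoscale`. The deployment name and thresholds below are hypothetical placeholders:

```azurecli
# Keep between 3 and 10 replicas, targeting 50% average CPU utilization.
kubectl autoscale deployment my-app --cpu-percent=50 --min=3 --max=10

# Review the current state of the horizontal pod autoscaler.
kubectl get hpa
```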
## Cluster autoscaler
-To respond to changing pod demands, Kubernetes has a cluster autoscaler (currently in preview in AKS) that adjusts the number of nodes based on the requested compute resources in the node pool. By default, the cluster autoscaler checks the Metrics API server every 10 seconds for any required changes in node count. If the cluster autoscaler determines that a change is required, the number of nodes in your AKS cluster is increased or decreased accordingly. The cluster autoscaler works with RBAC-enabled AKS clusters that run Kubernetes 1.10.x or higher.
+To respond to changing pod demands, Kubernetes has a cluster autoscaler, which is currently in preview in AKS, that adjusts the number of nodes based on the requested compute resources in the node pool. By default, the cluster autoscaler checks the Metrics API server every 10 seconds for any required changes in node count. If the cluster autoscaler determines that a change is required, the number of nodes in your AKS cluster is increased or decreased accordingly. The cluster autoscaler works with RBAC-enabled AKS clusters that run Kubernetes 1.10.x or higher.
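As a sketch, the cluster autoscaler can be enabled on an existing cluster with `az aks update`, assuming the installed CLI (and, while the feature is in preview, the aks-preview extension) supports these options; the node-count bounds are illustrative only:

```azurecli
# Enable the cluster autoscaler and let the node pool vary between 1 and 5 nodes.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```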
@@ -59,15 +59,15 @@ To get started with the cluster autoscaler in AKS, see [Cluster Autoscaler on AK
### Scale up events
-If a node does not have sufficient compute resources to run a requested pod, that pod cannot progress through the scheduling process. The pod cannot start unless additional compute resources are available within the node pool.
+If a node doesn't have sufficient compute resources to run a requested pod, that pod can't progress through the scheduling process. The pod can't start unless additional compute resources are available within the node pool.
-When the cluster autoscaler notices pods that cannot be scheduled due to node pool resource constraints, the number of nodes within the node pool is increased to provide the additional compute resources. When those additional nodes are successfully deployed and available for use within the node pool, the pods are then scheduled to run on them.
+When the cluster autoscaler notices pods that can't be scheduled because of node pool resource constraints, the number of nodes within the node pool is increased to provide the additional compute resources. When those additional nodes are successfully deployed and available for use within the node pool, the pods are then scheduled to run on them.
If your application needs to scale rapidly, some pods may remain in a state waiting to be scheduled until the additional nodes deployed by the cluster autoscaler can accept the scheduled pods. For applications that have high burst demands, you can scale with virtual nodes and Azure Container Instances.
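Pods that are waiting on a scale-up event show up with a Pending status. A minimal way to spot them, assuming standard kubectl access to the cluster, is:

```azurecli
# List pods that the scheduler hasn't been able to place yet.
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Inspect a specific pod's events to confirm it's waiting on node resources.
kubectl describe pod <pod-name>
```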
### Scale down events
-The cluster autoscaler also monitors the pod scheduling status for nodes that have not recently received new scheduling requests. This scenario indicates that the node pool has more compute resources than are required, and that the number of nodes can be decreased.
+The cluster autoscaler also monitors the pod scheduling status for nodes that haven't recently received new scheduling requests. This scenario indicates the node pool has more compute resources than are required, and the number of nodes can be decreased.
A node that isn't needed for 10 minutes (the default threshold) is scheduled for deletion. When this situation occurs, pods are scheduled to run on other nodes within the node pool, and the cluster autoscaler decreases the number of nodes.
0 commit comments