
Commit f8301df

Merging changes synced from https://github.com/MicrosoftDocs/azure-docs-pr (branch live)
2 parents: fc7011f + cd70273


60 files changed: +846 −688 lines


articles/active-directory/authentication/howto-password-ban-bad-on-premises-faq.md

Lines changed: 4 additions & 0 deletions
@@ -134,6 +134,10 @@ No.
 
 Audit mode is only supported in the on-premises Active Directory environment. Azure AD is implicitly always in "enforce" mode when it evaluates passwords.
 
+**Q: My users see the traditional Windows error message when a password is rejected by Azure AD Password Protection. Is it possible to customize this error message so that users know what really happened?**
+
+No. The error message seen by users when a password is rejected by a domain controller is controlled by the client machine, not by the domain controller. This behavior happens whether a password is rejected by the default Active Directory password policies or by a password-filter-based solution such as Azure AD Password Protection.
+
 ## Additional content
 
 The following links are not part of the core Azure AD Password Protection documentation but may be a useful source of additional information on the feature.

articles/active-directory/authentication/howto-password-ban-bad-on-premises-operations.md

Lines changed: 4 additions & 1 deletion
@@ -52,9 +52,12 @@ This message is only one example of several possible outcomes. The specific erro
 
 Affected end users may need to work with their IT staff to understand the new requirements and be more able to choose secure passwords.
 
+> [!NOTE]
+> Azure AD Password Protection has no control over the specific error message displayed by the client machine when a weak password is rejected.
+
 ## Enable Mode
 
-This setting should normally be left in its default enabled (Yes) state. Configuring this setting to disabled (No) will cause all deployed Azure AD Password Protection DC agents to go into a quiescent mode where all passwords are accepted as-is, and no validation activities will be executed whatsoever (for example, not even audit events will be emitted).
+This setting should be left in its default enabled (Yes) state. Configuring this setting to disabled (No) will cause all deployed Azure AD Password Protection DC agents to go into a quiescent mode where all passwords are accepted as-is, and no validation activities will be executed whatsoever (for example, not even audit events will be emitted).
 
 ## Next steps

articles/active-directory/develop/msal-b2c-overview.md

Lines changed: 23 additions & 14 deletions
@@ -13,7 +13,7 @@ ms.devlang: na
 ms.topic: conceptual
 ms.tgt_pltfrm: na
 ms.workload: identity
-ms.date: 05/04/2019
+ms.date: 09/16/2019
 ms.author: negoe
 ms.reviewer: nacanuma
 ms.custom: aaddev
@@ -53,21 +53,30 @@ git clone https://github.com/Azure-Samples/active-directory-b2c-javascript-msal-
 
 1. Open the **index.html** file in the sample.
 
-1. Configure the sample with the application ID and key that you recorded earlier while registering your application. Change the following lines of code by replacing the values with the names of your directory and APIs:
+1. Configure the sample with the client ID and key that you recorded earlier while registering your application. Change the following lines of code by replacing the values with the names of your directory and APIs:
 
 ```javascript
-// The current application coordinates were pre-registered in a B2C directory.
-
-const msalConfig = {
-  auth:{
-    clientId: "Enter_the_Application_Id_here",
-    authority: "https://login.microsoftonline.com/tfp/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>",
-    b2cScopes: ["https://<your-tenant-name>.onmicrosoft.com/hello/demo.read"],
-    webApi: 'http://localhost:5000/hello',
-};
-
-// create UserAgentApplication instance
-const myMSALObj = new UserAgentApplication(msalConfig);
+// The current application coordinates were pre-registered in a B2C tenant.
+
+var appConfig = {
+  b2cScopes: ["https://fabrikamb2c.onmicrosoft.com/helloapi/demo.read"],
+  webApi: "https://fabrikamb2chello.azurewebsites.net/hello"
+};
+
+const msalConfig = {
+  auth: {
+    clientId: "e760cab2-b9a1-4c0d-86fb-ff7084abd902", //This is your client/application ID
+    authority: "https://fabrikamb2c.b2clogin.com/fabrikamb2c.onmicrosoft.com/b2c_1_susi", //This is your tenant info
+    validateAuthority: false
+  },
+  cache: {
+    cacheLocation: "localStorage",
+    storeAuthStateInCookie: true
+  }
+};
+// create UserAgentApplication instance
+const myMSALObj = new Msal.UserAgentApplication(msalConfig);
+
 ```
 
 The name of the [user flow](https://docs.microsoft.com/azure/active-directory-b2c/active-directory-b2c-reference-policies) in this tutorial is **B2C_1_signupsignin1**. If you're using a different user flow name, set the **authority** value to that name.

articles/aks/TOC.yml

Lines changed: 3 additions & 1 deletion
@@ -100,6 +100,8 @@
   href: acs-aks-migration.md
 - name: Supported Kubernetes version
   href: supported-kubernetes-versions.md
+- name: Security Hardening in host OS
+  href: security-hardened-vm-host-image.md
 - name: How-to guides
   items:
   - name: Cluster operations
@@ -116,7 +118,7 @@
 - name: Delete an AKS cluster
   href: /cli/azure/aks#az-aks-delete
   maintainContext: true
-- name: Create an ACR with an AKS cluster (preview)
+- name: Create an ACR with an AKS cluster
   href: cluster-container-registry-integration.md
 - name: Create virtual nodes
   items:

articles/aks/acs-aks-migration.md

Lines changed: 2 additions & 3 deletions
@@ -31,10 +31,9 @@ ACS and AKS differ in some key areas that affect migration. Before any migration
 
 ## Differences between Kubernetes versions
 
-If you're migrating to a newer version of Kubernetes (for example, from 1.7.x to 1.9.x), review the following resources to understand a few changes to the Kubernetes API:
+If you're migrating to a newer version of Kubernetes, review the following resource to understand the Kubernetes versioning strategies:
 
-* [Migrating a ThirdPartyResource to CustomResourceDefinition](https://kubernetes.io/docs/tasks/access-kubernetes-api/migrate-third-party-resource/)
-* [Workloads API changes in versions 1.8 and 1.9](https://kubernetes.io/docs/reference/workloads-18-19/)
+* [Kubernetes version and version skew support policy](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions)
 
 ## Migration considerations

articles/aks/cluster-container-registry-integration.md

Lines changed: 4 additions & 19 deletions
@@ -7,39 +7,24 @@ manager: gwallace
 
 ms.service: container-service
 ms.topic: article
-ms.date: 08/15/2018
+ms.date: 09/17/2018
 ms.author: mlearned
 ---
 
-# Preview - Authenticate with Azure Container Registry from Azure Kubernetes Service
+# Authenticate with Azure Container Registry from Azure Kubernetes Service
 
 When you're using Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), an authentication mechanism needs to be established. This article details the recommended configurations for authentication between these two Azure services.
 
 You can set up the AKS to ACR integration in a few simple commands with the Azure CLI.
 
-> [!IMPORTANT]
-> AKS preview features are self-service opt-in. Previews are provided "as-is" and "as available" and are excluded from the service level agreements and limited warranty. AKS Previews are partially covered by customer support on best effort basis. As such, these features are not meant for production use. For additional infromation, please see the following support articles:
->
-> * [AKS Support Policies](support-policies.md)
-> * [Azure Support FAQ](faq.md)
-
 ## Before you begin
 
 You must have the following:
 
 * **Owner** or **Azure account administrator** role on the **Azure subscription**
-* You also need the Azure CLI version 2.0.70 or later and the aks-preview 0.4.8 extension
+* You also need the Azure CLI version 2.0.73 or later
 * You need [Docker installed](https://docs.docker.com/install/) on your client, and you need access to [docker hub](https://hub.docker.com/)
 
-## Install latest AKS CLI preview extension
-
-You need the **aks-preview 0.4.13** extension or later.
-
-```azurecli
-az extension remove --name aks-preview
-az extension add -y --name aks-preview
-```
-
 ## Create a new AKS cluster with ACR integration
 
 You can set up AKS and ACR integration during the initial creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure Active Directory **service principal** is used. The following CLI command allows you to authorize an existing ACR in your subscription and configures the appropriate **ACRPull** role for the service principal. Supply valid values for your parameters below. The parameters in brackets are optional.
@@ -48,7 +33,7 @@ az login
 az acr create -n myContainerRegistry -g myContainerRegistryResourceGroup --sku basic [in case you do not have an existing ACR]
 az aks create -n myAKSCluster -g myResourceGroup --attach-acr <acr-name-or-resource-id>
 ```
-**An ACR resource id has the following format:
+**An ACR resource ID has the following format:**
 
 /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/{name}
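The same integration can also be added after a cluster already exists. As a minimal sketch (reusing the sample myAKSCluster, myResourceGroup, and myContainerRegistry names from the commands above, and assuming `az aks update` in Azure CLI 2.0.73 or later exposes the `--attach-acr` parameter):

```azurecli
# Attach an existing ACR to an existing AKS cluster
# (sample names from above; --attach-acr on "az aks update" is assumed available in this CLI version)
az aks update -n myAKSCluster -g myResourceGroup --attach-acr myContainerRegistry
```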

articles/aks/concepts-scale.md

Lines changed: 9 additions & 9 deletions
@@ -12,7 +12,7 @@ ms.author: zarhoads
 
 # Scaling options for applications in Azure Kubernetes Service (AKS)
 
-As you run applications in Azure Kubernetes Service (AKS), you may need to increase or decrease the amount of compute resources. As the number of application instances you need change, the number of underlying Kubernetes nodes may also need to change. You may also need to quickly provision a large number of additional application instances.
+As you run applications in Azure Kubernetes Service (AKS), you may need to increase or decrease the amount of compute resources. As the number of application instances you need changes, the number of underlying Kubernetes nodes may also need to change. You also might need to quickly provision a large number of additional application instances.
 
 This article introduces the core concepts that help you scale applications in AKS:
 
@@ -23,7 +23,7 @@ This article introduces the core concepts that help you scale applications in AK
 
 ## Manually scale pods or nodes
 
-You can manually scale replicas (pods) and nodes to test how your application responds to a change in available resources and state. Manually scaling resources also lets you define a set amount of resources to use to maintain a fixed cost, such as the number of nodes. To manually scale, you define the replica or node count, and the Kubernetes API schedules creating additional pods or draining nodes.
+You can manually scale replicas (pods) and nodes to test how your application responds to a change in available resources and state. Manually scaling resources also lets you define a set amount of resources to use to maintain a fixed cost, such as the number of nodes. To manually scale, you define the replica or node count. The Kubernetes API then schedules creating additional pods or draining nodes based on that replica or node count.
 
 To get started with manually scaling pods and nodes see [Scale applications in AKS][aks-scale].
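For a concrete picture of manual node scaling, here is a minimal sketch that sets the node count directly with the Azure CLI (the resource group and cluster names are placeholders):

```azurecli
# Manually scale the cluster to three nodes
# (myResourceGroup and myAKSCluster are placeholder names)
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3
```

Replicas are scaled in the same spirit with `kubectl scale` against the relevant deployment.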

@@ -39,15 +39,15 @@ To get started with the horizontal pod autoscaler in AKS, see [Autoscale pods in
 
 ### Cooldown of scaling events
 
-As the horizontal pod autoscaler checks the Metrics API every 30 seconds, previous scale events may not have successfully completed before another check is made. This behavior could cause the horizontal pod autoscaler to change the number of replicas before the previous scale event has been able to receive application workload and the resource demands to adjust accordingly.
+As the horizontal pod autoscaler checks the Metrics API every 30 seconds, previous scale events may not have successfully completed before another check is made. This behavior could cause the horizontal pod autoscaler to change the number of replicas before the previous scale event could receive application workload and the resource demands to adjust accordingly.
 
-To minimize these race events, cooldown or delay values are set. These values define how long the horizontal pod autoscaler must wait after a scale event before another scale event can be triggered. This behavior allows the new replica count to take effect and the Metrics API reflect the distributed workload. By default, the delay on scale up events is 3 minutes, and the delay on scale down events is 5 minutes
+To minimize these race events, cooldown or delay values are set. These values define how long the horizontal pod autoscaler must wait after a scale event before another scale event can be triggered. This behavior allows the new replica count to take effect and the Metrics API to reflect the distributed workload. By default, the delay on scale up events is 3 minutes, and the delay on scale down events is 5 minutes.
 
-Currently, you cannot tune these cooldown values from the default.
+Currently, you can't tune these cooldown values from the default.
 
 ## Cluster autoscaler
 
-To respond to changing pod demands, Kubernetes has a cluster autoscaler (currently in preview in AKS) that adjusts the number of nodes based on the requested compute resources in the node pool. By default, the cluster autoscaler checks the Metrics API server every 10 seconds for any required changes in node count. If the cluster autoscale determines that a change is required, the number of nodes in your AKS cluster is increased or decreased accordingly. The cluster autoscaler works with RBAC-enabled AKS clusters that run Kubernetes 1.10.x or higher.
+To respond to changing pod demands, Kubernetes has a cluster autoscaler, which is currently in preview in AKS, that adjusts the number of nodes based on the requested compute resources in the node pool. By default, the cluster autoscaler checks the Metrics API server every 10 seconds for any required changes in node count. If the cluster autoscaler determines that a change is required, the number of nodes in your AKS cluster is increased or decreased accordingly. The cluster autoscaler works with RBAC-enabled AKS clusters that run Kubernetes 1.10.x or higher.
 
 ![Kubernetes cluster autoscaler](media/concepts-scale/cluster-autoscaler.png)
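As a minimal sketch of enabling the autoscaler at cluster creation time (placeholder names; the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters are assumed to be available in your Azure CLI, via the aks-preview extension while the feature is in preview):

```azurecli
# Create a cluster whose node count the autoscaler may vary between 1 and 5
# (placeholder names; autoscaler flags assumed available in this CLI/extension version)
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```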

@@ -59,15 +59,15 @@ To get started with the cluster autoscaler in AKS, see [Cluster Autoscaler on AK
 
 ### Scale up events
 
-If a node does not have sufficient compute resources to run a requested pod, that pod cannot progress through the scheduling process. The pod cannot start unless additional compute resources are available within the node pool.
+If a node doesn't have sufficient compute resources to run a requested pod, that pod can't progress through the scheduling process. The pod can't start unless additional compute resources are available within the node pool.
 
-When the cluster autoscaler notices pods that cannot be scheduled due to node pool resource constraints, the number of nodes within the node pool is increased to provide the additional compute resources. When those additional nodes are successfully deployed and available for use within the node pool, the pods are then scheduled to run on them.
+When the cluster autoscaler notices pods that can't be scheduled because of node pool resource constraints, the number of nodes within the node pool is increased to provide the additional compute resources. When those additional nodes are successfully deployed and available for use within the node pool, the pods are then scheduled to run on them.
 
 If your application needs to scale rapidly, some pods may remain in a state waiting to be scheduled until the additional nodes deployed by the cluster autoscaler can accept the scheduled pods. For applications that have high burst demands, you can scale with virtual nodes and Azure Container Instances.
 
 ### Scale down events
 
-The cluster autoscaler also monitors the pod scheduling status for nodes that have not recently received new scheduling requests. This scenario indicates that the node pool has more compute resources than are required, and that the number of nodes can be decreased.
+The cluster autoscaler also monitors the pod scheduling status for nodes that haven't recently received new scheduling requests. This scenario indicates the node pool has more compute resources than are required, and the number of nodes can be decreased.
 
 A node that passes a threshold for no longer being needed for 10 minutes by default is scheduled for deletion. When this situation occurs, pods are scheduled to run on other nodes within the node pool, and the cluster autoscaler decreases the number of nodes.
