articles/active-directory-b2c/configure-authentication-sample-web-app.md
To create the web app registration, use the following steps:
1. Under **Name**, enter a name for the application (for example, *webapp1*).
1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://localhost:44316/signin-oidc`.
1. Under **Implicit grant and hybrid flows**, select the **ID tokens (used for implicit and hybrid flows)** checkbox.
1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
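The portal steps above can also be scripted. The following is a hedged sketch using `az ad app create`; the display name and redirect URI come from the steps above, while the `--sign-in-audience` value is an assumption approximating the "Accounts in any identity provider or organizational directory" option, and admin consent still has to be granted separately.

```shell
# Create the app registration with the redirect URI and ID token issuance enabled
az ad app create \
    --display-name webapp1 \
    --sign-in-audience AzureADandPersonalMicrosoftAccount \
    --web-redirect-uris "https://localhost:44316/signin-oidc" \
    --enable-id-token-issuance true
```

The command prints the new application object as JSON; note the `appId` value, which the sample web app's configuration needs.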
This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.
>[!NOTE]
>Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator. We will remove the admin controls and enforce the number match experience tenant-wide for all users starting February 27, 2023.<br>
>We highly recommend enabling number matching in the near term for improved sign-in security.
While you can route egress traffic through an Azure Load Balancer, there are limitations on the amount of outbound flows of traffic you can have. Azure NAT Gateway allows up to 64,512 outbound UDP and TCP traffic flows per IP address with a maximum of 16 IP addresses.
This article shows you how to create an AKS cluster with a Managed NAT Gateway for egress traffic and how to disable OutboundNAT on Windows.
## Before you begin
To use Managed NAT gateway, you must have the following prerequisites:
* The latest version of [Azure CLI][az-cli]
* Kubernetes version 1.20.x or above
## Create an AKS cluster with a Managed NAT Gateway
To create an AKS cluster with a new Managed NAT Gateway, use `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` when running `az aks create`. The following example creates a *myResourceGroup* resource group, then creates a *natCluster* AKS cluster in *myResourceGroup* with a Managed NAT Gateway, two outbound IPs, and an idle timeout of 4 minutes.
```azurecli-interactive
az group create --name myResourceGroup --location southcentralus
```
```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name natCluster \
    --outbound-type managedNATGateway \
    --nat-gateway-managed-outbound-ip-count 2 \
    --nat-gateway-idle-timeout 4
```

> [!NOTE]
> If no value for the outbound IP address is specified, the default value is one.
### Update the number of outbound IP addresses
To update the outbound IP address or idle timeout, use `--nat-gateway-managed-outbound-ip-count` or `--nat-gateway-idle-timeout` when running `az aks update`.
```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name natCluster \
    --nat-gateway-managed-outbound-ip-count 5
```
## Create an AKS cluster with a user-assigned NAT Gateway
To create an AKS cluster with a user-assigned NAT Gateway, use `--outbound-type userAssignedNATGateway` when running `az aks create`. This configuration requires bring-your-own networking (via [Kubenet][byo-vnet-kubenet] or [Azure CNI][byo-vnet-azure-cni]) and that the NAT Gateway is preconfigured on the subnet. The following commands create the required resources for this scenario. Make sure to run them all in the same session so that the values stored to variables are still available for the `az aks create` command.
1. Create the resource group.
```azurecli-interactive
az group create --name myResourceGroup \
--location southcentralus
```
2. Create a managed identity for network permissions and store the ID to `$IDENTITY_ID` for later use.
```azurecli-interactive
IDENTITY_ID=$(az identity create \
--resource-group myResourceGroup \
--name natClusterId \
--location southcentralus \
--query id \
--output tsv)
```
3. Create a public IP for the NAT gateway.
```azurecli-interactive
az network public-ip create \
--resource-group myResourceGroup \
--name myNatGatewayPip \
--location southcentralus \
--sku standard
```
4. Create the NAT gateway.
```azurecli-interactive
az network nat gateway create \
--resource-group myResourceGroup \
--name myNatGateway \
--location southcentralus \
--public-ip-addresses myNatGatewayPip
```
5. Create a virtual network.
```azurecli-interactive
az network vnet create \
--resource-group myResourceGroup \
--name myVnet \
--location southcentralus \
--address-prefixes 172.16.0.0/20
```
6. Create a subnet in the virtual network using the NAT gateway and store the ID to `$SUBNET_ID` for later use.
```azurecli-interactive
SUBNET_ID=$(az network vnet subnet create \
--resource-group myResourceGroup \
--vnet-name myVnet \
--name natCluster \
--address-prefixes 172.16.0.0/22 \
--nat-gateway myNatGateway \
--query id \
--output tsv)
```
7. Create an AKS cluster using the subnet with the NAT gateway and the managed identity.
```azurecli-interactive
az aks create \
--resource-group myResourceGroup \
--name natCluster \
--location southcentralus \
--network-plugin azure \
--vnet-subnet-id $SUBNET_ID \
--outbound-type userAssignedNATGateway \
--assign-identity $IDENTITY_ID
```
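Once the cluster is up, you can sanity-check that egress traffic actually flows through the NAT gateway. The following is a sketch under assumptions: the resource names come from the steps above, while the throwaway pod image and the `ifconfig.me` echo service are illustrative choices.

```shell
# Fetch credentials for the new cluster
az aks get-credentials --resource-group myResourceGroup --name natCluster

# Look up the NAT gateway's public IP address
az network public-ip show \
    --resource-group myResourceGroup \
    --name myNatGatewayPip \
    --query ipAddress \
    --output tsv

# Launch a short-lived pod and check the source IP seen by an external service;
# it should match the NAT gateway's public IP above
kubectl run egress-test --image=curlimages/curl --rm -it --restart=Never -- curl -s ifconfig.me
```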
## Disable OutboundNAT for Windows
Windows OutboundNAT can cause certain connection and communication issues with your AKS pods. Some of these issues include:
* **Unhealthy backend status**: When you deploy an AKS cluster with [Application Gateway Ingress Controller (AGIC)][agic] and [Application Gateway][app-gw] in different VNets, the backend health status becomes "Unhealthy." Outbound connectivity fails because the peered network's IP isn't present in the CNI config of the Windows nodes.
* **Node port reuse**: Windows OutboundNAT uses ports to translate your pod IP to your Windows node host IP, which can cause an unstable connection to external services due to port exhaustion.
* **Invalid traffic routing to internal service endpoints**: When you create a load balancer service with `externalTrafficPolicy` set to *Local*, kube-proxy on Windows doesn't create the proper rules in the IPTables to route traffic to the internal service endpoints.
Windows enables OutboundNAT by default. You can now manually disable OutboundNAT when creating new Windows agent pools.
### Prerequisites
* You need to install the `aks-preview` extension and register the feature flag.
1. Install or update `aks-preview`.
```azurecli
# Install aks-preview
az extension add --name aks-preview
# Update aks-preview
az extension update --name aks-preview
```
2. Register the feature flag.
```azurecli
az feature register --namespace Microsoft.ContainerService --name DisableWindowsOutboundNATPreview
```
3. Check the registration status.
```azurecli
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/DisableWindowsOutboundNATPreview')].{Name:name,State:properties.state}"
```
4. Refresh the registration of the `Microsoft.ContainerService` resource provider.
```azurecli
az provider register --namespace Microsoft.ContainerService
```
* Your clusters must have a Managed NAT Gateway (which may increase the overall cost).
* If you're using Kubernetes version 1.25 or older, you need to [update your deployment configuration][upgrade-kubernetes].
* If you need to switch from a load balancer to NAT Gateway, you can either add a NAT Gateway into the VNet or run [`az aks upgrade`][aks-upgrade] to update the outbound type.
### Manually disable OutboundNAT for Windows
You can manually disable OutboundNAT for Windows when creating new Windows agent pools using `--disable-windows-outbound-nat`.
> [!NOTE]
> You can use an existing AKS cluster, but you may need to update the outbound type and add a node pool to enable `--disable-windows-outbound-nat`.
```azurecli
az aks nodepool add \
--resource-group myResourceGroup \
--cluster-name natCluster \
--name mynodepool \
--node-count 3 \
--os-type Windows \
--disable-windows-outbound-nat
```
## Next steps
For more information on Azure NAT Gateway, see [Azure NAT Gateway][nat-docs].
articles/aks/use-mariner.md
---
description: Learn how to use the Mariner container host on Azure Kubernetes Service (AKS).
services: container-service
ms.topic: article
ms.custom: ignite-2022
ms.date: 12/08/2022
---
# Use the Mariner container host on Azure Kubernetes Service (AKS)
Mariner is available for use in the same regions as AKS.
Mariner currently has the following limitations:
* Mariner doesn't yet have image SKUs for GPU, ARM64, SGX, or FIPS.
* Mariner doesn't yet have FedRAMP, FIPS, or CIS certification.
* Mariner can't yet be deployed through the Azure portal.
* Qualys, Trivy, and Microsoft Defender for Containers are the only vulnerability scanning tools that support Mariner today.
* The Mariner container host is a Gen 2 image. Mariner doesn't plan to offer a Gen 1 SKU.
* Node configurations aren't yet supported.
* Mariner isn't yet supported in GitHub Actions.
* Mariner doesn't support AppArmor. Support for SELinux can be manually configured.
* Some addons, extensions, and open-source integrations may not be supported yet on Mariner. Azure Monitor, Grafana, Helm, Key Vault, and Container Insights are supported.
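None of these limitations block provisioning from the CLI. The following is a hedged sketch of adding a Mariner node pool to an existing cluster; the resource and cluster names are illustrative, and `--os-sku mariner` assumes an Azure CLI version with Mariner support.

```shell
# Add a node pool that runs the Mariner container host
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name marinerpool \
    --os-sku mariner
```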