articles/aks/concepts-network.md (+22 −16)
@@ -24,16 +24,16 @@ This article introduces the core concepts that provide networking to your applic
 ## Kubernetes networking basics
 
-Kubernetes employs a virtual networking layer to manage access within and between your applications or their components. This involves the following key aspects:
+Kubernetes employs a virtual networking layer to manage access within and between your applications or their components:
 
 - **Kubernetes nodes and virtual network**: Kubernetes nodes are connected to a virtual network. This setup enables pods (basic units of deployment in Kubernetes) to have both inbound and outbound connectivity.
 
-- **Kube-proxy component**: Running on each node, kube-proxy is responsible for providing the necessary network features.
+- **Kube-proxy component**: kube-proxy runs on each node and is responsible for providing the necessary network features.
 
 Regarding specific Kubernetes functionalities:
 
-- **Services**: These are used to logically group pods, allowing direct access to them through a specific IP address or DNS name on a designated port.
-- **Service types**: This feature lets you specify the kind of Service you wish to create.
+- **Services**: Services are used to logically group pods, allowing direct access to them through a specific IP address or DNS name on a designated port.
+- **Service types**: Specifies the kind of Service you wish to create.
 - **Load balancer**: You can use a load balancer to distribute network traffic evenly across various resources.
 - **Ingress controllers**: These facilitate Layer 7 routing, which is essential for directing application traffic.
 - **Egress traffic control**: Kubernetes allows you to manage and control outbound traffic from cluster nodes.
@@ -48,13 +48,13 @@ In the context of the Azure platform:
 ## Services
 
-To simplify the network configuration for application workloads, Kubernetes uses *Services* to logically group a set of pods together and provide network connectivity. You can specify a Kubernetes *ServiceType* to specify what kind of Service you want, for example if you want to expose a Service onto an external IP address that's outside of your cluster. For more information, see the Kubernetes documentation for [Publishing Services (ServiceTypes)][service-types].
+To simplify the network configuration for application workloads, Kubernetes uses *Services* to logically group a set of pods together and provide network connectivity. You can specify a Kubernetes *ServiceType* to define the type of Service you want; for example, you might want to expose a Service on an external IP address outside of your cluster. For more information, see the Kubernetes documentation on [Publishing Services (ServiceTypes)][service-types].
 
 The following ServiceTypes are available:
 
 * **ClusterIP**
 
-  ClusterIP creates an internal IP address for use within the AKS cluster. This Service is good for *internal-only applications* that support other workloads within the cluster. This is the default that's used if you don't explicitly specify a type for a Service.
+  ClusterIP creates an internal IP address for use within the AKS cluster. The ClusterIP Service is good for *internal-only applications* that support other workloads within the cluster. ClusterIP is the default used if you don't explicitly specify a type for a Service.
 
   ![Diagram showing ClusterIP traffic flow in an AKS cluster][aks-clusterip]
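A minimal manifest illustrating the ClusterIP behavior described above (names, labels, and ports are illustrative, not from the article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app          # illustrative name
spec:
  type: ClusterIP             # the default when no type is specified
  selector:
    app: internal-app         # logically groups pods carrying this label
  ports:
    - port: 80                # port the Service exposes inside the cluster
      targetPort: 8080        # container port on the selected pods
```

Pods in the cluster reach this Service at its cluster-internal IP or DNS name; nothing is exposed outside the cluster.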
@@ -114,12 +114,12 @@ For more information, see [Configure kubenet networking for an AKS cluster][aks-
 ### Azure CNI (advanced) networking
 
-With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly. To avoid these planning challenges, it is possible to enable the feature [Azure CNI networking for dynamic allocation of IPs and enhanced subnet support][configure-azure-cni-dynamic-ip-allocation].
+With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node is then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly. To avoid these planning challenges, it's possible to enable [Azure CNI networking for dynamic allocation of IPs and enhanced subnet support][configure-azure-cni-dynamic-ip-allocation].
 
 > [!NOTE]
 > Due to Kubernetes limitations, the Resource Group name, the Virtual Network name, and the subnet name must be 63 characters or less.
 
-Unlike kubenet, traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.
+Unlike kubenet, traffic to endpoints in the same virtual network isn't translated (NAT) to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.
 
 Nodes use the [Azure CNI][cni-networking] Kubernetes plugin.
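The up-front reservation described above can be sketched with some back-of-the-envelope arithmetic (illustrative only, not official AKS sizing guidance; the `+ 5` accounts for the addresses Azure reserves in every subnet):

```shell
# Rough Azure CNI subnet sizing sketch: each node consumes one IP for itself
# plus max_pods pod IPs reserved up front. All values are illustrative.
nodes=50; max_pods=30; buffer=1                      # spare node for upgrades/scale-out
total=$(( (nodes + buffer) * (1 + max_pods) + 5 ))   # +5: Azure-reserved addresses per subnet
# Find the smallest power of two that covers the required addresses.
bits=0; cap=1
while [ "$cap" -lt "$total" ]; do bits=$((bits + 1)); cap=$((cap * 2)); done
echo "a /$((32 - bits)) subnet covers $total required addresses"
```

For these sample numbers, 51 nodes × 31 addresses plus the reserved five come to 1,586 addresses, which needs a /21 subnet.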
@@ -129,19 +129,19 @@ For more information, see [Configure Azure CNI for an AKS cluster][aks-configure
 ### Azure CNI Overlay networking
 
-[Azure CNI Overlay][azure-cni-overlay] represents an evolution of Azure CNI, addressing scalability and planning challenges arising from the assignment of VNet IPs to pods. It achieves this by assigning private CIDR IPs to pods, which are separate from the VNet and can be reused across multiple clusters. Additionally, Azure CNI Overlay can scale beyond the 400 node limit enforced in Kubenet clusters. Azure CNI Overlay is the recommended option for most clusters.
+[Azure CNI Overlay][azure-cni-overlay] represents an evolution of Azure CNI, addressing scalability and planning challenges arising from the assignment of virtual network IPs to pods. Azure CNI Overlay assigns private CIDR IPs to pods. The private IPs are separate from the virtual network and can be reused across multiple clusters. Azure CNI Overlay can scale beyond the 400-node limit enforced in kubenet clusters. Azure CNI Overlay is the recommended option for most clusters.
 
 ### Azure CNI Powered by Cilium
 
-[Azure CNI Powered by Cilium][azure-cni-powered-by-cilium] uses [Cilium](https://cilium.io) to provide high-performance networking, observability, and network policy enforcement. It integrates natively with [Azure CNI Overlay][azure-cni-overlay] for scalable IP address management (IPAM)
+[Azure CNI Powered by Cilium][azure-cni-powered-by-cilium] uses [Cilium](https://cilium.io) to provide high-performance networking, observability, and network policy enforcement. It integrates natively with [Azure CNI Overlay][azure-cni-overlay] for scalable IP address management (IPAM).
 
-Additionally, Cilium enforces network policies by default, without requiring a separate network policy engine. Using eBPF programs and a more efficient API object structure, Azure CNI Powered by Cilium can scale beyond [Azure Network Policy Manager's limits of 250 nodes / 20K pods][use-network-policies].
+Additionally, Cilium enforces network policies by default, without requiring a separate network policy engine. Azure CNI Powered by Cilium can scale beyond [Azure Network Policy Manager's limits of 250 nodes / 20,000 pods][use-network-policies] by using eBPF programs and a more efficient API object structure.
 
 Azure CNI Powered by Cilium is the recommended option for clusters that require network policy enforcement.
 
 ### Bring your own CNI
 
-It is possible to install in AKS a third party CNI using the [Bring your own CNI][use-byo-cni] feature.
+It's possible to install a non-Microsoft CNI in AKS using the [Bring your own CNI][use-byo-cni] feature.
 
 ### Compare network models
@@ -159,6 +159,12 @@ Both kubenet and Azure CNI provide network connectivity for your AKS clusters. H
 * Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
 * Requires more IP address space.
 
+| Network model | When to use |
+|---------------|-------------|
+| **Kubenet** | • IP address space conservation is a priority. <br> • Simple configuration. <br> • Fewer than 400 nodes per cluster. <br> • Kubernetes internal or external load balancers are sufficient for reaching pods from outside the cluster. <br> • Manually managing and maintaining user-defined routes is acceptable. |
+| **Azure CNI** | • Full virtual network connectivity is required for pods. <br> • Advanced AKS features (such as virtual nodes) are needed. <br> • Sufficient IP address space is available. <br> • Pod-to-pod and pod-to-virtual-machine connectivity is needed. <br> • External resources need to reach pods directly. <br> • AKS network policies are required. |
+| **Azure CNI Overlay** | • IP address shortage is a concern. <br> • Scaling up to 1,000 nodes and 250 pods per node is sufficient. <br> • An extra hop for pod connectivity is acceptable. <br> • Simpler network configuration. <br> • AKS egress requirements can be met. |
+
 The following behavior differences exist between kubenet and Azure CNI:
@@ -168,10 +174,10 @@ The following behavior differences exist between kubenet and Azure CNI:
 | Pod-VM connectivity; VM in the same virtual network | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
 | Pod-VM connectivity; VM in peered virtual network | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
 | On-premises access using VPN or Express Route | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
-| Expose Kubernetes services using a load balancer service, App Gateway, or ingress controller | Supported | Supported | [No Application Gateway Ingress Controller (AGIC) support][azure-cni-overlay-limitations] | Same limitations when using Overlay mode |
+| Access to resources secured by service endpoints | Supported | Supported | Supported | |
+| Expose Kubernetes services using a load balancer service, App Gateway, or ingress controller | Supported | Supported | Supported | Same limitations when using Overlay mode |
 | Support for Windows node pools | Not Supported | Supported | Supported | [Available only for Linux and not for Windows.][azure-cni-powered-by-cilium-limitations] |
-
-For both kubenet and Azure CNI plugins, the DNS service is provided by CoreDNS, a deployment running in AKS with its own autoscaler. For more information on CoreDNS on Kubernetes, see [Customizing DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/). CoreDNS by default is configured to forward unknown domains to the DNS functionality of the Azure Virtual Network where the AKS cluster is deployed. Hence, Azure DNS and Private Zones will work for pods running in AKS.
+| Default Azure DNS and Private Zones | Supported | Supported | Supported | |
 
 For more information on Azure CNI and kubenet and to help determine which option is best for you, see [Configure Azure CNI networking in AKS][azure-cni-aks] and [Use kubenet networking in AKS][aks-configure-kubenet-networking].
@@ -211,7 +217,7 @@ To learn more about the AGIC add-on for AKS, see [What is Application Gateway In
 ### SSL/TLS termination
 
-SSL/TLS termination is another common feature of Ingress. On large web applications accessed via HTTPS, the Ingress resource handles the TLS termination rather than within the application itself. To provide automatic TLS certification generation and configuration, you can configure the Ingress resource to use providers such as "Let's Encrypt".
+SSL/TLS termination is another common feature of Ingress. On large web applications accessed via HTTPS, the Ingress resource handles TLS termination rather than the application itself. To provide automatic TLS certificate generation and configuration, you can configure the Ingress resource to use providers such as Let's Encrypt.
 
 For more information on configuring an NGINX ingress controller with Let's Encrypt, see [Ingress and TLS][aks-ingress-tls].
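A sketch of an Ingress with automatic certificates, assuming cert-manager is installed with a ClusterIssuer named `letsencrypt` (all names and hosts are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app                                  # illustrative name
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt  # assumed cert-manager issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: web-app-tls   # cert-manager stores the issued certificate here
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
```

With this configuration, TLS terminates at the ingress controller and traffic is forwarded to the backend Service in plain HTTP inside the cluster.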
articles/application-gateway/tcp-tls-proxy-overview.md (+1 −1)
@@ -38,7 +38,7 @@ Process flow:
 ## Limitations
 
 - A WAF v2 SKU gateway allows the creation of TLS or TCP listeners and backends to support HTTP and non-HTTP traffic through the same resource. However, it does not inspect traffic on TLS and TCP listeners for exploits and vulnerabilities.
-- Advanced features like path-based routing, redirections, rewrite Headers, and URLs are only available with Layer 7 (HTTP & HTTPS) protocols.
+- The default [draining timeout](configuration-http-settings.md#connection-draining) value for backend servers is 30 seconds. At present, a user-defined draining value is not supported.
articles/azure-monitor/logs/cost-logs.md (+1 −1)
@@ -42,7 +42,7 @@ The following [standard columns](log-standard-columns.md) are common to all tabl
 ### Excluded tables
 
-Some tables are free from data ingestion charges altogether, including [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage), and [Operation](/azure/azure-monitor/reference/tables/operation). This information will always be indicated by the [_IsBillable](log-standard-columns.md#_isbillable) column, which indicates whether a record was excluded from billing for data ingestion.
+Some tables are free from data ingestion charges altogether, including [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage), and [Operation](/azure/azure-monitor/reference/tables/operation). This is indicated by the [_IsBillable](log-standard-columns.md#_isbillable) column, which shows whether a record is excluded from billing for data ingestion, retention, and archive.
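As a sketch, the `_IsBillable` column can be inspected with a log query like the following (illustrative; any ingested table carries the column):

```kusto
Heartbeat
| where TimeGenerated > ago(1d)
| summarize Records = count() by _IsBillable
```

For a table excluded from billing, such as Heartbeat, all records are expected to report `_IsBillable == false`.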
articles/communication-services/quickstarts/events/includes/create-event-subscription-az-cli.md (+8 −8)
@@ -39,21 +39,21 @@ For a list of Communication Services events, see [Communication Services Events]
 To list all the existing event subscriptions set up for an Azure Communication Services resource by using [the Azure CLI](/cli/azure/get-started-with-azure-cli), use the [`az eventgrid event-subscription list`](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-list) command.
 To update an existing event subscription by using [the Azure CLI](/cli/azure/get-started-with-azure-cli), use the [`az eventgrid event-subscription update`](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-update) command.
articles/defender-for-cloud/release-notes.md (+7 −0)
@@ -24,11 +24,18 @@ If you're looking for items older than six months, you can find them in the [Arc
 | Date | Update |
 |----------|----------|
+| February 26 | [Cloud support for Defender for Containers](#cloud-support-for-defender-for-containers) |
 | February 20 | [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) |
 | February 18 | [Open Container Initiative (OCI) image format specification support](#open-container-initiative-oci-image-format-specification-support) |
 | February 13 | [AWS container vulnerability assessment powered by Trivy retired](#aws-container-vulnerability-assessment-powered-by-trivy-retired) |
 | February 8 | [Recommendations released for preview: four recommendations for Azure Stack HCI resource type](#recommendations-released-for-preview-four-recommendations-for-azure-stack-hci-resource-type) |
 
+### Cloud support for Defender for Containers
+
+February 26, 2024
+
+Azure Kubernetes Service (AKS) threat detection features in Defender for Containers are now fully supported in commercial, Azure Government, and Azure China 21Vianet clouds. [Review](support-matrix-defender-for-containers.md#azure) supported features.
+
 ### New version of Defender Agent for Defender for Containers