Commit bd71ee3

Merge pull request #260615 from MicrosoftDocs/main

12/7/2023 AM Publish

2 parents b868304 + 156ea0c, commit bd71ee3

82 files changed: +631 -415 lines


articles/aks/configure-kubenet-dual-stack.md

Lines changed: 116 additions & 4 deletions
```diff
@@ -1,13 +1,13 @@
 ---
 title: Configure dual-stack kubenet networking in Azure Kubernetes Service (AKS)
 titleSuffix: Azure Kubernetes Service
-description: Learn how to configure dual-stack kubenet networking in Azure Kubernetes Service (AKS)
+description: Learn how to configure dual-stack kubenet networking in Azure Kubernetes Service (AKS).
 author: asudbring
 ms.author: allensu
 ms.subservice: aks-networking
-ms.custom: devx-track-azurecli, build-2023, devx-track-linux
 ms.topic: how-to
-ms.date: 06/27/2023
+ms.date: 12/07/2023
+ms.custom: devx-track-azurecli, build-2023, devx-track-linux
 ---
 
 # Use dual-stack kubenet networking in Azure Kubernetes Service (AKS)
```
````diff
@@ -301,10 +301,60 @@ Once the cluster has been created, you can deploy your workloads. This article w
 > There are currently **two limitations** pertaining to IPv6 services in AKS.
 >
 > 1. Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, this traffic can't be routed to a pod, so traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` fails. IPv6 services must be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node.
-> 2. Only the first IP address for a service will be provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, please create two services targeting the same selector, one for IPv4 and one for IPv6.
+> 2. Starting from AKS v1.27, you can directly create a dual-stack service. However, for older versions, only the first IP address for a service is provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, create two services targeting the same selector, one for IPv4 and one for IPv6.
 
 # [kubectl](#tab/kubectl)
 
+### AKS starting from v1.27
+
+1. Expose the NGINX deployment using the `kubectl expose deployment nginx` command.
+
+    ```bash-interactive
+    kubectl expose deployment nginx --name=nginx --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilyPolicy": "PreferDualStack", "ipFamilies": ["IPv4", "IPv6"]}}'
+    ```
+
+    You receive an output that shows the service has been exposed.
+
+    ```output
+    service/nginx exposed
+    ```
+
+2. Once the deployment is exposed and the `LoadBalancer` service is fully provisioned, get the IP addresses of the service using the `kubectl get services` command.
+
+    ```bash-interactive
+    kubectl get services
+    ```
+
+    ```output
+    NAME    TYPE           CLUSTER-IP    EXTERNAL-IP                         PORT(S)        AGE
+    nginx   LoadBalancer   10.0.223.73   2603:1030:20c:9::22d,4.156.88.133   80:30664/TCP   2m11s
+    ```
+
+    ```bash-interactive
+    kubectl get services nginx -o jsonpath='{.spec.clusterIPs}'
+    ```
+
+    ```output
+    ["10.0.223.73","fd17:d93e:db1f:f771::54e"]
+    ```
+
+3. Verify functionality via a command-line web request from an IPv6-capable host. Azure Cloud Shell isn't IPv6 capable.
+
+    ```bash-interactive
+    SERVICE_IP=$(kubectl get services nginx -o jsonpath='{.status.loadBalancer.ingress[1].ip}')
+    curl -s "http://[${SERVICE_IP}]" | head -n5
+    ```
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+    <head>
+    <title>Welcome to nginx!</title>
+    <style>
+    ```
+
+### AKS older than v1.27
+
 1. Expose the NGINX deployment using the `kubectl expose deployment nginx` command.
 
     ```bash-interactive
````
````diff
@@ -348,6 +398,68 @@ Once the cluster has been created, you can deploy your workloads. This article w
 
 # [YAML](#tab/yaml)
 
+### AKS starting from v1.27
+
+1. Expose the NGINX deployment using the following YAML manifest.
+
+    ```yml
+    apiVersion: v1
+    kind: Service
+    metadata:
+      labels:
+        app: nginx
+      name: nginx
+    spec:
+      externalTrafficPolicy: Cluster
+      ipFamilyPolicy: PreferDualStack
+      ipFamilies:
+      - IPv4
+      - IPv6
+      ports:
+      - port: 80
+        protocol: TCP
+        targetPort: 80
+      selector:
+        app: nginx
+      type: LoadBalancer
+    ```
+
+2. Once the deployment is exposed and the `LoadBalancer` service is fully provisioned, get the IP addresses of the service using the `kubectl get services` command.
+
+    ```bash-interactive
+    kubectl get services
+    ```
+
+    ```output
+    NAME    TYPE           CLUSTER-IP    EXTERNAL-IP                         PORT(S)        AGE
+    nginx   LoadBalancer   10.0.223.73   2603:1030:20c:9::22d,4.156.88.133   80:30664/TCP   2m11s
+    ```
+
+    ```bash-interactive
+    kubectl get services nginx -o jsonpath='{.spec.clusterIPs}'
+    ```
+
+    ```output
+    ["10.0.223.73","fd17:d93e:db1f:f771::54e"]
+    ```
+
+3. Verify functionality via a command-line web request from an IPv6-capable host. Azure Cloud Shell isn't IPv6 capable.
+
+    ```bash-interactive
+    SERVICE_IP=$(kubectl get services nginx -o jsonpath='{.status.loadBalancer.ingress[1].ip}')
+    curl -s "http://[${SERVICE_IP}]" | head -n5
+    ```
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+    <head>
+    <title>Welcome to nginx!</title>
+    <style>
+    ```
+
+### AKS older than v1.27
+
 1. Expose the NGINX deployment using the following YAML manifest.
 
     ```yml
````
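The steps above read the service's dual-stack IPs back with jsonpath and then `curl` the IPv6 address wrapped in square brackets. As an illustrative sketch (not part of this commit), the same handling can be done with Python's stdlib `ipaddress` module: selecting the IPv6 entry out of a `clusterIPs`-style list by address family rather than by position, and bracketing it for use in a URL.

```python
import ipaddress
import json


def pick_ipv6(ips):
    """Return the first IPv6 address in a list of IP strings, or None."""
    for ip in ips:
        if ipaddress.ip_address(ip).version == 6:
            return ip
    return None


def url_for(ip, port=80):
    """IPv6 literals must be bracketed in URLs; IPv4 addresses must not be."""
    host = f"[{ip}]" if ipaddress.ip_address(ip).version == 6 else ip
    return f"http://{host}:{port}/"


# Output shaped like: kubectl get services nginx -o jsonpath='{.spec.clusterIPs}'
cluster_ips = json.loads('["10.0.223.73","fd17:d93e:db1f:f771::54e"]')
print(pick_ipv6(cluster_ips))           # fd17:d93e:db1f:f771::54e
print(url_for(pick_ipv6(cluster_ips)))  # http://[fd17:d93e:db1f:f771::54e]:80/
```

Selecting by address family is more robust than hardcoding `.ingress[1]`, since nothing guarantees the IPv6 entry is always second in the list.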

articles/azure-arc/resource-bridge/upgrade.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,7 +1,7 @@
 ---
 title: Upgrade Arc resource bridge
 description: Learn how to upgrade Arc resource bridge using either cloud-managed upgrade or manual upgrade.
-ms.date: 11/27/2023
+ms.date: 12/07/2023
 ms.topic: how-to
 ---
```

```diff
@@ -73,7 +73,7 @@ Currently, private cloud providers differ in how they perform Arc resource bridg
 
 For Arc-enabled VMware vSphere, manual upgrade is available, but appliances on version 1.0.15 and higher automatically receive cloud-managed upgrade as the default experience. Appliances that are earlier than version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This deploys a new Arc resource bridge using the latest version and reconnects pre-existing Azure resources.
 
-Azure Arc VM management (preview) on Azure Stack HCI supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade. However, Azure Stack HCI, version 22H2 won't be supported for appliance version 1.0.15 or higher, because it's being deprecated. Customers on Azure Stack HCI, version 22H2 will receive limited support. To use appliance version 1.0.15 or higher, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
+For Azure Arc VM management (preview) on Azure Stack HCI, to use appliance version 1.0.15 or higher, you must be on Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq). Customers on Azure Stack HCI, version 22H2 will receive limited support.
 
 For Arc-enabled System Center Virtual Machine Manager (SCVMM), the manual upgrade feature is available for appliance version 1.0.14 and higher. Appliances below version 1.0.14 need to perform the recovery option to get to version 1.0.15 or higher. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources.
```
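The vSphere paragraph above notes that a manual upgrade only moves the appliance to the next version, never straight to the latest. A small sketch of what that single-step constraint implies (the version numbers and helper name here are illustrative, not from the Arc CLI):

```python
def manual_upgrade_path(current, versions):
    """Return the ordered sequence of single-step manual upgrades needed to
    reach the newest version in `versions`, starting from `current`.
    Each manual upgrade only advances the appliance to the next version."""
    ordered = sorted(versions, key=lambda v: tuple(map(int, v.split("."))))
    i = ordered.index(current)
    # One hop per intermediate release: current -> next -> ... -> latest
    return ordered[i + 1:]


# Illustrative appliance versions only
print(manual_upgrade_path("1.0.14", ["1.0.14", "1.0.15", "1.0.16"]))
# ['1.0.15', '1.0.16']
```

This is why the doc suggests the delete-and-recover route when many versions separate you from the latest release: one recovery replaces the whole chain of hops.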

articles/azure-functions/functions-bindings-event-hubs-output.md

Lines changed: 34 additions & 0 deletions
````diff
@@ -195,6 +195,27 @@ def eventhub_output(req: func.HttpRequest, event: func.Out[str]):
     return 'ok'
 ```
 
+Here's Python code that sends multiple messages:
+
+```python
+import azure.functions as func
+from typing import List
+
+app = func.FunctionApp()
+
+@app.function_name(name="eventhub_output")
+@app.route(route="eventhub_output")
+@app.event_hub_output(arg_name="event",
+                      event_hub_name="<EVENT_HUB_NAME>",
+                      connection="<CONNECTION_SETTING>")
+def eventhub_output(req: func.HttpRequest, event: func.Out[List[str]]) -> func.HttpResponse:
+    my_messages = ["message1", "message2", "message3"]
+    event.set(my_messages)
+    return func.HttpResponse("Messages sent")
+```
+
 # [v1](#tab/python-v1)
 
 The following examples show Event Hubs binding data in the *function.json* file.
````

````diff
@@ -223,6 +244,19 @@ def main(timer: func.TimerRequest) -> str:
     return 'Message created at: {}'.format(timestamp)
 ```
 
+Here's Python code that sends multiple messages:
+
+```python
+import logging
+from typing import List
+
+import azure.functions as func
+
+
+def main(req: func.HttpRequest, messages: func.Out[List[str]]) -> func.HttpResponse:
+    logging.info('Python HTTP trigger function processed a request.')
+    messages.set(["message1", "message2"])
+    return func.HttpResponse("Messages sent")
+```
+
 ---
 
 ::: zone-end
````
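The samples added above rely on `func.Out[List[str]]` accepting a list of strings, with each string published as a separate event. A minimal local stand-in (this collector class is hypothetical, not the Azure Functions runtime) illustrates the expected multi-message semantics without any Azure dependency:

```python
from typing import Generic, List, Optional, TypeVar

T = TypeVar("T")


class FakeOut(Generic[T]):
    """Stand-in for azure.functions.Out: stores whatever the function sets."""

    def __init__(self) -> None:
        self._value: Optional[T] = None

    def set(self, value: T) -> None:
        self._value = value

    def get(self) -> Optional[T]:
        return self._value


def send_messages(out: "FakeOut[List[str]]") -> str:
    # Each string in the list becomes one event on the hub
    out.set(["message1", "message2", "message3"])
    return "Messages sent"


out: FakeOut[List[str]] = FakeOut()
print(send_messages(out))  # Messages sent
print(out.get())           # ['message1', 'message2', 'message3']
```

This is also why the v1 sample must set a list of plain strings; wrapping each message in its own container changes what gets serialized onto the hub.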

articles/azure-netapp-files/configure-virtual-wan.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -24,7 +24,7 @@ This article will explain how to deploy and access an Azure NetApp Files volume
 
 ## Considerations
 
-* You should be familiar with network policies for Azure NetApp Files [private endpoints](../private-link/disable-private-endpoint-network-policy.md). Refer to [Route Azure NetApp Files traffic from on-premises via Azure Firewall](#route-azure-netapp-files-traffic-from-on-premises-via-azure-firewall) for further information.
+* Azure NetApp Files connectivity over Virtual WAN is supported only when using Standard network features. For more information, see [Supported network topologies](azure-netapp-files-network-topologies.md#supported-network-topologies).
 
 ## Before you begin
```

```diff
@@ -57,7 +57,7 @@ The following image of the Azure portal shows an example virtual hub of effectiv
 :::image type="content" source="../media/azure-netapp-files/effective-routes.png" alt-text="Screenshot of effective routes in Azure portal.":::
 
 > [!IMPORTANT]
-> Azure NetApp Files mount leverages Azure Private Endpoint. The specific IP address entry is required, even if a CIDR to which the Azure NetApp Files volume IP address belongs is pointing to the Azure Firewall as its next hop. For example, 10.2.0.5/32 should be listed even though 10.0.0.0/8 is listed with the Azure Firewall as the next hop.
+> Azure NetApp Files mounts use private IP addresses within a delegated [subnet](azure-netapp-files-network-topologies.md#subnets). The specific IP address entry is required, even if a CIDR block containing the Azure NetApp Files volume IP address already points to the Azure Firewall as its next hop. For example, 10.2.0.5/32 should be listed even though 10.0.0.0/8 is listed with the Azure Firewall as the next hop.
 
 ## List Azure NetApp Files volume IP under virtual hub effective routes
```
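The note above says the volume's /32 entry must appear even when a broader CIDR already routes to the firewall. A quick stdlib sketch (illustrative only; the helper function is not from any Azure SDK) confirms the containment relationship and shows the kind of check you might script against an exported effective-route list:

```python
import ipaddress

volume_ip = ipaddress.ip_address("10.2.0.5")
broad_route = ipaddress.ip_network("10.0.0.0/8")

# The volume IP is already covered by the broad CIDR...
print(volume_ip in broad_route)  # True


# ...but the effective routes still need the explicit /32 entry.
def has_specific_route(effective_routes, ip):
    """Check that the exact host route for `ip` is present."""
    return f"{ip}/32" in effective_routes


routes = ["10.0.0.0/8", "10.2.0.5/32"]  # example effective-route prefixes
print(has_specific_route(routes, volume_ip))  # True
```

In other words, containment in a supernet is not enough here; the portal's effective-route list must show the host route itself.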

articles/azure-resource-manager/bicep/key-vault-parameter.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -141,7 +141,7 @@ The following procedure shows how to create a role with the minimum permission,
 New-AzRoleDefinition -InputFile "<path-to-role-file>"
 New-AzRoleAssignment `
   -ResourceGroupName ExampleGroup `
-  -RoleDefinitionName "Key Vault resource manager template deployment operator" `
+  -RoleDefinitionName "Key Vault Bicep deployment operator" `
   -SignInName <user-principal-name>
 ```
````

0 commit comments