articles/app-service/includes/configure-azure-storage/azure-storage-linux-container-pivot.md (1 addition & 0 deletions)
@@ -225,6 +225,7 @@ To validate that the Azure Storage is mounted successfully for the app:
### Troubleshooting
- The mount directory in the custom container should be empty. If you specify a directory under */home*, for example, any content stored at that path is deleted when the Azure Storage is mounted. If you migrate files for an existing app, make a backup of the app and its content before you begin.
+- When mounting an NFS share, ensure that Secure Transfer Required is disabled on the storage account. App Service doesn't support mounting NFS shares when this setting is enabled. NFS uses port 2049 and relies on virtual network integration and private endpoints as the security measure.
- If you delete an Azure Storage account, container, or share, remove the corresponding storage mount configuration in the app to avoid possible error scenarios.
- We don't recommend that you use storage mounts for local databases, such as SQLite, or for any other applications and components that rely on file handles and locks.
- Ensure the following ports are open when using virtual network integration: Azure Files: 80 and 445. Azure Blobs: 80 and 443.
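The new NFS bullet implies a setup step: Secure Transfer Required has to be off before the mount is configured. A minimal Azure CLI sketch, assuming the `az storage account update` route and placeholder resource names:

```bash
# Placeholder names; substitute your own resource group and storage account.
RG="my-rg"
ACCOUNT="mystorageaccount"

# NFS mounts fail while "Secure transfer required" is enabled, so
# disable HTTPS-only transfer on the storage account first.
az storage account update \
  --resource-group "$RG" \
  --name "$ACCOUNT" \
  --https-only false
```

With secure transfer off, the account should be protected through virtual network integration and private endpoints instead, as the bullet notes.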
articles/azure-functions/opentelemetry-howto.md (1 addition & 1 deletion)
@@ -68,7 +68,7 @@ Create specific application settings in your function app based on the OpenTelem
**[OTEL_EXPORTER_OTLP_HEADERS](functions-app-settings.md#otel_exporter_otlp_headers)**: (Optional) list of headers to apply to all outgoing data. This setting is used by many endpoints to pass an API key.
::: zone pivot="programming-language-python"
-**[PYTHON_ENABLE_OPENTELEMETRY](./functions-app-settings.md#python_applicationinsights_enable_telemetry)**: set to `true` so that the Functions host allows the Java worker process to stream OpenTelemetry logs directly, which prevents duplicate host-level entries.
+**[PYTHON_ENABLE_OPENTELEMETRY](./functions-app-settings.md#python_applicationinsights_enable_telemetry)**: set to `true` so that the Functions host allows the Python worker process to stream OpenTelemetry logs directly, which prevents duplicate host-level entries.
::: zone-end
If your endpoint requires you to set other environment variables, you need to also add them to your application settings. For more information, see the [OTLP Exporter Configuration documentation](https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/).
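These variables are plain application settings on the function app. A minimal Azure CLI sketch of the Python case, where the app name, resource group, endpoint, and API key are placeholder assumptions:

```bash
# Placeholder names and endpoint; substitute your function app,
# resource group, and your OTLP collector's endpoint and API key.
az functionapp config appsettings set \
  --name my-func-app \
  --resource-group my-rg \
  --settings \
    OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.example.com:4318" \
    OTEL_EXPORTER_OTLP_HEADERS="api-key=<your-key>" \
    PYTHON_ENABLE_OPENTELEMETRY=true
```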
articles/operator-nexus/concepts-cluster-deployment-overview.md (11 additions & 4 deletions)
@@ -5,7 +5,7 @@ author: sbatchu
ms.author: sbatchu
ms.service: azure-operator-nexus
ms.topic: conceptual
-ms.date: 06/07/2024
+ms.date: 08/05/2024
ms.custom: template-concept
---
@@ -20,6 +20,13 @@ During the cluster deployment, cluster undergoes various lifecycle phases, which
Hardware Validation is initiated during the cluster deployment process, assessing the state of hardware components for the machines provided through the Cluster's rack definition. Based on the results of these checks and any user-skipped machines, a determination is made on whether sufficient nodes passed and/or are available to meet the thresholds necessary for deployment to continue.
+> **Note:**
+> Hardware validation thresholds are enforced for various node types to ensure reliable cluster operation:
+> Management nodes are divided into two roles: Kubernetes Control Plane (KCP) nodes and Nexus Management Plane (NMP) nodes.
+> - **KCP nodes:** Must achieve a 100% hardware validation success rate since they make up the control plane.
+> - **NMP nodes:** These are grouped into two management groups, with each group required to meet a 50% hardware validation success rate.
+> - **Compute nodes:** Must meet the thresholds specified by the deployment input.
+
Hardware validation results for a given server are written into the Log Analytics Workspace (LAW), which is provided as part of the cluster creation. The results include the following categories:
- system_info
- drive_info
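A sketch of pulling those results from the workspace with the Azure CLI; the table and column names below are assumptions for illustration, not confirmed by this article:

```bash
# The workspace GUID is a placeholder; the table and column names are
# assumptions -- inspect your Log Analytics workspace for the real ones.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "
    HardwareValidationResult_CL
    | where TimeGenerated > ago(1d)
    | project TimeGenerated, ServerName_s, Category_s, Result_s
  "
```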
@@ -31,11 +38,11 @@ This article provides instructions on how to check hardware results information
### Bootstrap phase:
-Once the Hardware Validation is successful, bootstrap image is generated for cluster deploy action on the cluster manager. This image iso URL is used to bootstrap the ephemeral node, which would deploy the target cluster components, which are provisioning the kubernetes control plane (KCP), Nexus Management plane (NMP), and storage appliance. These various states are reflected in the cluster status, which these stages are executed as part of the ephemeral bootstrap workflow.
+Once the Hardware Validation is successful and the thresholds necessary for deployment to continue are met, a bootstrap image is generated for the cluster deploy action on the cluster manager. The image's ISO URL is used to bootstrap the ephemeral node, which deploys the target cluster components: the Kubernetes control plane (KCP), the Nexus Management Plane (NMP), and the storage appliance. These stages are executed as part of the ephemeral bootstrap workflow, and each state is reflected in the cluster status.
The ephemeral bootstrap node sequentially provisions each KCP node, and if a KCP node fails to provision, the cluster deployment action fails, marking the cluster status as failed. The Bootstrap operator manages the provisioning process for bare-metal nodes using the PXE boot approach.
-After successful provisioning of KCP nodes, the deployment action proceeds to provision NMP nodes in parallel. If an NMP node fails to provision, the cluster deployment action fails, resulting in the cluster status being marked as failed.
+Once KCP nodes are successfully provisioned, the deployment action proceeds to provision NMP nodes in parallel. Each management group must achieve at least a 50% provisioning success rate. If this requirement is not met, the cluster deployment action fails, resulting in the cluster status being marked as failed.
Upon successful provisioning of NMP nodes, up to two storage appliances are created before the deployment action proceeds with provisioning the compute nodes. Compute nodes are provisioned in parallel, and once the defined compute node threshold is met, the cluster status transitions from Deploying to Running. However, the remaining nodes continue undergoing the provisioning process until they too are successfully provisioned.
@@ -44,4 +51,4 @@ Upon successful provisioning of NMP nodes, up to two storage appliances are crea
-**List cluster**: List cluster information in the provided resource group or subscription.
-**Show cluster**: Get properties of the provided cluster.
--**Update cluster**: Update properties or tags of the provided cluster.
+-**Update cluster**: Update properties or tags of the provided cluster.
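These operations correspond to the `az networkcloud` CLI extension; a minimal sketch with placeholder names:

```bash
# Requires the networkcloud extension; all names are placeholders.
az extension add --name networkcloud

# List clusters in a resource group (omit --resource-group to list
# across the subscription).
az networkcloud cluster list --resource-group my-rg

# Show properties of a single cluster.
az networkcloud cluster show --name my-cluster --resource-group my-rg

# Update tags on the cluster.
az networkcloud cluster update \
  --name my-cluster --resource-group my-rg \
  --tags env=lab owner=ops
```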
articles/virtual-network/ip-services/public-ip-upgrade-vm.md (5 additions & 0 deletions)
@@ -108,6 +108,11 @@ There is no way to evaluate upgrading a Public IP without completing the action.
Yes, the process of upgrading a Zonal Basic SKU Public IP to a Zonal Standard SKU Public IP is identical and works in the script.
+### If I specify a NIC associated with a public IP targeted for migration in the Application Gateway backend pool, will this script remove it from the pool?
+
+Yes, it will be removed. After running the script, you will need to manually reassign the NIC to the Application Gateway backend pool.
+Alternatively, you can avoid this issue by explicitly specifying the private IP address in the backend pool configuration before migration.
+
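The manual reassignment called out in the new answer can be scripted; a minimal sketch using `az network nic ip-config address-pool add`, with every name a placeholder:

```bash
# Placeholder names; substitute your NIC, its IP configuration,
# the Application Gateway, and its backend pool.
az network nic ip-config address-pool add \
  --resource-group my-rg \
  --nic-name my-vm-nic \
  --ip-config-name ipconfig1 \
  --gateway-name my-appgw \
  --address-pool appGatewayBackendPool
```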
## Use Resource Graph to list VMs with Public IPs requiring upgrade
### Query to list virtual machines with Basic SKU public IP addresses
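The query body itself isn't shown in this hunk; an approximate Resource Graph sketch via the Azure CLI, not the article's exact query:

```bash
# Requires the resource-graph extension.
az extension add --name resource-graph

# Approximation: Basic-SKU public IPs attached to NIC IP configurations.
az graph query -q "
Resources
| where type =~ 'microsoft.network/publicipaddresses'
| where sku.name =~ 'Basic'
| where tostring(properties.ipConfiguration.id) contains '/networkInterfaces/'
| project name, resourceGroup, ipAddress = tostring(properties.ipAddress)
"
```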