Commit 4e3ce46

Merge pull request #303867 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents e9e6e9d + 8cdfd41 commit 4e3ce46

4 files changed: +18 −5 lines

articles/app-service/includes/configure-azure-storage/azure-storage-linux-container-pivot.md

Lines changed: 1 addition & 0 deletions

@@ -225,6 +225,7 @@ To validate that the Azure Storage is mounted successfully for the app:
 ### Troubleshooting

 - The mount directory in the custom container should be empty. Any content stored at this path is deleted when the Azure Storage is mounted, if you specify a directory under */home*, for example. If you migrate files for an existing app, make a backup of the app and its content before you begin.
+- When mounting an NFS share, ensure that *Secure transfer required* is disabled on the storage account. App Service doesn't support mounting NFS shares when this setting is enabled, because NFS uses port 2049; virtual network integration and private endpoints serve as the security measures instead.
 - If you delete an Azure Storage account, container, or share, remove the corresponding storage mount configuration in the app to avoid possible error scenarios.
 - We don't recommend that you use storage mounts for local databases, such as SQLite, or for any other applications and components that rely on file handles and locks.
 - Ensure the following ports are open when using virtual network integration: Azure Files: 80 and 445. Azure Blobs: 80 and 443.
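The port requirements in the last bullet can be captured as a small lookup plus a best-effort reachability probe. This is a hypothetical sketch, not App Service tooling; the `port_reachable` helper and any host names you pass it are illustrative assumptions:

```python
import socket

# Ports the article says must be open when using virtual network
# integration (Azure Files: 80 and 445; Azure Blobs: 80 and 443).
REQUIRED_PORTS = {
    "files": [80, 445],
    "blobs": [80, 443],
}

def required_ports(storage_kind: str) -> list[int]:
    """Return the ports to verify for a given storage mount type."""
    try:
        return REQUIRED_PORTS[storage_kind]
    except KeyError:
        raise ValueError(f"unknown storage kind: {storage_kind!r}")

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Best-effort TCP probe (hypothetical helper for manual checks)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running `required_ports("files")` returns `[80, 445]`; you could then probe each port against your storage endpoint from inside the integrated virtual network.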

articles/azure-functions/opentelemetry-howto.md

Lines changed: 1 addition & 1 deletion

@@ -68,7 +68,7 @@ Create specific application settings in your function app based on the OpenTelem
 **[OTEL_EXPORTER_OTLP_HEADERS](functions-app-settings.md#otel_exporter_otlp_headers)**: (Optional) list of headers to apply to all outgoing data. This setting is used by many endpoints to pass an API key.

 ::: zone pivot="programming-language-python"
-**[PYTHON_ENABLE_OPENTELEMETRY](./functions-app-settings.md#python_applicationinsights_enable_telemetry)**: set to `true` so that the Functions host allows the Java worker process to stream OpenTelemetry logs directly, which prevents duplicate host-level entries.
+**[PYTHON_ENABLE_OPENTELEMETRY](./functions-app-settings.md#python_applicationinsights_enable_telemetry)**: set to `true` so that the Functions host allows the Python worker process to stream OpenTelemetry logs directly, which prevents duplicate host-level entries.
 ::: zone-end

 If your endpoint requires you to set other environment variables, you need to also add them to your application settings. For more information, see the [OTLP Exporter Configuration documentation](https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/).
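The `OTEL_EXPORTER_OTLP_HEADERS` value mentioned above is a comma-separated list of `key=value` pairs (per the OpenTelemetry SDK environment-variable spec, values may be percent-encoded). A minimal sketch of how an exporter might parse it; the function name is mine, not an SDK API:

```python
from urllib.parse import unquote

def parse_otlp_headers(raw: str) -> dict[str, str]:
    """Parse an OTEL_EXPORTER_OTLP_HEADERS string ('k1=v1,k2=v2')
    into a header dict, percent-decoding each value."""
    headers: dict[str, str] = {}
    for pair in raw.split(","):
        if not pair.strip():
            continue  # tolerate trailing commas
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"malformed header entry: {pair!r}")
        headers[key.strip()] = unquote(value.strip())
    return headers
```

For example, `parse_otlp_headers("api-key=abc123,x-tenant=contoso")` yields `{"api-key": "abc123", "x-tenant": "contoso"}`, which is the shape many OTLP endpoints expect for API-key auth.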

articles/operator-nexus/concepts-cluster-deployment-overview.md

Lines changed: 11 additions & 4 deletions

@@ -5,7 +5,7 @@ author: sbatchu
 ms.author: sbatchu
 ms.service: azure-operator-nexus
 ms.topic: conceptual
-ms.date: 06/07/2024
+ms.date: 08/05/2024
 ms.custom: template-concept
 ---

@@ -20,6 +20,13 @@ During the cluster deployment, cluster undergoes various lifecycle phases, which
 Hardware Validation is initiated during the cluster deployment process, assessing the state of hardware components for the machines provided through the Cluster's rack definition. Based on the results of these checks and any user-skipped machines, a determination is made on whether sufficient nodes passed and/or are available to meet the thresholds necessary for deployment to continue.

+> **Note:**
+> Hardware validation thresholds are enforced for various node types to ensure reliable cluster operation.
+> Management nodes are divided into two roles: Kubernetes Control Plane (KCP) nodes and Nexus Management Plane (NMP) nodes.
+> - **KCP nodes:** Must achieve a 100% hardware validation success rate, since they make up the control plane.
+> - **NMP nodes:** These are grouped into two management groups, with each group required to meet a 50% hardware validation success rate.
+> - **Compute nodes:** Must meet the thresholds specified by the deployment input.
+
 Hardware validation results for a given server are written into the Log Analytics Workspace (LAW), which is provided as part of the cluster creation. The results include the following categories:
 - system_info
 - drive_info
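The threshold rules in the note above can be sketched as a small decision function. This is a hypothetical illustration of the stated rules, not Operator Nexus code; the function name, the per-group pass counts, and the percentage input are my assumptions:

```python
def deployment_can_continue(
    kcp_passed: int,
    kcp_total: int,
    nmp_group_results: list[tuple[int, int]],  # (passed, total) per management group
    compute_passed: int,
    compute_total: int,
    compute_threshold_pct: float,  # threshold from the deployment input
) -> bool:
    """Apply the documented thresholds: KCP needs 100%, each NMP
    management group needs at least 50%, and compute nodes must meet
    the deployment-input threshold."""
    if kcp_passed < kcp_total:
        return False  # any failed KCP node blocks deployment
    for passed, total in nmp_group_results:
        if total and passed / total < 0.5:
            return False  # a management group fell below 50%
    if compute_total and (compute_passed / compute_total) * 100 < compute_threshold_pct:
        return False  # compute nodes below the requested threshold
    return True
```

For example, with all 3 KCP nodes passing, NMP groups at 2/4 and 3/4, and 8 of 10 compute nodes passing against an 80% threshold, deployment continues; a single failed KCP node blocks it.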
@@ -31,11 +38,11 @@ This article provides instructions on how to check hardware results information

 ### Bootstrap phase:

-Once the Hardware Validation is successful, bootstrap image is generated for cluster deploy action on the cluster manager. This image iso URL is used to bootstrap the ephemeral node, which would deploy the target cluster components, which are provisioning the kubernetes control plane (KCP), Nexus Management plane (NMP), and storage appliance. These various states are reflected in the cluster status, which these stages are executed as part of the ephemeral bootstrap workflow.
+Once the Hardware Validation is successful and the thresholds necessary for deployment to continue are met, a bootstrap image is generated for the cluster deploy action on the cluster manager. The image's ISO URL is used to bootstrap the ephemeral node, which deploys the target cluster components: the Kubernetes control plane (KCP), the Nexus Management Plane (NMP), and the storage appliance. These stages are executed as part of the ephemeral bootstrap workflow, and their states are reflected in the cluster status.

 The ephemeral bootstrap node sequentially provisions each KCP node, and if a KCP node fails to provision, the cluster deployment action fails, marking the cluster status as failed. The Bootstrap operator manages the provisioning process for bare-metal nodes using the PXE boot approach.

-After successful provisioning of KCP nodes, the deployment action proceeds to provision NMP nodes in parallel. If an NMP node fails to provision, the cluster deployment action fails, resulting in the cluster status being marked as failed.
+Once KCP nodes are successfully provisioned, the deployment action proceeds to provision NMP nodes in parallel. Each management group must achieve at least a 50% provisioning success rate. If this requirement is not met, the cluster deployment action fails, resulting in the cluster status being marked as failed.

 Upon successful provisioning of NMP nodes, up to two storage appliances are created before the deployment action proceeds with provisioning the compute nodes. Compute nodes are provisioned in parallel, and once the defined compute node threshold is met, the cluster status transitions from Deploying to Running. However, the remaining nodes continue undergoing the provisioning process until they too are successfully provisioned.

@@ -44,4 +51,4 @@ Upon successful provisioning of NMP nodes, up to two storage appliances are crea

 - **List cluster**: List cluster information in the provided resource group or subscription.
 - **Show cluster**: Get properties of the provided cluster.
-- **Update cluster**: Update properties or tags of the provided cluster.
+- **Update cluster**: Update properties or tags of the provided cluster.

articles/virtual-network/ip-services/public-ip-upgrade-vm.md

Lines changed: 5 additions & 0 deletions

@@ -108,6 +108,11 @@ There is no way to evaluate upgrading a Public IP without completing the action.

 Yes, the process of upgrading a Zonal Basic SKU Public IP to a Zonal Standard SKU Public IP is identical and works in the script.

+### If I specify a NIC associated with a public IP targeted for migration in the Application Gateway backend pool, will this script remove it from the pool?
+
+Yes, it will be removed. After running the script, you will need to manually reassign the NIC to the Application Gateway backend pool.
+Alternatively, you can avoid this issue by explicitly specifying the private IP address in the backend pool configuration before migration.
+
 ## Use Resource Graph to list VMs with Public IPs requiring upgrade

 ### Query to list virtual machines with Basic SKU public IP addresses
