Commit d904e21

Merge pull request #286812 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 043d423 + a055309 commit d904e21

File tree

6 files changed: +22 −10 lines changed

articles/azure-arc/kubernetes/azure-rbac.md

Lines changed: 2 additions & 2 deletions
@@ -86,10 +86,10 @@ For a conceptual overview of this feature, see [Azure RBAC on Azure Arc-enabled
 1. Add the following specification under `volumes`:

    ```yml
-   - name: azure-rbac
-     hostPath:
+   - hostPath:
       path: /etc/guard
       type: Directory
+     name: azure-rbac
    ```

 1. Add the following specification under `volumeMounts`:
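
This hunk only reorders the keys of the `volumes` entry (the `hostPath` mapping now comes before `name`), so the resulting manifest is equivalent. As a quick sanity check after making the edit, something like the following should print the volume; this is a sketch that assumes a kubeadm-style cluster where the API server runs as a static pod in `kube-system` with the `component=kube-apiserver` label:

```console
# Sketch: confirm the azure-rbac hostPath volume is present on the apiserver.
# Assumes a kubeadm-style static-pod apiserver in kube-system with the
# component=kube-apiserver label; adjust the selector for other setups.
kubectl get pods -n kube-system -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.volumes[?(@.name=="azure-rbac")]}'
```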

articles/azure-arc/kubernetes/extensions-release.md

Lines changed: 2 additions & 3 deletions
@@ -215,10 +215,9 @@ For more information, see [What is Edge Storage Accelerator?](../edge-storage-ac
 ## Connected registry on Arc-enabled Kubernetes

-- **Supported distributions**: Connected registry for Arc-enabled Kubernetes clusters.
-- **Supported Azure regions**: All regions where Azure Arc-enabled Kubernetes is available.
+- **Supported distributions**: AKS enabled by Azure Arc, Kubernetes using kind.

-The connected registry extension for Azure Arc enables you to sync container images between your Azure Container Registry (ACR) and your local on-prem Azure Arc-enabled Kubernetes cluster. The extension is deployed to the local or remote cluster and uses a synchronization schedule and window to sync images between the on-prem connected registry and the cloud ACR registry.
+The connected registry extension for Azure Arc allows you to synchronize container images between your Azure Container Registry (ACR) and your on-premises Azure Arc-enabled Kubernetes cluster. This extension can be deployed to either a local or remote cluster and utilizes a synchronization schedule and window to ensure seamless syncing of images between the on-premises connected registry and the cloud-based ACR.

 For more information, see [Connected Registry for Arc-enabled Kubernetes clusters](../../container-registry/quickstart-connected-registry-arc-cli.md).
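
For orientation, extensions like this one are installed through the Arc cluster-extensions CLI. The command below is a sketch only: the `--extension-type` string and all resource names are assumptions to verify against the linked quickstart, which also covers the required `--config` settings:

```azurecli
# Sketch: install the connected registry extension on an Arc-enabled cluster.
# The extension-type value and resource names are assumptions; the linked
# quickstart has the exact values and required --config keys.
az k8s-extension create \
  --name myconnectedregistry \
  --cluster-name myarccluster \
  --resource-group myresourcegroup \
  --cluster-type connectedClusters \
  --extension-type Microsoft.ContainerRegistry.ConnectedRegistry
```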

articles/databox/data-box-deploy-export-ordered.md

Lines changed: 3 additions & 0 deletions
@@ -35,6 +35,9 @@ Complete the following configuration prerequisites for Data Box service and devi
 * Make sure that you have an existing resource group that you can use with your Azure Data Box.

 * Make sure that your Azure Storage account that you want to export data from is one of the supported Storage account types as described [Supported storage accounts for Data Box](data-box-system-requirements.md#supported-storage-accounts).
+
+> [!NOTE]
+> The Export functionality will not include Access Control List (ACL) or metadata regarding the files and folders. If you are exporting Azure Files data, you may consider using a tool such as Robocopy to apply ACLs to the target folders prior to import.
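
To make the Robocopy suggestion in the new note concrete, a run along these lines could reapply NTFS ACLs from the original share onto the exported copy; the paths are placeholders, and the flags should be verified against your Robocopy version:

```console
rem Sketch: reapply ACLs from the original share to the imported target.
rem /E recurses including empty folders, /SEC copies file security,
rem /SECFIX repairs security even on files that are otherwise skipped.
robocopy \\source-server\share \\target-server\share /E /SEC /SECFIX
```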

### For device

articles/import-export/storage-import-export-data-to-files.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ Before you create an import job to transfer data into Azure Files, carefully rev
 ## Step 1: Prepare the drives

-This step generates a journal file. The journal file stores basic information such as drive serial number, encryption key, and storage account details.
+Attach the external disk to the file share and run WAImportExport.exe file. This step generates a journal file. The journal file stores basic information such as drive serial number, encryption key, and storage account details.

 Do the following steps to prepare the drives.
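
As a rough illustration of the added sentence, a preparation run looks something like the sketch below. Every value is a placeholder and the exact flags vary by WAImportExport version, so treat the article's own steps as authoritative:

```console
rem Sketch: prepare drive D: and generate the journal file for an import job.
rem All values are placeholders; flags differ across WAImportExport versions.
WAImportExport.exe PrepImport /j:MyJournal.jrn /id:session#1 ^
    /sk:<StorageAccountKey> /t:D /srcdir:C:\DataToImport /dstdir:myfileshare/
```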

articles/openshift/create-cluster.md

Lines changed: 13 additions & 3 deletions
@@ -6,7 +6,7 @@ ms.author: johnmarc
 ms.topic: article
 ms.service: azure-redhat-openshift
 ms.custom: devx-track-azurecli
-ms.date: 06/12/2024
+ms.date: 09/13/2024
 #Customer intent: As a developer, I want learn how to create an Azure Red Hat OpenShift cluster, scale it, and then clean up resources so that I am not charged for what I'm not using.
 ---

@@ -105,7 +105,7 @@ If you provide a custom domain for your cluster, note the following points:
 * The OpenShift console will be available at a URL such as `https://console-openshift-console.apps.example.com`, instead of the built-in domain `https://console-openshift-console.apps.<random>.<location>.aroapp.io`.

-* By default, OpenShift uses self-signed certificates for all of the routes created on custom domains `*.apps.example.com`. If you choose to use custom DNS after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom CA for your ingress controller](https://docs.openshift.com/container-platform/4.6/security/certificates/replacing-default-ingress-certificate.html) and a [custom CA for your API server](https://docs.openshift.com/container-platform/4.6/security/certificates/api-server.html).
+* By default, OpenShift uses self-signed certificates for all of the routes created on custom domains `*.apps.example.com`. If you choose to use custom DNS after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom CA for your ingress controller](https://docs.openshift.com/container-platform/latest/security/certificates/replacing-default-ingress-certificate.html) and a [custom CA for your API server](https://docs.openshift.com/container-platform/latest/security/certificates/api-server.html).
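
The two linked OpenShift procedures boil down to storing the custom certificate as a TLS secret and pointing the relevant operator at it. For the ingress controller, that is roughly the following; the secret and file names are placeholders:

```console
# Sketch of the linked ingress procedure: store the certificate as a TLS
# secret, then point the default IngressController at it. Names are placeholders.
oc create secret tls custom-apps-cert \
    --cert=fullchain.pem --key=privkey.pem -n openshift-ingress
oc patch ingresscontroller.operator default -n openshift-ingress-operator \
    --type=merge -p '{"spec":{"defaultCertificate":{"name":"custom-apps-cert"}}}'
```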
### Create a virtual network containing two empty subnets
@@ -208,8 +208,14 @@ Run the following command to create a cluster. If you choose to use either of th
 * Optionally, you can [pass your Red Hat pull secret](#get-a-red-hat-pull-secret-optional), which enables your cluster to access Red Hat container registries along with other content. Add the `--pull-secret @pull-secret.txt` argument to your command.
 * Optionally, you can [use a custom domain](#prepare-a-custom-domain-for-your-cluster-optional). Add the `--domain foo.example.com` argument to your command, replacing `foo.example.com` with your own custom domain.

+<!--
 > [!NOTE]
 > If you're adding any optional arguments to your command, be sure to close the argument on the preceding line of the command with a trailing backslash.
+-->
+
+> [!NOTE]
+> The maximum number of worker nodes definable at creation time is 50. You can scale out up to 250 nodes after the cluster is created.

 ```azurecli-interactive
 az aro create \
@@ -220,7 +226,11 @@ az aro create \
   --worker-subnet worker-subnet
 ```

-After executing the `az aro create` command, it normally takes about 35 minutes to create a cluster.
+After executing the `az aro create` command, it normally takes about 45 minutes to create a cluster.
+
+#### Large scale ARO clusters
+
+If you are looking to deploy an Azure Red Hat OpenShift cluster with more than 100 worker nodes please see the [Deploy a large Azure Red Hat OpenShift cluster](howto-large-clusters.md)
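
For context on the scale-out path mentioned in the new note, worker capacity on an existing cluster is adjusted through OpenShift machine sets rather than `az aro create`. A sketch, with the machine set name as a placeholder:

```console
# Sketch: scale workers on an existing cluster via machine sets.
# List the machine sets, then raise the replica count on one of them.
oc get machinesets -n openshift-machine-api
oc scale machineset <machineset-name> --replicas=10 -n openshift-machine-api
```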

#### Selecting a different ARO version

articles/service-bus-messaging/service-bus-integrate-with-rabbitmq.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ Here's a few scenarios in which we can make use of these capabilities:
 - **Hybrid Cloud**: Your company just acquired a third party that uses RabbitMQ for their messaging needs. They are on a different cloud. While they transition to Azure you can already start sharing data by bridging RabbitMQ with Azure Service Bus.
 - **Third-Party Integration**: A third party uses RabbitMQ as a broker, and wants to send their data to us, but they are outside our organization. We can provide them with SAS Key giving them access to a limited set of Azure Service Bus queues where they can forward their messages to.

-The list goes on, but we can solve most of these use cases by bridging RabbitMQ to Azure.
+The list goes on, but we can solve most of these use cases by [bridging](/azure/architecture/patterns/messaging-bridge) RabbitMQ to Azure.

 First you need to create a free Azure account by signing up [here](https://azure.microsoft.com/free/)
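
The bridging this article builds is typically a RabbitMQ Shovel forwarding to Service Bus over AMQP 1.0. As a rough sketch, with every name, credential, and namespace a placeholder:

```console
# Sketch: bridge a local RabbitMQ queue to an Azure Service Bus queue using
# the Shovel plugin over AMQP 1.0. All names and credentials are placeholders.
rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management
rabbitmqctl set_parameter shovel to-azure '{
  "src-protocol": "amqp091", "src-uri": "amqp://", "src-queue": "local-queue",
  "dest-protocol": "amqp10",
  "dest-uri": "amqps://policy-name:key@mynamespace.servicebus.windows.net:5671",
  "dest-address": "azure-queue"}'
```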
