articles/azure-arc/kubernetes/extensions-release.md (+2 −3)

````diff
@@ -215,10 +215,9 @@ For more information, see [What is Edge Storage Accelerator?](../edge-storage-ac
 ## Connected registry on Arc-enabled Kubernetes
 
-**Supported distributions**: Connected registry for Arc-enabled Kubernetes clusters.
-**Supported Azure regions**: All regions where Azure Arc-enabled Kubernetes is available.
+**Supported distributions**: AKS enabled by Azure Arc, Kubernetes using kind.
 
-The connected registry extension for Azure Arc enables you to sync container images between your Azure Container Registry (ACR) and your local on-prem Azure Arc-enabled Kubernetes cluster. The extension is deployed to the local or remote cluster and uses a synchronization schedule and window to sync images between the on-prem connected registry and the cloud ACR registry.
+The connected registry extension for Azure Arc allows you to synchronize container images between your Azure Container Registry (ACR) and your on-premises Azure Arc-enabled Kubernetes cluster. This extension can be deployed to either a local or remote cluster and utilizes a synchronization schedule and window to ensure seamless syncing of images between the on-premises connected registry and the cloud-based ACR.
 
 For more information, see [Connected Registry for Arc-enabled Kubernetes clusters](../../container-registry/quickstart-connected-registry-arc-cli.md).
````
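For context on the rewritten paragraph above: the synchronization schedule and window it mentions are typically set when the connected registry resource is created, before the cluster extension is deployed. A hedged sketch of that flow follows; the registry, cluster, resource group, repository, and IP values are placeholders, and the flags reflect my reading of the `az acr connected-registry` and `az k8s-extension` command groups rather than anything stated in this diff.

```azurecli
# Create the connected registry resource in the cloud ACR, syncing daily at
# midnight (cron expression) within a four-hour window (ISO 8601 duration).
az acr connected-registry create \
  --registry myacr \
  --name myconnectedregistry \
  --repository "hello-world" \
  --sync-schedule "0 0 * * *" \
  --sync-window PT4H

# Deploy the connected registry extension to the Arc-enabled cluster.
az k8s-extension create \
  --cluster-name myarccluster \
  --cluster-type connectedClusters \
  --resource-group myresourcegroup \
  --name myconnectedregistry \
  --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
  --config service.clusterIP=192.100.100.1
```

The quickstart linked at the end of the hunk walks through the full flow, including the protected settings (connection string) the extension needs at deployment time.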
articles/databox/data-box-deploy-export-ordered.md (+3)

````diff
@@ -35,6 +35,9 @@ Complete the following configuration prerequisites for Data Box service and devi
 * Make sure that you have an existing resource group that you can use with your Azure Data Box.
 
 * Make sure that your Azure Storage account that you want to export data from is one of the supported Storage account types as described [Supported storage accounts for Data Box](data-box-system-requirements.md#supported-storage-accounts).
+
+> [!NOTE]
+> The Export functionality will not include Access Control List (ACL) or metadata regarding the files and folders. If you are exporting Azure Files data, you may consider using a tool such as Robocopy to apply ACLs to the target folders prior to import.
````
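The new note suggests Robocopy for reapplying ACLs that the export does not carry over. As a hedged illustration only (paths are placeholders): `/E` includes subdirectories, and `/SEC` copies files together with their NTFS security descriptors.

```cmd
robocopy C:\SourceWithAcls \\server\targetshare /E /SEC
```

If the file data is already in place and only the security descriptors need to be applied, the usual Robocopy variant is `/COPY:S` (security only) combined with `/IS /IT` so that files already present at the target are still processed.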
articles/import-export/storage-import-export-data-to-files.md (+1 −1)

````diff
@@ -41,7 +41,7 @@ Before you create an import job to transfer data into Azure Files, carefully rev
 
 ## Step 1: Prepare the drives
 
-This step generates a journal file. The journal file stores basic information such as drive serial number, encryption key, and storage account details.
+Attach the external disk to the file share and run WAImportExport.exe file. This step generates a journal file. The journal file stores basic information such as drive serial number, encryption key, and storage account details.
````
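The added sentence references WAImportExport.exe without showing an invocation. For orientation, a drive-preparation run with the v1 tool looks roughly like the following; every value is a placeholder, and the parameter set is my recollection of the tool rather than something this diff states, so consult the WAImportExport reference before relying on it.

```cmd
WAImportExport.exe PrepImport /j:JournalFile.jrn /id:session#1 ^
  /sk:<StorageAccountKey> /t:<MountedDriveLetter> ^
  /srcdir:C:\DataToImport /dstdir:myfileshare/
```

The journal file named by `/j:` is the artifact the changed paragraph goes on to describe.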
articles/openshift/create-cluster.md (+13 −3)

````diff
@@ -6,7 +6,7 @@ ms.author: johnmarc
 ms.topic: article
 ms.service: azure-redhat-openshift
 ms.custom: devx-track-azurecli
-ms.date: 06/12/2024
+ms.date: 09/13/2024
 #Customer intent: As a developer, I want learn how to create an Azure Red Hat OpenShift cluster, scale it, and then clean up resources so that I am not charged for what I'm not using.
 ---
@@ -105,7 +105,7 @@ If you provide a custom domain for your cluster, note the following points:
 
 * The OpenShift console will be available at a URL such as `https://console-openshift-console.apps.example.com`, instead of the built-in domain `https://console-openshift-console.apps.<random>.<location>.aroapp.io`.
 
-* By default, OpenShift uses self-signed certificates for all of the routes created on custom domains `*.apps.example.com`. If you choose to use custom DNS after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom CA for your ingress controller](https://docs.openshift.com/container-platform/4.6/security/certificates/replacing-default-ingress-certificate.html) and a [custom CA for your API server](https://docs.openshift.com/container-platform/4.6/security/certificates/api-server.html).
+* By default, OpenShift uses self-signed certificates for all of the routes created on custom domains `*.apps.example.com`. If you choose to use custom DNS after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom CA for your ingress controller](https://docs.openshift.com/container-platform/latest/security/certificates/replacing-default-ingress-certificate.html) and a [custom CA for your API server](https://docs.openshift.com/container-platform/latest/security/certificates/api-server.html).
 
 ### Create a virtual network containing two empty subnets
@@ -208,8 +208,14 @@ Run the following command to create a cluster. If you choose to use either of th
 * Optionally, you can [pass your Red Hat pull secret](#get-a-red-hat-pull-secret-optional), which enables your cluster to access Red Hat container registries along with other content. Add the `--pull-secret @pull-secret.txt` argument to your command.
 * Optionally, you can [use a custom domain](#prepare-a-custom-domain-for-your-cluster-optional). Add the `--domain foo.example.com` argument to your command, replacing `foo.example.com` with your own custom domain.
 
+<!--
 > [!NOTE]
 > If you're adding any optional arguments to your command, be sure to close the argument on the preceding line of the command with a trailing backslash.
+-->
+
+> [!NOTE]
+> The maximum number of worker nodes definable at creation time is 50. You can scale out up to 250 nodes after the cluster is created.
 
 ```azurecli-interactive
 az aro create \
@@ -220,7 +226,11 @@ az aro create \
 --worker-subnet worker-subnet
 ```
 
-After executing the `az aro create` command, it normally takes about 35 minutes to create a cluster.
+After executing the `az aro create` command, it normally takes about 45 minutes to create a cluster.
+
+#### Large scale ARO clusters
+
+If you are looking to deploy an Azure Red Hat OpenShift cluster with more than 100 worker nodes please see the [Deploy a large Azure Red Hat OpenShift cluster](howto-large-clusters.md)
````
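Given the longer creation time quoted in the updated paragraph (about 45 minutes), it can be more convenient to poll the provisioning state than to watch the blocking `az aro create` call. A small sketch, with placeholder resource names:

```azurecli
az aro show \
  --resource-group aro-rg \
  --name aro-cluster \
  --query provisioningState \
  --output tsv
```

This prints `Succeeded` once the cluster is fully provisioned, and `Creating` while provisioning is still in progress.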
articles/service-bus-messaging/service-bus-integrate-with-rabbitmq.md (+1 −1)

````diff
@@ -17,7 +17,7 @@ Here's a few scenarios in which we can make use of these capabilities:
 -**Hybrid Cloud**: Your company just acquired a third party that uses RabbitMQ for their messaging needs. They are on a different cloud. While they transition to Azure you can already start sharing data by bridging RabbitMQ with Azure Service Bus.
 -**Third-Party Integration**: A third party uses RabbitMQ as a broker, and wants to send their data to us, but they are outside our organization. We can provide them with SAS Key giving them access to a limited set of Azure Service Bus queues where they can forward their messages to.
 
-The list goes on, but we can solve most of these use cases by bridging RabbitMQ to Azure.
+The list goes on, but we can solve most of these use cases by [bridging](/azure/architecture/patterns/messaging-bridge) RabbitMQ to Azure.
 
 First you need to create a free Azure account by signing up [here](https://azure.microsoft.com/free/)
````
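As I understand it, the article this hunk touches implements the bridging via RabbitMQ's Shovel plugin, which can forward messages from a RabbitMQ queue to an Azure Service Bus endpoint. Enabling the plugin on a host where the RabbitMQ CLI tools are installed is a one-liner:

```shell
rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management
```

The management companion plugin adds a Shovel status page to the RabbitMQ management UI, which is handy for verifying the bridge is running.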