Commit 377efda

Merge pull request #217060 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 92ff9b0 + eb5b71a commit 377efda

File tree: 12 files changed (+40, −43 lines)

articles/active-directory/governance/entitlement-management-organization.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -76,22 +76,22 @@ To add an external Azure AD directory or domain as a connected organization, fol
 
 1. Select the **Directory + domain** tab, and then select **Add directory + domain**.
 
-   The **Select directories + domains** pane opens.
+   Then **Select directories + domains** pane opens.
 
-1. In the search box, enter a domain name to search for the Azure AD directory or domain. Be sure to enter the entire domain name.
+1. In the search box, enter a domain name to search for the Azure AD directory or domain. You can also add domains that are not in Azure AD. Be sure to enter the entire domain name.
 
-1. Confirm that the organization name and authentication type are correct. User sign in, prior to being able to access the MyAccess portal, depends on the authentication type for their organization. If the authentication type for a connected organization is Azure AD, all users with an account in any verified domain of that Azure AD directory will sign into their directory, and then can request access to access packages that allow that connected organization. If the authentication type is One-time passcode, this allows users with email addresses from just that domain to visit the MyAccess portal. After they authenticate with the passcode, the user can make a request.
+1. Confirm that the organization name(s) and authentication type(s) are correct. User sign in, prior to being able to access the MyAccess portal, depends on the authentication type for their organization. If the authentication type for a connected organization is Azure AD, all users with an account in any verified domain of that Azure AD directory will sign into their directory, and then can request access to access packages that allow that connected organization. If the authentication type is One-time passcode, this allows users with email addresses from just that domain to visit the MyAccess portal. After they authenticate with the passcode, the user can make a request.
 
    ![The "Select directories + domains" pane](./media/entitlement-management-organization/organization-select-directories-domains.png)
 
   > [!NOTE]
   > Access from some domains could be blocked by the Azure AD business to business (B2B) allow or deny list. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
 
-1. Select **Add** to add the Azure AD directory or domain. Currently, you can add only one Azure AD directory or domain per connected organization.
+1. Select **Add** to add the Azure AD directory or domain. **You can add multiple Azure AD directories and domains**.
 
-1. After you've added the Azure AD directory or domain, select **Select**.
+1. After you've added the Azure AD directories or domains, select **Select**.
 
-   The organization appears in the list.
+   The organization(s) appears in the list.
 
   ![The "Directory + domain" pane](./media/entitlement-management-organization/organization-directory-domain.png)
```

articles/azure-monitor/app/statsbeat.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -9,7 +9,7 @@ ms.reviwer: heya
 
 # Statsbeat in Azure Application Insights
 
-Statsbeat collects essential and non-essential [custom metric](../essentials/metrics-custom-overview.md) about Application Insights SDKs and auto-instrumentation. Statsbeat serves three benefits for Azure Monitor Application insights customers:
+Statsbeat collects essential and non-essential [custom metric](../essentials/metrics-custom-overview.md) about Application Insights SDKs and auto-instrumentation. Statsbeat serves three benefits for Azure Monitor Application Insights customers:
 - Service Health and Reliability (outside-in monitoring of connectivity to ingestion endpoint)
 - Support Diagnostics (self-help insights and CSS insights)
 - Product Improvement (insights for design optimizations)
```

articles/azure-vmware/includes/vmware-software-versions.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -19,9 +19,9 @@ The VMware solution software versions used in new deployments of Azure VMware So
 | ESXi | 7.0 U3c |
 | vSAN | 7.0 U3c |
 | vSAN on-disk format | 10 |
-| HCX | 4.3.3 |
+| HCX | 4.4.2 |
 | NSX-T Data Center <br />**NOTE:** NSX-T Data Center is the only supported version of NSX Data Center. | [[!INCLUDE [nsxt-version](nsxt-version.md)]](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html) |
 
 
-The currently running software version is applied to new clusters added to an existing private cloud. For more information, see the [VMware software version requirements for HCX](https://docs.vmware.com/en/VMware-HCX/4.3/hcx-user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html) and [Understanding vSAN on-disk format versions and compatibility](https://kb.vmware.com/s/article/2148493).
+The currently running software version is applied to new clusters added to an existing private cloud. For more information, see the [VMware software version requirements for HCX](https://docs.vmware.com/en/VMware-HCX/4.4.2/rn/vmware-hcx-442-release-notes/index.html) and [Understanding vSAN on-disk format versions and compatibility](https://kb.vmware.com/s/article/2148493).
```

articles/container-registry/container-registry-transfer-troubleshooting.md

Lines changed: 8 additions & 8 deletions
```diff
@@ -9,36 +9,36 @@ ms.topic: article
 
 # ACR Transfer troubleshooting
 
-* **Template deployment failures or errors**
+## Template deployment failures or errors
   * If a pipeline run fails, look at the `pipelineRunErrorMessage` property of the run resource.
   * For common template deployment errors, see [Troubleshoot ARM template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md)
-* **Problems accessing Key Vault**<a name="problems-accessing-key-vault"></a>
+## Problems accessing Key Vault
   * If your pipelineRun deployment fails with a `403 Forbidden` error when accessing Azure Key Vault, verify that your pipeline managed identity has adequate permissions.
   * A pipelineRun uses the exportPipeline or importPipeline managed identity to fetch the SAS token secret from your Key Vault. ExportPipelines and importPipelines are provisioned with either a system-assigned or user-assigned managed identity. This managed identity is required to have `secret get` permissions on the Key Vault in order to read the SAS token secret. Ensure that an access policy for the managed identity was added to the Key Vault. For more information, reference [Give the ExportPipeline identity keyvault policy access](./container-registry-transfer-cli.md#give-the-exportpipeline-identity-keyvault-policy-access) and [Give the ImportPipeline identity keyvault policy access](./container-registry-transfer-cli.md#give-the-importpipeline-identity-keyvault-policy-access).
-* **Problems accessing storage**<a name="problems-accessing-storage"></a>
+## Problems accessing storage
   * If you see a `403 Forbidden` error from storage, you likely have a problem with your SAS token.
   * The SAS token might not currently be valid. The SAS token might be expired or the storage account keys might have changed since the SAS token was created. Verify that the SAS token is valid by attempting to use the SAS token to authenticate for access to the storage account container. For example, put an existing blob endpoint followed by the SAS token in the address bar of a new Microsoft Edge InPrivate window or upload a blob to the container with the SAS token by using `az storage blob upload`.
   * The SAS token might not have sufficient Allowed Resource Types. Verify that the SAS token has been given permissions to Service, Container, and Object under Allowed Resource Types (`srt=sco` in the SAS token).
   * The SAS token might not have sufficient permissions. For export pipelines, the required SAS token permissions are Read, Write, List, and Add. For import pipelines, the required SAS token permissions are Read, Delete, and List. (The Delete permission is required only if the import pipeline has the `DeleteSourceBlobOnSuccess` option enabled.)
   * The SAS token might not be configured to work with HTTPS only. Verify that the SAS token is configured to work with HTTPS only (`spr=https` in the SAS token).
-* **Problems with export or import of storage blobs**
+## Problems with export or import of storage blobs
   * SAS token may be invalid, or may have insufficient permissions for the specified export or import run. See [Problems accessing storage](#problems-accessing-storage).
   * Existing storage blob in source storage account might not be overwritten during multiple export runs. Confirm that the OverwriteBlob option is set in the export run and the SAS token has sufficient permissions.
   * Storage blob in target storage account might not be deleted after successful import run. Confirm that the DeleteBlobOnSuccess option is set in the import run and the SAS token has sufficient permissions.
   * Storage blob not created or deleted. Confirm that container specified in export or import run exists, or specified storage blob exists for manual import run.
-* **Problems with Source Trigger Imports**
+## Problems with Source Trigger Imports
   * The SAS token must have the List permission for Source Trigger imports to work.
   * Source Trigger imports will only fire if the Storage Blob has a Last Modified time within the last 60 days.
   * The Storage Blob must have a valid ContentMD5 property in order to be imported by the Source Trigger feature.
   * The Storage Blob must have the "category":"acr-transfer-blob" blob metadata in order to be imported by the Source Trigger feature. This metadata is added automatically during an Export Pipeline Run, but may be stripped when moved from storage account to storage account depending on the method of copy.
-* **AzCopy issues**
+## AzCopy issues
   * See [Troubleshoot AzCopy issues](../storage/common/storage-use-azcopy-configure.md).
-* **Artifacts transfer problems**
+## Artifacts transfer problems
   * Not all artifacts, or none, are transferred. Confirm spelling of artifacts in export run, and name of blob in export and import runs. Confirm you're transferring a maximum of 50 artifacts.
   * Pipeline run might not have completed. An export or import run can take some time.
   * For other pipeline issues, provide the deployment [correlation ID](../azure-resource-manager/templates/deployment-history.md) of the export run or import run to the Azure Container Registry team.
   * To create ACR Transfer resources such as `exportPipelines`, `importPipelines`, and `pipelineRuns`, the user must have at least Contributor access on the ACR subscription. Otherwise, they'll see authorization to perform the transfer denied or scope is invalid errors.
-* **Problems pulling the image in a physically isolated environment**
+## Problems pulling the image in a physically isolated environment
   * If you see errors regarding foreign layers or attempts to resolve mcr.microsoft.com when attempting to pull an image in a physically isolated environment, your image manifest likely has non-distributable layers. Due to the nature of a physically isolated environment, these images will often fail to pull. You can confirm that this is the case by checking the image manifest for any references to external registries. If so, you'll need to push the non-distributable layers to your public cloud ACR prior to deploying an export pipeline-run for that image. For guidance on how to do this, see [How do I push non-distributable layers to a registry?](./container-registry-faq.yml#how-do-i-push-non-distributable-layers-to-a-registry-)
 
 <!-- LINKS - External -->
```
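The SAS token requirements in the "Problems accessing storage" section of this diff (`srt=sco`, `spr=https`, and the per-pipeline `sp` permissions) can be summarized as a quick local check. This is an illustrative sketch, not an Azure SDK API: the function name is made up, and the base permission sets follow the rules quoted above (`rwla` for export, `rl` for import, with `d` additionally required only when `DeleteSourceBlobOnSuccess` is enabled).

```python
from urllib.parse import parse_qs

def check_transfer_sas(sas: str, pipeline: str) -> list:
    """Return a list of problems with a SAS token for an 'export' or 'import' pipeline."""
    params = {k: v[0] for k, v in parse_qs(sas.lstrip("?")).items()}
    problems = []
    # Allowed Resource Types must cover Service, Container, and Object (srt=sco).
    if not {"s", "c", "o"} <= set(params.get("srt", "")):
        problems.append("srt must include s, c, and o")
    # The token must be restricted to HTTPS only (spr=https).
    if params.get("spr") != "https":
        problems.append("spr must be https")
    # Export needs Read, Write, List, Add; import needs Read and List
    # (plus Delete when DeleteSourceBlobOnSuccess is enabled).
    required = {"export": set("rwla"), "import": set("rl")}[pipeline]
    missing = required - set(params.get("sp", ""))
    if missing:
        problems.append("sp is missing " + "".join(sorted(missing)))
    return problems

print(check_transfer_sas("sv=2021-06-08&srt=sco&sp=rwla&spr=https&sig=x", "export"))  # []
print(check_transfer_sas("sv=2021-06-08&srt=co&sp=rl&spr=https,http&sig=x", "import"))
```

Pasting a failing token's query string into a helper like this narrows the `403 Forbidden` cause before re-generating the token.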

articles/load-balancer/load-balancer-basic-upgrade-guidance.md

Lines changed: 0 additions & 2 deletions
```diff
@@ -34,8 +34,6 @@ This section lists out some key differences between these two Load Balancer SKUs
 | ---- | ---- | ---- |
 | **Backend type** | IP based, NIC based | NIC based |
 | **Protocol** | TCP, UDP | TCP, UDP |
-| **[Frontend IP configurations](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)** | Supports up to 600 configurations | Supports up to 200 configurations |
-| **[Backend pool size](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)** | Supports up to 1000 instances | Supports up to 300 instances |
 | **Backend pool endpoints** | Any virtual machines or virtual machine scale sets in a single virtual network | Virtual machines in a single availability set or virtual machine scale set |
 | **[Health probe types](load-balancer-custom-probe-overview.md#probe-types)** | TCP, HTTP, HTTPS | TCP, HTTP |
 | **[Health probe down behavior](load-balancer-custom-probe-overview.md#probe-down-behavior)** | TCP connections stay alive on an instance probe down and on all probes down | TCP connections stay alive on an instance probe down. All TCP connections end when all probes are down |
```

articles/load-balancer/load-balancer-faqs.yml

Lines changed: 5 additions & 0 deletions
```diff
@@ -86,6 +86,11 @@ sections:
       answer: |
         Follow these [instructions](../firewall/integrate-lb.md) to configure your load balancer with an Azure Firewall.
 
+  - question: |
+      Can I use my custom IP address prefix (BYOIP) with Azure Load Balancer?
+    answer: |
+      Yes, this scenario is supported. You will need to create a public IP prefix and public IP address from your custom IP address prefix before using it with your load balancer. To learn more, visit [Manage a custom IP address prefix](/azure/virtual-network/ip-services/manage-custom-ip-address-prefix).
+
   - question: |
       How do I configure my load balancer with an Azure SQL Server Always On availability group?
     answer: |
```
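The BYOIP answer added in this hunk can be sketched with the Azure CLI. All resource names below are placeholders, and the sketch assumes the custom IP prefix has already been onboarded and provisioned in the subscription; treat it as an outline rather than a tested procedure.

```shell
# Placeholder names throughout; assumes myCustomIpPrefix is an onboarded BYOIP range.
# 1. Carve a public IP prefix out of the custom IP prefix.
az network public-ip prefix create \
  --resource-group myResourceGroup \
  --name myPublicIpPrefix \
  --length 28 \
  --custom-ip-prefix-name myCustomIpPrefix

# 2. Allocate a public IP address from that prefix.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIp \
  --sku Standard \
  --public-ip-prefix myPublicIpPrefix

# 3. Use the address as a frontend on a Standard load balancer.
az network lb frontend-ip create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myFrontend \
  --public-ip-address myPublicIp
```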

articles/load-balancer/skus.md

Lines changed: 0 additions & 2 deletions
```diff
@@ -29,8 +29,6 @@ To compare and understand the differences between Basic and Standard SKU, see th
 | **Scenario** | Equipped for load-balancing network layer traffic when high performance and ultra-low latency is needed. Routes traffic within and across regions, and to availability zones for high resiliency. | Equipped for small-scale applications that don't need high availability or redundancy. Not compatible with availability zones. |
 | **Backend type** | IP based, NIC based | NIC based |
 | **Protocol** | TCP, UDP | TCP, UDP |
-| **[Frontend IP Configurations](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)** | Supports up to 600 configurations | Supports up to 200 configurations |
-| **[Backend pool size](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)** | Supports up to 5000 instances | Supports up to 300 instances |
 | **Backend pool endpoints** | Any virtual machines or virtual machine scale sets in a single virtual network | Virtual machines in a single availability set or virtual machine scale set |
 | **[Health probes](./load-balancer-custom-probe-overview.md#probe-types)** | TCP, HTTP, HTTPS | TCP, HTTP |
 | **[Health probe down behavior](./load-balancer-custom-probe-overview.md#probe-down-behavior)** | TCP connections stay alive on an instance probe down __and__ on all probes down. | TCP connections stay alive on an instance probe down. All TCP connections end when all probes are down. |
```

articles/mysql/flexible-server/overview.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -176,7 +176,7 @@ One advantage of running your workload in Azure is its global reach. The flexibl
 | Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
 | China East 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
 | China North 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
-| China North 3 |:heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| China North 3 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
 | East Asia (Hong Kong) | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
 | East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
 | East US 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
```

articles/openshift/howto-create-private-cluster-4x.md

Lines changed: 9 additions & 3 deletions
````diff
@@ -37,13 +37,19 @@ If you choose to install and use the CLI locally, this tutorial requires that yo
    az provider register -n Microsoft.RedHatOpenShift --wait
    ```
 
-1. Register the `Microsoft.Compute` resource provider:
+1. Register the `Microsoft.Compute` resource provider (if you haven't already):
 
    ```azurecli-interactive
    az provider register -n Microsoft.Compute --wait
    ```
 
-1. Register the `Microsoft.Storage` resource provider:
+1. Register the `Microsoft.Network` resource provider (if you haven't already):
+
+   ```azurecli-interactive
+   az provider register -n Microsoft.Network --wait
+   ```
+
+1. Register the `Microsoft.Storage` resource provider (if you haven't already):
 
    ```azurecli-interactive
    az provider register -n Microsoft.Storage --wait
@@ -190,7 +196,7 @@ After executing the `az aro create` command, it normally takes about 35 minutes
 >[!IMPORTANT]
 > If you choose to specify a custom domain, for example **foo.example.com**, the OpenShift console will be available at a URL such as `https://console-openshift-console.apps.foo.example.com`, instead of the built-in domain `https://console-openshift-console.apps.<random>.<location>.aroapp.io`.
 >
-> By default OpenShift uses self-signed certificates for all of the routes created on `*.apps.<random>.<location>.aroapp.io`. If you choose Custom DNS, after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom CA for your ingress controller](https://docs.openshift.com/container-platform/4.3/authentication/certificates/replacing-default-ingress-certificate.html) and [custom CA for your API server](https://docs.openshift.com/container-platform/4.3/authentication/certificates/api-server.html).
+> By default OpenShift uses self-signed certificates for all of the routes created on `*.apps.<random>.<location>.aroapp.io`. If you choose Custom DNS, after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom certificate for your ingress controller](https://docs.openshift.com/container-platform/4.8/security/certificates/replacing-default-ingress-certificate.html) and [custom certificate for your API server](https://docs.openshift.com/container-platform/4.8/security/certificates/api-server.html).
 
 ## Connect to the private cluster
````
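The registrations in the first hunk above can be verified before creating the cluster. This is an illustrative one-liner (it requires an authenticated Azure CLI session, so run it interactively):

```shell
# Print the registration state of each resource provider the tutorial registers;
# each line should read "Registered" before you run `az aro create`.
for rp in Microsoft.RedHatOpenShift Microsoft.Compute Microsoft.Network Microsoft.Storage; do
  echo "$rp: $(az provider show -n "$rp" --query registrationState -o tsv)"
done
```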

articles/openshift/support-lifecycle.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ ms.date: 06/16/2021
 
 # Support lifecycle for Azure Red Hat OpenShift 4
 
-Red Hat releases minor versions of Red Hat OpenShift Container Platform (OCP) roughly every three months. These releases include new features and improvements. Patch releases are more frequent (typically weekly) and are only intended for critical bug fixes within a minor version. These patch releases may include fixes for security vulnerabilities or major bugs.
+Red Hat releases minor versions of Red Hat OpenShift Container Platform (OCP) roughly every four months. These releases include new features and improvements. Patch releases are more frequent (typically weekly) and are only intended for critical bug fixes within a minor version. These patch releases may include fixes for security vulnerabilities or major bugs.
 
 Azure Red Hat OpenShift is built from specific releases of OCP. This article covers the versions of OCP that are supported for Azure Red Hat OpenShift and details about upgrades, deprecations, and support policy.
```
