articles/active-directory/governance/entitlement-management-organization.md (+6 -6)
@@ -76,22 +76,22 @@ To add an external Azure AD directory or domain as a connected organization, fol
1. Select the **Directory + domain** tab, and then select **Add directory + domain**.
- The **Select directories + domains** pane opens.
+ Then **Select directories + domains** pane opens.
- 1. In the search box, enter a domain name to search for the Azure AD directory or domain. Be sure to enter the entire domain name.
+ 1. In the search box, enter a domain name to search for the Azure AD directory or domain. You can also add domains that are not in Azure AD. Be sure to enter the entire domain name.
- 1. Confirm that the organization name and authentication type are correct. User sign in, prior to being able to access the MyAccess portal, depends on the authentication type for their organization. If the authentication type for a connected organization is Azure AD, all users with an account in any verified domain of that Azure AD directory will sign into their directory, and then can request access to access packages that allow that connected organization. If the authentication type is One-time passcode, this allows users with email addresses from just that domain to visit the MyAccess portal. After they authenticate with the passcode, the user can make a request.
+ 1. Confirm that the organization name(s) and authentication type(s) are correct. User sign in, prior to being able to access the MyAccess portal, depends on the authentication type for their organization. If the authentication type for a connected organization is Azure AD, all users with an account in any verified domain of that Azure AD directory will sign into their directory, and then can request access to access packages that allow that connected organization. If the authentication type is One-time passcode, this allows users with email addresses from just that domain to visit the MyAccess portal. After they authenticate with the passcode, the user can make a request.
> Access from some domains could be blocked by the Azure AD business to business (B2B) allow or deny list. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
- 1. Select **Add** to add the Azure AD directory or domain. Currently, you can add only one Azure AD directory or domain per connected organization.
+ 1. Select **Add** to add the Azure AD directory or domain. **You can add multiple Azure AD directories and domains**.
- 1. After you've added the Azure AD directory or domain, select **Select**.
+ 1. After you've added the Azure AD directories or domains, select **Select**.
articles/azure-monitor/app/statsbeat.md (+1 -1)
@@ -9,7 +9,7 @@ ms.reviwer: heya
# Statsbeat in Azure Application Insights

- Statsbeat collects essential and non-essential [custom metric](../essentials/metrics-custom-overview.md) about Application Insights SDKs and auto-instrumentation. Statsbeat serves three benefits for Azure Monitor Application insights customers:
+ Statsbeat collects essential and non-essential [custom metric](../essentials/metrics-custom-overview.md) about Application Insights SDKs and auto-instrumentation. Statsbeat serves three benefits for Azure Monitor Application Insights customers:
- Service Health and Reliability (outside-in monitoring of connectivity to ingestion endpoint)
- Support Diagnostics (self-help insights and CSS insights)
- Product Improvement (insights for design optimizations)
articles/azure-vmware/includes/vmware-software-versions.md (+2 -2)
@@ -19,9 +19,9 @@ The VMware solution software versions used in new deployments of Azure VMware So
| ESXi | 7.0 U3c |
| vSAN | 7.0 U3c |
| vSAN on-disk format | 10 |
- | HCX | 4.3.3|
+ | HCX | 4.4.2|
| NSX-T Data Center <br />**NOTE:** NSX-T Data Center is the only supported version of NSX Data Center. |[[!INCLUDE [nsxt-version](nsxt-version.md)]](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html)|

- The currently running software version is applied to new clusters added to an existing private cloud. For more information, see the [VMware software version requirements for HCX](https://docs.vmware.com/en/VMware-HCX/4.3/hcx-user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html) and [Understanding vSAN on-disk format versions and compatibility](https://kb.vmware.com/s/article/2148493).
+ The currently running software version is applied to new clusters added to an existing private cloud. For more information, see the [VMware software version requirements for HCX](https://docs.vmware.com/en/VMware-HCX/4.4.2/rn/vmware-hcx-442-release-notes/index.html) and [Understanding vSAN on-disk format versions and compatibility](https://kb.vmware.com/s/article/2148493).
articles/container-registry/container-registry-transfer-troubleshooting.md (+8 -8)
@@ -9,36 +9,36 @@ ms.topic: article
# ACR Transfer troubleshooting

- ***Template deployment failures or errors**
+ ## Template deployment failures or errors
* If a pipeline run fails, look at the `pipelineRunErrorMessage` property of the run resource.
* For common template deployment errors, see [Troubleshoot ARM template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md)
* If your pipelineRun deployment fails with a `403 Forbidden` error when accessing Azure Key Vault, verify that your pipeline managed identity has adequate permissions.
* A pipelineRun uses the exportPipeline or importPipeline managed identity to fetch the SAS token secret from your Key Vault. ExportPipelines and importPipelines are provisioned with either a system-assigned or user-assigned managed identity. This managed identity is required to have `secret get` permissions on the Key Vault in order to read the SAS token secret. Ensure that an access policy for the managed identity was added to the Key Vault. For more information, reference [Give the ExportPipeline identity keyvault policy access](./container-registry-transfer-cli.md#give-the-exportpipeline-identity-keyvault-policy-access) and [Give the ImportPipeline identity keyvault policy access](./container-registry-transfer-cli.md#give-the-importpipeline-identity-keyvault-policy-access).
* If you see a `403 Forbidden` error from storage, you likely have a problem with your SAS token.
* The SAS token might not currently be valid. The SAS token might be expired or the storage account keys might have changed since the SAS token was created. Verify that the SAS token is valid by attempting to use the SAS token to authenticate for access to the storage account container. For example, put an existing blob endpoint followed by the SAS token in the address bar of a new Microsoft Edge InPrivate window or upload a blob to the container with the SAS token by using `az storage blob upload`.
* The SAS token might not have sufficient Allowed Resource Types. Verify that the SAS token has been given permissions to Service, Container, and Object under Allowed Resource Types (`srt=sco` in the SAS token).
* The SAS token might not have sufficient permissions. For export pipelines, the required SAS token permissions are Read, Write, List, and Add. For import pipelines, the required SAS token permissions are Read, Delete, and List. (The Delete permission is required only if the import pipeline has the `DeleteSourceBlobOnSuccess` option enabled.)
* The SAS token might not be configured to work with HTTPS only. Verify that the SAS token is configured to work with HTTPS only (`spr=https` in the SAS token).
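The SAS requirements listed above (allowed resource types `sco`, export permissions Read/Write/List/Add, HTTPS only) can all be set when the token is generated. A minimal sketch using `az storage account generate-sas`, with hypothetical account and container names (`mytransferstorage`, `transfer`) standing in for your own:

```shell
# Hypothetical names; substitute your own storage account and container.
# Account-level SAS for an ACR Transfer *export* pipeline:
# blob service (b), resource types Service/Container/Object (sco),
# permissions Read/Write/List/Add (rwla), HTTPS only.
SAS=$(az storage account generate-sas \
  --account-name mytransferstorage \
  --services b \
  --resource-types sco \
  --permissions rwla \
  --expiry 2025-12-31T23:59Z \
  --https-only \
  --output tsv)

# Sanity check: if the token is valid, this upload succeeds.
echo "test" > test.txt
az storage blob upload \
  --account-name mytransferstorage \
  --container-name transfer \
  --name test.txt \
  --file test.txt \
  --sas-token "$SAS"
```

For an import pipeline, swap the permissions to Read/List (plus Delete, `rld`, if `DeleteSourceBlobOnSuccess` is enabled), matching the requirements above.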
- ***Problems with export or import of storage blobs**
+ ## Problems with export or import of storage blobs
* SAS token may be invalid, or may have insufficient permissions for the specified export or import run. See [Problems accessing storage](#problems-accessing-storage).
* Existing storage blob in source storage account might not be overwritten during multiple export runs. Confirm that the OverwriteBlob option is set in the export run and the SAS token has sufficient permissions.
* Storage blob in target storage account might not be deleted after successful import run. Confirm that the DeleteBlobOnSuccess option is set in the import run and the SAS token has sufficient permissions.
* Storage blob not created or deleted. Confirm that container specified in export or import run exists, or specified storage blob exists for manual import run.
- ***Problems with Source Trigger Imports**
+ ## Problems with Source Trigger Imports
* The SAS token must have the List permission for Source Trigger imports to work.
* Source Trigger imports will only fire if the Storage Blob has a Last Modified time within the last 60 days.
* The Storage Blob must have a valid ContentMD5 property in order to be imported by the Source Trigger feature.
* The Storage Blob must have the "category":"acr-transfer-blob" blob metadata in order to be imported by the Source Trigger feature. This metadata is added automatically during an Export Pipeline Run, but may be stripped when moved from storage account to storage account depending on the method of copy.
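If a copy between storage accounts stripped the required metadata, it can be restored by hand. A sketch with hypothetical account, container, and blob names (`mytransferstorage`, `transfer`, `myblob`):

```shell
# Hypothetical names; substitute your own. Restore the metadata that the
# Source Trigger feature requires on the blob.
az storage blob metadata update \
  --account-name mytransferstorage \
  --container-name transfer \
  --name myblob \
  --metadata category=acr-transfer-blob \
  --sas-token "$SAS_TOKEN"

# Confirm the blob also has a ContentMD5 property (empty output means
# the property is missing and the source trigger will skip the blob).
az storage blob show \
  --account-name mytransferstorage \
  --container-name transfer \
  --name myblob \
  --sas-token "$SAS_TOKEN" \
  --query properties.contentSettings.contentMd5
```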
- ***AzCopy issues**
+ ## AzCopy issues
* See [Troubleshoot AzCopy issues](../storage/common/storage-use-azcopy-configure.md).
- ***Artifacts transfer problems**
+ ## Artifacts transfer problems
* Not all artifacts, or none, are transferred. Confirm spelling of artifacts in export run, and name of blob in export and import runs. Confirm you're transferring a maximum of 50 artifacts.
* Pipeline run might not have completed. An export or import run can take some time.
* For other pipeline issues, provide the deployment [correlation ID](../azure-resource-manager/templates/deployment-history.md) of the export run or import run to the Azure Container Registry team.
* To create ACR Transfer resources such as `exportPipelines`, `importPipelines`, and `pipelineRuns`, the user must have at least Contributor access on the ACR subscription. Otherwise, they'll see errors stating that authorization to perform the transfer is denied or that the scope is invalid.
- ***Problems pulling the image in a physically isolated environment**
+ ## Problems pulling the image in a physically isolated environment
* If you see errors regarding foreign layers or attempts to resolve mcr.microsoft.com when attempting to pull an image in a physically isolated environment, your image manifest likely has non-distributable layers. Due to the nature of a physically isolated environment, these images will often fail to pull. You can confirm that this is the case by checking the image manifest for any references to external registries. If so, you'll need to push the non-distributable layers to your public cloud ACR prior to deploying an export pipeline-run for that image. For guidance on how to do this, see [How do I push non-distributable layers to a registry?](./container-registry-faq.yml#how-do-i-push-non-distributable-layers-to-a-registry-)
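One way to check a manifest for non-distributable layers, as described above, is to inspect it and look for `urls` entries pointing at an external registry. A sketch (the image name is an illustrative example; use your own):

```shell
# Foreign (non-distributable) layers appear in the manifest with a
# "urls" field pointing at an external registry such as mcr.microsoft.com.
# Non-empty output indicates the image has such layers.
docker manifest inspect mcr.microsoft.com/windows/servercore:ltsc2022 \
  | grep -A 2 '"urls"'
```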
articles/load-balancer/load-balancer-basic-upgrade-guidance.md (+0 -2)
@@ -34,8 +34,6 @@ This section lists out some key differences between these two Load Balancer SKUs
| ---- | ---- | ---- |
|**Backend type**| IP based, NIC based | NIC based |
|**Protocol**| TCP, UDP | TCP, UDP |
- |**[Frontend IP configurations](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)**| Supports up to 600 configurations | Supports up to 200 configurations |
- |**[Backend pool size](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)**| Supports up to 1000 instances | Supports up to 300 instances |
|**Backend pool endpoints**| Any virtual machines or virtual machine scale sets in a single virtual network | Virtual machines in a single availability set or virtual machine scale set |
|**[Health probe down behavior](load-balancer-custom-probe-overview.md#probe-down-behavior)**| TCP connections stay alive on an instance probe down and on all probes down | TCP connections stay alive on an instance probe down. All TCP connections end when all probes are down |
articles/load-balancer/load-balancer-faqs.yml (+5 -0)
@@ -86,6 +86,11 @@ sections:
    answer: |
      Follow these [instructions](../firewall/integrate-lb.md) to configure your load balancer with an Azure Firewall.

+  - question: |
+      Can I use my custom IP address prefix (BYOIP) with Azure Load Balancer?
+    answer: |
+      Yes, this scenario is supported. You will need to create a public IP prefix and public IP address from your custom IP address prefix before using it with your load balancer. To learn more, visit [Manage a custom IP address prefix](/azure/virtual-network/ip-services/manage-custom-ip-address-prefix).
+
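The BYOIP answer added above can be sketched in CLI form. All resource names below are hypothetical placeholders, and the custom IP prefix is assumed to be already onboarded and provisioned:

```shell
# Hypothetical names throughout. Derive a public IP prefix from the
# onboarded custom IP prefix (BYOIP), carve a public IP out of it, then
# use that IP as a load balancer frontend.
az network public-ip prefix create \
  --resource-group myResourceGroup \
  --name myPublicIpPrefix \
  --length 28 \
  --custom-ip-prefix-name myCustomIpPrefix

az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIp \
  --sku Standard \
  --public-ip-prefix myPublicIpPrefix

az network lb frontend-ip create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myFrontend \
  --public-ip-address myPublicIp
```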
  - question: |
      How do I configure my load balancer with an Azure SQL Server Always On availability group?
articles/load-balancer/skus.md (+0 -2)
@@ -29,8 +29,6 @@ To compare and understand the differences between Basic and Standard SKU, see th
|**Scenario**| Equipped for load-balancing network layer traffic when high performance and ultra-low latency is needed. Routes traffic within and across regions, and to availability zones for high resiliency. | Equipped for small-scale applications that don't need high availability or redundancy. Not compatible with availability zones. |
|**Backend type**| IP based, NIC based | NIC based |
|**Protocol**| TCP, UDP | TCP, UDP |
- |**[Frontend IP Configurations](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)**| Supports up to 600 configurations | Supports up to 200 configurations |
- |**[Backend pool size](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)**| Supports up to 5000 instances | Supports up to 300 instances |
|**Backend pool endpoints**| Any virtual machines or virtual machine scale sets in a single virtual network | Virtual machines in a single availability set or virtual machine scale set |
|**[Health probe down behavior](./load-balancer-custom-probe-overview.md#probe-down-behavior)**| TCP connections stay alive on an instance probe down __and__ on all probes down. | TCP connections stay alive on an instance probe down. All TCP connections end when all probes are down. |
articles/openshift/howto-create-private-cluster-4x.md (+9 -3)
@@ -37,13 +37,19 @@ If you choose to install and use the CLI locally, this tutorial requires that yo
   az provider register -n Microsoft.RedHatOpenShift --wait
   ```

- 1. Register the `Microsoft.Compute` resource provider:
+ 1. Register the `Microsoft.Compute` resource provider (if you haven't already):

   ```azurecli-interactive
   az provider register -n Microsoft.Compute --wait
   ```

- 1. Register the `Microsoft.Storage` resource provider:
+ 1. Register the `Microsoft.Network` resource provider (if you haven't already):
+
+    ```azurecli-interactive
+    az provider register -n Microsoft.Network --wait
+    ```
+
+ 1. Register the `Microsoft.Storage` resource provider (if you haven't already):

   ```azurecli-interactive
   az provider register -n Microsoft.Storage --wait
   ```
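After running the registration steps above, a quick check (a sketch, not part of the original procedure) confirms each provider actually reached the `Registered` state before cluster creation:

```shell
# Loop over the providers the tutorial registers and print their state;
# each line should read "Registered".
for provider in Microsoft.RedHatOpenShift Microsoft.Compute Microsoft.Network Microsoft.Storage; do
  az provider show -n "$provider" --query registrationState -o tsv
done
```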
@@ -190,7 +196,7 @@ After executing the `az aro create` command, it normally takes about 35 minutes
>[!IMPORTANT]
> If you choose to specify a custom domain, for example **foo.example.com**, the OpenShift console will be available at a URL such as `https://console-openshift-console.apps.foo.example.com`, instead of the built-in domain `https://console-openshift-console.apps.<random>.<location>.aroapp.io`.
>
- > By default OpenShift uses self-signed certificates for all of the routes created on `*.apps.<random>.<location>.aroapp.io`. If you choose Custom DNS, after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom CA for your ingress controller](https://docs.openshift.com/container-platform/4.3/authentication/certificates/replacing-default-ingress-certificate.html) and [custom CA for your API server](https://docs.openshift.com/container-platform/4.3/authentication/certificates/api-server.html).
+ > By default OpenShift uses self-signed certificates for all of the routes created on `*.apps.<random>.<location>.aroapp.io`. If you choose Custom DNS, after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom certificate for your ingress controller](https://docs.openshift.com/container-platform/4.8/security/certificates/replacing-default-ingress-certificate.html) and [custom certificate for your API server](https://docs.openshift.com/container-platform/4.8/security/certificates/api-server.html).
articles/openshift/support-lifecycle.md (+1 -1)
@@ -10,7 +10,7 @@ ms.date: 06/16/2021
# Support lifecycle for Azure Red Hat OpenShift 4

- Red Hat releases minor versions of Red Hat OpenShift Container Platform (OCP) roughly every three months. These releases include new features and improvements. Patch releases are more frequent (typically weekly) and are only intended for critical bug fixes within a minor version. These patch releases may include fixes for security vulnerabilities or major bugs.
+ Red Hat releases minor versions of Red Hat OpenShift Container Platform (OCP) roughly every four months. These releases include new features and improvements. Patch releases are more frequent (typically weekly) and are only intended for critical bug fixes within a minor version. These patch releases may include fixes for security vulnerabilities or major bugs.

Azure Red Hat OpenShift is built from specific releases of OCP. This article covers the versions of OCP that are supported for Azure Red Hat OpenShift and details about upgrades, deprecations, and support policy.