Commit 51f9454

Merge pull request #205383 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents f821198 + 8e7dff3 commit 51f9454

15 files changed (+100, -44 lines)

articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md

Lines changed: 14 additions & 0 deletions
@@ -399,6 +399,20 @@ Another MFA-related error message is the one described previously: "Your credent
![Screenshot of the message that says your credentials didn't work.](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)

If you've configured a legacy per-user **Enabled/Enforced Azure AD Multi-Factor Authentication** setting and you see the error above, you can resolve the problem by removing the per-user MFA setting through these commands:
```powershell
# Get StrongAuthenticationRequirements configured on a user
(Get-MsolUser -UserPrincipalName [email protected]).StrongAuthenticationRequirements

# Clear StrongAuthenticationRequirements from a user
$mfa = @()
Set-MsolUser -UserPrincipalName [email protected] -StrongAuthenticationRequirements $mfa

# Verify StrongAuthenticationRequirements are cleared from the user
(Get-MsolUser -UserPrincipalName [email protected]).StrongAuthenticationRequirements
```

If you haven't deployed Windows Hello for Business and if that isn't an option for now, you can configure a Conditional Access policy that excludes the Azure Windows VM Sign-In app from the list of cloud apps that require MFA. To learn more about Windows Hello for Business, see [Windows Hello for Business overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification).
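As a rough illustration of the exclusion approach, the following sketch builds a Conditional Access policy body in the Microsoft Graph policy format. The app ID below is a placeholder (look up the actual Azure Windows VM Sign-In app in your tenant), and the helper function name is ours, not part of any SDK:

```python
# Placeholder -- replace with the actual "Azure Windows VM Sign-In" app ID from your tenant.
VM_SIGN_IN_APP_ID = "00000000-0000-0000-0000-000000000000"


def build_mfa_policy_excluding_vm_sign_in():
    """Sketch of a Conditional Access policy body (Microsoft Graph format) that
    requires MFA for all cloud apps except the VM sign-in app."""
    return {
        "displayName": "Require MFA (exclude Azure Windows VM Sign-In)",
        # Start in report-only mode so you can evaluate the impact first.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {
                "includeApplications": ["All"],
                "excludeApplications": [VM_SIGN_IN_APP_ID],
            },
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }
```

The payload would then be sent to the Conditional Access policies endpoint in Microsoft Graph; verify the schema against the current Graph documentation before using it.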

> [!NOTE]

articles/active-directory/fundamentals/5-secure-access-b2b.md

Lines changed: 3 additions & 1 deletion
@@ -84,6 +84,8 @@ Some organizations use a list of known ‘bad actor’ domains provided by their
You can control both inbound and outbound access using Cross Tenant Access Settings. In addition, you can trust MFA, Compliant device, and hybrid Azure Active Directory joined device (HAADJ) claims from all or a subset of external Azure AD tenants. When you configure an organization-specific policy, it applies to the entire Azure AD tenant and covers all users from that tenant, regardless of the user's domain suffix.
You can enable collaboration across Microsoft clouds such as Microsoft Azure China 21Vianet or Microsoft Azure Government with additional configuration. Determine if any of your collaboration partners reside in a different Microsoft cloud. If so, you should [enable collaboration with these partners using Cross Tenant Access Settings](/azure/active-directory/external-identities/cross-cloud-settings).
If you wish to allow inbound access to only specific tenants (allowlist), you can set the default policy to block access and then create organization policies to granularly allow access on a per user, group, and application basis.
If you wish to block access to specific tenants (blocklist), you can set the default policy as allow and then create organization policies that block access to those specific tenants.
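As a sketch of what a tenant-specific blocklist entry might look like, the following builds the kind of JSON body used with the Microsoft Graph cross-tenant access policy partner configuration. The exact schema here is an assumption from memory and the helper name is hypothetical; check the current Graph reference before relying on it:

```python
def build_partner_inbound_block(partner_tenant_id: str) -> dict:
    """Sketch of a partner-specific cross-tenant access configuration that
    blocks inbound B2B collaboration for all users and applications."""
    return {
        "tenantId": partner_tenant_id,
        "b2bCollaborationInbound": {
            "usersAndGroups": {
                "accessType": "blocked",
                "targets": [{"target": "AllUsers", "targetType": "user"}],
            },
            "applications": {
                "accessType": "blocked",
                "targets": [{"target": "AllApplications", "targetType": "application"}],
            },
        },
    }
```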
@@ -254,4 +256,4 @@ See the following articles on securing external access to resources. We recommen
8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)

articles/application-gateway/configuration-http-settings.md

Lines changed: 6 additions & 0 deletions
@@ -46,6 +46,12 @@ This setting combined with HTTPS in the listener supports [end-to-end TLS](ssl-o
This setting specifies the port where the back-end servers listen to traffic from the application gateway. You can configure ports ranging from 1 to 65535.
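As a trivial illustration of that constraint, here is a client-side validation sketch (a hypothetical helper, not part of any Azure SDK):

```python
def is_valid_backend_port(port: int) -> bool:
    """Application Gateway back-end ports must fall in the range 1-65535."""
    return 1 <= port <= 65535
```

For example, `is_valid_backend_port(443)` is `True`, while `is_valid_backend_port(0)` is `False`.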
## Trusted root certificate
If you select HTTPS as the back-end protocol, the Application Gateway requires a trusted root certificate to trust the back-end pool for end-to-end SSL. By default, the **Use well known CA certificate** option is set to **No**. If you plan to use a self-signed certificate, or a certificate signed by an internal Certificate Authority, you must provide the Application Gateway with the matching public certificate that the back-end pool will be using. This certificate must be uploaded directly to the Application Gateway in .CER format.
If you plan to use a certificate on the back-end pool that is signed by a trusted public Certificate Authority, then you can set the **Use well known CA certificate** option to **Yes** and skip uploading a public certificate.
## Request timeout
This setting is the number of seconds that the application gateway waits to receive a response from the back-end server.

articles/application-gateway/overview-v2.md

Lines changed: 1 addition & 0 deletions
@@ -85,6 +85,7 @@ The following table compares the features available with each SKU.
| WebSocket support | ✓ | ✓ |
| HTTP/2 support | ✓ | ✓ |
| Connection draining | ✓ | ✓ |
| Proxy NTLM authentication | ✓ | |

> [!NOTE]
> The autoscaling v2 SKU now supports [default health probes](application-gateway-probe-overview.md#default-health-probe) to automatically monitor the health of all resources in its back-end pool and highlight those backend members that are considered unhealthy. The default health probe is automatically configured for backends that don't have any custom probe configuration. To learn more, see [health probes in application gateway](application-gateway-probe-overview.md).

articles/application-gateway/private-link-configure.md

Lines changed: 4 additions & 0 deletions
@@ -51,6 +51,7 @@ The Private link configuration defines the infrastructure used by Application Ga
- **Frontend IP Configuration**: The frontend IP address that private link should forward traffic to on Application Gateway.
- **Private IP address settings**: specify at least one IP address
1. Select **Add**.
1. Within your **Application Gateways** properties blade, obtain and make a note of the **Resource ID**. You will need it if you set up a Private Endpoint within a different Azure AD tenant.

**Configure Private Endpoint**

@@ -67,6 +68,9 @@ A private endpoint is a network interface that uses a private IP address from th
> [!Note]
> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively using the respective frontend IP configuration. Frontend IP configurations without an associated listener will not be shown as a _Target sub-resource_.
> [!Note]
> If you are setting up the **Private Endpoint** from within another tenant, you will need to use the Azure Application Gateway Resource ID, along with a sub-resource of either _appGwPublicFrontendIp_ or _appGwPrivateFrontendIp_, depending on your Azure Application Gateway Private Link frontend IP configuration.
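For reference, the connection properties for such a cross-tenant private endpoint can be sketched as a plain dictionary. The `groupIds` values (`appGwPublicFrontendIp` / `appGwPrivateFrontendIp`) come from the note above; the helper and connection name are hypothetical:

```python
def build_private_link_connection(appgw_resource_id: str, frontend: str = "private") -> dict:
    """Sketch of privateLinkServiceConnection properties for a private endpoint
    that targets an Application Gateway by resource ID (useful when portal
    discovery is unavailable, such as from another tenant)."""
    group_id = "appGwPrivateFrontendIp" if frontend == "private" else "appGwPublicFrontendIp"
    return {
        "name": "appgw-connection",  # hypothetical connection name
        "properties": {
            "privateLinkServiceId": appgw_resource_id,
            "groupIds": [group_id],
        },
    }
```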
# [Azure PowerShell](#tab/powershell)

To configure Private Link on an existing Application Gateway via Azure PowerShell, use the following commands:

articles/application-gateway/quick-create-cli.md

Lines changed: 2 additions & 1 deletion
@@ -153,7 +153,8 @@ az network application-gateway create \
--public-ip-address myAGPublicIPAddress \
--vnet-name myVNet \
--subnet myAGSubnet \
--servers "$address1" "$address2" \
--priority 100
```

It can take up to 30 minutes for Azure to create the application gateway. After it's created, you can view the following settings in the **Settings** section of the **Application gateway** page:

articles/healthcare-apis/fhir/using-rest-client.md

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ The line starting with `@name` contains a variable that captures the HTTP respon
```
### Get access token
# @name getAADToken
POST https://login.microsoftonline.com/{{tenantid}}/oauth2/token
Content-Type: application/x-www-form-urlencoded

articles/machine-learning/how-to-configure-network-isolation-with-v2.md

Lines changed: 6 additions & 2 deletions
@@ -100,11 +100,15 @@ ws.update(v1_legacy_mode=false)

# [Azure CLI extension v1](#tab/azurecliextensionv1)

The Azure CLI [extension v1 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml(v1)/workspace#az-ml(v1)-workspace-update) command. To disable the parameter for a workspace, add the parameter `--v1-legacy-mode False`.

> [!IMPORTANT]
> The `v1-legacy-mode` parameter is only available in version 1.41.0 or newer of the Azure CLI extension for machine learning v1 (`azure-cli-ml`). Use the `az version` command to view version information.
```azurecli
az ml workspace update -g <myresourcegroup> -w <myworkspace> --v1-legacy-mode False
```
The return value of the `az ml workspace update` command may not show the updated value. To view the current state of the parameter, use the following command:

```azurecli
@@ -116,4 +120,4 @@ az ml workspace show -g <myresourcegroup> -w <myworkspace> --query v1LegacyMode
## Next steps

* [Use a private endpoint with Azure Machine Learning workspace](how-to-configure-private-link.md).
* [Create private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).

articles/machine-learning/how-to-network-security-overview.md

Lines changed: 1 addition & 1 deletion
@@ -121,7 +121,7 @@ In this section, you learn how to secure the training environment in Azure Machi
To secure the training environment, use the following steps:

1. Create an Azure Machine Learning [compute instance and compute cluster in the virtual network](how-to-secure-training-vnet.md#compute-cluster) to run the training job.
1. If your compute cluster or compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.

> [!TIP]
> Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.

articles/machine-learning/how-to-train-distributed-gpu.md

Lines changed: 34 additions & 23 deletions
@@ -255,55 +255,66 @@ run = Experiment(ws, 'experiment_name').submit(run_config)
[PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/) is a lightweight open-source library that provides a high-level interface for PyTorch. Lightning abstracts away many of the lower-level distributed training configurations required for vanilla PyTorch. Lightning allows you to run your training scripts in single GPU, single-node multi-GPU, and multi-node multi-GPU settings. Behind the scenes, it launches multiple processes for you, similar to `torch.distributed.launch`.

For single-node training (including single-node multi-GPU), you can run your code on Azure ML without needing to specify a `distributed_job_config`. To run an experiment using multiple nodes with multiple GPUs, there are two options:

- Using PyTorch configuration (recommended): Define `PyTorchConfiguration` and specify `communication_backend="Nccl"`, `node_count`, and `process_count` (note that this is the total number of processes, that is, `num_nodes * process_count_per_node`). In the Lightning Trainer module, specify both `num_nodes` and `gpus` to be consistent with `PyTorchConfiguration`. For example, `num_nodes = node_count` and `gpus = process_count_per_node`.

- Using MPI configuration:

  - Define `MpiConfiguration` and specify both `node_count` and `process_count_per_node`. In the Lightning Trainer, specify `num_nodes` and `gpus` to be respectively the same as `node_count` and `process_count_per_node` from `MpiConfiguration`.
  - For multi-node training with MPI, Lightning requires the following environment variables to be set on each node of your training cluster:
    - MASTER_ADDR
    - MASTER_PORT
    - NODE_RANK
    - LOCAL_RANK

  Manually set these environment variables that Lightning requires in the main training script:

  ```python
  import os
  from argparse import ArgumentParser

  from pytorch_lightning import Trainer


  def set_environment_variables_for_mpi(num_nodes, gpus_per_node, master_port=54965):
      if num_nodes > 1:
          os.environ["MASTER_ADDR"], os.environ["MASTER_PORT"] = os.environ["AZ_BATCH_MASTER_NODE"].split(":")
      else:
          os.environ["MASTER_ADDR"] = os.environ["AZ_BATCHAI_MPI_MASTER_NODE"]
          os.environ["MASTER_PORT"] = str(master_port)

      try:
          os.environ["NODE_RANK"] = str(int(os.environ.get("OMPI_COMM_WORLD_RANK")) // gpus_per_node)
          # additional variables
          os.environ["MASTER_ADDRESS"] = os.environ["MASTER_ADDR"]
          os.environ["LOCAL_RANK"] = os.environ["OMPI_COMM_WORLD_LOCAL_RANK"]
          os.environ["WORLD_SIZE"] = os.environ["OMPI_COMM_WORLD_SIZE"]
      except (KeyError, TypeError):
          # fails when used with PyTorch configuration instead of MPI
          pass


  if __name__ == "__main__":
      parser = ArgumentParser()
      parser.add_argument("--num_nodes", type=int, required=True)
      parser.add_argument("--gpus_per_node", type=int, required=True)
      args = parser.parse_args()
      set_environment_variables_for_mpi(args.num_nodes, args.gpus_per_node)

      trainer = Trainer(
          num_nodes=args.num_nodes,
          gpus=args.gpus_per_node
      )
  ```

  Lightning handles computing the world size from the Trainer flags `--gpus` and `--num_nodes`:

  ```python
  from azureml.core import ScriptRunConfig, Experiment
  from azureml.core.runconfig import MpiConfiguration

  nnodes = 2
  gpus_per_node = 4
  args = ['--max_epochs', 50, '--gpus_per_node', gpus_per_node, '--accelerator', 'ddp', '--num_nodes', nnodes]
  distr_config = MpiConfiguration(node_count=nnodes, process_count_per_node=gpus_per_node)

  run_config = ScriptRunConfig(
      source_directory='./src',