
Commit 44e7c2b

committed
Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into mar15-metarefresh-twins
2 parents: 0fd0441 + d6a8a1e

File tree

99 files changed: +807 / -527 lines


articles/active-directory-b2c/custom-policies-series-call-rest-api.md

Lines changed: 7 additions & 1 deletion

@@ -10,7 +10,7 @@ ms.service: active-directory
 ms.workload: identity
 ms.topic: how-to
 ms.custom: b2c-docs-improvements
-ms.date: 01/30/2023
+ms.date: 03/16/2023
 ms.author: kengaderdus
 ms.reviewer: yoelh
 ms.subservice: B2C
@@ -302,6 +302,12 @@ Then, update the *Metadata*, *InputClaimsTransformations*, and *InputClaims* of
 </InputClaims>
 ```
 
+## Receive data from REST API
+
+If your REST API returns data that you want to include as claims in your policy, you can receive it by specifying claims in the `OutputClaims` element of the RESTful technical profile. If the name of a claim defined in your policy is different from the name defined in the REST API, you need to map these names by using the `PartnerClaimType` attribute.
+
+Use the steps in [Receiving data](api-connectors-overview.md?pivots=b2c-custom-policy#receiving-data) to learn how to format the data the custom policy expects, how to handle null values, and how to parse the REST API's nested JSON body.
+
 ## Next steps
 
 Next, learn:
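The `PartnerClaimType` mapping this commit describes can be sketched as follows. This is a minimal hypothetical fragment, not part of the commit; the claim names `email`, `userBalance`, and `balance` are illustrative assumptions:

```xml
<!-- Hypothetical sketch of an OutputClaims element in a RESTful technical profile. -->
<OutputClaims>
  <!-- The policy claim name matches the API response field, so no mapping is needed. -->
  <OutputClaim ClaimTypeReferenceId="email" />
  <!-- The names differ: PartnerClaimType maps the API's "balance" field
       into the policy's "userBalance" claim. -->
  <OutputClaim ClaimTypeReferenceId="userBalance" PartnerClaimType="balance" />
</OutputClaims>
```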

articles/active-directory-b2c/custom-policies-series-hello-world.md

Lines changed: 2 additions & 2 deletions

@@ -10,7 +10,7 @@ ms.service: active-directory
 ms.workload: identity
 ms.topic: how-to
 ms.custom: b2c-docs-improvements
-ms.date: 01/30/2023
+ms.date: 03/16/2023
 ms.author: kengaderdus
 ms.reviewer: yoelh
 ms.subservice: B2C
@@ -280,7 +280,7 @@ After the policy finishes execution, you're redirected to `https://jwt.ms`, and
 }.[Signature]
 ```
 
-Notice the `message` and `sub` claims, which we set as output claims](relyingparty.md#outputclaims) in the `RelyingParty` section.
+Notice the `message` and `sub` claims, which we set as [output claims](relyingparty.md#outputclaims) in the `RelyingParty` section.
 
 ## Next steps

articles/active-directory-domain-services/troubleshoot-alerts.md

Lines changed: 14 additions & 1 deletion

@@ -10,7 +10,7 @@ ms.service: active-directory
 ms.subservice: domain-services
 ms.workload: identity
 ms.topic: troubleshooting
-ms.date: 03/02/2023
+ms.date: 03/15/2023
 ms.author: justinha
 
 ---
@@ -314,6 +314,19 @@ When the managed domain is enabled again, the managed domain's health automatica
 
 [Check the Azure AD DS health](check-health.md) for alerts that indicate problems in the configuration of the managed domain. If you're able to resolve alerts that indicate a configuration issue, wait two hours and check back to see if the synchronization has completed. When ready, [open an Azure support request][azure-support] to re-enable the managed domain.
 
+## AADDS600: Unresolved health alerts for 30 days
+
+### Alert Message
+
+*Microsoft can’t manage the domain controllers for this managed domain due to unresolved health alerts \[IDs\]. This is blocking critical security updates as well as a planned migration to Windows Server 2019 for these domain controllers. Follow steps in the alert to resolve the issue. Failure to resolve this issue within 30 days will result in suspension of the managed domain.*
+
+### Resolution
+
+> [!WARNING]
+> If a managed domain is suspended for an extended period of time, there's a danger of it being deleted. Resolve the reason for suspension as quickly as possible. For more information, see [Understand the suspended states for Azure AD DS](suspension.md).
+
+[Check the Azure AD DS health](check-health.md) for alerts that indicate problems in the configuration of the managed domain. If you're able to resolve alerts that indicate a configuration issue, wait six hours and check back to see if the alert is removed. [Open an Azure support request][azure-support] if you need assistance.
+
 ## Next steps
 
 If you still have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance.

articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md

Lines changed: 1 addition & 1 deletion

@@ -74,7 +74,7 @@ To create a new Spring Boot project:
 
 > [!NOTE]
 > * If you need to support an older version of Spring Boot see our [old appconfiguration library](https://github.com/Azure/azure-sdk-for-java/blob/spring-cloud-starter-azure-appconfiguration-config_1.2.9/sdk/appconfiguration/spring-cloud-starter-azure-appconfiguration-config/README.md) and our [old feature flag library](https://github.com/Azure/azure-sdk-for-java/blob/spring-cloud-starter-azure-appconfiguration-config_1.2.9/sdk/appconfiguration/spring-cloud-azure-feature-management/README.md).
-> * There is a non-web Feature Management Library that doesn't have a dependency on spring-web. Refer to GitHub's [documentation](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/appconfiguration/azure-spring-cloud-feature-management) for differences.
+> * There is a non-web Feature Management Library that doesn't have a dependency on spring-web. Refer to GitHub's [documentation](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-feature-management) for differences.
 
 ## Connect to an App Configuration store
articles/azure-arc/data/create-data-controller-indirect-cli.md

Lines changed: 12 additions & 13 deletions

@@ -1,6 +1,6 @@
 ---
 title: Create data controller using CLI
-description: Create an Azure Arc data controller, on a typical multi-node Kubernetes cluster which you already have created, using the CLI.
+description: Create an Azure Arc data controller, on a typical multi-node Kubernetes cluster that you already have created, using the CLI.
 services: azure-arc
 ms.service: azure-arc
 ms.subservice: azure-arc-data
@@ -20,12 +20,11 @@ Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure
 
 ### Install tools
 
-To create the data controller using the CLI, you will need to install the `arcdata` extension for Azure (az) CLI.
+Before you begin, install the `arcdata` extension for Azure (az) CLI.
 
 [Install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](install-client-tools.md)
 
-Regardless of which target platform you choose, you will need to set the following environment variables prior to the creation for the data controller. These environment variables will become the credentials used for accessing the metrics and logs dashboards after data controller creation.
-
+Regardless of which target platform you choose, you need to set the following environment variables prior to the creation for the data controller. These environment variables become the credentials used for accessing the metrics and logs dashboards after data controller creation.
 
 ### Set environment variables
 
@@ -58,7 +57,7 @@ $ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>"
 
 ### Connect to Kubernetes cluster
 
-You will need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server.
+Connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server.
 
 You can check to see that you have a current Kubernetes connection and confirm your current context with the following commands.
 
@@ -86,7 +85,7 @@ The following sections provide instructions for specific types of Kubernetes pla
 
 ## Create on Azure Kubernetes Service (AKS)
 
-By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class will only work if you have VMs that were deployed using VM images that have premium disks.
+By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class only works if you have VMs that were deployed using VM images that have premium disks.
 
 If you are going to use `managed-premium` as your storage class, then you can run the following command to create the data controller. Substitute the placeholders in the command with your resource group name, subscription ID, and Azure location.
 
@@ -162,7 +161,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#
 
 ### Determine storage class
 
-You will also need to determine which storage class to use by running the following command.
+To determine which storage class to use, run the following command.
 
 ```console
 kubectl get storageclass
@@ -204,10 +203,10 @@ az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.
 Now you are ready to create the data controller using the following command.
 
 > [!NOTE]
-> The `--path` parameter should point to the _directory_ containing the control.json file not to the control.json file itself.
+> The `--path` parameter should point to the _directory_ containing the control.json file not to the control.json file itself.
 
 > [!NOTE]
-> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
 
 ```azurecli
 az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
@@ -222,7 +221,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#
 
 By default, the kubeadm deployment profile uses a storage class called `local-storage` and service type `NodePort`. If this is acceptable you can skip the instructions below that set the desired storage class and service type and immediately run the `az arcdata dc create` command below.
 
-If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command will create a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
+If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command creates a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
 
 ```azurecli
 az arcdata dc config init --source azure-arc-kubeadm --path ./custom
@@ -248,13 +247,13 @@ az arcdata dc config replace --path ./custom/control.json --json-values "spec.st
 By default, the kubeadm deployment profile uses `NodePort` as the service type. If you are using a Kubernetes cluster that is integrated with a load balancer, you can change the configuration using the following command.
 
 ```azurecli
-az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer" --k8s-namespace <namespace> --use-k8s
+az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer"
 ```
 
 Now you are ready to create the data controller using the following command.
 
 > [!NOTE]
-> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
 
 ```azurecli
 az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
@@ -297,7 +296,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#
 
 ## Monitor the creation status
 
-Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
+It takes a few minutes to create the controller completely. You can monitor the progress in another terminal window with the following commands:
 
 > [!NOTE]
 > The example commands below assume that you created a data controller named `arc-dc` and Kubernetes namespace named `arc`. If you used different values update the script accordingly.
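The `--json-values "$.spec.services[*].serviceType=LoadBalancer"` argument in the kubeadm hunks above uses JSONPath to select and overwrite fields in control.json. A minimal Python sketch of the equivalent edit follows; the `control` structure is a simplified assumption, not the full deployment profile:

```python
import json

# Simplified stand-in for ./custom/control.json; the real profile has many more fields.
control = {
    "spec": {
        "services": [
            {"name": "controller", "serviceType": "NodePort"},
            {"name": "metricsui", "serviceType": "NodePort"},
        ]
    }
}

# Rough equivalent of:
#   az arcdata dc config replace --json-values "$.spec.services[*].serviceType=LoadBalancer"
# "$.spec.services[*]" selects every entry of spec.services;
# the assignment sets serviceType on each selected object.
for service in control["spec"]["services"]:
    service["serviceType"] = "LoadBalancer"

print(json.dumps(control["spec"]["services"], indent=2))
```

The wildcard `[*]` is why a single command updates every service entry rather than one named service.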
