### articles/active-directory-b2c/custom-policies-series-call-rest-api.md (7 additions, 1 deletion)

````diff
@@ -10,7 +10,7 @@ ms.service: active-directory
 ms.workload: identity
 ms.topic: how-to
 ms.custom: b2c-docs-improvements
-ms.date: 01/30/2023
+ms.date: 03/16/2023
 ms.author: kengaderdus
 ms.reviewer: yoelh
 ms.subservice: B2C
@@ -302,6 +302,12 @@ Then, update the *Metadata*, *InputClaimsTransformations*, and *InputClaims* of
 </InputClaims>
 ```
 
+## Receive data from the REST API
+
+If your REST API returns data that you want to include as claims in your policy, you can receive it by specifying claims in the `OutputClaims` element of the RESTful technical profile. If the name of a claim defined in your policy differs from the name defined in the REST API, map the two names by using the `PartnerClaimType` attribute.
+
+Use the steps in [Receiving data](api-connectors-overview.md?pivots=b2c-custom-policy#receiving-data) to learn how to format the data the custom policy expects, how to handle null values, and how to parse the REST API's nested JSON body.
+
````
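The `OutputClaims` mapping that the added section describes can be sketched as a policy fragment. Only the element and attribute names (`OutputClaims`, `OutputClaim`, `ClaimTypeReferenceId`, `PartnerClaimType`) come from the text; the claim names below are hypothetical:

```xml
<!-- Sketch of an OutputClaims element inside a RESTful technical profile.
     The claim names are hypothetical examples. -->
<OutputClaims>
  <!-- Policy claim name matches the REST API field name, so no mapping is needed. -->
  <OutputClaim ClaimTypeReferenceId="city" />
  <!-- Policy claim "loyaltyNumber" is returned by the API as "loyalty_number",
       so PartnerClaimType maps the two names. -->
  <OutputClaim ClaimTypeReferenceId="loyaltyNumber" PartnerClaimType="loyalty_number" />
</OutputClaims>
```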
### articles/active-directory-domain-services/troubleshoot-alerts.md (14 additions, 1 deletion)

````diff
@@ -10,7 +10,7 @@ ms.service: active-directory
 ms.subservice: domain-services
 ms.workload: identity
 ms.topic: troubleshooting
-ms.date: 03/02/2023
+ms.date: 03/15/2023
 ms.author: justinha
 
 ---
@@ -314,6 +314,19 @@ When the managed domain is enabled again, the managed domain's health automatica
 
 [Check the Azure AD DS health](check-health.md) for alerts that indicate problems in the configuration of the managed domain. If you're able to resolve alerts that indicate a configuration issue, wait two hours and check back to see if the synchronization has completed. When ready, [open an Azure support request][azure-support] to re-enable the managed domain.
 
+## AADDS600: Unresolved health alerts for 30 days
+
+### Alert Message
+
+*Microsoft can’t manage the domain controllers for this managed domain due to unresolved health alerts \[IDs\]. This is blocking critical security updates as well as a planned migration to Windows Server 2019 for these domain controllers. Follow steps in the alert to resolve the issue. Failure to resolve this issue within 30 days will result in suspension of the managed domain.*
+
+### Resolution
+
+> [!WARNING]
+> If a managed domain is suspended for an extended period of time, there's a danger of it being deleted. Resolve the reason for suspension as quickly as possible. For more information, see [Understand the suspended states for Azure AD DS](suspension.md).
+
+[Check the Azure AD DS health](check-health.md) for alerts that indicate problems in the configuration of the managed domain. If you're able to resolve alerts that indicate a configuration issue, wait six hours and check back to see if the alert is removed. [Open an Azure support request][azure-support] if you need assistance.
+
 ## Next steps
 
 If you still have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance.
````
### articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md (1 addition, 1 deletion)

````diff
@@ -74,7 +74,7 @@ To create a new Spring Boot project:
 
 > [!NOTE]
 > * If you need to support an older version of Spring Boot see our [old appconfiguration library](https://github.com/Azure/azure-sdk-for-java/blob/spring-cloud-starter-azure-appconfiguration-config_1.2.9/sdk/appconfiguration/spring-cloud-starter-azure-appconfiguration-config/README.md) and our [old feature flag library](https://github.com/Azure/azure-sdk-for-java/blob/spring-cloud-starter-azure-appconfiguration-config_1.2.9/sdk/appconfiguration/spring-cloud-azure-feature-management/README.md).
-> * There is a non-web Feature Management Library that doesn't have a dependency on spring-web. Refer to GitHub's [documentation](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/appconfiguration/azure-spring-cloud-feature-management) for differences.
+> * There is a non-web Feature Management Library that doesn't have a dependency on spring-web. Refer to GitHub's [documentation](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-feature-management) for differences.
````
### articles/azure-arc/data/create-data-controller-indirect-cli.md (12 additions, 13 deletions)

````diff
@@ -1,6 +1,6 @@
 ---
 title: Create data controller using CLI
-description: Create an Azure Arc data controller, on a typical multi-node Kubernetes cluster which you already have created, using the CLI.
+description: Create an Azure Arc data controller on a typical multi-node Kubernetes cluster that you already have created, using the CLI.
 services: azure-arc
 ms.service: azure-arc
 ms.subservice: azure-arc-data
````
````diff
@@ -20,12 +20,11 @@ Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure
 
 ### Install tools
 
-To create the data controller using the CLI, you will need to install the `arcdata` extension for Azure (az) CLI.
+Before you begin, install the `arcdata` extension for Azure (az) CLI.
 
 [Install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](install-client-tools.md)
 
-Regardless of which target platform you choose, you will need to set the following environment variables prior to the creation for the data controller. These environment variables will become the credentials used for accessing the metrics and logs dashboards after data controller creation.
-
+Regardless of which target platform you choose, you need to set the following environment variables prior to creating the data controller. These environment variables become the credentials used for accessing the metrics and logs dashboards after data controller creation.
 
 ### Set environment variables
 
````
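The environment variables described above can be exported in bash along these lines; a minimal sketch (only `AZDATA_METRICSUI_PASSWORD` is visible in this diff's context, so the other `AZDATA_*` variable names are assumptions based on the metrics and logs dashboards the text mentions):

```shell
# Credentials for the metrics (Grafana) dashboard.
export AZDATA_METRICSUI_USERNAME="metricsadmin"        # assumed variable name
export AZDATA_METRICSUI_PASSWORD="ExamplePassw0rd!"    # placeholder value

# Credentials for the logs (Kibana) dashboard; variable names assumed.
export AZDATA_LOGSUI_USERNAME="logsadmin"
export AZDATA_LOGSUI_PASSWORD="ExamplePassw0rd!"
```

The creation commands later in the article read these variables from the shell that runs `az arcdata dc create`, so export them in that same session.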
````diff
@@ -58,7 +57,7 @@ $ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>"
 
 ### Connect to Kubernetes cluster
 
-You will need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server.
+Connect and authenticate to a Kubernetes cluster, and select an existing Kubernetes context, before you begin creating the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for your Kubernetes distribution or service to learn how to connect to the Kubernetes API server.
 
 You can check to see that you have a current Kubernetes connection and confirm your current context with the following commands.
 
````
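The connection check mentioned in the hunk above can be done with standard kubectl commands; a sketch (the article's actual command listing isn't visible in this diff, and the block assumes `kubectl` is installed):

```shell
# Print the API server behind the current context, and the context name.
# A failure here means no usable connection or context is configured yet.
if command -v kubectl >/dev/null 2>&1; then
  kubectl cluster-info || echo "No reachable cluster for the current context"
  kubectl config current-context || echo "No current context is set"
else
  echo "kubectl not found; install it before creating the data controller"
fi
```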
````diff
@@ -86,7 +85,7 @@ The following sections provide instructions for specific types of Kubernetes pla
 
 ## Create on Azure Kubernetes Service (AKS)
 
-By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class will only work if you have VMs that were deployed using VM images that have premium disks.
+By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class only works if you have VMs that were deployed using VM images that have premium disks.
 
 If you are going to use `managed-premium` as your storage class, then you can run the following command to create the data controller. Substitute the placeholders in the command with your resource group name, subscription ID, and Azure location.
 
````
````diff
@@ -162,7 +161,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#
 
 ### Determine storage class
 
-You will also need to determine which storage class to use by running the following command.
+To determine which storage class to use, run the following command.
 
 ```console
 kubectl get storageclass
````
````diff
@@ -204,10 +203,10 @@ az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.
 Now you are ready to create the data controller using the following command.
 
 > [!NOTE]
-> The `--path` parameter should point to the _directory_ containing the control.json file not to the control.json file itself.
+> The `--path` parameter should point to the _directory_ containing the control.json file, not to the control.json file itself.
 
 > [!NOTE]
-> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
 
 ```azurecli
 az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
````
````diff
@@ -222,7 +221,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#
 
 By default, the kubeadm deployment profile uses a storage class called `local-storage` and service type `NodePort`. If this is acceptable you can skip the instructions below that set the desired storage class and service type and immediately run the `az arcdata dc create` command below.
 
-If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command will create a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
+If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command creates a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
 
 ```azurecli
 az arcdata dc config init --source azure-arc-kubeadm --path ./custom
````
````diff
@@ -248,13 +247,13 @@ az arcdata dc config replace --path ./custom/control.json --json-values "spec.st
 By default, the kubeadm deployment profile uses `NodePort` as the service type. If you are using a Kubernetes cluster that is integrated with a load balancer, you can change the configuration using the following command.
 
 ```azurecli
-az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer" --k8s-namespace <namespace> --use-k8s
+az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer"
 ```
 
 Now you are ready to create the data controller using the following command.
 
 > [!NOTE]
-> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
 
 ```azurecli
 az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
````
````diff
@@ -297,7 +296,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#
 
 ## Monitor the creation status
 
-Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
+Creating the controller takes a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
 
 > [!NOTE]
 > The example commands below assume that you created a data controller named `arc-dc` and Kubernetes namespace named `arc`. If you used different values update the script accordingly.
````
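The monitoring command listing itself falls outside this hunk; a sketch of what such monitoring can look like with plain kubectl, assuming the `arc` namespace from the note above and that `kubectl` is installed (the article's actual commands may differ):

```shell
NAMESPACE=arc   # namespace from the note above; change if you used a different one

if command -v kubectl >/dev/null 2>&1; then
  # List the data controller custom resource and its state column.
  kubectl get datacontroller --namespace "$NAMESPACE" || echo "data controller not created yet"
  # List the controller pods and their statuses.
  kubectl get pods --namespace "$NAMESPACE" || echo "no pods visible in namespace $NAMESPACE yet"
else
  echo "kubectl not found"
fi
```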
0 commit comments