Commit f8af7ab

Merge pull request #225282 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 81ea3d2 + 0fb8b7b commit f8af7ab

28 files changed

+111
-85
lines changed

articles/active-directory-b2c/azure-ad-b2c-global-identity-solutions.md

Lines changed: 5 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -122,13 +122,15 @@ The approach you choose will be based on the number of applications you host and
122122

123123
Using multiple tenants, in either the regional or funnel-based configuration, will improve performance over using a single Azure AD B2C tenant for globally operating businesses.
124124

125-
When using the funnel-based approach, although the funnel tenant will be located in one region, but serve users globally, performance improvements will be maintained.
125+
When using the funnel-based approach, the funnel tenant will be located in one specific region but serve users globally. Since the funnel tenant's operation utilizes a global component of the Azure AD B2C service, it maintains a consistent level of performance regardless of where users log in from.
126126

127127
![Screenshot shows the Azure AD B2C architecture.](./media/azure-ad-b2c-global-identity-solutions/azure-ad-b2c-architecture.png)
128128

129-
As shown in the diagram, the Azure AD B2C tenant in the funnel-based approach will only utilize the Policy Engine to perform the redirection to regional Azure AD B2C tenants. The Azure AD B2C Policy Engine component is globally distributed. Therefore, the funnel isn't constrained from a performance perspective, regardless of where the Azure AD B2C funnel tenant is provisioned. A performance loss is encountered due to the extra redirect between funnel and regional tenants in the funnel-based approach.
129+
As shown in the diagram above, the Azure AD B2C tenant in the funnel-based approach will only utilize the Policy Engine to perform the redirection to regional Azure AD B2C tenants. The Azure AD B2C Policy Engine component is globally distributed. Therefore, the funnel isn't constrained from a performance perspective, regardless of where the Azure AD B2C funnel tenant is provisioned. A performance loss is encountered due to the extra redirect between funnel and regional tenants in the funnel-based approach.
130130

131-
The regional tenants will perform directory calls into the Directory Store, which is the regionalized component.
131+
In the regional-based approach, since each user is directed to their nearest regional Azure AD B2C tenant, performance is consistent for all users logging in.
132+
133+
The regional tenants will perform directory calls into the Directory Store, which is the only regionalized component in both the funnel-based and regional-based architectures.
132134

133135
Additional latency is only encountered when a user authenticates in a different region from the one they signed up in. This is because calls must be made across regions to reach the Directory Store where their profile lives to complete the authentication.
134136
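The funnel-based flow described above amounts to a simple routing step: the globally distributed funnel tenant only decides which regional tenant should serve the user, then redirects. A minimal illustrative sketch (the tenant URLs and region map below are hypothetical, not part of the Azure AD B2C service):

```python
# Hypothetical mapping of user home regions to regional Azure AD B2C tenants.
REGIONAL_TENANTS = {
    "emea": "https://contoso-emea.b2clogin.com",
    "apac": "https://contoso-apac.b2clogin.com",
    "amer": "https://contoso-amer.b2clogin.com",
}

def funnel_redirect(user_home_region: str) -> str:
    """Return the regional tenant URL the funnel tenant would redirect to."""
    # Fall back to a default region if the user's home region is unknown.
    return REGIONAL_TENANTS.get(user_home_region, REGIONAL_TENANTS["amer"])
```

The funnel itself performs no directory calls; only the regional tenant chosen here reaches the regionalized Directory Store.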

articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -218,7 +218,7 @@ The initialization code differences are platform dependant. For ASP.NET Core and
218218

219219
# [ASP.NET Core](#tab/aspnetcore)
220220

221-
In ASP.NET Core web apps (and web APIs), the application is protected because you have a `Authorize` attribute on the controllers or the controller actions. This attribute checks that the user is authenticated. Prior to the release of .NET 6, the code initializaation wis in the *Startup.cs* file. New ASP.NET Core projects with .NET 6 no longer contain a *Startup.cs* file. Taking its place is the *Program.cs* file. The rest of this tutorial pertains to .NET 5 or lower.
221+
In ASP.NET Core web apps (and web APIs), the application is protected because you have an `Authorize` attribute on the controllers or the controller actions. This attribute checks that the user is authenticated. Prior to the release of .NET 6, the initialization code was in the *Startup.cs* file. New ASP.NET Core projects with .NET 6 no longer contain a *Startup.cs* file. Taking its place is the *Program.cs* file. The rest of this tutorial pertains to .NET 5 or lower.
222222

223223
> [!NOTE]
224224
> If you want to start directly with the new ASP.NET Core templates for the Microsoft identity platform, which leverage Microsoft.Identity.Web, you can download a preview NuGet package containing project templates for .NET 5.0. Once installed, you can directly instantiate ASP.NET Core web applications (MVC or Blazor). See [Microsoft.Identity.Web web app project templates](https://aka.ms/ms-id-web/webapp-project-templates) for details. This is the simplest approach, as it does all the steps below for you.

articles/azure-arc/kubernetes/quickstart-connect-cluster.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -63,7 +63,7 @@ For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
6363
```
6464
6565
* [Log in to Azure PowerShell](/powershell/azure/authenticate-azureps) using the identity (user or service principal) that you want to use for connecting your cluster to Azure Arc.
66-
* The identity used needs to at least have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`).
66+
* The identity used needs at least 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`) and 'Read' permission on the resource group that the Azure Arc cluster targets.
6767
* The [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-cluster---azure-arc-onboarding) is useful for at-scale onboarding as it has the granular permissions required to only connect clusters to Azure Arc. This role doesn't have the permissions to update, delete, or modify any other clusters or other Azure resources.
6868
6969
* An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options:

articles/azure-monitor/essentials/diagnostic-settings.md

Lines changed: 12 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -205,15 +205,23 @@ After a few moments, the new setting appears in your list of settings for this r
205205

206206
# [PowerShell](#tab/powershell)
207207

208-
Use the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet to create a diagnostic setting with [Azure PowerShell](../powershell-samples.md). See the documentation for this cmdlet for descriptions of its parameters.
208+
Use the [New-AzDiagnosticSetting](/powershell/module/az.monitor/new-azdiagnosticsetting?view=azps-9.1.0&preserve-view=true) cmdlet to create a diagnostic setting with [Azure PowerShell](../powershell-samples.md). See the documentation for this cmdlet for descriptions of its parameters.
209209

210210
> [!IMPORTANT]
211211
> You can't use this method for an activity log. Instead, use [Create diagnostic setting in Azure Monitor by using an Azure Resource Manager template](./resource-manager-diagnostic-settings.md) to create a Resource Manager template and deploy it with PowerShell.
212212
213-
The following example PowerShell cmdlet creates a diagnostic setting by using all three destinations.
213+
The following example PowerShell cmdlets create a diagnostic setting that sends all logs and metrics for a key vault to a Log Analytics workspace.
214214

215215
```powershell
216-
Set-AzDiagnosticSetting -Name KeyVault-Diagnostics -ResourceId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault -Category AuditEvent -MetricCategory AllMetrics -Enabled $true -StorageAccountId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount -WorkspaceId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/oi-default-east-us/providers/microsoft.operationalinsights/workspaces/myworkspace -EventHubAuthorizationRuleId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey
216+
$KV= Get-AzKeyVault -ResourceGroupName <resource group name> -VaultName <key vault name>
217+
$Law= Get-AzOperationalInsightsWorkspace -ResourceGroupName <resource group name> -Name <workspace name> #LAW name is case sensitive
218+
219+
$metric = @()
220+
$log = @()
221+
$metric += New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category AllMetrics -RetentionPolicyDay 30 -RetentionPolicyEnabled $true
222+
$log += New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup allLogs -RetentionPolicyDay 30 -RetentionPolicyEnabled $true
223+
$log += New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup audit -RetentionPolicyDay 30 -RetentionPolicyEnabled $true
224+
New-AzDiagnosticSetting -Name 'KeyVault-Diagnostics' -ResourceId $KV.ResourceId -WorkspaceId $Law.ResourceId -Log $log -Metric $metric -Verbose
217225
```
218226

219227
# [CLI](#tab/cli)
@@ -301,4 +309,4 @@ Every effort is made to ensure all log data is sent correctly to your destinatio
301309

302310
## Next step
303311

304-
[Read more about Azure platform logs](./platform-logs-overview.md)
312+
[Read more about Azure platform logs](./platform-logs-overview.md)

articles/backup/guidance-best-practices.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -367,7 +367,7 @@ You can configure such critical alerts and route them to any preferred notificat
367367

368368
#### Automatic Retry of Failed Backup Jobs
369369

370-
Many of the failure errors or the outage scenarios are transient in nature, and you can remediate by setting up the right Azure role-based access control (Azure RBAC) permissions3 or re-trigger the backup/restore job. As the solution to such failures is simple, that you don’t need tp invest time waiting for an engineer to manually trigger the job or to assign the relevant permission. Therefore, the smarter way to handle this scenario is to automate the retry of the failed jobs. This will highly minimize the time taken to recover from failures.
370+
Many failure errors and outage scenarios are transient in nature, and you can remediate them by setting up the right Azure role-based access control (Azure RBAC) permissions or by re-triggering the backup/restore job. Because the solution to such failures is simple, you don't need to invest time waiting for an engineer to manually trigger the job or assign the relevant permission. The smarter way to handle this scenario is to automate the retry of failed jobs, which greatly reduces the time taken to recover from failures.
371371
You can achieve this by retrieving the relevant backup data via Azure Resource Graph (ARG) and combining it with a corrective [PowerShell/CLI procedure](/azure/architecture/framework/resiliency/auto-retry).
372372

373373
Watch the following video to learn how to re-trigger backup for all failed jobs (across vaults, subscriptions, tenants) using ARG and PowerShell.
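The automated-retry pattern described above can be sketched generically. In the sketch below, the job list and the `trigger` callable are hypothetical stand-ins for failed-job IDs queried from Azure Resource Graph and the corrective PowerShell/CLI re-trigger step:

```python
import time

def retry_failed_jobs(jobs, trigger, max_attempts=3, base_delay=1.0):
    """Re-trigger each failed job, backing off between attempts.

    `jobs` is an iterable of job IDs that reported a (likely transient)
    failure; `trigger` is a callable that re-runs one job and returns
    True on success. Both are hypothetical stand-ins for ARG query
    results and a corrective re-trigger call.
    """
    still_failing = []
    for job_id in jobs:
        for attempt in range(max_attempts):
            if trigger(job_id):
                break
            # Exponential backoff: transient outages often clear quickly.
            time.sleep(base_delay * (2 ** attempt))
        else:
            still_failing.append(job_id)
    return still_failing  # jobs needing manual investigation
```

Jobs that still fail after `max_attempts` are returned so they can be escalated to an engineer rather than retried indefinitely.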

articles/hdinsight/hdinsight-40-component-versioning.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -28,7 +28,7 @@ The Open-source component versions associated with HDInsight 4.0 are listed in t
2828
| Apache Phoenix | 5 |
2929
| Apache Spark | 2.4.4 |
3030
| Apache Livy | 0.5 |
31-
| Apache Kafka | 2.1.1, 2.4.1 |
31+
| Apache Kafka | 2.1.1 |
3232
| Apache Ambari | 2.7.0 |
3333
| Apache Zeppelin | 0.8.0 |
3434

articles/hdinsight/hdinsight-50-component-versioning.md

Lines changed: 20 additions & 20 deletions
Original file line numberDiff line numberDiff line change
@@ -16,21 +16,21 @@ Starting June 1, 2022, we have started rolling out a new version of HDInsight 5.
1616

1717
The Open-source component versions associated with HDInsight 5.0 are listed in the following table.
1818

19-
| Component | HDInsight 5.0 | HDInsight 4.0 |
20-
|------------------------|---------------| --------------|
21-
|Apache Spark | 3.1.2 | 2.4.4|
22-
|Apache Hive | 3.1.2 | 3.1.2 |
23-
|Apache Kafka | - |2.1.1 and 2.4.1|
24-
|Apache Hadoop |3.1.1 | 3.1.1 |
25-
|Apache Tez |0.9.1 | 0.9.1 |
26-
|Apache Pig | 0.16.1 | 0.16.1 |
27-
|Apache Ranger | 1.1.0 | 1.1.0 |
28-
|Apache Sqoop | 1.5.0 | 1.5.0 |
29-
|Apache Oozie | 4.3.1 | 4.3.1 |
30-
|Apache Zookeeper | 3.4.6 | 3.4.6 |
31-
|Apache Livy | 0.5 | 0.5 |
32-
|Apache Ambari | 2.7.0 | 2.7.0 |
33-
|Apache Zeppelin | 0.8.0 | 0.8.0 |
19+
| Component | HDInsight 5.0 | HDInsight 4.0 |
20+
|------------------|---------------|---------------|
21+
| Apache Spark | 3.1.2 | 2.4.4 |
22+
| Apache Hive | 3.1.2 | 3.1.2 |
23+
| Apache Kafka | 2.4.1 | 2.1.1 |
24+
| Apache Hadoop | 3.1.1 | 3.1.1 |
25+
| Apache Tez | 0.9.1 | 0.9.1 |
26+
| Apache Pig | 0.16.1 | 0.16.1 |
27+
| Apache Ranger | 1.1.0 | 1.1.0 |
28+
| Apache Sqoop | 1.5.0 | 1.5.0 |
29+
| Apache Oozie | 4.3.1 | 4.3.1 |
30+
| Apache Zookeeper | 3.4.6 | 3.4.6 |
31+
| Apache Livy | 0.5 | 0.5 |
32+
| Apache Ambari | 2.7.0 | 2.7.0 |
33+
| Apache Zeppelin | 0.8.0 | 0.8.0 |
3434

3535
This table lists certain HDInsight 4.0 cluster types that have retired or will be retired soon.
3636

@@ -44,8 +44,8 @@ This table lists certain HDInsight 4.0 cluster types that have retired or will b
4444

4545
> [!NOTE]
4646
> * If you are using the Azure user interface to create a Spark cluster for HDInsight, you will see in the dropdown list an additional version, Spark 3.1 (HDI 5.0), along with the older versions. This version is a renamed version of Spark 3.1 (HDI 4.0) and is backward compatible.
47-
> * This is only an UI level change, which doesn’t impact anything for the existing users and users who are already using the ARM template to build their clusters.
48-
> * For backward compatibility, ARM supports creating Spark 3.1 with HDI 4.0 and 5.0 versions which maps to same versions Sspark 3.1 (HDI 5.0)
47+
> * This is only a UI-level change, which doesn't impact existing users or users who are already using the ARM template to build their clusters.
48+
> * For backward compatibility, ARM supports creating Spark 3.1 with both HDI 4.0 and HDI 5.0 versions, which map to the same version, Spark 3.1 (HDI 5.0).
4949
> * The Spark 3.1 (HDI 5.0) cluster comes with HWC 2.0, which works well with the Interactive Query (HDI 5.0) cluster.
5050
5151
## Interactive Query
@@ -61,11 +61,11 @@ you need to select this version Interactive Query 3.1 (HDI 5.0).
6161

6262
## Kafka
6363

64-
**Known Issue –** Current ARM template supports only 4.0 even though it shows 5.0 image in portal Cluster creation may fail with the following error message if you select version 5.0 in the UI.
64+
The current ARM template supports HDI 5.0 for Kafka 2.4.1.
6565

66-
`HDI Version'5.0" is not supported for clusterType ''Kafka" and component Version 2.4'.,Cluster component version is not applicable for HDI version: 5.0 cluster type: KAFKA (Code: BadRequest)`
66+
`HDI Version '5.0' is supported for clusterType "Kafka" and component Version '2.4'.`
6767

68-
We're working on this issue, and a fix will be rolled out shortly.
68+
We have fixed the ARM template issue.
6969

7070
### Upcoming version upgrades
7171
The HDInsight team is working on upgrading other open-source components.

articles/hdinsight/hdinsight-apache-kafka-spark-structured-streaming.md

Lines changed: 19 additions & 19 deletions
Original file line numberDiff line numberDiff line change
@@ -82,15 +82,15 @@ kafkaStreamDF.select(from_json(col("value").cast("string"), schema) as "trip")
8282

8383
In both snippets, data is read from Kafka and written to file. The differences between the examples are:
8484

85-
| Batch | Streaming |
86-
| --- | --- |
87-
| `read` | `readStream` |
85+
| Batch | Streaming |
86+
|---------|---------------|
87+
| `read` | `readStream` |
8888
| `write` | `writeStream` |
89-
| `save` | `start` |
89+
| `save` | `start` |
9090

9191
The streaming operation also uses `awaitTermination(30000)`, which stops the stream after 30,000 ms.
9292

93-
To use Structured Streaming with Kafka, your project must have a dependency on the `org.apache.spark : spark-sql-kafka-0-10_2.11` package. The version of this package should match the version of Spark on HDInsight. For Spark 2.2.0 (available in HDInsight 3.6), you can find the dependency information for different project types at [https://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-sql-kafka-0-10_2.11%7C2.2.0%7Cjar](https://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-sql-kafka-0-10_2.11%7C2.2.0%7Cjar).
93+
To use Structured Streaming with Kafka, your project must have a dependency on the `org.apache.spark : spark-sql-kafka-0-10_2.11` package. The version of this package should match the version of Spark on HDInsight. For Spark 2.4 (available in HDInsight 4.0), you can find the dependency information for different project types at [https://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-sql-kafka-0-10_2.11%7C2.2.0%7Cjar](https://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-sql-kafka-0-10_2.11%7C2.2.0%7Cjar).
9494

9595
For the Jupyter Notebook used with this tutorial, the following cell loads this package dependency:
9696

@@ -125,26 +125,26 @@ To create an Azure Virtual Network, and then create the Kafka and Spark clusters
125125

126126
This template creates the following resources:
127127

128-
* A Kafka on HDInsight 3.6 cluster.
129-
* A Spark 2.2.0 on HDInsight 3.6 cluster.
128+
* A Kafka on HDInsight 4.0 or 5.0 cluster.
129+
* A Spark 2.4 or 3.1 on HDInsight 4.0 or 5.0 cluster.
130130
* An Azure Virtual Network, which contains the HDInsight clusters.
131131

132132
> [!IMPORTANT]
133-
> The structured streaming notebook used in this tutorial requires Spark 2.2.0 on HDInsight 3.6. If you use an earlier version of Spark on HDInsight, you receive errors when using the notebook.
133+
> The structured streaming notebook used in this tutorial requires Spark 2.4 or 3.1 on HDInsight 4.0 or 5.0. If you use an earlier version of Spark on HDInsight, you receive errors when using the notebook.
134134
135135
2. Use the following information to populate the entries on the **Customized template** section:
136136

137-
| Setting | Value |
138-
| --- | --- |
139-
| Subscription | Your Azure subscription |
140-
| Resource group | The resource group that contains the resources. |
141-
| Location | The Azure region that the resources are created in. |
142-
| Spark Cluster Name | The name of the Spark cluster. The first six characters must be different than the Kafka cluster name. |
143-
| Kafka Cluster Name | The name of the Kafka cluster. The first six characters must be different than the Spark cluster name. |
144-
| Cluster Login User Name | The admin user name for the clusters. |
145-
| Cluster Login Password | The admin user password for the clusters. |
146-
| SSH User Name | The SSH user to create for the clusters. |
147-
| SSH Password | The password for the SSH user. |
137+
| Setting | Value |
138+
|-------------------------| ------------------------------------------------------------------------------------------------------ |
139+
| Subscription | Your Azure subscription |
140+
| Resource group | The resource group that contains the resources. |
141+
| Location | The Azure region that the resources are created in. |
142+
| Spark Cluster Name | The name of the Spark cluster. The first six characters must be different than the Kafka cluster name. |
143+
| Kafka Cluster Name | The name of the Kafka cluster. The first six characters must be different than the Spark cluster name. |
144+
| Cluster Login User Name | The admin user name for the clusters. |
145+
| Cluster Login Password | The admin user password for the clusters. |
146+
| SSH User Name | The SSH user to create for the clusters. |
147+
| SSH Password | The password for the SSH user. |
148148

149149
:::image type="content" source="./media/hdinsight-apache-kafka-spark-structured-streaming/spark-kafka-template.png" alt-text="Screenshot of the customized template":::
150150

articles/hdinsight/kafka/apache-kafka-get-started.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -89,7 +89,7 @@ To create an Apache Kafka cluster on HDInsight, use the following steps:
8989

9090
1. Review the configuration for the cluster. Change any settings that are incorrect. Finally, select **Create** to create the cluster.
9191

92-
:::image type="content" source="./media/apache-kafka-get-started/azure-hdinsight-40-portal-cluster-review-create-kafka.png" alt-text="Screenshot showing kafka cluster configuration summary for HDI version 4.0." border="true":::
92+
:::image type="content" source="./media/apache-kafka-get-started/azure-hdinsight-50-portal-cluster-review-create-kafka.png" alt-text="Screenshot showing kafka cluster configuration summary for HDI version 5.0." border="true":::
9393

9494

9595
It can take up to 20 minutes to create the cluster.
