
Commit 5fe68a1

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into privlink-dns-region

2 parents: e62dd8e + 3f6fe9b

25 files changed: 365 additions & 492 deletions

articles/aks/auto-upgrade-node-image.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ ms.date: 02/03/2023

 # Automatically upgrade Azure Kubernetes Service cluster node operating system images (preview)

-AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, works in tandem with the existing [Autoupgrade][auto-upgrade] channel which is used for Kubernetes version upgrades.
+AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, works in tandem with the existing [auto-upgrade][Autoupgrade] channel which is used for Kubernetes version upgrades.

 [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]

@@ -64,7 +64,7 @@ The following upgrade channels are available:

 |---|---|
 | `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates|N/A|
 | `Unmanaged`|OS updates will be applied automatically through the OS built-in patching infrastructure. Newly allocated machines will be unpatched initially and will be patched at some point by the OS's infrastructure|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows and Mariner don't apply security patches automatically, so this option behaves equivalently to `None`|
-| `SecurityPatch`|AKS will update the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only" on a regular basis. Where possible, patches will also be applied without disruption to existing nodes. Some patches, such as kernel patches, can't be applied to existing nodes without disruption. For such patches, the VHD will be updated and existing machines will be upgraded to that VHD following maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group.|N/A|
+| `SecurityPatch`|AKS will update the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only" on a regular basis. Where possible, patches will also be applied without disruption to existing nodes. Some patches, such as kernel patches, can't be applied to existing nodes without disruption. For such patches, the VHD will be updated and existing machines will be upgraded to that VHD following maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.|N/A|
 | `NodeImage`|AKS will update the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.|

 To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example.
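
The CLI example itself falls outside this hunk; the following is a minimal sketch of such a command, assuming the aks-preview Azure CLI extension is installed (this feature is in preview) and using placeholder resource names:

```azurecli
# Placeholder names; substitute your own resource group and cluster name.
# Requires the aks-preview extension while the node OS auto-upgrade channel is in preview.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-os-upgrade-channel SecurityPatch
```
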
4 image files changed (22.7 KB, 2.02 KB, 29.2 KB, 9.03 KB)

articles/app-service/toc.yml

Lines changed: 5 additions & 3 deletions
@@ -71,7 +71,7 @@
   - name: N-tier app
     href: tutorial-secure-ntier-app.md
   - name: Authenticate users
-    href: tutorial-auth-aad.md
+    href: scenario-secure-app-authentication-app-service-as-user.md
   - name: Connect to Azure resource as app
     items:
     - name: Overview
@@ -116,14 +116,16 @@
     items:
     - name: Set up App Service authentication
       href: scenario-secure-app-authentication-app-service-as-user.md
-    - name: Connect to Microsoft Graph
+    - name: App to Microsoft Graph authentication
       items:
       - name: .NET
        href: scenario-secure-app-access-microsoft-graph-as-user.md
       - name: JavaScript
        href: tutorial-connect-app-access-microsoft-graph-as-user-javascript.md
-    - name: Connect to another App Service
+    - name: App to app authentication
       href: tutorial-auth-aad.md
+
+
   - name: Isolate network traffic
     href: tutorial-networking-isolate-vnet.md
   - name: Host a RESTful API

articles/app-service/tutorial-auth-aad.md

Lines changed: 210 additions & 272 deletions
Large diffs are not rendered by default.

articles/machine-learning/.openpublishing.redirection.machine-learning.json

Lines changed: 8 additions & 3 deletions
@@ -1,9 +1,14 @@
 {
   "redirections": [
     {
-      "source_path_from_root": "/articles/machine-learning/how-to-train-with-custom-image.md",
-      "redirect_url": "/azure/machine-learning/v1/how-to-train-with-custom-image",
-      "redirect_document_id": true
+      "source_path_from_root": "/articles/machine-learning/quickstart-spark-data-wrangling.md",
+      "redirect_url": "/azure/machine-learning/apache-spark-environment-configuration",
+      "redirect_document_id": true
+    },
+    {
+      "source_path_from_root": "/articles/machine-learning/how-to-train-with-custom-image.md",
+      "redirect_url": "/azure/machine-learning/v1/how-to-train-with-custom-image",
+      "redirect_document_id": true
     },
     {
       "source_path_from_root": "/articles/machine-learning/how-to-monitor-tensorboard.md",
articles/machine-learning/apache-spark-environment-configuration.md

Lines changed: 120 additions & 0 deletions
@@ -0,0 +1,120 @@

---
title: Apache Spark - Environment Configuration
titleSuffix: Azure Machine Learning
description: Learn how to configure your Apache Spark environment for interactive data wrangling
author: ynpandey
ms.author: franksolomon
ms.reviewer: franksolomon
ms.service: machine-learning
ms.subservice: mldata
ms.topic: how-to
ms.date: 03/06/2023
#Customer intent: As a Full Stack ML Pro, I want to perform interactive data wrangling in Azure Machine Learning with Apache Spark.
---

# Quickstart: Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)

[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
Azure Machine Learning integration with Azure Synapse Analytics (preview) provides easy access to the Apache Spark framework, enabling interactive data wrangling in Azure Machine Learning notebooks.

In this quickstart guide, you learn how to perform interactive data wrangling using Azure Machine Learning Managed (Automatic) Synapse Spark compute, an Azure Data Lake Storage (ADLS) Gen 2 storage account, and user identity passthrough.
## Prerequisites
- An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.
- An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md).
- An Azure Data Lake Storage (ADLS) Gen 2 storage account. See [Create an Azure Data Lake Storage (ADLS) Gen 2 storage account](../storage/blobs/create-data-lake-storage-account.md).
- To enable this feature:
  1. Navigate to the Azure Machine Learning studio UI.
  2. In the icon section at the top right of the screen, select **Manage preview features** (megaphone icon).
  3. In the **Managed preview feature** panel, toggle the **Run notebooks and jobs on managed Spark** feature to **on**.

  :::image type="content" source="./media/apache-spark-environment-configuration/how-to-enable-managed-spark-preview.png" lightbox="media/apache-spark-environment-configuration/how-to-enable-managed-spark-preview.png" alt-text="Screenshot showing the option to enable the Managed Spark preview.":::
## Store Azure storage account credentials as secrets in Azure Key Vault

To store Azure storage account credentials as secrets in the Azure Key Vault using the Azure portal user interface:

1. Navigate to your Azure Key Vault in the Azure portal.
1. Select **Secrets** from the left panel.
1. Select **+ Generate/Import**.

   :::image type="content" source="media/apache-spark-environment-configuration/azure-key-vault-secrets-generate-import.png" alt-text="Screenshot showing the Azure Key Vault Secrets Generate Or Import tab.":::

1. At the **Create a secret** screen, enter a **Name** for the secret you want to create.
1. Navigate to the Azure Blob Storage Account page in the Azure portal, as seen in this image:

   :::image type="content" source="media/apache-spark-environment-configuration/storage-account-access-keys.png" alt-text="Screenshot showing the Azure access key and connection string values screen.":::

1. Select **Access keys** from the Azure Blob Storage Account page left panel.
1. Select **Show** next to **Key 1**, and then **Copy to clipboard** to get the storage account access key.

   > [!Note]
   > If you instead use
   > - Azure Blob storage container shared access signature (SAS) tokens, or
   > - Azure Data Lake Storage (ADLS) Gen 2 storage account service principal credentials (tenant ID, client ID, and secret),
   >
   > copy those values from their respective user interfaces when creating Azure Key Vault secrets for them.

1. Navigate back to the **Create a secret** screen.
1. In the **Secret value** textbox, enter the access key credential for the Azure storage account, which you copied to the clipboard in the earlier step.
1. Select **Create**.

   :::image type="content" source="media/apache-spark-environment-configuration/create-a-secret.png" alt-text="Screenshot showing the Azure secret creation screen.":::

> [!TIP]
> [Azure CLI](../key-vault/secrets/quick-create-cli.md) and the [Azure Key Vault secret client library for Python](../key-vault/secrets/quick-create-python.md#sign-in-to-azure) can also create Azure Key Vault secrets.
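
As the tip notes, the same secret can be created without the portal. A minimal Azure CLI sketch, assuming placeholder names (`mystorageaccount`, `myResourceGroup`, `myKeyVault`, `myStorageAccountKey`) for your own resources:

```azurecli
# Placeholder names; substitute your own resource names.
# Fetch the storage account access key (equivalent to the portal's "Key 1").
ACCESS_KEY=$(az storage account keys list \
    --account-name mystorageaccount \
    --resource-group myResourceGroup \
    --query "[0].value" --output tsv)

# Store the access key as a Key Vault secret.
az keyvault secret set \
    --vault-name myKeyVault \
    --name myStorageAccountKey \
    --value "$ACCESS_KEY"
```
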
## Add role assignments in Azure storage accounts

Before we start interactive data wrangling, we must ensure that the input and output data paths are accessible. Assign the **Reader** and **Storage Blob Data Reader** roles to

- the user identity of the logged-in user of the Notebooks session, or
- a service principal.

These two roles provide read-only access. In certain scenarios, we might want to write the wrangled data back to the Azure storage account; to enable read and write access, assign the **Contributor** and **Storage Blob Data Contributor** roles to the user identity or service principal instead. To assign appropriate roles to the user identity:
1. Open the [Microsoft Azure portal](https://portal.azure.com).
1. Search for and select the **Storage accounts** service.

   :::image type="content" source="media/apache-spark-environment-configuration/find-storage-accounts-service.png" lightbox="media/apache-spark-environment-configuration/find-storage-accounts-service.png" alt-text="Expandable screenshot showing Storage accounts service search and selection, in Microsoft Azure portal.":::

1. On the **Storage accounts** page, select the Azure Data Lake Storage (ADLS) Gen 2 storage account from the list. A page showing the storage account **Overview** will open.

   :::image type="content" source="media/apache-spark-environment-configuration/storage-accounts-list.png" lightbox="media/apache-spark-environment-configuration/storage-accounts-list.png" alt-text="Expandable screenshot showing selection of the Azure Data Lake Storage (ADLS) Gen 2 storage account.":::

1. Select **Access Control (IAM)** from the left panel.
1. Select **Add role assignment**.

   :::image type="content" source="media/apache-spark-environment-configuration/storage-account-add-role-assignment.png" lightbox="media/apache-spark-environment-configuration/storage-account-add-role-assignment.png" alt-text="Screenshot showing the Azure add role assignment screen.":::

1. Find and select the **Storage Blob Data Contributor** role.
1. Select **Next**.

   :::image type="content" source="media/apache-spark-environment-configuration/add-role-assignment-choose-role.png" lightbox="media/apache-spark-environment-configuration/add-role-assignment-choose-role.png" alt-text="Screenshot showing the Azure add role assignment screen Role tab.":::

1. Select **User, group, or service principal**.
1. Select **+ Select members**.
1. Below **Select**, search for the appropriate user identity.
1. Select the user identity from the list, so that it shows under **Selected members**.
1. Select **Next**.

   :::image type="content" source="media/apache-spark-environment-configuration/add-role-assignment-choose-members.png" lightbox="media/apache-spark-environment-configuration/add-role-assignment-choose-members.png" alt-text="Screenshot showing the Azure add role assignment screen Members tab.":::

1. Select **Review + Assign**.

   :::image type="content" source="media/apache-spark-environment-configuration/add-role-assignment-review-and-assign.png" lightbox="media/apache-spark-environment-configuration/add-role-assignment-review-and-assign.png" alt-text="Screenshot showing the Azure add role assignment screen Review and assign tab.":::

1. Repeat steps 2-13 for the **Contributor** role assignment.

Once the user identity has the appropriate roles assigned, data in the Azure storage account should become accessible.
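
For readers who prefer the command line, here is a minimal Azure CLI sketch of the same role assignments, assuming placeholder resource names and the signed-in user's identity (a service principal object ID works the same way):

```azurecli
# Placeholder names; substitute your own resource names.
STORAGE_ID=$(az storage account show \
    --name mystorageaccount \
    --resource-group myResourceGroup \
    --query id --output tsv)

# Object ID of the signed-in user (on older CLI versions, the property is objectId).
USER_ID=$(az ad signed-in-user show --query id --output tsv)

# Grant read/write access to blob data, plus resource-level access.
az role assignment create --assignee "$USER_ID" --role "Storage Blob Data Contributor" --scope "$STORAGE_ID"
az role assignment create --assignee "$USER_ID" --role "Contributor" --scope "$STORAGE_ID"
```
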
> [!NOTE]
> If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace that has a managed virtual network associated with it, [a managed private endpoint to the storage account should be configured](../synapse-analytics/security/connect-to-a-secure-storage-account.md) to ensure data access.

## Next steps
- [Apache Spark in Azure Machine Learning (preview)](./apache-spark-azure-ml-concepts.md)
- [Attach and manage a Synapse Spark pool in Azure Machine Learning (preview)](./how-to-manage-synapse-spark-pool.md)
- [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
- [Submit Spark jobs in Azure Machine Learning (preview)](./how-to-submit-spark-jobs.md)
- [Code samples for Spark jobs using Azure Machine Learning CLI](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/spark)
- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
