
Commit 1ac2d23

incorporating feedback

1 parent 3b3a097

4 files changed (+20, -20 lines)

articles/machine-learning/tutorial-enable-materialization-backfill-data.md

Lines changed: 7 additions & 7 deletions
@@ -59,8 +59,8 @@ You can create a new notebook and execute the instructions in this tutorial step
 
 1. Configure the session:
 
-    1. Select **Configure session**.
-    1. Select **Upload Conda file**.
+    1. On the toolbar, select **Configure session**.
+    1. On the **Python packages** tab, select **Upload Conda file**.
     1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
     1. Increase the session time-out (idle time) to avoid frequent prerequisite reruns.
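For orientation, the *conda.yml* file that this step uploads might look roughly like the following. This is an illustrative sketch, not the exact file from the sample repository; `azureml-featurestore` is the feature-store SDK package this tutorial series works with.

```yaml
# Illustrative conda.yml for the Spark session (sketch only).
# The authoritative file ships with the azureml-examples featurestore sample.
dependencies:
  - pip:
      - azureml-featurestore
```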

@@ -193,8 +193,8 @@ This UAI is assigned to the feature store shortly. It requires these permissions
 | Scope | Role |
 |------------------------------------------------|--------------------------------------------|
 | Feature store | Azure Machine Learning Data Scientist role |
-| Storage account of the offline store on the feature store | Blob storage Data Contributor role |
-| Storage accounts of the source data | Blob storage Data Reader role |
+| Storage account of the offline store on the feature store | Storage Blob Data Contributor role |
+| Storage accounts of the source data | Storage Blob Data Reader role |
 
 The next CLI commands assign the first two roles to the UAI. In this example, the "storage accounts of the source data" scope doesn't apply because you read the sample data from a public access blob storage. To use your own data sources, you must assign the required roles to the UAI. To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md).
 
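The two role assignments this hunk describes could be granted with the Azure CLI along the following lines. This is a hedged sketch: the principal ID and ARM resource IDs are placeholders, and "AzureML Data Scientist" is the name under which the Azure Machine Learning Data Scientist built-in role is listed.

```shell
# Sketch only: substitute real IDs before running.
UAI_PRINCIPAL_ID="<uai-principal-id>"                 # object ID of the user-assigned managed identity
FEATURE_STORE_ID="<feature-store-arm-resource-id>"    # ARM ID of the feature store workspace
OFFLINE_STORE_ACCOUNT_ID="<storage-account-arm-id>"   # ARM ID of the offline store's storage account

# Azure Machine Learning Data Scientist role, scoped to the feature store.
az role assignment create \
  --assignee-object-id "$UAI_PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "AzureML Data Scientist" \
  --scope "$FEATURE_STORE_ID"

# Storage Blob Data Contributor role, scoped to the offline store's storage account.
az role assignment create \
  --assignee-object-id "$UAI_PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Contributor" \
  --scope "$OFFLINE_STORE_ACCOUNT_ID"
```

These commands require an authenticated Azure CLI session with permission to create role assignments at the given scopes.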
@@ -210,17 +210,17 @@ The next CLI commands assign the first two roles to the UAI. In this example, th
 
 ---
 
-### Grant the blob Data Reader role access to your user account in the offline store
+### Grant the Storage Blob Data Reader role access to your user account in the offline store
 
-If the feature data is materialized, you need the blob Data Reader role to read feature data from the offline materialization store.
+If the feature data is materialized, you need the Storage Blob Data Reader role to read feature data from the offline materialization store.
 
 Obtain your Azure AD object ID value from the Azure portal, as described in [Find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id).
 
 To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md).
 
 [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)]
 
-The following steps grant the blob Data Reader role access to your user account:
+The following steps grant the Storage Blob Data Reader role access to your user account:
 
 1. Attach the offline materialization store and UAI, to enable the offline store on the feature store.

articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md

Lines changed: 2 additions & 2 deletions
@@ -44,8 +44,8 @@ Before you proceed with the following procedures, be sure to complete the first,
 
 1. Configure the session:
 
-    1. When the status bar displays **Configure session**, select it.
-    1. Select **Upload conda file**.
+    1. When the toolbar displays **Configure session**, select it.
+    1. On the **Python packages** tab, select **Upload conda file**.
     1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
     1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.

articles/machine-learning/tutorial-experiment-train-models-using-features.md

Lines changed: 7 additions & 7 deletions
@@ -46,8 +46,8 @@ Before you proceed with the following procedures, be sure to complete the first
 
 1. Configure the session:
 
-    1. When the status bar displays **Configure session**, select it.
-    1. Select **Upload conda file**.
+    1. When the toolbar displays **Configure session**, select it.
+    1. On the **Python packages** tab, select **Upload Conda file**.
     1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
     1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.

@@ -178,7 +178,7 @@ In the following steps, you select a list of features, run a training pipeline,
 
 The first tutorial covered this step, when you registered the `transactions` feature set. Because you also have an `accounts` feature set, you can browse through the available features:
 
-1. Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home?flight=FeatureStores).
+1. Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home).
 1. On the left pane, select **Feature stores**.
 1. In the list of feature stores, select the feature store that you created earlier.

@@ -214,13 +214,13 @@ In this procedure, you manually trigger the training pipeline. In a production s
 
 The training pipeline has these steps:
 
-1. Generate training data. For its input, this built-in component takes the feature retrieval specification, the observation data, and the time-stamp column name. It then generates the training data as output. It runs these steps as a managed Spark job.
+1. Feature retrieval: For its input, this built-in component takes the feature retrieval specification, the observation data, and the time-stamp column name. It then generates the training data as output. It runs these steps as a managed Spark job.
 
-1. Based on the training data, train the model and then generate a model (not yet registered).
+1. Training: Based on the training data, this step trains the model and then generates a model (not yet registered).
 
-1. Evaluate whether the model performance and quality fall within a threshold. (In this tutorial, it's a placeholder step for illustration purposes.)
+1. Evaluation: This step validates whether the model performance and quality fall within a threshold. (In this tutorial, it's a placeholder step for illustration purposes.)
 
-1. Register the model.
+1. Register the model: This step registers the model.
 
 > [!NOTE]
 > In the second tutorial, you ran a backfill job to materialize data for the `transactions` feature set. The feature retrieval step reads feature values from the offline store for this feature set. The behavior is the same, even if you use the `get_offline_features()` API.
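The four renamed pipeline steps could be sketched, purely for illustration, as plain Python functions. All names below are hypothetical stand-ins for the built-in pipeline components; the real feature-retrieval step runs as a managed Spark job.

```python
# Illustrative sketch of the training pipeline's four stages.
# Every name here is a hypothetical stand-in for a built-in component.

def feature_retrieval(feature_spec, observation_data, timestamp_column):
    """Join observation data with the specified features to produce training data."""
    return [{"features": feature_spec, "row": row, "ts_col": timestamp_column}
            for row in observation_data]

def train(training_data):
    """Train a candidate model on the generated training data (not yet registered)."""
    return {"model": "candidate", "n_rows": len(training_data)}

def evaluate(model, threshold=0.5):
    """Validate that model performance falls within a threshold (placeholder)."""
    performance = 0.9  # placeholder metric, as in the tutorial's placeholder step
    return performance >= threshold

def register(model, registry):
    """Register the model so downstream inference pipelines can resolve it."""
    registry[model["model"]] = model
    return registry

# Wire the stages together in pipeline order.
data = feature_retrieval(["transactions.amount"], [{"accountID": "A1"}], "timestamp")
model = train(data)
assert evaluate(model)          # evaluation gate before registration
registry = register(model, {})
```

The point of the sketch is only the data flow: retrieval output feeds training, and registration happens only after the evaluation gate passes.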

articles/machine-learning/tutorial-get-started-with-feature-store.md

Lines changed: 4 additions & 4 deletions
@@ -32,7 +32,7 @@ This tutorial is the first part of a four-part series. Here, you learn how to:
 This tutorial series has two tracks:
 
 * The SDK-only track uses only Python SDKs. Choose this track for pure, Python-based development and deployment.
-* The SDK and CLI track uses the Python SDK for feature set development and testing only, and it uses the CLI for CRUD operations (create, update, and delete). This track is useful in continuous integration and continuous delivery (CI/CD) or GitOps scenarios, where CLI/YAML is preferred.
+* The SDK and CLI track uses the Python SDK for feature set development and testing only, and it uses the CLI for CRUD (create, read, update, and delete) operations. This track is useful in continuous integration and continuous delivery (CI/CD) or GitOps scenarios, where CLI/YAML is preferred.
 
 [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]

@@ -75,7 +75,7 @@ This tutorial uses an Azure Machine Learning Spark notebook for development.
 1. On the **Configure session** panel, select **Python packages**.
 
 1. Upload the Conda file:
-    1. Select **Upload Conda file**.
+    1. On the **Python packages** tab, select **Upload Conda file**.
     1. Browse to the directory that hosts the Conda file.
     1. Select **conda.yml**, and then select **Open**.

@@ -231,7 +231,7 @@ As a best practice, entities help enforce use of the same join key definition ac
 
 1. Initialize the feature store CRUD client.
 
-   As explained earlier in this tutorial, `MLClient` is used for creating, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
+   As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
 
    In this code sample, the client is scoped at feature store level.

@@ -247,7 +247,7 @@ As a best practice, entities help enforce use of the same join key definition ac
 
 1. Initialize the feature store CRUD client.
 
-   As explained earlier in this tutorial, `MLClient` is used for creating, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
+   As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
 
    In this code sample, the client is scoped at the feature store level, and it registers the `account` entity with the feature store. Additionally, it creates an account entity that has the join key `accountID` of type `string`.
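The scoping distinction these hunks describe might look like the following in the SDK. This is a sketch assuming the `azure-ai-ml` `MLClient` constructor; the subscription, resource group, and feature store names are placeholders.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Scoped at the resource group level: suitable for *creating* the feature store.
ml_client = MLClient(
    credential,
    subscription_id="<subscription-id>",          # placeholder
    resource_group_name="<resource-group-name>",  # placeholder
)

# Scoped at the feature store level: suitable for CRUD on assets inside it,
# because the workspace name points at the feature store workspace.
fs_client = MLClient(
    credential,
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group-name>",
    workspace_name="<feature-store-name>",        # placeholder
)
```

Both clients use the same credential; only the scope differs, which is why the earlier `ml_client` can't be reused for feature-store-level CRUD.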

0 commit comments
