**articles/machine-learning/tutorial-enable-materialization-backfill-data.md** (+7 −7)
```diff
@@ -59,8 +59,8 @@ You can create a new notebook and execute the instructions in this tutorial step
 1. Configure the session:
 
-   1. Select **Configure session**.
-   1. Select **Upload Conda file**.
+   1. On the toolbar, select **Configure session**.
+   1. On the **Python packages** tab, select **Upload Conda file**.
 
 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
 1. Increase the session time-out (idle time) to avoid frequent prerequisite reruns.
```
```diff
@@ -193,8 +193,8 @@ This UAI is assigned to the feature store shortly. It requires these permissions
 | Feature store | Azure Machine Learning Data Scientist role |
-| Storage account of the offline store on the feature store | Blob storage Data Contributor role |
-| Storage accounts of the source data | Blob storage Data Reader role |
+| Storage account of the offline store on the feature store | Storage Blob Data Contributor role |
+| Storage accounts of the source data | Storage Blob Data Reader role |
 
 The next CLI commands assign the first two roles to the UAI. In this example, the "storage accounts of the source data" scope doesn't apply because you read the sample data from a public access blob storage. To use your own data sources, you must assign the required roles to the UAI. To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md).
```
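The two role assignments that the "next CLI commands" context line refers to can be sketched as Azure CLI `az role assignment create` calls. The helper below only builds the command strings and executes nothing against Azure; every `<...>` value is a placeholder, and `AzureML Data Scientist` is the built-in role name that the table's "Azure Machine Learning Data Scientist role" maps to.

```python
# Sketch only: compose the `az role assignment create` commands that grant
# the UAI the roles from the table above. All <...> values are placeholders.
def role_assignment_cmd(principal_id: str, role: str, scope: str) -> str:
    """Build (but do not run) one Azure CLI role-assignment command."""
    return (
        "az role assignment create "
        f"--assignee-object-id {principal_id} "
        "--assignee-principal-type ServicePrincipal "
        f'--role "{role}" '
        f"--scope {scope}"
    )

commands = [
    # Built-in "AzureML Data Scientist" role on the feature store workspace.
    role_assignment_cmd(
        "<uai-object-id>",
        "AzureML Data Scientist",
        "/subscriptions/<sub-id>/resourceGroups/<rg>/providers"
        "/Microsoft.MachineLearningServices/workspaces/<feature-store-name>",
    ),
    # Storage Blob Data Contributor on the offline store's storage account.
    role_assignment_cmd(
        "<uai-object-id>",
        "Storage Blob Data Contributor",
        "/subscriptions/<sub-id>/resourceGroups/<rg>/providers"
        "/Microsoft.Storage/storageAccounts/<offline-store-account>",
    ),
]
print("\n".join(commands))
```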
```diff
@@ -210,17 +210,17 @@ The next CLI commands assign the first two roles to the UAI. In this example, th
 ---
 
-### Grant the blob Data Reader role access to your user account in the offline store
+### Grant the Storage Blob Data Reader role access to your user account in the offline store
 
-If the feature data is materialized, you need the blob Data Reader role to read feature data from the offline materialization store.
+If the feature data is materialized, you need the Storage Blob Data Reader role to read feature data from the offline materialization store.
 
 Obtain your Azure AD object ID value from the Azure portal, as described in [Find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id).
 
 To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md).
 
 [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)]
 
-The following steps grant the blob Data Reader role access to your user account:
+The following steps grant the Storage Blob Data Reader role access to your user account:
 
 1. Attach the offline materialization store and UAI, to enable the offline store on the feature store.
```
**articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md** (+2 −2)
```diff
@@ -44,8 +44,8 @@ Before you proceed with the following procedures, be sure to complete the first,
 1. Configure the session:
 
-   1. When the status bar displays **Configure session**, select it.
-   1. Select **Upload conda file**.
+   1. When the toolbar displays **Configure session**, select it.
+   1. On the **Python packages** tab, select **Upload conda file**.
 
 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
 1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
```
**articles/machine-learning/tutorial-experiment-train-models-using-features.md** (+7 −7)
```diff
@@ -46,8 +46,8 @@ Before you proceed with the following procedures, be sure to complete the first
 1. Configure the session:
 
-   1. When the status bar displays **Configure session**, select it.
-   1. Select **Upload conda file**.
+   1. When the toolbar displays **Configure session**, select it.
+   1. On the **Python packages** tab, select **Upload Conda file**.
 
 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
 1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
```
```diff
@@ -178,7 +178,7 @@ In the following steps, you select a list of features, run a training pipeline,
 The first tutorial covered this step, when you registered the `transactions` feature set. Because you also have an `accounts` feature set, you can browse through the available features:
 
-1. Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home?flight=FeatureStores).
+1. Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home).
 1. On the left pane, select **Feature stores**.
 1. In the list of feature stores, select the feature store that you created earlier.
```
```diff
@@ -214,13 +214,13 @@ In this procedure, you manually trigger the training pipeline. In a production s
 The training pipeline has these steps:
 
-1. Generate training data. For its input, this built-in component takes the feature retrieval specification, the observation data, and the time-stamp column name. It then generates the training data as output. It runs these steps as a managed Spark job.
+1. Feature retrieval: For its input, this built-in component takes the feature retrieval specification, the observation data, and the time-stamp column name. It then generates the training data as output. It runs these steps as a managed Spark job.
 
-1. Based on the training data, train the model and then generate a model (not yet registered).
+1. Training: Based on the training data, this step trains the model and then generates a model (not yet registered).
 
-1. Evaluate whether the model performance and quality fall within a threshold. (In this tutorial, it's a placeholder step for illustration purposes.)
+1. Evaluation: This step validates whether the model performance and quality fall within a threshold. (In this tutorial, it's a placeholder step for illustration purposes.)
 
-1. Register the model.
+1. Register the model: This step registers the model.
 
 > [!NOTE]
 > In the second tutorial, you ran a backfill job to materialize data for the `transactions` feature set. The feature retrieval step reads feature values from the offline store for this feature set. The behavior is the same, even if you use the `get_offline_features()` API.
```
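Conceptually, the feature retrieval step described in this hunk generates training data with a point-in-time join: each observation row sees only the latest feature value at or before its own timestamp. A toy, local illustration with pandas (hypothetical column names and values; the real step runs as a managed Spark job via the built-in component):

```python
import pandas as pd

# Observation data: events to label, each with a timestamp and join key.
observations = pd.DataFrame({
    "accountID": ["A", "A", "B"],
    "timestamp": pd.to_datetime(["2023-01-05", "2023-01-20", "2023-01-10"]),
    "is_fraud": [0, 1, 0],
})

# Feature values as materialized over time (hypothetical feature name).
features = pd.DataFrame({
    "accountID": ["A", "A", "B"],
    "timestamp": pd.to_datetime(["2023-01-01", "2023-01-15", "2023-01-01"]),
    "transaction_amount_7d_avg": [25.0, 40.0, 10.0],
})

# Point-in-time join: for each observation, take the most recent feature
# value at or before the observation timestamp. merge_asof requires both
# frames sorted on the join timestamp.
training_data = pd.merge_asof(
    observations.sort_values("timestamp"),
    features.sort_values("timestamp"),
    on="timestamp",
    by="accountID",
)
print(training_data)
```

The observation for account `A` on 2023-01-20 picks up the feature value from 2023-01-15, never a later one, which is what prevents training-time leakage.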
**articles/machine-learning/tutorial-get-started-with-feature-store.md** (+4 −4)
```diff
@@ -32,7 +32,7 @@ This tutorial is the first part of a four-part series. Here, you learn how to:
 This tutorial series has two tracks:
 
 * The SDK-only track uses only Python SDKs. Choose this track for pure, Python-based development and deployment.
-* The SDK and CLI track uses the Python SDK for feature set development and testing only, and it uses the CLI for CRUD operations (create, update, and delete). This track is useful in continuous integration and continuous delivery (CI/CD) or GitOps scenarios, where CLI/YAML is preferred.
+* The SDK and CLI track uses the Python SDK for feature set development and testing only, and it uses the CLI for CRUD (create, read, update, and delete) operations. This track is useful in continuous integration and continuous delivery (CI/CD) or GitOps scenarios, where CLI/YAML is preferred.
```
```diff
@@ -75,7 +75,7 @@ This tutorial uses an Azure Machine Learning Spark notebook for development.
 1. On the **Configure session** panel, select **Python packages**.
 
 1. Upload the Conda file:
-   1. Select **Upload Conda file**.
+   1. On the **Python packages** tab, select **Upload Conda file**.
    1. Browse to the directory that hosts the Conda file.
    1. Select **conda.yml**, and then select **Open**.
```
```diff
@@ -231,7 +231,7 @@ As a best practice, entities help enforce use of the same join key definition ac
 1. Initialize the feature store CRUD client.
 
-   As explained earlier in this tutorial, `MLClient` is used for creating, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
+   As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
 
    In this code sample, the client is scoped at feature store level.
```
```diff
@@ -247,7 +247,7 @@ As a best practice, entities help enforce use of the same join key definition ac
 1. Initialize the feature store CRUD client.
 
-   As explained earlier in this tutorial, `MLClient` is used for creating, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
+   As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
 
    In this code sample, the client is scoped at the feature store level, and it registers the `account` entity with the feature store. Additionally, it creates an account entity that has the join key `accountID` of type `string`.
```