Commit adacbe9

Freshness update for tutorial-enable-recurrent-materialization-run-batch-inference.md . . .

1 parent 79ce2a6 commit adacbe9
1 file changed: +18 −18 lines changed

articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md

Lines changed: 18 additions & 18 deletions
@@ -9,7 +9,7 @@ ms.subservice: core
 ms.topic: tutorial
 author: fbsolo-ms1
 ms.author: franksolomon
-ms.date: 11/28/2023
+ms.date: 11/20/2024
 ms.reviewer: seramasu
 ms.custom: sdkv2, build-2023, ignite-2023, update-code
 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
@@ -19,7 +19,7 @@ ms.custom: sdkv2, build-2023, ignite-2023, update-code

 This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization.

-The first tutorial showed how to create a feature set specification with custom transformations, and then use that feature set to generate training data, enable materialization, and perform a backfill. The second tutorial showed how to enable materialization, and perform a backfill. It also showed how to experiment with features, as a way to improve model performance.
+The first tutorial showed how to create a feature set specification with custom transformations. It then showed how to use that feature set to generate training data, enable materialization, and perform a backfill. The second tutorial showed how to enable materialization and perform a backfill. It also showed how to experiment with features, as a way to improve model performance.

 This tutorial explains how to:

@@ -35,27 +35,27 @@ Before you proceed with this tutorial, be sure to complete the first and second

 1. Configure the Azure Machine Learning Spark notebook.

-   To run this tutorial, you can create a new notebook and execute the instructions step by step. You can also open and run the existing notebook named *3. Enable recurrent materialization and run batch inference*. You can find that notebook, and all the notebooks in this series, in the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation.
+   To run this tutorial, you can create a new notebook and execute the instructions, step by step. You can also open and run the existing notebook named *3. Enable recurrent materialization and run batch inference*. You can find that notebook, and all the notebooks in this series, in the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation.

 1. In the **Compute** dropdown list in the top nav, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.

-2. Configure the session:
+1. Configure the session:

    1. Select **Configure session** in the top status bar.
-   2. Select the **Python packages** tab.
-   3. Select **Upload conda file**.
-   4. Select the `azureml-examples/sdk/python/featurestore-sample/project/env/online.yml` file from your local machine.
-   5. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
+   1. Select the **Python packages** tab.
+   1. Select **Upload conda file**.
+   1. Select the `azureml-examples/sdk/python/featurestore-sample/project/env/online.yml` file from your local machine.
+   1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.

-2. Start the Spark session.
+1. Start the Spark session.

    [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3.Enable-recurrent-materialization-run-batch-inference.ipynb?name=start-spark-session)]

-3. Set up the root directory for the samples.
+1. Set up the root directory for the samples.

    [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3.Enable-recurrent-materialization-run-batch-inference.ipynb?name=root-dir)]

-4. Set up the CLI.
+1. Set up the CLI.
 ### [Python SDK](#tab/python)

 Not applicable.
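The session-configuration steps in this hunk upload a conda file to define the session's Python environment. For orientation only, a conda environment file generally has the following shape; this is an illustrative sketch, not the actual contents of `online.yml` in the azureml-examples repository, and the package names and versions shown are assumptions.

```yaml
# Illustrative conda environment file for a Spark session.
# The real online.yml may pin different packages and versions.
name: featurestore-env
dependencies:
  - python=3.10
  - pip
  - pip:
      # Feature store SDK package (name assumed from the tutorial context)
      - azureml-featurestore
```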
@@ -66,29 +66,29 @@ Before you proceed with this tutorial, be sure to complete the first and second

    [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3.Enable-recurrent-materialization-run-batch-inference.ipynb?name=install-ml-ext-cli)]

-2. Authenticate.
+1. Authenticate.

    [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3.Enable-recurrent-materialization-run-batch-inference.ipynb?name=auth-cli)]

-3. Set the default subscription.
+1. Set the default subscription.

    [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3.Enable-recurrent-materialization-run-batch-inference.ipynb?name=set-default-subs-cli)]

 ---

-5. Initialize the project workspace CRUD (create, read, update, and delete) client.
+1. Initialize the project workspace CRUD (create, read, update, and delete) client.

    The tutorial notebook runs from this current workspace.

    [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3.Enable-recurrent-materialization-run-batch-inference.ipynb?name=init-ws-crud-client)]

-6. Initialize the feature store variables.
+1. Initialize the feature store variables.

-   Be sure to update the `featurestore_name` value, to reflect what you created in the first tutorial.
+   To reflect what you created in the first tutorial, be sure to update the `featurestore_name` value.

    [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3.Enable-recurrent-materialization-run-batch-inference.ipynb?name=init-fs-crud-client)]

-7. Initialize the feature store SDK client.
+1. Initialize the feature store SDK client.

    [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3.Enable-recurrent-materialization-run-batch-inference.ipynb?name=init-fs-core-sdk)]
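The "Initialize the feature store variables" step in this hunk sets identifiers that the CRUD and SDK clients use to locate the feature store. A minimal sketch of what such variables look like, with all values hypothetical; a feature store is a kind of Azure Machine Learning workspace, so its ARM resource ID follows the standard workspace format:

```python
# Hypothetical values; replace with the subscription, resource group, and
# feature store name you created in the first tutorial.
featurestore_subscription_id = "00000000-0000-0000-0000-000000000000"
featurestore_resource_group_name = "my-fs-rg"
featurestore_name = "my-featurestore"

# Build the ARM resource ID for the feature store workspace.
feature_store_arm_id = (
    f"/subscriptions/{featurestore_subscription_id}"
    f"/resourceGroups/{featurestore_resource_group_name}"
    f"/providers/Microsoft.MachineLearningServices"
    f"/workspaces/{featurestore_name}"
)
print(feature_store_arm_id)
```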

@@ -149,7 +149,7 @@ In the pipeline view:
 1. Paste the `Data` field value in the following cell, with separate name and version values. The last character is the version, preceded by a colon (`:`).
 1. Note the `predict_is_fraud` column that the batch inference pipeline generated.

-   In the batch inference pipeline (*/project/fraud_mode/pipelines/batch_inference_pipeline.yaml*) outputs, because you didn't provide `name` or `version` values for `outputs` of `inference_step`, the system created an untracked data asset with a GUID as the name value and `1` as the version value. In this cell, you derive and then display the data path from the asset.
+   In the batch inference pipeline (*/project/fraud_mode/pipelines/batch_inference_pipeline.yaml*) outputs, the system created an untracked data asset with a GUID as the name value and `1` as the version value. This happened because you didn't provide `name` or `version` values for `outputs` of `inference_step`. In this cell, you derive and then display the data path from the asset.

    [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3.Enable-recurrent-materialization-run-batch-inference.ipynb?name=inspect-batch-inf-output-data)]
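The steps in this hunk paste a `Data` field value whose segment after the last colon is the version. A minimal sketch of splitting such a value into separate name and version values; the GUID-based asset name shown here is hypothetical:

```python
# Hypothetical Data field value: an untracked asset gets a GUID-based name
# and version 1, joined by a colon.
data_field = "azureml_9a8b7c6d-0000-1111-2222-333344445555_output:1"

# rsplit on the last colon separates the asset name from its version,
# even if the name itself happens to contain colons.
name, version = data_field.rsplit(":", 1)
print(name)     # the asset name
print(version)  # the version string
```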
