articles/machine-learning/tutorial-explore-data.md (+17 −15 lines)
@@ -121,22 +121,23 @@ Data asset creation also creates a *reference* to the data source location, alon
The next notebook cell creates the data asset. The code sample uploads the raw data file to the designated cloud storage resource.
-Each time you create a data asset, you need a unique version for it. If the version already exists, you'll get an error. This code uses time to generate a unique version, each time the cell is run.
+Each time you create a data asset, you need a unique version for it. If the version already exists, you'll get an error. In this code, we use "initial" as the version for the first read of the data. If that version already exists, we skip creating it again.
-You can also omit the **version** parameter, and a version number is generated for you, starting with 1 and then incrementing from there. In this tutorial, we want to refer to specific version numbers, so we create a version number instead.
+You can also omit the **version** parameter, and a version number is generated for you, starting with 1 and then incrementing from there.
+
+In this tutorial, we use the name "initial" as the first version. The [Create production machine learning pipelines](pipeline.ipynb) tutorial will also use this version of the data, so here we are using a value that you'll see again in that tutorial.
```python
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes
-import time

# update the 'my_path' variable to match the location of where you downloaded the data on your
```
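The hunk above truncates the cell, so here is a minimal sketch of the create-or-skip pattern it describes. The asset name, description, and local path are illustrative assumptions, and `ml_client` is the authenticated workspace handle from earlier in the tutorial:

```python
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

my_path = "./data/default_of_credit_card_clients.csv"  # assumption: your downloaded file
v1 = "initial"

my_data = Data(
    name="credit-card",  # assumption: any unique asset name works
    version=v1,
    description="Credit card data",
    path=my_path,
    type=AssetTypes.URI_FILE,
)

# create the data asset only if the "initial" version doesn't exist yet
try:
    data_asset = ml_client.data.get(name="credit-card", version=v1)
    print(f"Data asset already exists. Name: {my_data.name}, version: {my_data.version}")
except Exception:
    ml_client.data.create_or_update(my_data)
    print(f"Data asset created. Name: {my_data.name}, version: {my_data.version}")
```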
-Next, create a new _version_ of the data asset (the data automatically uploads to cloud storage):
-
-> [!NOTE]
->
-> This Python code cell sets **name** and **version** values for the data asset it creates. As a result, the code in this cell will fail if executed more than once, without a change to these values. Fixed **name** and **version** values offer a way to pass values that work for specific situations, without concern for auto-generated or randomly-generated values.
+Next, create a new _version_ of the data asset (the data automatically uploads to cloud storage). For this version, we'll add a time value, so that each time this code is run, a different version number will be created.
@@ -254,7 +256,7 @@ from azure.ai.ml.constants import AssetTypes
import time

# Next, create a new *version* of the data asset (the data is automatically uploaded to cloud storage):
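For reference, a sketch of the time-based versioning this cell goes on to perform. The asset name and file path are assumptions, and `ml_client` comes from earlier cells:

```python
import time

from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# a timestamp makes the version string unique on every run
v2 = "cleaned" + time.strftime("%Y.%m.%d.%H%M%S", time.gmtime())

my_data = Data(
    name="credit-card",  # assumption: same asset name as the "initial" version
    version=v2,
    description="Default of credit card clients data.",
    path="./data/cleaned-credit-card.csv",  # assumption: the revised data file
    type=AssetTypes.URI_FILE,
)

# create the new version; the file uploads to cloud storage automatically
my_data = ml_client.data.create_or_update(my_data)
print(f"Data asset created. Name: {my_data.name}, version: {my_data.version}")
```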
+1. Complete the tutorial [Upload, access and explore your data](tutorial-explore-data.md) to create the data asset you need in this tutorial. Make sure you run all the code to create the initial data asset. Explore the data and revise it if you wish, but you'll only need the initial data in this tutorial.
1. [!INCLUDE [open or create notebook](includes/prereq-open-or-create.md)]
* Or, open **tutorials/get-started-notebooks/pipeline.ipynb** from the **Samples** section of studio. [!INCLUDE [clone notebook](includes/prereq-clone-notebook.md)]
@@ -95,56 +97,30 @@ ml_client = MLClient(
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)
+cpu_cluster = None
```
> [!NOTE]
> Creating MLClient won't connect to the workspace. The client initialization is lazy; it waits until the first time it needs to make a call (this happens when creating the `credit_data` data asset, two code cells from here).
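For reference, a minimal sketch of the full handle-creation cell this hunk belongs to; the angle-bracket placeholders are yours to fill in:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# get a handle to the workspace; no call to Azure is made yet
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)

# stays None if you skip the optional compute-creation section below
cpu_cluster = None
```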
-## Register data from an external url
-
-If you have been following along with the other tutorials in this series and already registered the data, you can fetch the same dataset from the workspace using `credit_dataset = ml_client.data.get("<DATA ASSET NAME>", version='<VERSION>')`. Then you may skip this section. To learn about data more in depth, or if you would rather complete the data tutorial first, see [Upload, access and explore your data in Azure Machine Learning](tutorial-explore-data.md).
-
-* Azure Machine Learning uses a `Data` object to register a reusable definition of data, and consume data within a pipeline. In the next section, you consume some data from a web url as one example. `Data` assets from other sources can be created as well.
-    tags={"source_type": "web", "source": "UCI ML Repo"},
-    version="1.0.0",
-)
-```
+## Access the registered data asset
-This code just created a `Data` asset, ready to be consumed as an input by the pipeline that you'll define in the next sections. In addition, you can register the data to your workspace so it becomes reusable across pipelines.
+Start by getting the data that you previously registered in the [Upload, access and explore your data](tutorial-explore-data.md) tutorial.
+
+* Azure Machine Learning uses a `Data` object to register a reusable definition of data, and consume data within a pipeline.
Since this is the first time that you're making a call to the workspace, you may be asked to authenticate. Once authentication is complete, you'll see the dataset registration completion message.
In the future, you can fetch the same dataset from the workspace using `credit_dataset = ml_client.data.get("<DATA ASSET NAME>", version='<VERSION>')`.
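A minimal sketch of that fetch, assuming the "credit-card" name and "initial" version used earlier in this series:

```python
# get a handle to the previously registered data asset
credit_data = ml_client.data.get(name="credit-card", version="initial")
print(f"Data asset URI: {credit_data.path}")
```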
-## Create a compute resource to run your pipeline
+## Create a compute resource to run your pipeline (Optional)
> [!NOTE]
-> To try [serverless compute (preview)](./how-to-use-serverless-compute.md), skip this step and proceed to [create a job environment](#create-a-job-environment-for-pipeline-steps).
+> To use [serverless compute (preview)](./how-to-use-serverless-compute.md) to run this pipeline, you can skip this compute creation step and proceed directly to [create a job environment](#create-a-job-environment-for-pipeline-steps).
Each step of an Azure Machine Learning pipeline can use a different compute resource for running the specific job of that step. These can be single- or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.
@@ -172,9 +148,8 @@ except Exception:
    print("Creating a new cpu compute target...")
    # Let's create the Azure Machine Learning compute object with the intended parameters
-    # if you run into an out of quota error, change the size to a comparable VM that is available.\
+    # if you run into an out of quota error, change the size to a comparable VM that is available.
    # Learn more on https://azure.microsoft.com/en-us/pricing/details/machine-learning/.
-
    cpu_cluster = AmlCompute(
        name=cpu_compute_target,
        # Azure Machine Learning Compute is the on-demand VM service
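The hunk truncates here; for reference, a sketch of how this cell typically finishes. The VM size and scale settings are assumptions, so choose values with available quota in your region:

```python
from azure.ai.ml.entities import AmlCompute

cpu_compute_target = "cpu-cluster"  # assumption: any cluster name works

cpu_cluster = AmlCompute(
    name=cpu_compute_target,
    type="amlcompute",                # Azure ML's managed, on-demand VM service
    size="STANDARD_DS3_V2",           # assumption: swap for a size you have quota for
    min_instances=0,                  # scale down to zero nodes when idle
    max_instances=4,
    idle_time_before_scale_down=180,  # seconds before idle nodes are released
    tier="Dedicated",
)

# submit the request and block until the cluster is provisioned
cpu_cluster = ml_client.compute.begin_create_or_update(cpu_cluster).result()
print(f"AMLCompute named {cpu_cluster.name} is created, size {cpu_cluster.size}")
```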
@@ -229,8 +204,8 @@ dependencies:
  - pip:
    - inference-schema[numpy-support]==1.3.0
    - xlrd==2.0.1
-    - mlflow==1.26.1
-    - azureml-mlflow==1.42.0
+    - mlflow==2.4.1
+    - azureml-mlflow==1.51.0
```
The specification contains some usual packages that you use in your pipeline (numpy, pip), together with some Azure Machine Learning-specific packages (azureml-mlflow).
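A sketch of registering that conda file as a job environment; the environment name, file path, and base image below are assumptions:

```python
from azure.ai.ml.entities import Environment

custom_env_name = "aml-scikit-learn"  # assumption: any environment name works

pipeline_job_env = Environment(
    name=custom_env_name,
    description="Custom environment for the credit-card defaults pipeline",
    conda_file="./dependencies/conda.yaml",  # assumption: path to the spec above
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",  # assumption
)

# register the environment so pipeline steps can reference it by name
pipeline_job_env = ml_client.environments.create_or_update(pipeline_job_env)
print(f"Environment {pipeline_job_env.name} registered, version {pipeline_job_env.version}")
```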
@@ -581,16 +556,15 @@ To code the pipeline, you use a specific `@dsl.pipeline` decorator that identifi
Here, we use *input data*, *split ratio*, and *registered model name* as input variables. We then call the components and connect them via their input/output identifiers. The outputs of each step can be accessed via the `.outputs` property.
-> [!NOTE]
-> To use [serverless compute (preview)](./how-to-use-serverless-compute.md), replace `compute=cpu_compute_target` with `compute=azureml:serverless` in this code.
-
-```pythons
+```python
# the dsl decorator tells the sdk that we are defining an Azure Machine Learning pipeline
from azure.ai.ml import dsl, Input, Output


@dsl.pipeline(
-    compute=cpu_compute_target, # to use serverless compute, change this to: compute=azureml:serverless
+    compute=cpu_compute_target
+    if (cpu_cluster)
+    else "serverless",  # "serverless" value runs pipeline on serverless compute
```