Commit 6904991

Merge pull request #112586 from sdgilley/patch-43
change dataset example

2 parents: 1d3dc94 + 5982b0e

File tree

1 file changed (+3, −3 lines)

articles/machine-learning/how-to-create-your-first-pipeline.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -77,8 +77,8 @@ Upload data files or directories to the datastore for them to be accessible from
 
 ```python
 def_blob_store.upload_files(
-    ["./data/20news.pkl"],
-    target_path="20newsgroups",
+    ["iris.csv"],
+    target_path="train-dataset",
     overwrite=True)
 ```
 
````

````diff
@@ -97,7 +97,7 @@ You create a `Dataset` using methods like [from_file](https://docs.microsoft.com
 ```python
 from azureml.core import Dataset
 
-iris_tabular_dataset = Dataset.Tabular.from_delimited_files([(def_blob_store, 'train-dataset/tabular/iris.csv')])
+iris_tabular_dataset = Dataset.Tabular.from_delimited_files([(def_blob_store, 'train-dataset/iris.csv')])
 ```
 
 Intermediate data (or output of a step) is represented by a [PipelineData](https://docs.microsoft.com/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata?view=azure-ml-py) object. `output_data1` is produced as the output of a step, and used as the input of one or more future steps. `PipelineData` introduces a data dependency between steps, and creates an implicit execution order in the pipeline. This object will be used later when creating pipeline steps.
````
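For context, the changed snippets fit together roughly as below. This is a hedged sketch, not verbatim from the article: it assumes the v1 `azureml-sdk` is installed, that a workspace `config.json` is available locally, and that `def_blob_store` is the workspace's default datastore (the article sets it up in an earlier section). It cannot run without an Azure ML workspace, so treat it as illustrative only.

```python
from azureml.core import Dataset, Workspace
from azureml.pipeline.core import PipelineData

# Assumed setup: connect to the workspace and get its default datastore.
ws = Workspace.from_config()  # reads config.json from the working directory
def_blob_store = ws.get_default_datastore()

# Upload the CSV so pipeline steps can access it from the datastore
# (this is the snippet the commit changes to use iris.csv).
def_blob_store.upload_files(
    ["iris.csv"],
    target_path="train-dataset",
    overwrite=True)

# Create a tabular Dataset from the uploaded file, matching the new path.
iris_tabular_dataset = Dataset.Tabular.from_delimited_files(
    [(def_blob_store, 'train-dataset/iris.csv')])

# Intermediate output: produced by one step and consumed by later steps,
# which is what gives the pipeline its implicit execution order.
output_data1 = PipelineData("output_data1", datastore=def_blob_store)
```

Because `output_data1` is both an output of one step and an input of the next, Azure ML can infer the step ordering without the author wiring it explicitly.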
