
Commit a53fb7f

Commit message: edits
1 parent 74c0c3f commit a53fb7f

File tree: 1 file changed (+16 −16 lines)


articles/machine-learning/how-to-create-component-pipeline-python.md

Lines changed: 16 additions & 16 deletions
@@ -32,13 +32,13 @@ The example pipeline trains a small [Keras](https://keras.io/) convolutional neu
 In this article, you complete the following tasks:
 
 > [!div class="checklist"]
-> * Prepare input data for the pipeline job
-> * Create three components to prepare the data, train an image, and score the model
-> * Build a pipeline from the components
-> * Get access to a workspace with compute
-> * Submit the pipeline job
-> * Review the output of the components and the trained neural network
-> * (Optional) Register the component for further reuse and sharing within the workspace
+> * Prepare input data for the pipeline job.
+> * Create three components to prepare the data, train an image, and score the model.
+> * Build a pipeline from the components.
+> * Get access to a workspace that has compute.
+> * Submit the pipeline job.
+> * Review the output of the components and the trained neural network.
+> * (Optional) Register the component for further reuse and sharing within the workspace.
 
 If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
 

@@ -95,11 +95,11 @@ For each component, you need to complete these steps:
 
 The next section shows how to create the components in two ways. For the first two components, you use a Python function. For the third component you use YAML definition.
 
-### Create the data-preparation component
+### Create the data preparation component
 
 The first component in this pipeline converts the compressed data files of `fashion_ds` into two .csv files, one for training and the other for scoring. You use a Python function to define this component.
 
-If you're following along with the example in the [Azure Machine Learning examples repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in the `prep/` folder. This folder contains two files to construct the component: `prep_component.py`, which defines the component, and `conda.yaml`, which defines the runtime environment of the component.
+If you're following along with the example in the [Azure Machine Learning examples repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in the `prep` folder. This folder contains two files to construct the component: `prep_component.py`, which defines the component, and `conda.yaml`, which defines the runtime environment of the component.
 
 #### Define component by using a Python function
 
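The conversion this section describes (splitting `fashion_ds` into a training file and a scoring file) can be sketched as a plain Python function. The file names, split ratio, and signature below are illustrative assumptions, not the repo's actual `prep_component.py`, which additionally wraps the function as a component (with a decorator such as mldesigner's `command_component`):

```python
# Hypothetical sketch of data-preparation logic: write a dataset out as
# train.csv and score.csv. Names and the 80/20 split are assumptions.
import csv
import os


def prep_data(rows, output_dir, train_fraction=0.8):
    """Write rows to train.csv / score.csv under output_dir; return both paths."""
    os.makedirs(output_dir, exist_ok=True)
    cut = int(len(rows) * train_fraction)
    paths = {}
    for name, chunk in (("train", rows[:cut]), ("score", rows[cut:])):
        path = os.path.join(output_dir, f"{name}.csv")
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(chunk)
        paths[name] = path
    return paths
```

In the real component, the two output paths arrive as component outputs rather than being derived from a single directory argument.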
@@ -130,9 +130,9 @@ This is what a component looks like in the studio UI:
 
 :::image type="content" source="./media/how-to-create-component-pipeline-python/prep-data-component.png" alt-text="Screenshot of the Prep Data component in the UI and code." lightbox ="./media/how-to-create-component-pipeline-python/prep-data-component.png":::
 
-Now, you've prepared all source files for the `Prep Data` component.
+You've now prepared all source files for the `Prep Data` component.
 
-### Create the train model component
+### Create the model training component
 
 In this section, you'll create a component for training the image classification model in a Python function, as you did with the `Prep Data` component.
 
@@ -144,7 +144,7 @@ The source files for this component are in the `train` folder in the [Azure Mach
 * `train_component.py` defines the interface of the component and imports the function that's in `train.py`.
 * `conda.yaml` defines the runtime environment of the component.
 
-#### Get a script that contains execution logic
+#### Get a script that contains the logic
 
 The `train.py` file contains a normal Python function that performs the logic for training a Keras neural network for image classification. To view the code, see the [train.py file on GitHub](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/train.py).
 
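The split this section describes, execution logic in `train.py` and a thin component interface in `train_component.py` that imports it, can be sketched as follows. The function bodies are placeholder assumptions (the real `train` runs a Keras training loop); only the file layout mirrors the repo:

```python
# Hypothetical sketch of the two-file pattern: logic in one module, a thin
# component wrapper in another. Bodies are stand-ins, not the repo's code.

# --- train.py: execution logic ---
def train(train_data, epochs=10):
    """Stand-in for the Keras training loop; returns a summary dict."""
    return {"samples": len(train_data), "epochs": epochs}


# --- train_component.py: component interface ---
# In the repo this would be `from train import train` across two files.
def train_component(input_data, epochs=10):
    """Component entry point: delegates to the logic function."""
    return train(input_data, epochs=epochs)
```

Keeping the logic importable on its own makes it easy to unit test without any pipeline machinery.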

@@ -181,7 +181,7 @@ The `score.py` file contains a normal Python function that performs the training
 :::code language="python" source="~/azureml-examples-main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/score/score.py":::
 
 
-The code in score.py takes three command-line arguments: `input_data`, `input_model`, and `output_result`. The program scores the input model by using input data and then outputs the result.
+The code in `score.py` takes three command-line arguments: `input_data`, `input_model`, and `output_result`. The program scores the input model by using input data and then outputs the result.
 
 #### Define the component via YAML

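The command-line interface described above (`input_data`, `input_model`, `output_result`) can be sketched with `argparse`. The argument names follow the text; the parsing style is an assumption about the repo's implementation:

```python
# Hedged sketch of score.py's CLI: three required path arguments.
import argparse


def parse_args(argv=None):
    """Parse the scoring script's three command-line arguments."""
    parser = argparse.ArgumentParser(description="Score a model against input data.")
    parser.add_argument("--input_data", required=True, help="Folder of data to score")
    parser.add_argument("--input_model", required=True, help="Folder holding the trained model")
    parser.add_argument("--output_result", required=True, help="Folder for scoring output")
    return parser.parse_args(argv)
```

The YAML specification in the next section maps each of these arguments onto a component input or output.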
@@ -199,7 +199,7 @@ In this section, you'll learn how to create a component specification in the val
 * The `command` section specifies the command to execute when the component runs.
 * The `environment` section contains a Docker image and a conda YAML file. The source file is in the [sample repository](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/score/conda.yaml).
 
-You now have all the source files for model scoring component.
+You now have all the source files for the model scoring component.
 
 ## Load the components to build a pipeline
 
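A component specification of the kind this section describes, with `command` and `environment` sections, broadly follows the Azure Machine Learning command component YAML schema. This outline is hypothetical: the field names come from that schema, but the values are illustrative rather than the repo's actual scoring spec:

```yaml
# Hypothetical outline of a command component spec (values are illustrative).
$schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json
type: command
name: score_image_classification_keras
display_name: Score Image Classification Keras
inputs:
  input_data:
    type: uri_folder
  input_model:
    type: uri_folder
outputs:
  output_result:
    type: uri_folder
code: ./
command: >-
  python score.py
  --input_data ${{inputs.input_data}}
  --input_model ${{inputs.input_model}}
  --output_result ${{outputs.output_result}}
environment:
  conda_file: ./conda.yaml
  image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04
```

Note how the `command` section binds the script's three CLI arguments to the component's declared inputs and outputs.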
@@ -265,7 +265,7 @@ Create an `MLClient` object to manage Azure Machine Learning services. If you us
 
 #### Submit the pipeline job to the workspace
 
-Now you have a handle to your workspace, you can submit your pipeline job:
+Now that you have a handle to your workspace, you can submit your pipeline job:
 
 [!notebook-python[] (~/azureml-examples-main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=submit-pipeline)]
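The workspace handle and submission call this section refers to can be sketched with the azure-ai-ml SDK (v2). The client calls are commented out because they require real Azure credentials and a built `pipeline_job`; the workspace identifiers and experiment name are placeholders, not values from the article:

```python
# Hedged sketch of connecting to a workspace and submitting a pipeline job.
# Fill in your own workspace details before uncommenting the client calls.
workspace = {
    "subscription_id": "<SUBSCRIPTION_ID>",
    "resource_group_name": "<RESOURCE_GROUP>",
    "workspace_name": "<WORKSPACE_NAME>",
}

# from azure.identity import DefaultAzureCredential
# from azure.ai.ml import MLClient
#
# ml_client = MLClient(credential=DefaultAzureCredential(), **workspace)
# pipeline_job = ml_client.jobs.create_or_update(
#     pipeline_job, experiment_name="pipeline_samples"
# )
# ml_client.jobs.stream(pipeline_job.name)  # block until the run completes
```

`stream` blocks and prints run logs; skip it if you prefer to monitor the run from the studio link instead.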
@@ -284,7 +284,7 @@ You can monitor the pipeline run by selecting the link. Or you can block it unti
 > [!IMPORTANT]
 > The first pipeline run takes about 15 minutes. All dependencies are downloaded, a Docker image is created, and the Python environment is provisioned and created. Running the pipeline again takes significantly less time because those resources are reused instead of created. However, total runtime for the pipeline depends on the workload of your scripts and the processes that run in each pipeline step.
 
-### Checkout outputs and debug your pipeline in the UI
+### Check outputs and debug your pipeline in the UI
 
 You can select the `Link to Azure Machine Learning studio`, which is the job detail page of your pipeline. You'll see the pipeline graph:
