Commit c86f808

Rename media file name references

1 parent 1af16a6 commit c86f808

35 files changed: +30 -30 lines changed
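This commit pairs a rename of the media folders with a matching rewrite of every reference in the articles: each `media/ui-sample-*` path becomes `media/how-to-ui-sample-*`. A bulk update of this kind can be sketched as a one-off script; the directory layout and file content below are illustrative, not the repo's actual tooling.

```python
from pathlib import Path

# Set up an illustrative article containing an old-style media reference.
root = Path("/tmp/demo/articles")
root.mkdir(parents=True, exist_ok=True)
sample = root / "sample.md"
sample.write_text(
    "![Experiment graph](./media/ui-sample-classification-predict-churn/experiment-graph.png)\n"
)

# Rewrite the media folder prefix in every markdown file under the root.
for md in root.rglob("*.md"):
    text = md.read_text()
    md.write_text(text.replace("media/ui-sample-", "media/how-to-ui-sample-"))
```

After the pass, no `media/ui-sample-` reference remains, which is exactly the shape of the 30 one-line changes below.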

articles/machine-learning/service/how-to-ui-sample-classification-predict-churn.md
Lines changed: 5 additions & 5 deletions

@@ -22,15 +22,15 @@ Because you're trying to answer the question "Which one?" this is called a class

Here's the completed graph for this experiment:

-![Experiment graph](./media/ui-sample-classification-predict-churn/experiment-graph.png)
+![Experiment graph](./media/how-to-ui-sample-classification-predict-churn/experiment-graph.png)

## Prerequisites

[!INCLUDE [aml-ui-prereq](../../../includes/aml-ui-prereq.md)]

4. Select the **Open** button for the Sample 5 experiment.

-![Open the experiment](media/ui-sample-classification-predict-churn/open-sample5.png)
+![Open the experiment](media/how-to-ui-sample-classification-predict-churn/open-sample5.png)

## Data

@@ -44,11 +44,11 @@ First, do some simple data processing.

- The raw dataset contains lots of missing values. Use the **Clean Missing Data** module to replace the missing values with 0.

-![Clean the dataset](./media/ui-sample-classification-predict-churn/cleaned-dataset.png)
+![Clean the dataset](./media/how-to-ui-sample-classification-predict-churn/cleaned-dataset.png)

- The features and the corresponding churn, appetency, and up-selling labels are in different datasets. Use the **Add Columns** module to append the label columns to the feature columns. The first column, **Col1**, is the label column. The rest of the columns, **Var1**, **Var2**, and so on, are the feature columns.

-![Add the column dataset](./media/ui-sample-classification-predict-churn/added-column1.png)
+![Add the column dataset](./media/how-to-ui-sample-classification-predict-churn/added-column1.png)

- Use the **Split Data** module to split the dataset into train and test sets.

@@ -58,7 +58,7 @@ First, do some simple data processing.

Visualize the output of the **Evaluate Model** module to see the performance of the model on the test set. For the up-selling task, the ROC curve shows that the model does better than a random model. The area under the curve (AUC) is 0.857. At threshold 0.5, the precision is 0.7, the recall is 0.463, and the F1 score is 0.545.

-![Evaluate the results](./media/ui-sample-classification-predict-churn/evaluate-result.png)
+![Evaluate the results](./media/how-to-ui-sample-classification-predict-churn/evaluate-result.png)

You can move the **Threshold** slider and see the metrics change for the binary classification task.
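The **Clean Missing Data** and **Add Columns** steps described in the context lines above can be sketched in pandas; this is a rough equivalent for orientation, not the designer modules' implementation, and the column names and values are illustrative.

```python
import pandas as pd

# Feature columns with gaps, and the label column shipped separately.
features = pd.DataFrame({"Var1": [1.0, None, 3.0], "Var2": [None, 5.0, 6.0]})
labels = pd.DataFrame({"Col1": [0, 1, 0]})

# Clean Missing Data: replace the missing values with 0.
features = features.fillna(0)

# Add Columns: append the label column to the feature columns.
dataset = pd.concat([labels, features], axis=1)
```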

articles/machine-learning/service/how-to-ui-sample-classification-predict-credit-risk-basic.md
Lines changed: 3 additions & 3 deletions

@@ -20,15 +20,15 @@ Because the question is answering "Which one?" this is called a classification p

Here's the final experiment graph for this sample:

-![Graph of the experiment](media/ui-sample-classification-predict-credit-risk-basic/overall-graph.png)
+![Graph of the experiment](media/how-to-ui-sample-classification-predict-credit-risk-basic/overall-graph.png)

## Prerequisites

[!INCLUDE [aml-ui-prereq](../../../includes/aml-ui-prereq.md)]

4. Select the **Open** button for the Sample 3 experiment:

-![Open the experiment](media/ui-sample-classification-predict-credit-risk-basic/open-sample3.png)
+![Open the experiment](media/how-to-ui-sample-classification-predict-credit-risk-basic/open-sample3.png)

## Related sample

@@ -54,7 +54,7 @@ Follow these steps to create the experiment:

## Results

-![Evaluate the results](media/ui-sample-classification-predict-credit-risk-basic/evaluate-result.png)
+![Evaluate the results](media/how-to-ui-sample-classification-predict-credit-risk-basic/evaluate-result.png)

In the evaluation results, you can see that the AUC of the model is 0.776. At threshold 0.5, the precision is 0.621, the recall is 0.456, and the F1 score is 0.526.
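The F1 score quoted in the context line above is the harmonic mean of precision and recall, and the reported values check out:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Values reported for this sample at threshold 0.5:
print(round(f1_score(0.621, 0.456), 3))  # → 0.526
```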

articles/machine-learning/service/how-to-ui-sample-classification-predict-credit-risk-cost-sensitive.md
Lines changed: 5 additions & 5 deletions

@@ -22,15 +22,15 @@ If you're just getting started with machine learning, you can take a look at the

Here's the completed graph for this experiment:

-[![Graph of the experiment](media/ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png)](media/ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png#lightbox)
+[![Graph of the experiment](media/how-to-ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png)](media/how-to-ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png#lightbox)

## Prerequisites

[!INCLUDE [aml-ui-prereq](../../../includes/aml-ui-prereq.md)]

4. Select the **Open** button for the Sample 4 experiment:

-![Open the experiment](media/ui-sample-classification-predict-credit-risk-cost-sensitive/open-sample4.png)
+![Open the experiment](media/how-to-ui-sample-classification-predict-credit-risk-cost-sensitive/open-sample4.png)

## Data

@@ -49,7 +49,7 @@ The cost of misclassifying a low-risk example as high is 1, and the cost of misc

Here's the graph of the experiment:

-[![Graph of the experiment](media/ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png)](media/ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png#lightbox)
+[![Graph of the experiment](media/how-to-ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png)](media/how-to-ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png#lightbox)

## Data processing

@@ -105,7 +105,7 @@ This sample uses the standard data science workflow to create, train, and test t

The following diagram shows a portion of this experiment, in which the original and replicated training sets are used to train two different SVM models. **Train Model** is connected to the training set, and **Score Model** is connected to the test set.

-![Experiment graph](media/ui-sample-classification-predict-credit-risk-cost-sensitive/score-part.png)
+![Experiment graph](media/how-to-ui-sample-classification-predict-credit-risk-cost-sensitive/score-part.png)

In the evaluation stage of the experiment, you compute the accuracy of each of the four models. For this experiment, use **Evaluate Model** to compare examples that have the same misclassification cost.

@@ -139,7 +139,7 @@ def azureml_main(dataframe1 = None, dataframe2 = None):

To view the results of the experiment, you can right-click the Visualize output of the last **Select Columns in Dataset** module.

-![Visualize output](media/ui-sample-classification-predict-credit-risk-cost-sensitive/result.png)
+![Visualize output](media/how-to-ui-sample-classification-predict-credit-risk-cost-sensitive/result.png)

The first column lists the machine learning algorithm used to generate the model.
The second column indicates the type of the training set.
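The last hunk above falls inside an **Execute Python Script** module, whose entry point appears in the hunk header (`def azureml_main(...)`). As a minimal sketch of that contract, up to two input DataFrames in, a tuple of DataFrames out; the column names and the cost mapping here are illustrative, not the experiment's actual logic.

```python
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    # Work on a copy so the module's input is left untouched.
    result = dataframe1.copy()
    # Illustrative cost-sensitive weighting: misclassifying a high-risk
    # example (Label 1) costs 5, a low-risk example (Label 0) costs 1.
    result["Cost"] = result["Label"].map({0: 1, 1: 5})
    # The module must return a tuple containing the output DataFrame(s).
    return (result,)
```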

articles/machine-learning/service/how-to-ui-sample-classification-predict-flight-delay.md
Lines changed: 8 additions & 8 deletions

@@ -20,15 +20,15 @@ This problem can be approached as a classification problem, predicting two class

Here's the final experiment graph for this sample:

-[![Graph of the experiment](media/ui-sample-classification-predict-flight-delay/experiment-graph.png)](media/ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png#lightbox)
+[![Graph of the experiment](media/how-to-ui-sample-classification-predict-flight-delay/experiment-graph.png)](media/how-to-ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png#lightbox)

## Prerequisites

[!INCLUDE [aml-ui-prereq](../../../includes/aml-ui-prereq.md)]

4. Select the **Open** button for the Sample 6 experiment:

-![Open the experiment](media/ui-sample-classification-predict-flight-delay/open-sample6.png)
+![Open the experiment](media/how-to-ui-sample-classification-predict-flight-delay/open-sample6.png)

## Get the data

@@ -50,13 +50,13 @@ To supplement the flight data, the **Weather Dataset** is used. The weather data

A dataset usually requires some pre-processing before it can be analyzed.

-![data-process](media/ui-sample-classification-predict-flight-delay/data-process.png)
+![data-process](media/how-to-ui-sample-classification-predict-flight-delay/data-process.png)

### Flight data

The columns **Carrier**, **OriginAirportID**, and **DestAirportID** are saved as integers. However, they're categorical attributes, use the **Edit Metadata** module to convert them to categorical.

-![edit-metadata](media/ui-sample-classification-predict-flight-delay/edit-metadata.png)
+![edit-metadata](media/how-to-ui-sample-classification-predict-flight-delay/edit-metadata.png)

Then use the **Select Columns** in Dataset module to exclude from the dataset columns that are possible target leakers: **DepDelay**, **DepDel15**, **ArrDelay**, **Canceled**, **Year**.

@@ -76,18 +76,18 @@ Since weather data is reported in local time, time zone differences are accounte

Flight records are joined with weather data at origin of the flight (**OriginAirportID**) using the **Join Data** module.

-![join flight and weather by origin](media/ui-sample-classification-predict-flight-delay/join-origin.png)
+![join flight and weather by origin](media/how-to-ui-sample-classification-predict-flight-delay/join-origin.png)


Flight records are joined with weather data using the destination of the flight (**DestAirportID**).

-![Join flight and weather by destination](media/ui-sample-classification-predict-flight-delay/join-destination.png)
+![Join flight and weather by destination](media/how-to-ui-sample-classification-predict-flight-delay/join-destination.png)

### Preparing Training and Test Samples

The **Split Data** module splits the data into April through September records for training, and October records for test.

-![Split training and test data](media/ui-sample-classification-predict-flight-delay/split.png)
+![Split training and test data](media/how-to-ui-sample-classification-predict-flight-delay/split.png)

Year, month, and timezone columns are removed from the training dataset using the Select Columns module.

@@ -110,7 +110,7 @@ Finally, to test the quality of the results, add the **Evaluate Model** module t
## Evaluate
The logistic regression model has AUC of 0.631 on the test set.

-![evaluate](media/ui-sample-classification-predict-flight-delay/evaluate.png)
+![evaluate](media/how-to-ui-sample-classification-predict-flight-delay/evaluate.png)

## Next steps
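The **Split Data** step described in the hunk above, April through September for training and October for test, can be sketched in pandas; a rough equivalent with an illustrative `Month` column, not the designer module itself.

```python
import pandas as pd

# Illustrative flight records with the month of each flight.
df = pd.DataFrame({"Month": [4, 6, 9, 10, 10], "ArrDel15": [0, 1, 0, 1, 0]})

# Split Data: April-September records for training, October for test.
train = df[df["Month"].between(4, 9)]
test = df[df["Month"] == 10]
```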

articles/machine-learning/service/how-to-ui-sample-regression-predict-automobile-price-basic.md
Lines changed: 5 additions & 5 deletions

@@ -27,15 +27,15 @@ The fundamental steps of a training machine learning model are:

Here's the final, completed graph of the experiment we'll be working on. We'll provide the rationale for all the modules so you can make similar decisions on your own.

-![Graph of the experiment](media/ui-sample-regression-predict-automobile-price-basic/overall-graph.png)
+![Graph of the experiment](media/how-to-ui-sample-regression-predict-automobile-price-basic/overall-graph.png)

## Prerequisites

[!INCLUDE [aml-ui-prereq](../../../includes/aml-ui-prereq.md)]

4. Select the **Open** button for the Sample 1 experiment:

-![Open the experiment](media/ui-sample-regression-predict-automobile-price-basic/open-sample1.png)
+![Open the experiment](media/how-to-ui-sample-regression-predict-automobile-price-basic/open-sample1.png)

## Get the data

@@ -47,7 +47,7 @@ The main data preparation tasks include data cleaning, integration, transformati

Use the **Select Columns in Dataset** module to exclude normalized-losses that have many missing values. Then use **Clean Missing Data** to remove the rows that have missing values. This helps to create a clean set of training data.

-![Data pre-processing](./media/ui-sample-regression-predict-automobile-price-basic/data-processing.png)
+![Data pre-processing](./media/how-to-ui-sample-regression-predict-automobile-price-basic/data-processing.png)

## Train the model

@@ -65,11 +65,11 @@ After the model is trained, you can use the **Score Model** and **Evaluate Model

**Score Model** generates predictions for the test dataset by using the trained model. To check the result, select the output port of **Score Model** and then select **Visualize**.

-![Score result](./media/ui-sample-regression-predict-automobile-price-basic/score-result.png)
+![Score result](./media/how-to-ui-sample-regression-predict-automobile-price-basic/score-result.png)

Pass the scores to the **Evaluate Model** module to generate evaluation metrics. To check the result, select the output port of the **Evaluate Model** and then select **Visualize**.

-![Evaluate result](./media/ui-sample-regression-predict-automobile-price-basic/evaluate-result.png)
+![Evaluate result](./media/how-to-ui-sample-regression-predict-automobile-price-basic/evaluate-result.png)

## Clean up resources
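The preparation described in the hunk above, exclude the normalized-losses column and then drop rows with missing values, maps onto two pandas operations; a rough sketch with made-up data, not the designer modules' implementation.

```python
import pandas as pd

# Illustrative slice of the automobile dataset with missing values.
df = pd.DataFrame({
    "normalized-losses": [164, None, 158],
    "make": ["audi", "audi", None],
    "price": [13950, 17450, None],
})

# Select Columns in Dataset: exclude normalized-losses.
df = df.drop(columns=["normalized-losses"])

# Clean Missing Data: remove the rows that have missing values.
df = df.dropna()
```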

articles/machine-learning/service/how-to-ui-sample-regression-predict-automobile-price-compare-algorithms.md
Lines changed: 4 additions & 4 deletions

@@ -20,15 +20,15 @@ If you're just getting started with machine learning, take a look at the [basic

Here's the completed graph for this experiment:

-[![Graph of the experiment](media/ui-sample-regression-predict-automobile-price-compare-algorithms/graph.png)](media/ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png#lightbox)
+[![Graph of the experiment](media/how-to-ui-sample-regression-predict-automobile-price-compare-algorithms/graph.png)](media/how-to-ui-sample-classification-predict-credit-risk-cost-sensitive/graph.png#lightbox)

## Prerequisites

[!INCLUDE [aml-ui-prereq](../../../includes/aml-ui-prereq.md)]

4. Select the **Open** button for the Sample 2 experiment:

-![Open the experiment](media/ui-sample-regression-predict-automobile-price-compare-algorithms/open-sample2.png)
+![Open the experiment](media/how-to-ui-sample-regression-predict-automobile-price-compare-algorithms/open-sample2.png)

## Experiment summary

@@ -49,7 +49,7 @@ The main data preparation tasks include data cleaning, integration, transformati

Use the **Select Columns in Dataset** module to exclude normalized-losses that have many missing values. We then use **Clean Missing Data** to remove the rows that have missing values. This helps to create a clean set of training data.

-![Data pre-processing](media/ui-sample-regression-predict-automobile-price-compare-algorithms/data-processing.png)
+![Data pre-processing](media/how-to-ui-sample-regression-predict-automobile-price-compare-algorithms/data-processing.png)

## Train the model

@@ -76,7 +76,7 @@ Second, compare two algorithms on the testing dataset.

Here are the results:

-![Compare the results](media/ui-sample-regression-predict-automobile-price-compare-algorithms/result.png)
+![Compare the results](media/how-to-ui-sample-regression-predict-automobile-price-compare-algorithms/result.png)

These results show that the model built with **Boosted Decision Tree Regression** has a lower root mean squared error than the model built on **Decision Forest Regression**.
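Root mean squared error, the metric the comparison above is based on, is the square root of the mean squared difference between predicted and true values; a small self-contained sketch with illustrative numbers:

```python
import math

def rmse(y_true, y_pred):
    # Square root of the mean of squared prediction errors.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

print(rmse([3.0, 5.0], [2.0, 6.0]))  # → 1.0
```

Lower is better, so the model whose predictions sit closer to the true prices wins the comparison.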
