
Commit 1cc49cc: Evaluate: Fix typos
1 parent ef9d5cd

3 files changed (+3, -3 lines)

source/widgets/evaluate/confusionmatrix.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ The widget usually gets the evaluation results from [Test and Score](../evaluate
 ![](images/ConfusionMatrix-stamped.png)
 
 1. *Learners*: Choose a learning algorithm to display.
-2. *OUtput*: define what is sent to the output, namely predicted classes (*Predictions*) or their probabilities (*Probabilities*).
+2. *Output*: define what is sent to the output, namely predicted classes (*Predictions*) or their probabilities (*Probabilities*).
 3. The widget outputs every change if *Send Automatically* is ticked. If not, the user will need to click *Send Selected* to commit the changes.
 4. *Show*: select what data to see in the matrix.
 - **Number of instances** shows correctly and incorrectly classified instances numerically.
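The *Number of instances* view mentioned in the diff above is essentially a contingency count of (actual, predicted) label pairs. A minimal pure-Python sketch of that counting idea (the labels below are invented for illustration, not taken from the widget):

```python
from collections import Counter

def confusion_counts(actual, predicted, labels):
    """Count (actual, predicted) pairs into a labels-by-labels grid:
    rows are actual classes, columns are predicted classes."""
    pairs = Counter(zip(actual, predicted))
    return [[pairs[(a, p)] for p in labels] for a in labels]

# Toy labels (hypothetical):
actual    = ["cat", "cat", "dog", "dog", "dog"]
predicted = ["cat", "dog", "dog", "dog", "cat"]
print(confusion_counts(actual, predicted, ["cat", "dog"]))
# → [[1, 1], [1, 2]]
```

The diagonal entries (1 and 2 here) are the correctly classified instances; off-diagonal entries are the misclassifications.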

source/widgets/evaluate/parameterfitter.md

Lines changed: 1 addition & 1 deletion
@@ -24,6 +24,6 @@ Here is a simple example on how to fit parameters using the **Parameter Fitter**
 Parameter Fitter enables observing performance for a varying number of trees. We set the range from 1 to 10, namely we will observe performance for every number of trees up to 10.
 
-We see there's a slight peak in AUC value for cross-validation at 3 trees, while 8 trees seem to be optimal overall.
+We see there's a slight peak in AUC value for cross-validation at 3 trees, while 8 trees seem to be optimal overall. (Note that this is just a toy example!)
 
 ![](images/ParameterFitter-Example.png)
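The parameter sweep described in this file boils down to scoring the model at each candidate value and keeping the best. A hypothetical sketch with a stubbed-out scorer (`cv_auc` stands in for real cross-validated AUC; the numbers are invented only to mirror the example's shape, with a slight peak at 3 trees and the overall best at 8):

```python
def cv_auc(n_trees):
    # Stub: a real implementation would cross-validate, e.g., a random
    # forest with `n_trees` trees and return the mean AUC. The values
    # below are invented for illustration only.
    scores = {1: 0.71, 2: 0.74, 3: 0.78, 4: 0.77, 5: 0.79,
              6: 0.80, 7: 0.81, 8: 0.83, 9: 0.82, 10: 0.82}
    return scores[n_trees]

# Sweep the range 1..10, as in the example, and pick the best setting.
results = {n: cv_auc(n) for n in range(1, 11)}
best_n = max(results, key=results.get)
print(best_n)  # → 8
```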

source/widgets/evaluate/testandscore.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ The *Learner* signal has an uncommon property: it can be connected to more than
 ![](images/TestAndScore-stamped.png)
 
 1. The widget supports various sampling methods.
-    - [Cross-validation](https://en.wikipedia.org/wiki/Cross-validation_\(statistics\)) splits the data into a given *number of folds* (usually 5 or 10). The algorithm is tested by holding out examples from one fold at a time; the model is induced from other folds and examples from the held out fold are classified. This is repeated for all the folds. The *Statified* option ensures the folds are similar in terms of class distribution.
+    - [Cross-validation](https://en.wikipedia.org/wiki/Cross-validation_\(statistics\)) splits the data into a given *number of folds* (usually 5 or 10). The algorithm is tested by holding out examples from one fold at a time; the model is induced from other folds and examples from the held out fold are classified. This is repeated for all the folds. The *Stratified* option ensures the folds are similar in terms of class distribution.
     - **Cross validation by feature** performs cross-validation but folds are defined by the selected categorical feature from meta-features.
     - **Random sampling** randomly splits the data into the training and testing set in the given proportion (e.g. 70:30, see *Training set size*); the whole procedure is repeated for a specified number of times (*Repeat train/test*). *Statified* option ensures the folds are similar in terms of class distribution.
     - **Leave-one-out** is similar, but it holds out one instance at a time, inducing the model from all others and then classifying the held out instances. This method is obviously very stable, reliable... and very slow.
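The *Stratified* option described in this diff keeps each fold's class distribution close to that of the full data. One way to sketch the idea in plain Python is to deal each class's instances round-robin across the folds (a simplified, hypothetical sketch: it takes a bare list of labels and skips the shuffling that real implementations typically do first):

```python
from collections import defaultdict
from itertools import cycle

def stratified_folds(labels, k):
    """Assign instance indices to k folds, dealing each class's
    instances round-robin so every fold roughly preserves the
    overall class distribution. (No shuffling, for determinism.)"""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for fold_id, i in zip(cycle(range(k)), indices):
            folds[fold_id].append(i)
    return folds

labels = ["a", "a", "a", "a", "b", "b"]
print(stratified_folds(labels, 2))
# → [[0, 2, 4], [1, 3, 5]] -- each fold gets two "a"s and one "b"
```

Leave-one-out is the degenerate case where k equals the number of instances, which is why it is exhaustive but slow.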
