
Commit 3e45aba

set max column limit to 92 in readme
1 parent 4cc82c7 commit 3e45aba

File tree

1 file changed: +17 -11 lines changed


README.md

Lines changed: 17 additions & 11 deletions
@@ -19,20 +19,24 @@ Pkg.add("FeatureSelection")
 # Example Usage
 Let's build a supervised recursive feature eliminator with `RandomForestRegressor`
 from DecisionTree.jl as our base model.
-But first we need a dataset to train on. We shall create a synthetic dataset popularly known in the R community as the friedman dataset#1. Notice how the target vector for this dataset depends on only the first
-five columns of feature table. So we expect that our recursive feature elimination should return the first
-columns as important features.
+But first we need a dataset to train on. We shall create a synthetic dataset popularly
+known in the R community as the Friedman #1 dataset. Notice how the target vector for this
+dataset depends on only the first five columns of the feature table. So we expect that our
+recursive feature elimination should return the first five columns as important features.
 ```julia
 using MLJ # or, minimally, `using FeatureSelection, MLJModels, MLJBase`
 using StableRNGs
 rng = StableRNG(123)
 A = rand(rng, 50, 10)
 X = MLJ.table(A) # features
 y = @views(
-    10 .* sin.(pi .* A[:, 1] .* A[:, 2]) + 20 .* (A[:, 3] .- 0.5).^ 2 .+ 10 .* A[:, 4] .+ 5 * A[:, 5]
+    10 .* sin.(
+        pi .* A[:, 1] .* A[:, 2]
+    ) + 20 .* (A[:, 3] .- 0.5).^ 2 .+ 10 .* A[:, 4] .+ 5 * A[:, 5]
 ) # target
 ```
-Now we that we have our data we can create our recursive feature elimination model and train it on our dataset
+Now that we have our data, we can create our recursive feature elimination model and
+train it on our dataset:
 ```julia
 RandomForestRegressor = @load RandomForestRegressor pkg=DecisionTree
 forest = RandomForestRegressor()
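
The hunk above ends just after `forest` is defined, so the README's own code for wrapping the forest in the recursive feature eliminator and training it is not visible in this diff. The following is only a sketch of that step using MLJ's standard machine workflow; the keyword `n_features = 5` and the name `rfe_mach` are illustrative assumptions, not text from the commit.

```julia
# Sketch, not taken from the commit: wrap the forest in an RFE model and fit it.
rfe = RecursiveFeatureElimination(model = forest, n_features = 5)  # n_features = 5 is an assumed setting
rfe_mach = machine(rfe, X, y)  # bind the wrapped model to the training data
fit!(rfe_mach)                 # recursively drop the least important features
report(rfe_mach)               # inspect which features survived (exact field names vary)
```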
@@ -53,10 +57,10 @@ select just those features some new table including all the original features.
 in `?RecursiveFeatureElimination`.
 
 Okay, let's say that we didn't know that our synthetic dataset depends on only five
-columns from our feature table. We could apply cross fold validation `CV(nfolds=5)` with our
-recursive feature elimination model to select the optimal value of
-`n_features` for our model. In this case we will use a simple Grid search with
-root mean square as the measure.
+columns from our feature table. We could apply cross-validation `CV(nfolds=5)` with
+our recursive feature elimination model to select the optimal value of
+`n_features` for our model. In this case we will use a simple Grid search with root mean
+squared error as the measure.
 ```julia
 rfe = RecursiveFeatureElimination(model = forest)
 tuning_rfe_model = TunedModel(
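
The diff cuts off inside the `TunedModel(` call, so the full tuning specification is not shown here. Below is a hedged sketch of how the strategy described above (5-fold cross-validation, a `Grid` search, and a root-mean-squared-error measure over `n_features`) is typically spelled out in MLJ; the candidate range `1:10` and the exact keyword values are assumptions, not the commit's hidden lines.

```julia
# Sketch only: mirrors the strategy described in the prose, not the lines elided by the diff.
tuning_rfe_model = TunedModel(
    model = rfe,
    tuning = Grid(rng = rng),                        # simple grid search
    resampling = CV(nfolds = 5),                     # 5-fold cross-validation
    measure = rms,                                   # root mean squared error
    range = range(rfe, :n_features; values = 1:10),  # candidate feature counts (assumption)
)
self_tuning_rfe_mach = machine(tuning_rfe_model, X, y)  # name reused from the predict hunk below
fit!(self_tuning_rfe_mach)
```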
@@ -80,6 +84,8 @@ and call `predict` on the tuned model machine as shown below
 Xnew = MLJ.table(rand(rng, 50, 10)) # create test data
 predict(self_tuning_rfe_mach, Xnew)
 ```
-In this case, prediction is done using the best recursive feature elimination model gotten from the tuning process above.
+In this case, prediction is done using the best recursive feature elimination model
+obtained from the tuning process above.
 
-For more information various cross-validation strategies and `TunedModel` see [MLJ Documentation](https://alan-turing-institute.github.io/MLJ.jl/dev/)
+For more information on various cross-validation strategies and `TunedModel`, see the
+[MLJ Documentation](https://alan-turing-institute.github.io/MLJ.jl/dev/).
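
Also not shown in this diff is how to see which columns the winning model kept, or how to reduce a new table to just those columns. A rough sketch under standard MLJ conventions; `best_model` follows `TunedModel`'s documented report, while using `transform` on a refitted `RecursiveFeatureElimination` machine to drop the eliminated columns is an assumption based on the `?RecursiveFeatureElimination` reference above.

```julia
# Sketch only: inspect the winning model and use it to select features from new data.
best_rfe = report(self_tuning_rfe_mach).best_model  # winning RecursiveFeatureElimination
best_rfe.n_features                                 # the number of features it settled on
best_mach = machine(best_rfe, X, y)                 # refit the winner on the training data
fit!(best_mach)
Xselected = transform(best_mach, Xnew)              # Xnew restricted to the kept columns (assumption)
```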

0 commit comments