|
22 | 22 | [](https://doi.org/10.5281/zenodo.11136007) |
23 | 23 | [](https://www.gnu.org/licenses/gpl-3.0) |
24 | 24 |
|
25 | | - |
26 | | -**EvoRBF** is mind-blowing framework for Radial Basis Function (RBF) networks. |
27 | | -We explain several keys components and provide several types of RBF networks that you will never see in other places. |
28 | | - |
29 | | - |
30 | 25 | | **EvoRBF** | **Evolving Radial Basis Function Network** | |
31 | 26 | |--------------------------------------|--------------------------------------------------------| |
32 | 27 | | **Free software** | GNU General Public License (GPL) V3 license | |
@@ -68,7 +63,10 @@ We explain several keys components and provide several types of RBF networks tha |
68 | 63 |
|
69 | 64 | # Theory |
70 | 65 |
|
71 | | -You can read several papers by using Google scholar search. There are many ways we can use Nature-inspired Algorithms |
| 66 | +**EvoRBF** is a powerful framework for Radial Basis Function (RBF) networks.
| 67 | +We explain several key components and provide several types of RBF networks that you will rarely find elsewhere.
| 68 | + |
| 69 | +You can find several relevant papers via a Google Scholar search. There are many ways we can use Nature-inspired Algorithms
72 | 70 | to optimize a Radial Basis Function network; for example, you can read [this paper](https://doi.org/10.1016/B978-0-443-18764-3.00015-1).
73 | 71 | Here we will walk through some basic concepts and parameters that matter for this network.
74 | 72 |
|
@@ -218,9 +216,9 @@ model = NiaRbfTuner(problem_type="classification", bounds=my_bounds, cv=3, scori |
218 | 216 | + Or RBF uses random selection to find centers ==> not good at splitting samples into different clusters.
219 | 217 | 3. RBF needs to train the output weights. (This is the 2nd phase.)
220 | 218 | 4. RBF does not use gradient descent to calculate output weights; it uses the Moore–Penrose inverse (matrix multiplication, least-squares method) ==> so it is faster than an MLP network.
221 | | -5. Moore-Penrose inverse can find the exact solution ==> why you want to use Gradient or Metaheuristics here ==> Hell no. |
222 | | -6. In case of overfitting, what can we do with this network ==> We add Regularization method. |
223 | | -7. If you have large-scale dataset ==> Set more hidden nodes ==> Then increase the Regularization parameter. |
| 219 | +5. The Moore-Penrose inverse can find the exact solution ==> so we don't have to use gradient descent or an approximation algorithm here.
| 220 | +6. In case of overfitting, what can we do with this network ==> we add an L2 regularization method.
| 221 | +7. If you have a large-scale dataset ==> set more hidden nodes ==> then increase the L2 regularization parameter.
224 | 222 |
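The two-phase training described above can be sketched in plain NumPy. This is a hypothetical illustration, not the EvoRBF API: the center-selection step is a placeholder (evenly spaced centers, where EvoRBF would use clustering or a nature-inspired optimizer), and the output weights are then solved in closed form by regularized least squares, which reduces to the Moore–Penrose pseudoinverse solution when the L2 term is zero.

```python
# Hypothetical sketch (not the EvoRBF API): two-phase RBF training.
import numpy as np

def rbf_design_matrix(X, centers, sigma):
    # Gaussian basis: phi[i, j] = exp(-||x_i - c_j||^2 / (2 * sigma^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_output_weights(Phi, y, reg=1e-3):
    # Phase 2: ridge-regularized least squares,
    # w = (Phi^T Phi + reg * I)^-1 Phi^T y.
    # With reg = 0 this is the plain Moore-Penrose pseudoinverse solution;
    # no gradient descent is needed.
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(n), Phi.T @ y)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

# Phase 1: fix the centers. Evenly spaced here as a stand-in for
# clustering or a nature-inspired optimizer.
centers = np.linspace(-3, 3, 10).reshape(-1, 1)

Phi = rbf_design_matrix(X, centers, sigma=0.8)
w = fit_output_weights(Phi, y, reg=1e-3)
print(np.abs(Phi @ w - y).mean())  # small training error
```

Raising `reg` trades training accuracy for smoother output weights, which is the L2-regularization remedy for overfitting mentioned in point 6.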
|
225 | 223 | ```code |
226 | 224 | 1. RbfRegressor, RbfClassifier: You need to set up 4 types of hyper-parameters. |
|