- **Minimal Compute Requirements:** 90% less computation than traditional deep learning
- **Skip Connection Power:** Enhanced feature reuse for superior accuracy

![edRVFL-SC Architecture](examples/EDRVFL.png)
*Architecture diagram showing skip connections and ensemble prediction*

## Key Features

## 📈 Performance Results

![Actual vs Predicted](examples/actual_vs_predicted.png)
*Model predictions vs actual values on California Housing dataset*

![Feature Importance](examples/feature_importance.png)
*Feature importance analysis showing key predictive factors*

![Error Distribution](examples/error_distribution.png)
*Prediction error distribution centered near zero*

| Metric | Value |
|--------------------|------------------------------|
| Training Time | 3.2 sec (vs 15 min for equivalent DNN) |
| RMSE | 0.5848 |
| R² Score | 0.7390 |
| FLOPs/Prediction | 5,323,388 (fits mobile devices) |

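For reference, the RMSE and R² figures in the table can be computed from raw predictions in a few lines of NumPy. The toy arrays below are illustrative stand-ins, not the actual California Housing outputs:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error between targets and predictions.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2_score(y_true, y_pred):
    # Coefficient of determination: 1 - residual SS / total SS.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy targets/predictions (in $100k units, like California Housing).
y_true = np.array([2.0, 1.5, 3.2, 0.9])
y_pred = np.array([2.1, 1.4, 3.0, 1.1])
print(round(rmse(y_true, y_pred), 4))      # → 0.1581
print(round(r2_score(y_true, y_pred), 4))  # → 0.965
```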
## Key Hyperparameters

| Parameter | Description | Default | Performance Tip |
|-----------------|--------------------------------------|---------|----------------------------------|
| `num_units` | Hidden neurons per layer | 512 | Increase for complex patterns |
| `activation` | Nonlinear function (relu, sigmoid, tanh, radbas) | relu | radbas for smooth data |
| `lambda_` | Regularization coefficient | 0.0001 | Higher prevents overfit |
| `Lmax` | Hidden layers | 7 | 5-7 layers optimal |
| `deep_boosting` | Layer scaling factor | 0.5 | 0.8-0.95 boosts accuracy |

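A minimal configuration sketch mirroring the defaults above. The dictionary keys follow the parameter names in the table; the constructor that would consume it (e.g. a hypothetical `EDRVFLSC(**default_config)`) is an assumption, not this repo's confirmed API:

```python
# Defaults taken from the hyperparameter table above.
default_config = {
    "num_units": 512,       # hidden neurons per layer
    "activation": "relu",   # one of: relu, sigmoid, tanh, radbas
    "lambda_": 0.0001,      # ridge regularization coefficient
    "Lmax": 7,              # number of hidden layers
    "deep_boosting": 0.5,   # per-layer scaling factor
}
print(default_config["Lmax"])  # → 7
```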
## Model Architecture

The edRVFL-SC revolution features:

- Average predictions from all layers
- Natural regularization effect

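The properties above can be sketched in NumPy: random (untrained) hidden weights, the raw input concatenated into every layer via a skip connection, and a final prediction averaged over all layers. The readout vectors here are random stand-ins for the trained ridge solutions, so this illustrates shapes and data flow only, not the repo's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def edrvfl_sc_forward(X, Lmax=3, num_units=8):
    """Shape/data-flow sketch of an edRVFL-SC forward pass (illustrative only)."""
    n = X.shape[0]
    H = X
    per_layer_preds = []
    for _ in range(Lmax):
        # Skip connection: each layer sees the previous layer's features plus
        # the raw input, with a constant column for bias augmentation.
        Z = np.hstack([H, X, np.ones((n, 1))])
        W = rng.normal(size=(Z.shape[1], num_units))  # random, untrained weights
        H = np.maximum(Z @ W, 0.0)                    # relu activation
        beta = rng.normal(size=num_units)             # stand-in for ridge readout
        per_layer_preds.append(H @ beta)
    # Ensemble: average the per-layer predictions.
    return np.mean(np.stack(per_layer_preds), axis=0)

y = edrvfl_sc_forward(rng.normal(size=(5, 4)))
print(y.shape)  # → (5,)
```

Because every layer contributes its own prediction to the average, adding layers grows the ensemble rather than a single deep path, which is where the natural regularization effect comes from.
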
## Advanced Features
