Refactor pre-training workflow to decouple base models, introduce a standalone pre-training script, and implement dynamic path injection with feature consistency validation.
pretrain_source: lstm_Alpha158  # (Optional) Declare a dependency on a base model
notes: "Optional notes"         # Notes
```

#### Key Fields:
- **`tags: [basemodel]`**: Marks the model as a pre-trainable base model.
- **`pretrain_source`**: Tells the system which base model this upper-layer model depends on. The system automatically looks for the corresponding `_latest.pkl` checkpoint of that base model.
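
The lookup described above could be sketched as follows. This is illustrative only: the function name, the flat `models/` directory layout, and the error message are assumptions, not the project's actual API; the `<source>_latest.pkl` naming comes from the docs above.

```python
from pathlib import Path

def resolve_pretrain_path(pretrain_source: str, model_dir: str = "models") -> Path:
    """Resolve a `pretrain_source` name (e.g. 'lstm_Alpha158') to its latest
    pre-trained checkpoint, assumed to be stored as '<source>_latest.pkl'.

    Hypothetical helper -- the real project may organize checkpoints differently.
    """
    path = Path(model_dir) / f"{pretrain_source}_latest.pkl"
    if not path.exists():
        # Fail early so the upper-layer model never trains against a missing base.
        raise FileNotFoundError(
            f"No pre-trained checkpoint for '{pretrain_source}' at {path}; "
            "run the standalone pre-training script first."
        )
    return path
```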
> [!NOTE]
> **Distinction of Market Configurations**: The `market` field in the registry is purely a **model metadata tag**, used for CLI filtering via `--market` during incremental training or prediction. The actual data-extraction range is always governed by the global `market` setting in `model_config.json`.
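
The two roles of `market` can be sketched in Python. The registry shape and the function name below are illustrative assumptions; only the filtering behavior mirrors what the note describes.

```python
from typing import Optional

def select_models(registry: dict, market: Optional[str] = None) -> list:
    """Filter registry entries by their `market` metadata tag.

    This mirrors CLI `--market` filtering only: it chooses *which models*
    to run, and never changes *which market's data* is extracted (that is
    set globally in model_config.json).
    """
    return [
        name for name, cfg in registry.items()
        if market is None or cfg.get("market") == market
    ]

# Hypothetical registry entries for illustration:
registry = {
    "lstm_Alpha158": {"market": "csi300", "tags": ["basemodel"]},
    "gats_Alpha158": {"market": "all", "pretrain_source": "lstm_Alpha158"},
}
```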
Complex models (e.g., GATs, ADD, IGMTF) require a pre-trained base model (e.g., LSTM or GRU) for weight initialization.
### Usage Scenarios
- Providing initialization weights for upper-layer models.
- Regenerating compatible base models when the feature dimension (`d_feat`) changes.
### Core Semantics
- **Pre-training is not logged in records**: It does not modify `latest_train_records.json`.
- **Metadata Validation**: Each pre-trained file comes with a `.json` metadata file. If an upper model's `d_feat` doesn't match the pre-trained file, training will be blocked.
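
The metadata check could look like the sketch below. The sidecar naming (`<checkpoint>.json`) and the `d_feat` key are assumptions for illustration; the source only states that each pre-trained file ships with a `.json` metadata file and that a `d_feat` mismatch blocks training.

```python
import json
from pathlib import Path

def validate_d_feat(checkpoint: Path, upper_d_feat: int) -> None:
    """Block training when the upper model's d_feat does not match the
    feature dimension recorded in the checkpoint's metadata sidecar.

    Assumes (hypothetically) a sidecar named '<checkpoint>.json'
    containing a 'd_feat' key.
    """
    meta_path = checkpoint.with_name(checkpoint.name + ".json")
    meta = json.loads(meta_path.read_text())
    if meta.get("d_feat") != upper_d_feat:
        raise ValueError(
            f"d_feat mismatch: checkpoint records {meta.get('d_feat')}, "
            f"upper model expects {upper_d_feat}; re-run pre-training "
            "to produce a compatible base model."
        )
```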