Commit 577284d (parent 9da9fd9): Update readme

File tree: 2 files changed (+104, −1 lines)

README.md (104 additions, 1 deletion)

# SpaceCast

[![Linting](https://github.com/fmihpc/spacecast/actions/workflows/pre-commit.yml/badge.svg)](https://github.com/fmihpc/spacecast/actions/workflows/pre-commit.yml)

![Example forecast](figures/example_forecast.png)

SpaceCast is a repository for graph-based neural space weather forecasting. The code uses [PyTorch Lightning](https://lightning.ai/pytorch-lightning/) for modeling and [Weights & Biases](https://wandb.ai/) for logging. It is based on [Neural-LAM](https://github.com/mllam/neural-lam) and uses [MDP](https://github.com/mllam/mllam-data-prep) for data preparation, which lowers the barrier to adapting advances in limited-area modeling for space weather.
The repository contains LAM versions of:

* The graph-based model from [Keisler (2022)](https://arxiv.org/abs/2202.07575).
* GraphCast, by [Lam et al. (2023)](https://arxiv.org/abs/2212.12794).
* The hierarchical model from [Oskarsson et al. (2024)](https://arxiv.org/abs/2406.04759).

## Dependencies

Use Python 3.10 or 3.11 with:

- `torch==2.5.1`
- `pytorch-lightning==2.4.0`
- `torch_geometric==2.6.1`
- `mllam-data-prep==0.6.1`

The complete list of packages can be installed with `pip install -r requirements.txt`.
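Since only Python 3.10 and 3.11 are listed as supported, a quick interpreter check before installing can save a failed build. This is a minimal sketch, not part of the repository:

```shell
# Fail fast if the interpreter is outside the tested 3.10/3.11 range
# before running `pip install -r requirements.txt`.
ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
case "$ver" in
  3.10|3.11) echo "Python $ver: supported" ;;
  *)         echo "Python $ver: untested, use 3.10 or 3.11" ;;
esac
```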

## Data

To create a training-ready dataset with [mllam-data-prep](https://github.com/mllam/mllam-data-prep), run:
```
mllam_data_prep data/vlasiator_mdp.yaml
```

Simple, multiscale, and hierarchical graphs are created and stored in `.pt` format using the following commands:
```
python -m neural_lam.create_graph --config_path data/vlasiator_config.yaml --name simple --levels 1 --plot
python -m neural_lam.create_graph --config_path data/vlasiator_config.yaml --name multiscale --levels 3 --plot
python -m neural_lam.create_graph --config_path data/vlasiator_config.yaml --name hierarchical --hierarchical --levels 3 --plot
```

To plot the graphs and store them as `.html` files, run:
```
python -m neural_lam.plot_graph --datastore_config_path data/vlasiator_config.yaml --graph ...
```
where `--graph` is one of `simple`, `multiscale`, or `hierarchical`, and `--save` sets the name of the output file.
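All three plotting commands can also be generated with a small loop. This dry-run sketch only prints the commands (pipe it to `sh` to execute them), and the `graph_<name>.html` names passed to `--save` are assumptions rather than repository conventions:

```shell
# Print one plot command per graph type (dry run).
# The --save file names are assumed, not prescribed by the repository.
for g in simple multiscale hierarchical; do
  echo "python -m neural_lam.plot_graph" \
       "--datastore_config_path data/vlasiator_config.yaml" \
       "--graph $g --save graph_${g}.html"
done
```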

## Logging

To log in and use [W&B](https://wandb.ai/), run:
```
wandb login
```
To log only locally, run:
```
wandb off
```
See the [W&B docs](https://docs.wandb.ai/) for more details.

## Training

The first stage of a probabilistic model can be trained with a command like the following (later stages additionally set `--kl_beta` and `--crps_weight`):

```
python -m neural_lam.train_model \
    --config_path data/vlasiator_config.yaml \
    --num_workers 2 \
    --precision bf16-mixed \
    --model graph_efm \
    --graph multiscale \
    --hidden_dim 64 \
    --processor_layers 4 \
    --ensemble_size 5 \
    --batch_size 1 \
    --lr 0.001 \
    --kl_beta 0 \
    --crps_weight 0 \
    --ar_steps_train 1 \
    --epochs 500 \
    --val_interval 50 \
    --ar_steps_eval 4 \
    --val_steps_to_log 1 2 3
```

Distributed data parallel training is supported. Specify the number of nodes with the `--num_nodes` argument. For a full list of training options, see `python -m neural_lam.train_model --help`.

## Evaluation

Inference uses the same script as training, with the same choice of parameters, plus a few evaluation-specific ones: for example, `--eval test`, `--ar_steps_eval 30`, and `--n_example_pred 1` evaluate 30-second forecasts on the test set with one example forecast plotted.

```
python -m neural_lam.train_model \
    --config_path data/vlasiator_config.yaml \
    --model graph_efm \
    --graph hierarchical \
    --num_nodes 1 \
    --num_workers 2 \
    --batch_size 1 \
    --hidden_dim 64 \
    --processor_layers 2 \
    --ensemble_size 5 \
    --ar_steps_eval 30 \
    --precision bf16-mixed \
    --n_example_pred 1 \
    --eval test \
    --load ckpt_path
```

where `--load` takes the path to a model checkpoint in `.ckpt` format.
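When training writes several checkpoints, the newest one can be selected for `--load` automatically. This is a hedged sketch assuming a hypothetical `saved_models/` directory; adjust it to wherever your logger actually stores `.ckpt` files:

```shell
# Pass the most recent checkpoint to --load; "saved_models" is a
# hypothetical directory, not fixed by the repository.
ckpt=$(ls -t saved_models/*.ckpt 2>/dev/null | head -n 1)
if [ -n "$ckpt" ]; then
  python -m neural_lam.train_model --config_path data/vlasiator_config.yaml \
      --model graph_efm --graph hierarchical --eval test --load "$ckpt"
else
  echo "no .ckpt files found in saved_models/" >&2
fi
```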

figures/example_forecast.png (1.27 MB)