Commit 09fd5d9

Update Paper Links (#320)
1 parent b1b24ac commit 09fd5d9

File tree

3 files changed: +13 −13 lines


README.md

Lines changed: 10 additions & 11 deletions
@@ -5,7 +5,7 @@
 <div align="center">
 <h3 style="font-size: 22px">Efficient and Modular ML on Temporal Graphs</h3>
 <a href="https://tgm.readthedocs.io/en/latest"/><strong style="font-size: 18px;"/>Read Our Docs»</strong></a>
-<a href="https://github.com/tgm-team/tgm"/><strong style="font-size: 18px;"/>Read Our Paper»</strong></a>
+<a href="https://arxiv.org/abs/2510.07586"/><strong style="font-size: 18px;"/>Read Our Paper»</strong></a>
 <br/>
 <br/>

@@ -90,10 +90,10 @@ TGM is organized as a **three-layer architecture**:
 1. **ML Layer**

    - Materializes batches directly on-device for model computation.
-   - Supports **node-, link-, and graph-level prediction**.
+   - Supports node-, link-, and graph-level prediction.

 > \[!TIP\]
-> Check out [our paper](https://tgm.readthedocs.io/) for technical details.
+> Check out [our paper](https://arxiv.org/abs/2510.07586) for technical details.

 ### Minimal Example

@@ -115,7 +115,7 @@ train_data, val_data, test_data = DGData.from_tgb("tgbn-trade").split()
 train_dg = DGraph(train_data)
 train_loader = DGDataLoader(train_dg, batch_unit="Y")

-# tgbl-trade has no static node features, so we create Gaussian ones (dim=64)
+# tgbn-trade has no static node features, so we create Gaussian ones (dim=64)
 static_node_feats = torch.randn((train_dg.num_nodes, 64))

 class RecurrentGCN(torch.nn.Module):
@@ -183,15 +183,14 @@ python examples/linkproppred/tgat.py --dataset tgbl-wiki --device cuda

 ## Citation

-If you use TGM in your work, please cite [our paper](https://github.com/tgm-team/tgm):
+If you use TGM in your work, please cite [our paper](https://arxiv.org/abs/2510.07586):

 ```bibtex
-@article{TODO,
-  title = "TODO",
-  author = "TODO",
-  journal = "TODO",
-  year = "2025",
-  url = "TODO"
+@misc{chmura2025tgm,
+  title = {TGM: A Modular and Efficient Library for Machine Learning on Temporal Graphs},
+  author = {Chmura, Jacob and Huang, Shenyang and Ngo, Tran Gia Bao and Parviz, Ali and Poursafaei, Farimah and Leskovec, Jure and Bronstein, Michael and Rabusseau, Guillaume and Fey, Matthias and Rabbany, Reihaneh},
+  year = {2025},
+  note = {arXiv:2510.07586}
 }
 ```

docs/architecture.md

Lines changed: 1 addition & 1 deletion
@@ -22,4 +22,4 @@ TGM is organized as a **three-layer architecture**:
 - Supports **node-, link-, and graph-level prediction**.

 > \[!TIP\]
-> Check out [our paper](https://tgm.readthedocs.io/) for technical details.
+> Check out [our paper](https://arxiv.org/abs/2510.07586) for technical details.

docs/quickstart.md

Lines changed: 2 additions & 1 deletion
@@ -20,7 +20,7 @@ train_data, val_data, test_data = DGData.from_tgb("tgbn-trade").split()
 train_dg = DGraph(train_data)
 train_loader = DGDataLoader(train_dg, batch_unit="Y")

-# tgbl-trade has no static node features, so we create Gaussian ones (dim=64)
+# tgbn-trade has no static node features, so we create Gaussian ones (dim=64)
 static_node_feats = torch.randn((train_dg.num_nodes, 64))

 class RecurrentGCN(torch.nn.Module):
@@ -83,3 +83,4 @@ python examples/linkproppred/tgat.py --dataset tgbl-wiki --device cuda

 - Explore more of our [examples](../examples/)
 - Dive deeper into TGM with our [tutorials](./tutorials/)
+- Read [our paper](https://arxiv.org/abs/2510.07586)
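The comment fix above (tgbl-trade → tgbn-trade) clarifies why the snippet fabricates node features: tgbn-trade ships without static ones. The Gaussian-feature line can be sanity-checked in isolation with plain PyTorch; here `255` is a hypothetical node count standing in for `train_dg.num_nodes`, which in the real snippet comes from the TGM `DGraph`:

```python
import torch

# tgbn-trade has no static node features, so the quickstart creates
# Gaussian ones of dimension 64. 255 is a hypothetical node count
# standing in for train_dg.num_nodes.
num_nodes = 255
static_node_feats = torch.randn((num_nodes, 64))

print(static_node_feats.shape)  # torch.Size([255, 64])
print(static_node_feats.dtype)  # torch.float32
```

Random features like these only give the model per-node identifiers to work with; any downstream quality comes from the temporal message passing, not the features themselves.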
