All model examples can be found [here](https://github.com/dmlc/dgl/tree/master/examples).
A summary of model accuracy and training speed with the PyTorch backend (on an Amazon EC2 p3.2x instance with a V100 GPU), compared against the best open-source implementations:
| CUDA version | Nightly build | Stable build |
|--------------|---------------|--------------|
| CUDA 9.0  | `pip install --pre dgl-cu90`  | `pip install dgl-cu90`  |
| CUDA 9.2  | `pip install --pre dgl-cu92`  | `pip install dgl-cu92`  |
| CUDA 10.0 | `pip install --pre dgl-cu100` | `pip install dgl-cu100` |
| CUDA 10.1 | `pip install --pre dgl-cu101` | `pip install dgl-cu101` |
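The wheel names above follow a simple pattern: `dgl-cu` plus the CUDA version with the dot removed, and `--pre` selects the nightly (pre-release) build. As an illustrative sketch, a small hypothetical helper (not part of DGL) that maps a CUDA version to the matching install command:

```python
def dgl_install_command(cuda_version=None, nightly=False):
    """Build the pip command for the DGL wheel matching a CUDA version.

    cuda_version: a string such as "10.1", or None for the CPU-only wheel.
    nightly: if True, add --pre to select the nightly (pre-release) build.
    This helper is illustrative only, not part of DGL itself.
    """
    # "10.1" -> "dgl-cu101"; no CUDA version -> plain "dgl"
    package = "dgl" if cuda_version is None else "dgl-cu" + cuda_version.replace(".", "")
    flag = "--pre " if nightly else ""
    return f"pip install {flag}{package}"

print(dgl_install_command("10.1"))               # pip install dgl-cu101
print(dgl_install_command("9.0", nightly=True))  # pip install --pre dgl-cu90
```

The same pattern covers every row of the table, so picking a wheel reduces to knowing which CUDA toolkit your driver supports.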
### From source
If you use DGL in a scientific publication, we would appreciate citations to the following paper:
```
@article{wang2019dgl,
    title={Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs},
    url={https://arxiv.org/abs/1909.01315},
    author={Wang, Minjie and Yu, Lingfan and Zheng, Da and Gan, Quan and Gai, Yu and Ye, Zihao and Li, Mufei and Zhou, Jinjing and Huang, Qi and Ma, Chao and Huang, Ziyue and Guo, Qipeng and Zhang, Hao and Lin, Haibin and Zhao, Junbo and Li, Jinyang and Smola, Alexander J and Zhang, Zheng},
    journal={ICLR Workshop on Representation Learning on Graphs and Manifolds}
}
```
In comparison, GraphVite uses 4 GPUs and takes 14 minutes. Thus, DGL-KE trains TransE on FB15k twice as fast as GraphVite while using far fewer resources. More performance information on GraphVite can be found [here](https://github.com/DeepGraphLearning/graphvite).