FastFold provides a **high-performance implementation of Evoformer** with the following characteristics:

1. Excellent kernel performance on GPU platforms
2. Support for Dynamic Axial Parallelism (DAP)
    * Breaks the memory limit of a single GPU and reduces overall training time
    * DAP can significantly speed up inference and make ultra-long sequence inference possible
3. Ease of use
    * Huge performance gains with only a few lines of code changed
    * You don't need to care about how the parallel part is implemented

## Installation
```shell
cd FastFold
python setup.py install --cuda_ext
```
## Usage

You can use `Evoformer` as an `nn.Module` in your project after `from fastfold.model import Evoformer`:

```python
from fastfold.model import Evoformer

evoformer_layer = Evoformer()
```

If you want to use Dynamic Axial Parallelism, add a call to `fastfold.distributed.init_dap` after `torch.distributed.init_process_group`.
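As a sketch, the initialization order might look like the following. This is a minimal illustration, not a definitive setup: the assumption that `init_dap` can be called with no arguments (it may instead take a DAP group size) should be verified against the FastFold documentation.

```python
import torch.distributed as dist
import fastfold.distributed

# Initialize the PyTorch process group first (typically launched via
# torchrun or a similar distributed launcher).
dist.init_process_group(backend="nccl")

# Then initialize Dynamic Axial Parallelism. Calling init_dap() with no
# arguments is an assumption for this sketch -- the real signature may
# require a DAP group size; check the FastFold docs.
fastfold.distributed.init_dap()
```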
If you want to benchmark against [OpenFold](https://github.com/aqlaboratory/openfold), first install OpenFold, then run the benchmark with the `--openfold` option: