Releases: lucidrains/native-sparse-attention-pytorch

0.0.14

20 Feb 13:45

wire up flex attention for sliding windows
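Flex attention expresses masking as a predicate over (query index, key index) pairs. A minimal sketch of the sliding-window causal rule this release wires up (the exact window size and mask callback are assumptions, not the repository's code): query `i` may attend to key `j` iff `j <= i` and `i - j < window_size`.

```python
# Sketch of a sliding-window causal attention mask, as a plain boolean
# matrix. In flex attention this predicate would be supplied as a
# mask_mod callback instead of materialized; this version is only
# illustrative.
def sliding_window_causal_mask(seq_len, window_size):
    # mask[i][j] is True when query i is allowed to attend to key j:
    # causal (j <= i) and within the trailing window (i - j < window_size)
    return [
        [j <= i and i - j < window_size for j in range(seq_len)]
        for i in range(seq_len)
    ]
```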

0.0.12

20 Feb 13:16

allow for the strategy combine "mlp" to be customized as well, but th…

0.0.11

20 Feb 13:11

small test for customizable compress mlp

0.0.9

20 Feb 13:00

give the compress mlp some depth, then allow it to be customizable by…

0.0.8

19 Feb 21:48

Full Changelog: 0.0.7...0.0.8

0.0.7

19 Feb 20:58

redo get_at with gather, but keep around the ein notation for readability
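The swap here is from an `einx`-style `get_at` indexing call to gather semantics. A minimal pure-Python sketch of what gather along the last dimension computes (illustrative only, not the repository's tensor code): for each row, pick the elements at the given indices.

```python
# Sketch of gather-along-last-dim semantics:
# out[i][j] = values[i][index[i][j]]
# (torch.gather and einx's get_at express the same selection; this
# plain-list version just shows the indexing rule.)
def gather_last_dim(values, index):
    return [
        [row[j] for j in idx_row]
        for row, idx_row in zip(values, index)
    ]
```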

0.0.6

19 Feb 20:36

fix after changing importance score to compressed attention values

0.0.5

19 Feb 20:25

coordinate descent was unstable, just use a one hot straight through …
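A one-hot straight-through trick uses the hard argmax one-hot in the forward pass while letting gradients flow through the softmax. A sketch under that assumption (the function name and shapes are illustrative, not the repository's code); in an autograd framework the `(hard - soft)` term would be detached, so the output value equals the hard one-hot while the gradient is that of the softmax.

```python
import math

def softmax(logits):
    # numerically stable softmax over a flat list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def one_hot_straight_through(logits):
    # forward: hard one-hot at the argmax; backward (in a real autograd
    # framework): gradient of the softmax, because (hard - soft) would
    # be detached, e.g. soft + (hard - soft).detach() in torch
    soft = softmax(logits)
    top = soft.index(max(soft))
    hard = [1.0 if i == top else 0.0 for i in range(len(soft))]
    return [s + (h - s) for s, h in zip(soft, hard)]
```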

0.0.4

19 Feb 19:55

Full Changelog: 0.0.3...0.0.4

0.0.3

19 Feb 16:29

fix an issue with mask, make sure it converges for enwik8