1 parent c7cbcd9 commit 04b74fa
litgpt/attention.py
@@ -16,7 +16,6 @@
 )
 from litgpt.config import Config
 
-
 # Currently, `torch.nn.functional.scaled_dot_product_attention` does not
 # properly support the case `enable_gqa=True` (i.e., keys and values have
 # fewer heads than queries). In this case, it is best to extend keys and
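
The workaround this comment refers to can be illustrated with a minimal sketch, assuming standard PyTorch; the tensor shapes and head counts below are illustrative, not taken from litgpt. Each key/value head is repeated to match the number of query heads before calling `scaled_dot_product_attention`, instead of passing `enable_gqa=True`:

import torch
import torch.nn.functional as F

B, T = 2, 8                              # batch size, sequence length (illustrative)
n_query_heads, n_kv_heads, head_dim = 8, 2, 16
q_per_kv = n_query_heads // n_kv_heads   # queries sharing each key/value head

q = torch.randn(B, n_query_heads, T, head_dim)
k = torch.randn(B, n_kv_heads, T, head_dim)
v = torch.randn(B, n_kv_heads, T, head_dim)

# Extend keys and values: repeat each K/V head q_per_kv times along the
# head dimension so that k and v have as many heads as q.
k = k.repeat_interleave(q_per_kv, dim=1)
v = v.repeat_interleave(q_per_kv, dim=1)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 8, 16])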