Commit 09d57aa
[Executorch][llm] Make mask tensor float only for sdpa
Now that we support quantized SDPA, the query tensor can be quantized while the attention mask must be float (the only type allowed).
So this check doesn't make sense anymore.
Differential Revision: [D77516821](https://our.internmc.facebook.com/intern/diff/D77516821/)
ghstack-source-id: 293661338
Pull Request resolved: #121311
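The change amounts to relaxing a dtype check: the query may now be quantized, but the attention mask must still be float. Below is a minimal, self-contained C++ sketch of that kind of check; the `Tensor`/`ScalarType` types and the `check_attn_mask` function are illustrative stand-ins, not the actual ExecuTorch code touched by this diff.

```cpp
// Hypothetical sketch of the dtype-check change described above.
// Not ExecuTorch source; Tensor and ScalarType are minimal stand-ins.
#include <cassert>

enum class ScalarType { Float, Half, Char /* int8, e.g. a quantized query */ };

struct Tensor {
  ScalarType dtype;
};

// Old check (too strict once the query can be quantized):
//   assert(attn_mask.dtype == query.dtype);
// New check: the mask must be float regardless of the query dtype.
void check_attn_mask(const Tensor& query, const Tensor& attn_mask) {
  (void)query;  // the query dtype no longer constrains the mask
  assert(attn_mask.dtype == ScalarType::Float);
}

int main() {
  Tensor query{ScalarType::Char};  // quantized query
  Tensor mask{ScalarType::Float};  // float mask: the only allowed type
  check_attn_mask(query, mask);    // passes under the relaxed check
  return 0;
}
```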
1 file changed, +2 −2 lines changed (lines 62–63 replaced).