Commit d783c26
[VLM] Fix mllama targets (#1402)
## Purpose ##
* When #1389 landed, modules covered by `ignore` were no longer being
skipped during tracing. This makes it necessary for the sequential
targets list to be correct. Mllama defaults to targeting vision layers,
so the vision tower was being traced, leading to errors.
```python
_no_split_modules = [
"MllamaVisionEncoderLayer",
"MllamaCrossAttentionDecoderLayer",
"MllamaSelfAttentionDecoderLayer",
]
```
## Changes ##
* Only target the text decoder layers (`MllamaSelfAttentionDecoderLayer`, `MllamaCrossAttentionDecoderLayer`), not the vision encoder layers
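The intent of the change can be sketched as follows. This is a minimal illustration, not the actual diff: `text_decoder_targets` is a hypothetical helper that filters Mllama's `_no_split_modules` list down to the text decoder layers, using the class names shown in the PR description above.

```python
# Mllama's default _no_split_modules, as quoted in the PR description.
no_split_modules = [
    "MllamaVisionEncoderLayer",
    "MllamaCrossAttentionDecoderLayer",
    "MllamaSelfAttentionDecoderLayer",
]


def text_decoder_targets(modules: list[str]) -> list[str]:
    """Keep only text decoder layer class names, dropping the vision
    encoder layers so the vision tower is never traced."""
    return [name for name in modules if "DecoderLayer" in name]


print(text_decoder_targets(no_split_modules))
# → ['MllamaCrossAttentionDecoderLayer', 'MllamaSelfAttentionDecoderLayer']
```

The filter keys on the `DecoderLayer` suffix, which excludes `MllamaVisionEncoderLayer`; in practice the fix amounts to listing the text decoder layers explicitly as sequential targets rather than inheriting the vision-layer defaults.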
## Testing ##
* #1335 passes
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>

1 parent 564140d
1 file changed: +1 −0 lines changed