Commit a65fa49
fix(examples): te_llama compatibility with HuggingFace transformers >= 4.57
The te_llama.py example was failing with HuggingFace transformers 4.57+
due to API changes in how decoder layer outputs are handled.
Changes:
- Handle case where hidden_states is passed as a tuple (older HF versions)
- Return tensor directly instead of wrapped in tuple (HF 4.57+ expects this)
- Fix regex pattern to use a raw string (fixes a SyntaxWarning); a sketch of all three changes follows this list
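For context, a minimal, hedged sketch of the kind of shim these changes describe. The class name CompatDecoderLayer, the inner_layer attribute, and the layer-name regex are illustrative assumptions, not the actual code in te_llama.py:

```python
import re

import torch


class CompatDecoderLayer(torch.nn.Module):
    """Illustrative wrapper; the real te_llama.py layer wraps a
    TransformerEngine TransformerLayer (names here are assumptions)."""

    def __init__(self, inner_layer):
        super().__init__()
        self.inner_layer = inner_layer

    def forward(self, hidden_states, *args, **kwargs):
        # Older HF versions can hand the previous layer's output over as a
        # tuple such as (hidden_states,); unwrap it before using it.
        if isinstance(hidden_states, tuple):
            hidden_states = hidden_states[0]

        hidden_states = self.inner_layer(hidden_states, *args, **kwargs)

        # transformers >= 4.57 expects the decoder layer to return the tensor
        # itself rather than a one-element tuple, so return it directly.
        return hidden_states


# A raw-string pattern avoids the SyntaxWarning that "\d" triggers in a plain
# string literal (the concrete pattern used in te_llama.py may differ).
LAYER_NAME_PATTERN = re.compile(r"model\.layers\.(\d+)\.")
```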
Error fixed:
AttributeError: 'tuple' object has no attribute 'contiguous'
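To make the failure mode concrete, a small illustrative snippet (not the actual call site in transformers) showing why a tuple return triggers this error and why returning the tensor resolves it:

```python
import torch

layer_output = (torch.randn(2, 4, 8),)  # decoder layer returning a tuple

# Pre-fix: transformers >= 4.57 treats the layer output as a tensor, so
# calling a tensor method on the tuple raises the AttributeError above.
try:
    layer_output.contiguous()
except AttributeError as exc:
    print(exc)  # 'tuple' object has no attribute 'contiguous'

# Post-fix: returning the tensor itself makes the downstream call succeed.
tensor_output = layer_output[0]
print(tensor_output.contiguous().shape)  # torch.Size([2, 4, 8])
```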
Tested with:
- transformer_engine 2.5.0
- transformers 4.57.3
- PyTorch container nvcr.io/nvidia/pytorch:25.08-py3
Signed-off-by: Santosh Bhavani <[email protected]>
1 file changed: +10 −5 lines changed