transformer qwen3 Porting to optimum-habana #2234
rkumar2patel wants to merge 3 commits into huggingface:main from
Conversation
regisss left a comment
LGTM.
Will let @karol-brejna-i and @gplutop7 approve too before merging, since these tests are not part of the GitHub CI.
The code quality check failed, please run
@rkumar2patel You'll probably need to merge the latest main branch into yours and then run
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Suggestion: add the following module-level guards:

import os

import pytest

if not is_habana_available():
    pytest.skip("HPU not available", allow_module_level=True)
if os.environ.get("HF_HUB_OFFLINE") == "1":
    pytest.skip("Requires HF Hub access (HF_HUB_OFFLINE=1).", allow_module_level=True)
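For reference, the two conditions behind these guards could be implemented as small helpers. This is a minimal sketch under stated assumptions: `hpu_available` and `hub_offline` are hypothetical names, and the import-based detection assumes no equivalent helper is already exposed by the library.

```python
import importlib.util
import os


def hpu_available() -> bool:
    # Hypothetical helper: treat an HPU as present only when the
    # habana_frameworks package (the Habana PyTorch bridge) is importable.
    return importlib.util.find_spec("habana_frameworks") is not None


def hub_offline() -> bool:
    # HF_HUB_OFFLINE=1 tells the Hugging Face Hub client to avoid network access.
    return os.environ.get("HF_HUB_OFFLINE") == "1"
```

Either helper can then feed a module-level `pytest.skip(..., allow_module_level=True)` as in the suggestion above.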
Since this guard is not present in the tests for other optimum-habana models, I have followed the same approach for consistency. If we decide it is required, we should add it across all models rather than selectively, to avoid inconsistency.
This PR ports qwen3 model testing to the optimum-habana library for Gaudi2 (G2) hardware acceleration. The test suite provides comprehensive coverage of the qwen3 model implementation on Habana Processing Units (HPUs).
Key changes:
- Added a complete test suite for the qwen3 model with HPU-specific adaptations
- Configured HPU device targeting and Habana-specific transformers integration
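As an illustration of the HPU device targeting mentioned above, one way tests might select a target device, falling back to CPU when the Habana stack is absent, is sketched below. `pick_device` is a hypothetical helper, not an optimum-habana API.

```python
def pick_device() -> str:
    """Return "hpu" when the Habana PyTorch bridge is importable, else "cpu".

    Hypothetical sketch: real HPU tests would additionally verify that an
    HPU device is actually available before using it.
    """
    try:
        # Importing the bridge is what registers the "hpu" device with PyTorch.
        import habana_frameworks.torch.core  # noqa: F401
    except ImportError:
        return "cpu"
    return "hpu"
```

A test module could call this once at import time and skip when the result is "cpu", mirroring the guard discussed earlier in the thread.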