
Conversation

yao-matrix
Contributor

Make the 12 cases below pass on XPU with the latest torch.

tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral
tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_4bit
tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_batched
tests/models/aya_vision/test_modeling_aya_vision.py::AyaVisionIntegrationTest::test_small_model_integration_batched_generate
tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch
tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch_crops
tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_multiimage
tests/models/glm4v/test_processor_glm4v.py::Glm4vProcessorTest::test_apply_chat_template_video_frame_sampling
tests/models/internvl/test_modeling_internvl.py::InternVLLlamaIntegrationTest::test_llama_small_model_integration_batched_generate
tests/models/internvl/test_modeling_internvl.py::InternVLQwen2IntegrationTest::test_qwen2_small_model_integration_batched_generate
tests/models/llava_onevision/test_modeling_llava_onevision.py::LlavaOnevisionForConditionalGenerationIntegrationTest::test_small_model_integration_test_multi_image
tests/models/llava_onevision/test_modeling_llava_onevision.py::LlavaOnevisionForConditionalGenerationIntegrationTest::test_small_model_integration_test_multi_image_nested
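
To reproduce locally, a minimal sketch (assuming a transformers development checkout, an XPU-enabled torch build, and that `RUN_SLOW` is the environment variable gating slow integration tests):

```python
# Hypothetical reproduction helper; node IDs come from the list above.
import os
import pytest

# transformers skips slow/integration tests unless RUN_SLOW is set.
os.environ["RUN_SLOW"] = "1"

pytest.main([
    "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral",
    "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch",
    # ...add the remaining node IDs from the list above
    "-rA",  # print a summary line for every test
])
```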

@SunMarc @ydshieh, please help review, thanks very much.

@SunMarc (Member) left a comment

Thanks! Any idea what caused the outputs to change for XPU?

@yao-matrix
Contributor Author

Thanks! Any idea what caused the outputs to change for XPU?

@SunMarc, the XPU op implementations are still stabilizing. Hugging Face, vLLM, and other top-of-stack libraries keep reporting bugs and issues to Intel, and the Intel PyTorch team keeps fixing them and improving numerical behavior, so the op implementations keep evolving rather than standing still, which affects model-level outputs. The good news is that the outputs are becoming more and more CUDA-like, which means the op improvements are moving toward CUDA numerical compatibility.
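
As an illustration only (not necessarily the approach taken in this PR), one way a test can tolerate such device-dependent numerics is to select the expected output by accelerator type; a minimal sketch assuming a recent PyTorch build with XPU support and placeholder expected strings:

```python
# Hypothetical sketch: pick the expected generation string based on the
# available accelerator, so CUDA and XPU can carry different references.
import torch

EXPECTED_OUTPUTS = {
    "cuda": "EXPECTED_TEXT_ON_CUDA",  # placeholder
    "xpu": "EXPECTED_TEXT_ON_XPU",    # placeholder
}

def get_expected_output() -> str:
    # torch.xpu is only present in PyTorch builds with XPU support.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return EXPECTED_OUTPUTS["xpu"]
    return EXPECTED_OUTPUTS["cuda"]
```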

github-actions bot (Contributor) commented Oct 6, 2025

[For maintainers] Suggested jobs to run (before merge)

run-slow: aya_vision, gemma3, llava, llava_onevision

@SunMarc (Member) commented Oct 6, 2025

@bot /style

github-actions bot (Contributor) commented Oct 6, 2025

Style bot fixed some files and pushed the changes.

@SunMarc SunMarc enabled auto-merge (squash) October 6, 2025 15:29
@SunMarc SunMarc merged commit 11e4b5e into huggingface:main Oct 6, 2025
19 checks passed
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@yao-matrix yao-matrix deleted the issue-561 branch October 6, 2025 18:09