Conversation

@yao-matrix
Contributor

Loosen expected_max_diff from 5e-1 to 8e-1 to make KandinskyV22PipelineInpaintCombinedFastTests::test_float16_inference pass on XPU.

Signed-off-by: Matrix Yao <[email protected]>
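For context, a minimal sketch of the kind of check `test_float16_inference` performs: run the pipeline in fp32 and fp16 and assert that the largest elementwise difference stays under `expected_max_diff`. This is a simplified stand-in, not the actual test from diffusers' `PipelineTesterMixin`; the function name here is hypothetical.

```python
import numpy as np

# Hypothetical, simplified version of the fp16-vs-fp32 comparison that
# test_float16_inference performs. The tolerance value comes from this PR.
def check_float16_inference(output_fp32, output_fp16, expected_max_diff=8e-1):
    # Largest absolute elementwise difference between the two outputs.
    max_diff = np.abs(np.asarray(output_fp32) - np.asarray(output_fp16)).max()
    assert max_diff < expected_max_diff, (
        f"max diff {max_diff} exceeds tolerance {expected_max_diff}"
    )
    return max_diff

# A max diff of 0.2 passes under the loosened 8e-1 tolerance.
check_float16_inference([0.1, 0.5], [0.2, 0.3])
```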
@yao-matrix
Contributor Author

@hlky , pls help review, thx

Contributor

@hlky hlky left a comment

Thanks @yao-matrix. This pipeline is not commonly used and already requires high tolerance so no issue increasing it further. If in the future we find some larger difference in required tolerances like 1e-4 on one backend and 1e-3 on another, then we could consider something similar to the expected slice changes where the expected_max_diff is set per backend type.
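The per-backend tolerance idea mentioned above could be sketched as a lookup keyed on the accelerator type, analogous to the per-backend expected-slice changes. This is an assumption about how it might look, not diffusers' actual API; the helper name and mapping are hypothetical.

```python
# Hypothetical sketch: select expected_max_diff per backend type, mirroring
# how expected slices are already chosen per backend. Values for "xpu" and
# the default come from this PR; any other entries would be added as needed.
EXPECTED_MAX_DIFFS = {
    "xpu": 8e-1,  # XPU needs a looser fp16 tolerance for this pipeline
}
DEFAULT_MAX_DIFF = 5e-1

def expected_max_diff_for(device_type: str) -> float:
    """Return the fp16 tolerance to use for the given accelerator backend."""
    return EXPECTED_MAX_DIFFS.get(device_type, DEFAULT_MAX_DIFF)
```

In a test this would be called with something like `torch.device(torch_device).type` so each backend gets its own tolerance without per-backend `if` branches in the test body.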

@hlky hlky merged commit aa541b9 into huggingface:main Apr 14, 2025
8 checks passed
@yao-matrix
Contributor Author

Sure, will work with you to make that change once it's needed.

@yao-matrix yao-matrix deleted the issue78 branch April 14, 2025 22:44