2 changes: 1 addition & 1 deletion docs/source/en/api/pipelines/qwenimage.md
@@ -101,7 +101,7 @@ The `guidance_scale` parameter in the pipeline is there to support future guidan

With [`QwenImageEditPlusPipeline`], one can provide multiple images as input reference.

-```
+```py
import torch
from PIL import Image
from diffusers import QwenImageEditPlusPipeline
# ...
```
2 changes: 1 addition & 1 deletion tests/pipelines/qwenimage/test_qwenimage_edit_plus.py
@@ -164,7 +164,7 @@ def test_inference(self):
self.assertEqual(generated_image.shape, (3, 32, 32))

# fmt: off
-        expected_slice = torch.tensor([[0.5637, 0.6341, 0.6001, 0.5620, 0.5794, 0.5498, 0.5757, 0.6389, 0.4174, 0.3597, 0.5649, 0.4894, 0.4969, 0.5255, 0.4083, 0.4986]])
+        expected_slice = torch.tensor([0.5637, 0.6341, 0.6001, 0.5620, 0.5794, 0.5498, 0.5757, 0.6389, 0.4174, 0.3597, 0.5649, 0.4894, 0.4969, 0.5255, 0.4083, 0.4986])
Member:

Why is that needed?

Contributor Author:

The generated_slice is a 1D tensor of length 16, so expected_slice must also be 1D for its shape to match generated_slice's. The other pipelines' tests also define expected_slice as a 1D tensor.
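To illustrate the shape point: elementwise checks such as `torch.allclose` broadcast a `(1, 16)` tensor against a `(16,)` one, so the values compare fine either way, but the shapes themselves differ. A minimal sketch with made-up values:

```python
import torch

# 2D (1, 2) vs 1D (2,) versions of the same values.
expected_2d = torch.tensor([[0.5, 0.6]])
expected_1d = torch.tensor([0.5, 0.6])

# flatten() always returns a 1D tensor.
generated = torch.tensor([[0.5, 0.6]]).flatten()

print(generated.shape == expected_2d.shape)  # False: (2,) vs (1, 2)
print(generated.shape == expected_1d.shape)  # True: both (2,)

# Broadcasting still makes the elementwise comparison pass either way.
print(torch.allclose(generated, expected_2d))  # True
```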

Member:

Let's discard this change in this PR for now.

Contributor Author:

Okay, discarded this change now.

# fmt: on

generated_slice = generated_image.flatten()
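The assertion that follows `generated_slice = generated_image.flatten()` is cut off in the diff; slice tests of this kind typically keep a fixed 16-value slice and compare it against `expected_slice` with a tolerance. A minimal sketch; the slice selection (first and last 8 values) and the use of `torch.allclose` are assumptions, not the test's exact code:

```python
import torch

# Stand-in for the generated image asserted above (shape (3, 32, 32)).
generated_image = torch.rand(3, 32, 32)

# Flatten to 1D, then keep 16 values; the exact selection is an
# assumption, since the real test body is truncated in the diff.
flat = generated_image.flatten()
generated_slice = torch.cat([flat[:8], flat[-8:]])

# Placeholder; the real test hardcodes the expected values.
expected_slice = generated_slice.clone()

# Tolerance-based elementwise comparison of two 1D tensors.
assert torch.allclose(generated_slice, expected_slice, atol=1e-4)
```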