Conversation


@galbria galbria commented Oct 26, 2025

What does this PR do?

Fixes # (issue)

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

Member

@sayakpaul sayakpaul left a comment


Thanks a lot for the PR. Excited for FIBO to make strides!

I have left a bunch of comments, most of which should be easily resolvable. If not, please let me know.

Additionally, I think:

  • It'd be nice to include a code snippet for folks to test it out (@linoytsaban @asomoza).
  • Remove the custom block implementations from the PR, host them on the Hub (just like this one), and guide users on how to use them alongside the pipeline.

output_height, output_width, _ = image.shape
assert (output_height, output_width) == (expected_height, expected_width)

@unittest.skipIf(torch_device not in ["cuda", "xpu"], reason="float16 requires CUDA or XPU")
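For context, the decorator quoted above follows the standard `unittest.skipIf` pattern. A minimal self-contained sketch (the `torch_device` value here is a placeholder; in diffusers tests it normally comes from the testing utilities):

```python
import io
import unittest

# Placeholder: in diffusers tests this value is detected by the testing utilities.
torch_device = "cpu"

class Float16Tests(unittest.TestCase):
    @unittest.skipIf(torch_device not in ["cuda", "xpu"], reason="float16 requires CUDA or XPU")
    def test_float16_inference(self):
        # Would exercise float16 code paths on an accelerator.
        pass

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Float16Tests)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(len(result.skipped))  # the test is skipped when torch_device is "cpu"
```

With `torch_device = "cpu"`, the condition is true and the runner records the test as skipped rather than failed, which is why overriding such a test only matters if its body differs from the base class.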
Member


We can remove this test I guess. If not, would you mind explaining why we had to override it here?

Contributor Author


We used it to debug something; it's redundant and has been removed.

Member


Seems like the test is still being kept here?

@sayakpaul sayakpaul requested a review from DN6 October 27, 2025 08:30
@galbria galbria requested review from DN6 and sayakpaul October 27, 2025 13:48
Member

@sayakpaul sayakpaul left a comment


Added a few more comments. I think we should let the users know that they should absolutely use the structured prompt in the docs.

output_height, output_width, _ = image.shape
assert (output_height, output_width) == (expected_height, expected_width)

@unittest.skipIf(torch_device not in ["cuda", "xpu"], reason="float16 requires CUDA or XPU")
Member


Seems like the test is still being kept here?

galbria and others added 5 commits October 27, 2025 15:54
- Updated BriaFiboAttnProcessor and BriaFiboAttention classes to reflect changes from Flux equivalents.
- Modified the _unpack_latents method in BriaFiboPipeline to improve clarity.
- Increased the default max_sequence_length to 3000 and added a new optional parameter do_patching.
- Cleaned up test_pipeline_bria_fibo.py by removing unused imports and skipping unsupported tests.
Comment on lines 740 to 741
latents = latents.unsqueeze(dim=2)
latents = list(torch.unbind(latents, dim=0))
Collaborator


@kfirbria Hmm that's unusual. Is the input shape to the decoder in this format (batch_size, channels, 1, height, width)?
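To illustrate the shape question: the two lines quoted above turn a batch of packed latents into a list of per-sample tensors with a singleton dimension inserted after the channels. A NumPy stand-in (`np.expand_dims` mirrors `torch.unsqueeze`, and iterating over axis 0 mirrors `torch.unbind`; the shapes here are illustrative, not the pipeline's actual sizes):

```python
import numpy as np

# Illustrative latent batch: (batch_size, channels, height, width)
latents = np.zeros((2, 16, 32, 32), dtype=np.float32)

# Equivalent of latents.unsqueeze(dim=2): insert a singleton dim after channels,
# giving the (batch_size, channels, 1, height, width) layout asked about above.
latents = np.expand_dims(latents, axis=2)
assert latents.shape == (2, 16, 1, 32, 32)

# Equivalent of list(torch.unbind(latents, dim=0)): split the batch
# into one array per sample.
per_sample = list(latents)
assert len(per_sample) == 2
assert per_sample[0].shape == (16, 1, 32, 32)
```

So each element handed to the decoder would carry that extra singleton dimension, which is what makes the pattern look unusual unless the decoder expects a frame axis.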

@galbria galbria requested review from DN6 and sayakpaul October 28, 2025 07:48
galbria and others added 6 commits October 28, 2025 08:17
- Updated class names from FIBO to BriaFibo for consistency across the module.
- Modified instances of FIBOEmbedND, FIBOTimesteps, TextProjection, and TimestepProjEmbeddings to reflect the new naming.
- Ensured all references in the BriaFiboTransformer2DModel are updated accordingly.
…oTransformer2DModel and BriaFiboPipeline classes to dummy objects for enhanced compatibility with torch and transformers.
… in pipeline module

- Added documentation comments indicating the source of copied code in BriaFiboTransformerBlock and _pack_latents methods.
- Corrected the import statement for BriaFiboPipeline in the pipelines module.
galbria and others added 5 commits October 28, 2025 09:47
…ration from existing implementations

- Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to reflect that the code is inspired by other modules rather than copied.
- Enhanced clarity on the origins of the methods to maintain proper attribution.
…riaFibo classes

- Introduced a new documentation file for BriaFiboTransformer2DModel.
- Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to clarify the origins of the code, indicating copied sources for better attribution.
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

@sayakpaul sayakpaul left a comment


Thanks a lot!

@sayakpaul sayakpaul merged commit 84e1657 into huggingface:main Oct 28, 2025
11 checks passed
@vladmandic
Contributor

The PR has been merged, but the docs don't contain anything on how to use it:
https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/bria_fibo.md
Also, is there a timeline for when the model is going to be released?
