
Commit a748a83

Authored by junqiangwu, yiyixuxu, github-actions[bot], and hadoop-imagen
Add support for LongCat-Image (#12828)
* Add LongCat-Image
* Update src/diffusers/models/transformers/transformer_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/models/transformers/transformer_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/models/transformers/transformer_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/pipelines/longcat_image/pipeline_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/pipelines/longcat_image/pipeline_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/pipelines/longcat_image/pipeline_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/models/transformers/transformer_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/pipelines/longcat_image/pipeline_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* fix code
* add doc
* Update src/diffusers/pipelines/longcat_image/pipeline_longcat_image_edit.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/pipelines/longcat_image/pipeline_longcat_image_edit.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/pipelines/longcat_image/pipeline_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/pipelines/longcat_image/pipeline_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/pipelines/longcat_image/pipeline_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* Update src/diffusers/pipelines/longcat_image/pipeline_longcat_image.py (Co-authored-by: YiYi Xu <[email protected]>)
* fix code & mask style & fix-copies
* Apply style fixes
* fix single input rewrite error

Co-authored-by: YiYi Xu <[email protected]>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: hadoop-imagen <hadoop-imagen@psxfb7pxrbvmh3oq-worker-0.psxfb7pxrbvmh3oq.hadoop-aipnlp.svc.cluster.local>
1 parent 5851928 commit a748a83

File tree: 16 files changed, +2354 -1 lines changed


docs/source/en/_toctree.yml

Lines changed: 5 additions & 1 deletion
```diff
@@ -365,6 +365,8 @@
       title: HunyuanVideoTransformer3DModel
     - local: api/models/latte_transformer3d
       title: LatteTransformer3DModel
+    - local: api/models/longcat_image_transformer2d
+      title: LongCatImageTransformer2DModel
     - local: api/models/ltx_video_transformer3d
       title: LTXVideoTransformer3DModel
     - local: api/models/lumina2_transformer2d
@@ -402,7 +404,7 @@
     - local: api/models/wan_transformer_3d
       title: WanTransformer3DModel
     - local: api/models/z_image_transformer2d
-      title: ZImageTransformer2DModel
+      title: ZImageTransformer2DModel
     title: Transformers
   - sections:
     - local: api/models/stable_cascade_unet
@@ -563,6 +565,8 @@
       title: Latent Diffusion
     - local: api/pipelines/ledits_pp
       title: LEDITS++
+    - local: api/pipelines/longcat_image
+      title: LongCat-Image
     - local: api/pipelines/lumina2
       title: Lumina 2.0
     - local: api/pipelines/lumina
```
docs/source/en/api/models/longcat_image_transformer2d.md

Lines changed: 25 additions & 0 deletions
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# LongCatImageTransformer2DModel

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import LongCatImageTransformer2DModel

transformer = LongCatImageTransformer2DModel.from_pretrained(
    "meituan-longcat/LongCat-Image", subfolder="transformer", torch_dtype=torch.bfloat16
)
```
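If the transformer is loaded separately like this (for example, in a different dtype), it can be handed back to the pipeline through the standard diffusers component-override pattern. A minimal sketch under that assumption, using the same checkpoint id shown above:

```python
import torch

from diffusers import LongCatImagePipeline, LongCatImageTransformer2DModel

# Load the transformer on its own, then pass it to the pipeline so it is reused
# instead of being loaded a second time (standard diffusers component override).
transformer = LongCatImageTransformer2DModel.from_pretrained(
    "meituan-longcat/LongCat-Image", subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = LongCatImagePipeline.from_pretrained(
    "meituan-longcat/LongCat-Image", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.to("cuda")
```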
## LongCatImageTransformer2DModel

[[autodoc]] LongCatImageTransformer2DModel
docs/source/en/api/pipelines/longcat_image.md

Lines changed: 114 additions & 0 deletions
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# LongCat-Image

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

We introduce LongCat-Image, a pioneering open-source, bilingual (Chinese-English) foundation model for image generation, designed to address core challenges prevalent in current leading models: multilingual text rendering, photorealism, deployment efficiency, and developer accessibility.

### Key Features

- 🌟 **Exceptional Efficiency and Performance**: With only **6B parameters**, LongCat-Image surpasses numerous open-source models that are several times larger across multiple benchmarks, demonstrating the immense potential of efficient model design.
- 🌟 **Superior Editing Performance**: The LongCat-Image-Edit model achieves state-of-the-art performance among open-source models, delivering leading instruction following and image quality with superior visual consistency.
- 🌟 **Powerful Chinese Text Rendering**: LongCat-Image renders common Chinese characters with greater accuracy and stability than existing SOTA open-source models and achieves industry-leading coverage of the Chinese dictionary.
- 🌟 **Remarkable Photorealism**: Through an innovative data strategy and training framework, LongCat-Image achieves remarkable photorealism in generated images.
- 🌟 **Comprehensive Open-Source Ecosystem**: We provide a complete toolchain, from intermediate checkpoints to full training code, significantly lowering the barrier for further research and development.

For more details, refer to the [LongCat-Image Technical Report](https://arxiv.org/abs/2412.11963).

## Usage Example

```py
import torch

from diffusers import LongCatImagePipeline

weight_dtype = torch.bfloat16
pipe = LongCatImagePipeline.from_pretrained("meituan-longcat/LongCat-Image", torch_dtype=weight_dtype)
pipe.to("cuda")
# Optionally offload to CPU to reduce GPU memory usage:
# pipe.enable_model_cpu_offload()

# Prompt (English): A young Asian woman in a yellow knit sweater with a white necklace, hands resting
# on her knees, with a serene expression. The background is a rough brick wall bathed in warm afternoon
# sunlight, creating a calm, cozy atmosphere. A medium-distance shot highlights her expression and the
# details of her outfit; soft light on her face emphasizes her features and the texture of her accessories,
# adding depth and warmth. The simple composition lets the brick texture and sunlight bring out her
# elegance and poise.
prompt = "一个年轻的亚裔女性,身穿黄色针织衫,搭配白色项链。她的双手放在膝盖上,表情恬静。背景是一堵粗糙的砖墙,午后的阳光温暖地洒在她身上,营造出一种宁静而温馨的氛围。镜头采用中距离视角,突出她的神态和服饰的细节。光线柔和地打在她的脸上,强调她的五官和饰品的质感,增加画面的层次感与亲和力。整个画面构图简洁,砖墙的纹理与阳光的光影效果相得益彰,突显出人物的优雅与从容。"
image = pipe(
    prompt,
    height=768,
    width=1344,
    guidance_scale=4.0,
    num_inference_steps=50,
    num_images_per_prompt=1,
    generator=torch.Generator("cpu").manual_seed(43),
    enable_cfg_renorm=True,
    enable_prompt_rewrite=True,
).images[0]
image.save("./longcat_image_t2i_example.png")
```
This pipeline was contributed by the LongCat-Image team. The original codebase can be found [here](https://github.com/meituan-longcat/LongCat-Image).

Available models:

| Model | Type | Description | Download Link |
|-------|------|-------------|---------------|
| LongCat-Image | Text-to-Image | Final release. The standard model for out-of-the-box inference. | [🤗 Hugging Face](https://huggingface.co/meituan-longcat/LongCat-Image) |
| LongCat-Image-Dev | Text-to-Image | Development. Mid-training checkpoint, suitable for fine-tuning. | [🤗 Hugging Face](https://huggingface.co/meituan-longcat/LongCat-Image-Dev) |
| LongCat-Image-Edit | Image Editing | Specialized model for image editing. | [🤗 Hugging Face](https://huggingface.co/meituan-longcat/LongCat-Image-Edit) |
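The commit also registers a `LongCatImageEditPipeline` (see the `src/diffusers/__init__.py` diff below) for the LongCat-Image-Edit checkpoint listed above, but no editing example is shown here. The following is a minimal sketch assuming the edit pipeline follows the usual diffusers image-editing call pattern; the `image`/`prompt` argument names, the edit instruction, and the call options are assumptions rather than taken from this commit.

```py
import torch

from diffusers import LongCatImageEditPipeline
from diffusers.utils import load_image

# Assumption: the edit pipeline loads from the LongCat-Image-Edit checkpoint in the table above.
pipe = LongCatImageEditPipeline.from_pretrained(
    "meituan-longcat/LongCat-Image-Edit", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Assumption: a reference image plus a text instruction, as in other diffusers editing pipelines.
source = load_image("./longcat_image_t2i_example.png")
edited = pipe(
    prompt="Replace the brick wall with a bookshelf",  # hypothetical edit instruction
    image=source,
    num_inference_steps=50,
    generator=torch.Generator("cpu").manual_seed(43),
).images[0]
edited.save("./longcat_image_edit_example.png")
```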
## LongCatImagePipeline

[[autodoc]] LongCatImagePipeline
  - all
  - __call__

## LongCatImagePipelineOutput

[[autodoc]] pipelines.longcat_image.pipeline_output.LongCatImagePipelineOutput

src/diffusers/__init__.py

Lines changed: 6 additions & 0 deletions
```diff
@@ -235,6 +235,7 @@
         "Kandinsky3UNet",
         "Kandinsky5Transformer3DModel",
         "LatteTransformer3DModel",
+        "LongCatImageTransformer2DModel",
         "LTXVideoTransformer3DModel",
         "Lumina2Transformer2DModel",
         "LuminaNextDiT2DModel",
@@ -532,6 +533,8 @@
         "LDMTextToImagePipeline",
         "LEditsPPPipelineStableDiffusion",
         "LEditsPPPipelineStableDiffusionXL",
+        "LongCatImageEditPipeline",
+        "LongCatImagePipeline",
         "LTXConditionPipeline",
         "LTXImageToVideoPipeline",
         "LTXLatentUpsamplePipeline",
@@ -970,6 +973,7 @@
         Kandinsky3UNet,
         Kandinsky5Transformer3DModel,
         LatteTransformer3DModel,
+        LongCatImageTransformer2DModel,
         LTXVideoTransformer3DModel,
         Lumina2Transformer2DModel,
         LuminaNextDiT2DModel,
@@ -1237,6 +1241,8 @@
         LDMTextToImagePipeline,
         LEditsPPPipelineStableDiffusion,
         LEditsPPPipelineStableDiffusionXL,
+        LongCatImageEditPipeline,
+        LongCatImagePipeline,
         LTXConditionPipeline,
         LTXImageToVideoPipeline,
         LTXLatentUpsamplePipeline,
```
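With these entries, the new classes become part of the top-level `diffusers` namespace (lazily resolved through `_import_structure`). A quick sanity check, assuming a diffusers build that includes this commit:

```python
# Verify that the public exports added by this commit resolve at import time.
from diffusers import (
    LongCatImageEditPipeline,
    LongCatImagePipeline,
    LongCatImageTransformer2DModel,
)

print(LongCatImagePipeline.__name__)
print(LongCatImageEditPipeline.__name__)
print(LongCatImageTransformer2DModel.__name__)
```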

src/diffusers/models/__init__.py

Lines changed: 2 additions & 0 deletions
```diff
@@ -101,6 +101,7 @@
     _import_structure["transformers.transformer_hunyuan_video_framepack"] = ["HunyuanVideoFramepackTransformer3DModel"]
     _import_structure["transformers.transformer_hunyuanimage"] = ["HunyuanImageTransformer2DModel"]
     _import_structure["transformers.transformer_kandinsky"] = ["Kandinsky5Transformer3DModel"]
+    _import_structure["transformers.transformer_longcat_image"] = ["LongCatImageTransformer2DModel"]
     _import_structure["transformers.transformer_ltx"] = ["LTXVideoTransformer3DModel"]
     _import_structure["transformers.transformer_lumina2"] = ["Lumina2Transformer2DModel"]
     _import_structure["transformers.transformer_mochi"] = ["MochiTransformer3DModel"]
@@ -208,6 +209,7 @@
         HunyuanVideoTransformer3DModel,
         Kandinsky5Transformer3DModel,
         LatteTransformer3DModel,
+        LongCatImageTransformer2DModel,
         LTXVideoTransformer3DModel,
         Lumina2Transformer2DModel,
         LuminaNextDiT2DModel,
```

src/diffusers/models/transformers/__init__.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -33,6 +33,7 @@
     from .transformer_hunyuan_video_framepack import HunyuanVideoFramepackTransformer3DModel
     from .transformer_hunyuanimage import HunyuanImageTransformer2DModel
     from .transformer_kandinsky import Kandinsky5Transformer3DModel
+    from .transformer_longcat_image import LongCatImageTransformer2DModel
     from .transformer_ltx import LTXVideoTransformer3DModel
     from .transformer_lumina2 import Lumina2Transformer2DModel
     from .transformer_mochi import MochiTransformer3DModel
```
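Since the class is registered under `transformers.transformer_longcat_image`, it can also be imported from its concrete module path. This is equivalent to the top-level import and is shown here only to make the module layout explicit:

```python
# Direct submodule import; the module path follows from the diffs above.
from diffusers.models.transformers.transformer_longcat_image import LongCatImageTransformer2DModel

print(LongCatImageTransformer2DModel)
```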
