## Use Cases

### Instruction-based Image Editing

Image-text-to-image models can be used to edit images based on natural language instructions. For example, you can provide an image of a summer landscape and the instruction "Make it winter, add snow" to generate a winter version of the same scene.
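
Below is a minimal sketch of this workflow using the InstructPix2Pix pipeline from Diffusers. The checkpoint is a public instruction-editing model; the file name `summer.jpg` is a placeholder for your own input.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

# Load an instruction-tuned editing model
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix",
    torch_dtype=torch.float16,
).to("cuda")

# "summer.jpg" is a placeholder for your own input image
image = Image.open("summer.jpg").convert("RGB")

# image_guidance_scale controls how closely the output follows the input image
edited = pipe(
    "Make it winter, add snow",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
).images[0]
edited.save("winter.png")
```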

### Style Transfer

These models can apply artistic styles or transformations to images based on text descriptions. For instance, you can transform a photo into a painting style by providing prompts like "Make it look like a Van Gogh painting" or "Convert to watercolor style."
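
A sketch of prompt-driven restyling with the generic image-to-image pipeline follows. The SDXL checkpoint, file names, and the `strength` value are illustrative choices, not requirements.

```python
import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image

# Any image-to-image checkpoint works here; SDXL is one common choice
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

photo = Image.open("photo.jpg").convert("RGB")

# strength trades fidelity to the input (low values) against the prompt (high values)
styled = pipe(
    prompt="an oil painting in the style of Van Gogh, swirling brushstrokes",
    image=photo,
    strength=0.6,
).images[0]
styled.save("van_gogh_style.png")
```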

### Image Variations

Generate variations of an existing image by providing different text prompts. This is useful for creative workflows where you want to explore different versions of the same image with specific modifications.
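
The sketch below sweeps a few prompts over the same input, pinning a seed per prompt so each variation is reproducible. The checkpoint and prompts are assumptions for illustration.

```python
import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

base = Image.open("input.jpg").convert("RGB")
prompts = [
    "the same scene at sunset",
    "the same scene in heavy fog",
    "the same scene at night with neon lighting",
]

for i, prompt in enumerate(prompts):
    # A fixed seed per prompt makes each variation reproducible
    generator = torch.Generator("cuda").manual_seed(i)
    variation = pipe(prompt=prompt, image=base, strength=0.5, generator=generator).images[0]
    variation.save(f"variation_{i}.png")
```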

### Guided Image Generation

Use a reference image along with text prompts to guide the generation process. This allows for more controlled image generation compared to text-to-image models alone, as the reference image provides structural guidance.
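
One common realization of this is ControlNet, linked in the resources below. The following sketch conditions generation on a Canny edge map of the reference image; the checkpoints are widely used public ones, the edge thresholds are illustrative, and `opencv-python` is required for the edge detector.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Pair a base model with a ControlNet trained on Canny edge maps
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Turn the reference image into an edge map that constrains the layout
reference = np.array(Image.open("reference.jpg").convert("RGB"))
edges = cv2.Canny(reference, 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

result = pipe(
    prompt="a cozy cabin in a snowy forest, photorealistic",
    image=edge_image,
).images[0]
result.save("guided.png")
```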

### Image Inpainting and Outpainting

Fill in missing or masked parts of an image based on text descriptions, or extend an image beyond its original boundaries with text-guided generation.
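
A sketch of text-guided inpainting with `AutoPipelineForInpainting` follows. The checkpoint and file names are illustrative; the mask is a grayscale image in which white marks the region to repaint.

```python
import torch
from PIL import Image
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("input.jpg").convert("RGB")
# White pixels in the mask mark the region the model should repaint
mask = Image.open("mask.png").convert("L")

filled = pipe(
    prompt="a wooden bench under a tree",
    image=image,
    mask_image=mask,
).images[0]
filled.save("inpainted.png")
```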

## Task Variants

### Instruction-based Editing

Models that follow natural language instructions to edit images. These models can perform complex edits such as object removal, color changes, and compositional modifications.

### Reference-guided Generation

Models that use a reference image to guide the generation process while incorporating text prompts to control specific attributes or modifications.

### Conditional Image-to-Image

Models that perform specific transformations based on text conditions, such as changing the weather, the time of day, or the season.

## Inference

You can use the Diffusers library to interact with image-text-to-image models. The example below performs instruction-based editing with the FLUX.1 Kontext pipeline.

```python
import torch
from PIL import Image
from diffusers import FluxKontextPipeline

# Load the instruction-editing model (FLUX.1 Kontext)
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load the input image
image = Image.open("input.jpg").convert("RGB")

# Edit the image with a text prompt
prompt = "Make it a snowy winter scene"
edited_image = pipe(prompt=prompt, image=image, guidance_scale=2.5).images[0]
edited_image.save("edited_image.png")
```
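
If the model does not fit in GPU memory, you can replace `.to("cuda")` with `pipe.enable_model_cpu_offload()`, which keeps only the active component on the GPU at the cost of some inference speed.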

## Useful Resources

- [FLUX.2 Model Card](https://huggingface.co/black-forest-labs/FLUX.2-dev)
- [Diffusers documentation on Image-to-Image](https://huggingface.co/docs/diffusers/using-diffusers/img2img)
- [ControlNet for Conditional Image Generation](https://huggingface.co/docs/diffusers/using-diffusers/controlnet)