@stevhliu (Member) commented Sep 10, 2025:

Reworks the "Prompt techniques" guide to include better and clearer prompt writing instructions and cleans up the prompt weighting section a bit.
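
For reference, prompt weighting boils down to passing precomputed text embeddings to the pipeline. A minimal sketch using the Compel library (the model, prompt, and `++` weighting syntax here are illustrative, not taken from the guide itself):

```python
import torch
from compel import Compel
from diffusers import StableDiffusionPipeline

# load a text-to-image pipeline (model choice is illustrative)
pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compel converts a weighted prompt string into embeddings the pipeline accepts
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)
prompt_embeds = compel("a red++ cat playing with a ball")  # "++" upweights "red"

image = pipeline(prompt_embeds=prompt_embeds).images[0]
```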

Also cleans up the "Batched inference" docs and includes example outputs.
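
And for context on the batched inference side, passing a list of prompts runs them as a single batch and returns one image per prompt; a rough sketch (model and prompts are illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "an astronaut riding a horse on the moon",
    "a watercolor painting of a fox in a forest",
]
# a list of prompts is processed as one batch; .images holds one result per prompt
images = pipeline(prompt=prompts).images
```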

@HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.


Prompt weighting is also supported for adapters like [Textual inversion](./textual_inversion_inference) and [DreamBooth](./dreambooth).

## Prompt enhancing with GPT2
@stevhliu (Member, Author) commented on this section of the diff:

@asomoza, is this still a useful or popular technique? Seems like quite a bit of work to code up the prompt enhancer (coming up with the words and styles) when you can just write a prompt with those same words in the first place.

No strong opinions either way, just looking for opportunities to streamline the docs a bit more :)

@asomoza (Member):

This one in particular isn't popular anymore. It was originally a technique used in Fooocus, which made it really impressive back in the day, but today you can use very small LLMs, locally or online, to do the same and better. We've come a long way, so we should probably remove this.

Instead, I suggest linking to some good models, or we could even link to a search like this one: https://huggingface.co/models?sort=downloads&search=prompt+enhancer

There are also models that ship with their own prompt enhancer, like HunyuanImage-2.1, so the technique is still relevant, but now it's done by the researchers and model owners right out of the box instead of by the users.
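
For reference, a minimal sketch of that small-LLM workflow with the transformers text-generation pipeline (the model choice and system prompt are assumptions for illustration, not something from the docs):

```python
from transformers import pipeline

# any small instruction-tuned model works; this one is just an example choice
enhancer = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

messages = [
    {"role": "system", "content": "Rewrite the user's short prompt as a rich, detailed image generation prompt."},
    {"role": "user", "content": "a cat in a spacesuit"},
]
out = enhancer(messages, max_new_tokens=120)
# the pipeline returns the full chat; the last message is the enhanced prompt
print(out[0]["generated_text"][-1]["content"])
```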

@stevhliu requested a review from sayakpaul September 10, 2025 22:51
@sayakpaul requested a review from asomoza September 22, 2025 08:33
@sayakpaul (Member):

@asomoza could you give this a review?

@asomoza (Member) left a comment:

Thanks, looks great! I answered the question and left an additional comment.



@stevhliu (Member, Author):
Thanks @asomoza! Feedback has been addressed, let me know if there is anything else, otherwise I think we can merge 🙂

@asomoza (Member) left a comment:

LGTM, thanks!

@stevhliu merged commit 3eb4078 into huggingface:main Oct 14, 2025 (1 check passed)
@stevhliu deleted the prompts branch October 14, 2025 20:54