Use AB results to nudge CLIP output #22
chrisgoringe started this conversation in Ideas
Replies: 2 comments
- SDXL uses two CLIP encoders, l and g. They return …
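For context, a minimal sketch of loading SDXL's two text encoders and inspecting what they return, assuming the stabilityai/stable-diffusion-xl-base-1.0 repository layout and the Hugging Face transformers CLIP classes (the prompt string is just an example):

```python
import torch
from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer

repo = "stabilityai/stable-diffusion-xl-base-1.0"
tokenizer_l = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
tokenizer_g = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer_2")
encoder_l = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")                   # CLIP ViT-L ("l")
encoder_g = CLIPTextModelWithProjection.from_pretrained(repo, subfolder="text_encoder_2")   # OpenCLIP bigG ("g")

prompt = "a watercolour painting of a lighthouse"
with torch.no_grad():
    ids_l = tokenizer_l(prompt, padding="max_length", max_length=77, return_tensors="pt").input_ids
    ids_g = tokenizer_g(prompt, padding="max_length", max_length=77, return_tensors="pt").input_ids
    out_l = encoder_l(ids_l, output_hidden_states=True)
    out_g = encoder_g(ids_g, output_hidden_states=True)

print(out_l.last_hidden_state.shape)  # torch.Size([1, 77, 768])  - per-token embeddings from l
print(out_g.hidden_states[-2].shape)  # torch.Size([1, 77, 1280]) - per-token embeddings from g (penultimate layer)
print(out_g.text_embeds.shape)        # torch.Size([1, 1280])     - pooled embedding from g
```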
- Could this be done as a LoRA? By patching the end of the CLIP?
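One rough sketch of what "patching the end of the CLIP" could look like: a LoRA-style low-rank residual applied to the text encoder's final output. The class name, rank and scale below are illustrative, and training it against an aesthetic predictor is omitted:

```python
import torch
import torch.nn as nn

class CLIPOutputPatch(nn.Module):
    """Hypothetical low-rank 'patch' added onto the CLIP text-encoder output."""

    def __init__(self, dim: int = 768, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # start as a no-op so the patch only drifts during training
        self.scale = scale

    def forward(self, clip_out: torch.Tensor) -> torch.Tensor:
        # clip_out: [batch, tokens, dim] conditioning from the text encoder
        return clip_out + self.scale * self.up(self.down(clip_out))
```

Only the two small matrices would be trained, which is what would keep it LoRA-sized to distribute.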
The AB tool effectively creates a mapping from image to aesthetic value. We've found that training a model to map from the CLIP encodings of those images to their aesthetic value can be quite accurate.
That suggests we could derive a direction in CLIP space that is liked (for instance, the sum of the images' CLIP vectors weighted by their aesthetic scores).
Could this direction then be added to the CLIP encoding of a prompt?
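A minimal sketch of that weighted-sum idea, assuming `image_embeds` holds the CLIP embeddings of the rated images and `scores` holds their AB aesthetic scores; the centring, normalisation and `strength` parameter are my own choices, not anything the tool currently does:

```python
import torch

def aesthetic_direction(image_embeds: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
    """Score-weighted sum of CLIP embeddings -> a unit 'liked' direction.

    image_embeds: [N, D] CLIP embeddings of the rated images
    scores:       [N]    aesthetic scores from the AB tool
    """
    weights = scores - scores.mean()                                 # centre so disliked images push away
    embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)  # unit-normalise each embedding
    direction = (weights[:, None] * embeds).sum(dim=0)
    return direction / direction.norm()

def nudge_prompt_embedding(prompt_embeds: torch.Tensor,
                           direction: torch.Tensor,
                           strength: float = 0.1) -> torch.Tensor:
    """Add a scaled copy of the liked direction to a prompt encoding.

    prompt_embeds: [batch, tokens, D]. This only makes sense if the direction and the
    prompt encoding live in the same CLIP space, which is an open question for the idea.
    """
    return prompt_embeds + strength * direction
```

`strength` would presumably be a user-facing setting found by experiment.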