Replies: 2 comments 3 replies
-
I fully agree that the loss is basically a sanity check for most embeddings, although for significantly larger batch sizes and low learning rates there is a visible downward trend (if all goes well), which is reassuring.
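For anyone who wants to check for that downward trend with something more than an eyeball test, here is a minimal sketch (my own, not from this thread or the webui) that smooths the logged loss and fits a least-squares slope to it. It assumes the log is a CSV with `step` and `loss` columns, roughly what the webui writes to textual_inversion_loss.csv; the file name and column names are assumptions, so adjust them to whatever your version actually produces.

```python
# Sketch: quantify whether the textual-inversion loss is trending down.
# Assumes a CSV with "step" and "loss" columns (e.g. textual_inversion_loss.csv);
# adjust the path and column names to match what your webui version writes.
import numpy as np
import pandas as pd

def loss_trend(csv_path: str, window: int = 50) -> float:
    df = pd.read_csv(csv_path)
    # Smooth the noisy per-step loss with a rolling mean before fitting.
    smoothed = df["loss"].rolling(window).mean().dropna()
    steps = df["step"].loc[smoothed.index]
    # Least-squares slope: negative means the smoothed loss is decreasing.
    slope, _ = np.polyfit(steps, smoothed, deg=1)
    return slope

if __name__ == "__main__":
    s = loss_trend("textual_inversion_loss.csv")
    trend = "downward" if s < 0 else "flat/upward"
    print(f"slope of smoothed loss: {s:+.2e} per step ({trend} trend)")
```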
3 replies
-
Good morning. Thanks for your reply. The face in this video is mine. Thank you again; I would appreciate it if you pulled all my videos.
…On Tue, 21 Feb 2023, 04:26 morphinapg, ***@***.***> wrote:
That's interesting. I was under the impression that you'd want a higher
learning rate for a higher batch size. Do you have some tips?
0 replies
-
Do we all agree that textual_inversion_loss.csv is of no use from a user perspective?
I understand that training loss does not have a direct relationship with training quality, but I have never seen a textual inversion loss graph "converge": all I have ever seen is the same fluctuating moving average that never trends down consistently, even when the subjective quality of the training is good.
I would love to be proven wrong and to see that someone is getting real use out of an objective measure, rather than just eyeballing whether a textual inversion training session is good or not.
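If anyone wants to inspect this on their own runs, here is a minimal sketch (my own, not part of the webui) that plots the raw loss next to its moving average from textual_inversion_loss.csv, so the "fluctuating moving average" described above can be looked at directly. The `step` and `loss` column names are assumptions; rename them to match the header your version writes.

```python
# Sketch: visualize the textual-inversion loss and its moving average.
# Column names "step" and "loss" are assumptions about textual_inversion_loss.csv;
# rename them if your webui version writes a different header.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("textual_inversion_loss.csv")
df["loss_ma"] = df["loss"].rolling(window=50).mean()  # 50-step moving average

plt.figure(figsize=(8, 4))
plt.plot(df["step"], df["loss"], alpha=0.3, label="raw loss")
plt.plot(df["step"], df["loss_ma"], label="moving average (50 steps)")
plt.xlabel("training step")
plt.ylabel("loss")
plt.legend()
plt.tight_layout()
plt.show()
```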