Replies: 1 comment
Looks like I've already found the answer for the first question. The source of …
I asked Grok to analyze the difference between the default `sd_xl_base.yaml` and a yaml that OT (OneTrainer) generates for me (at the bottom).
Its answer raised new questions:
- `scale_factor`?
- `cloud://Models/Model.safetensor`?

Grok's answer:

**Key Differences Between OneTrainer Config and Base SDXL Config**

This page outlines the key differences between the YAML configuration file generated by OneTrainer after fine-tuning and the assumed base SDXL configuration (e.g., `sd_xl_base.yaml` from Stability AI's `generative-models` repository).

| Setting | OneTrainer config | Base SDXL config |
| --- | --- | --- |
| `scale_factor` | 0.13025 | 0.18215 |
| `disable_first_stage_autocast` | `true` | `false` (or unspecified) |
| `first_stage_config` | `resolution: 256`, `lossconfig: torch.nn.Identity`, `monitor: val/rec_loss` | no `monitor` field |

Notes: `resolution: 256` may be a legacy or misaligned value (SDXL typically operates at 1024x1024, i.e. 128x128 latents). `lossconfig` and `monitor` suggest a training-derived config adapted for inference, with no functional impact.
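For context on the `scale_factor` row: in latent diffusion, the VAE encoding is multiplied by `scale_factor` before denoising and divided by it again before decoding, so both ends must agree on the value. A minimal numeric sketch (plain floats stand in for VAE tensors; the two constants are simply the values quoted in the comparison above):

```python
SDXL_BASE = 0.18215   # value the analysis attributes to the stock sd_xl_base.yaml
ONETRAINER = 0.13025  # value in the OneTrainer-generated yaml

def to_latent_space(vae_encoding, scale_factor):
    # z = scale_factor * E(x): scales latents toward unit variance
    return scale_factor * vae_encoding

def from_latent_space(z, scale_factor):
    # x = D(z / scale_factor): must undo the SAME factor used at encode time
    return z / scale_factor

raw = 1.0  # stand-in for one VAE latent value
z = to_latent_space(raw, ONETRAINER)
# Decoding with the other factor rescales every latent by the ratio:
mismatch = from_latent_space(z, SDXL_BASE) / raw
print(f"latents off by a factor of ~{mismatch:.3f}")  # ~0.715
```

So if the two configs really did disagree, every latent fed to the decoder would be off by roughly 0.715x, which is why a mismatched `scale_factor` visibly degrades output rather than failing loudly.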
Analysis of Differences

1. `scale_factor`: 0.13025 (OneTrainer) vs. 0.18215 (base).
2. `disable_first_stage_autocast`: `true` vs. `false`.
3. `first_stage_config` tweaks: `resolution: 256` and training-related fields (`monitor: val/rec_loss`) are not typically emphasized in inference configs. The `resolution` value is ambiguous and likely non-functional for SDXL's 1024x1024 output pipeline.
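On point 2: in Stability AI's `generative-models` code, `disable_first_stage_autocast` suppresses mixed precision around VAE encode/decode so the first stage runs in full fp32. A dependency-free sketch of that control flow (the `autocast` stand-in below only mimics the `enabled` flag of `torch.autocast`; it is illustrative, not the real implementation):

```python
from contextlib import contextmanager

# Stand-in for torch.autocast; records whether mixed precision is active
# so the branch on disable_first_stage_autocast is visible without PyTorch.
state = {"autocast": False}

@contextmanager
def autocast(enabled=True):
    prev, state["autocast"] = state["autocast"], enabled
    try:
        yield
    finally:
        state["autocast"] = prev

def decode_first_stage(decode_fn, z, disable_first_stage_autocast=True):
    # true in the OneTrainer yaml: the VAE decode runs in full precision,
    # costing memory and speed but avoiding fp16 artifacts in the output
    with autocast(enabled=not disable_first_stage_autocast):
        return decode_fn(z)

precision_seen = []
img = decode_first_stage(lambda z: precision_seen.append(state["autocast"]) or z, 1.0)
print(precision_seen)  # [False] -> decode ran outside autocast
```

With the flag set to `false`, the same call would run inside the surrounding autocast region, which is faster and lighter on memory but can introduce fp16 artifacts in decoded images.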
Implications

- The `scale_factor` shift may misalign this model with the original SDXL refiner unless it was fine-tuned similarly.
- Disabled autocast (`disable_first_stage_autocast: true`) could enhance quality, but at a higher memory and speed cost.

My yaml: