Helpless with these three errors - please guide me to rectify them? #1239
tmprabubiz asked this question in Q&A (Unanswered)
Windows 10 Pro PC / i9 processor / GPU: RTX 2060 Super 8 GB / RAM: 64 GB / using symlinks instead of defining paths in arguments.
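For context, the symlinks mentioned above are set up roughly like this (a minimal sketch with placeholder paths, not my exact layout):

```python
import os

# Forge's default models folder is replaced by a link to where the
# checkpoints really live, so no directory arguments are needed on the
# command line. The link path must not already exist, and Windows needs an
# elevated prompt or Developer Mode for os.symlink to succeed.
os.symlink(
    r"J:\my_models\Stable-diffusion",  # real folder (placeholder path)
    r"J:\webui_forge_cu121_torch231\webui\models\Stable-diffusion",  # link Forge sees
    target_is_directory=True,
)
```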
1. How do I avoid the warning that appears first while starting Forge (the TRANSFORMERS_CACHE FutureWarning highlighted in the log below)?
2. How do I rectify this variable setting? (My attempt at the HF_HOME migration is sketched after the list below.)
3. Whenever I change Flux models, why does Forge re-check all the embeddings, as below? (My guess at what the scan is doing is also sketched after the list.)
```
ac_neg1.png: no embedded information found.
AS-YoungV2-neg.png: no embedded information found.
BadDream.png: no embedded information found.
bad_prompt_version2-neg.png: no embedded information found.
comicbookpencils.png: no embedded information found.
easynegative.png: no embedded information found.
FastNegativeV2.png: no embedded information found.
ImgFixerPre0.3.png: no embedded information found.
NEG-fixl-2.png: no embedded information found.
negativeXL_D.png: no embedded information found.
negative_hand-neg.png: no embedded information found.
ng_deepnegative_v1_75t.png: no embedded information found.
unaestheticXL_bp5.png: no embedded information found.
verybadimagenegative_v1.3.png: no embedded information found.
```
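Regarding #2: my understanding is that the deprecated TRANSFORMERS_CACHE variable should give way to HF_HOME before anything imports transformers. A minimal sketch of the migration I have in mind (assuming the variable is currently set in the environment; the HF_HOME\hub layout is my reading of the warning, not something I have verified):

```python
import os

# Retire the deprecated variable and derive HF_HOME from it. transformers
# appends "hub" to HF_HOME when resolving its cache, so pointing HF_HOME at
# the parent of the old cache folder should (I assume) keep already
# downloaded files visible. This must run before transformers is imported.
old_cache = os.environ.pop("TRANSFORMERS_CACHE", None)
if old_cache and "HF_HOME" not in os.environ:
    os.environ["HF_HOME"] = os.path.dirname(old_cache)
```

Is that the right direction, or should HF_HOME simply be set system-wide and the old variable deleted?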
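Regarding #3: I assume the messages appear because Forge rescans the embeddings folder on every model switch, and a .png only counts as a textual-inversion embedding if it carries data in its PNG text chunks; plain preview images would then log "no embedded information found". A rough sketch of that kind of check (not Forge's actual code; the "sd-ti-embedding" key is what A1111-style image embeddings use, as far as I know):

```python
from PIL import Image

def has_embedded_embedding(path: str) -> bool:
    """True if the PNG carries embedding data in its tEXt/zTXt chunks."""
    with Image.open(path) as im:
        chunks = getattr(im, "text", None) or im.info  # PNG text metadata
        # A1111-style image embeddings store their payload under a key such
        # as "sd-ti-embedding"; an ordinary preview image has no such key.
        return any("embedding" in key.lower() for key in chunks)

print(has_embedded_embedding(r"easynegative.png"))  # False for a plain preview
```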
Below is the full console output, from startup through generation; the warnings described above appear inline:

```
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-304-g394da019
Commit hash: 394da01
CUDA 12.1
Launching Web UI with arguments: --precision full --opt-split-attention --always-batch-cond-uncond --no-half --skip-torch-cuda-test --cuda-malloc --cuda-stream --theme dark
Using cudaMallocAsync backend.
Total VRAM 8192 MB, total RAM 65447 MB
pytorch version: 2.3.1+cu121
xformers version: 0.0.27
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 SUPER : cudaMallocAsync
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: True
J:\webui_forge_cu121_torch231\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
Using xformers cross attention
Using xformers attention for VAE
ControlNet preprocessor location: J:\webui_forge_cu121_torch231\webui\models\ControlNetPreprocessor
22:43:41 - ReActor - STATUS - Running v0.7.1-a1 on Device: CUDA
2024-08-17 22:43:50,873 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'J:\webui_forge_cu121_torch231\webui\models\Stable-diffusion\Flux\FLUX.1-schnell-dev-merged-fp8-4step.safetensors', 'hash': '9e0fb423'}, 'additional_modules': [], 'unet_storage_dtype': 'nf4'}
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 28.7s (prepare environment: 4.5s, launcher: 2.5s, import torch: 4.7s, initialize shared: 0.2s, other imports: 1.1s, list SD models: 0.4s, load scripts: 3.3s, initialize extra networks: 7.6s, create ui: 2.9s, gradio launch: 1.5s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
Model selected: {'checkpoint_info': {'filename': 'J:\webui_forge_cu121_torch231\webui\models\Stable-diffusion\Flux\flux1DevSchnellBNB_flux1DevBNBNF4.safetensors', 'hash': '0184473b'}, 'additional_modules': [], 'unet_storage_dtype': 'nf4'}
Loading Model: {'checkpoint_info': {'filename': 'J:\webui_forge_cu121_torch231\webui\models\Stable-diffusion\Flux\flux1DevSchnellBNB_flux1DevBNBNF4.safetensors', 'hash': '0184473b'}, 'additional_modules': [], 'unet_storage_dtype': 'nf4'}
[Unload] Trying to free 953674316406250018963456.00 MB for cuda:0 with 0 models keep loaded ...
StateDict Keys: {'transformer': 2350, 'vae': 244, 'text_encoder': 198, 'text_encoder_2': 220, 'ignore': 0}
Using Detected T5 Data Type: torch.float8_e4m3fn
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': 'nf4', 'computation_dtype': torch.float16}
ac_neg1.png: no embedded information found.
AS-YoungV2-neg.png: no embedded information found.
BadDream.png: no embedded information found.
bad_prompt_version2-neg.png: no embedded information found.
comicbookpencils.png: no embedded information found.
easynegative.png: no embedded information found.
FastNegativeV2.png: no embedded information found.
ImgFixerPre0.3.png: no embedded information found.
NEG-fixl-2.png: no embedded information found.
negativeXL_D.png: no embedded information found.
negative_hand-neg.png: no embedded information found.
ng_deepnegative_v1_75t.png: no embedded information found.
unaestheticXL_bp5.png: no embedded information found.
verybadimagenegative_v1.3.png: no embedded information found.
Model loaded in 9.0s (unload existing model: 0.2s, forge model load: 8.8s).
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
To load target model JointTextEncoder
Begin to load 1 model
[Unload] Trying to free 7725.00 MB for cuda:0 with 0 models keep loaded ...
[Memory Management] Current Free GPU Memory: 7136.91 MB
[Memory Management] Required Model Memory: 5154.62 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 958.29 MB
Moving model(s) has taken 5.64 seconds
Distilled CFG Scale: 3.5
To load target model KModel
Begin to load 1 model
[Unload] Trying to free 9411.13 MB for cuda:0 with 0 models keep loaded ...
[Unload] Current free memory is 1159.67 MB ...
[Unload] Unload model JointTextEncoder
[Memory Management] Current Free GPU Memory: 7095.57 MB
[Memory Management] Required Model Memory: 6246.84 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: -175.28 MB
[Memory Management] Loaded to CPU Swap: 1461.51 MB (blocked method)
[Memory Management] Loaded to GPU: 4785.26 MB
Moving model(s) has taken 7.78 seconds
100%|██████████| 20/20 [01:17<00:00, 3.88s/it]
To load target model IntegratedAutoencoderKL
Begin to load 1 model
[Unload] Trying to free 8991.55 MB for cuda:0 with 0 models keep loaded ...
[Unload] Current free memory is 2063.04 MB ...
[Unload] Unload model KModel
[Memory Management] Current Free GPU Memory: 6895.08 MB
[Memory Management] Required Model Memory: 319.75 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 5551.33 MB
Moving model(s) has taken 2.14 seconds
Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding.
100%|██████████| 1/1 [00:05<00:00, 5.44s/it]
100%|██████████| 1/1 [00:05<00:00, 5.40s/it]
100%|██████████| 1/1 [00:04<00:00, 4.36s/it]
Total progress: 100%|██████████| 20/20 [01:31<00:00, 4.55s/it]
```
Please guide me to rectify these errors.