My xformers come back without me setting anything :) #1365
danilomaiaweb started this conversation in General (1 comment, 1 reply)
-
According to comments on various posts and on Reddit, xformers is not necessary in Forge; the built-in attention is faster without it.
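For anyone who wants to verify this claim on their own hardware, below is a minimal benchmark sketch comparing PyTorch's built-in scaled dot-product attention against xformers' memory-efficient attention. It assumes a CUDA GPU with the xformers package installed; the tensor shapes are arbitrary and the timings are illustrative only, not Forge's actual code path.

```python
import time
import torch
import torch.nn.functional as F
import xformers.ops as xops

# Arbitrary attention shapes: batch, heads, sequence length, head dim
B, H, S, D = 2, 8, 4096, 64
q = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)
# xformers expects (batch, seq, heads, dim) layout
qx, kx, vx = (t.transpose(1, 2).contiguous() for t in (q, k, v))

def bench(fn, iters=20):
    for _ in range(3):  # warm-up
        fn()
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

sdpa_ms = bench(lambda: F.scaled_dot_product_attention(q, k, v)) * 1e3
xf_ms = bench(lambda: xops.memory_efficient_attention(qx, kx, vx)) * 1e3
print(f"PyTorch SDPA: {sdpa_ms:.2f} ms | xformers: {xf_ms:.2f} ms")
```

Which backend wins depends on the GPU, dtype, and shapes, so it is worth measuring rather than assuming.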
-
I updated my Forge again just now and, mysteriously, without declaring it on the argument line in my webui-user.bat, xformers appeared without me configuring anything, and it is running perfectly... Go figure, right? These are things from the beyond. :)
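For anyone curious, a quick sanity check run from the webui's Python environment shows the same information Forge reports at startup (this is only a sketch, not Forge's actual detection logic):

```python
# Sanity-check sketch: confirm what the launcher can see.
import torch

try:
    import xformers
    print("xformers", xformers.__version__, "is installed")
except ImportError:
    print("xformers is not installed; another attention backend will be used")

print("pytorch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```

Here is the startup log: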
```
Stable Diffusion PATH: F:\ForgeFlux\webui
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-376-ga616d6e6
Commit hash: a616d6e6214ee3afb0294611b5880e5cf0009b50
CUDA 12.1
Launching Web UI with arguments: --precision full --opt-split-attention --always-batch-cond-uncond --no-half --skip-torch-cuda-test --pin-shared-memory --cuda-malloc --cuda-stream --ckpt-dir 'F:\ModelsForge\Checkpoints' --lora-dir 'F:\ModelsForge\Loras'
Using cudaMallocAsync backend.
Total VRAM 8191 MB, total RAM 32705 MB
pytorch version: 2.3.1+cu121
xformers version: 0.0.27
Set vram state to: NORMAL_VRAM
Always pin shared GPU memory
Device: cuda:0 NVIDIA GeForce RTX 3050 : cudaMallocAsync
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: True
Using xformers cross attention
Using xformers attention for VAE
ControlNet preprocessor location: F:\ForgeFlux\webui\models\ControlNetPreprocessor
[-] ADetailer initialized. version: 24.8.0, num models: 10
sd-webui-prompt-all-in-one background API service started successfully.
23:57:15 - ReActor - STATUS - Running v0.7.1-a1 on Device: CUDA
2024-08-20 23:57:17,471 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'F:\ModelsForge\Checkpoints\fluxFusionDSNF4GGUFQ4Q5Q8Fp8Fp16_v0BnbNf4AIO.safetensors', 'hash': '35aa31f8'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
IIB Database file has been successfully backed up to the backup folder.
Startup time: 34.6s (prepare environment: 12.6s, import torch: 6.7s, initialize shared: 0.1s, other imports: 0.5s, load scripts: 5.6s, create ui: 4.9s, gradio launch: 3.0s, app_started_callback: 1.1s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 87.50% GPU memory (7167.00 MB) to load weights, and use 12.50% GPU memory (1024.00 MB) to do matrix computation.
```
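As a side note, the [GPU Setting] split in the last line is consistent with Forge simply subtracting the reserved inference memory from total VRAM (an assumption based on the numbers above, not on Forge's source):

```python
# Back-of-the-envelope check of the [GPU Setting] line, assuming
# weights get whatever is left after reserving inference memory.
total_vram_mb = 8191       # "Total VRAM 8191 MB" from the log
inference_mb = 1024        # 'inference_memory': 1024.0 from the log
weights_mb = total_vram_mb - inference_mb  # 7167 MB

print(f"weights: {weights_mb} MB ({weights_mb / total_vram_mb:.2%})")
print(f"compute: {inference_mb} MB ({inference_mb / total_vram_mb:.2%})")
# -> weights: 7167 MB (87.50%), compute: 1024 MB (12.50%)
```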
This is Amazing!!!