[bug]: VRAM usage statistics are wrong #12

@lstein

Description

Is there an existing issue for this problem?

  • I have searched the existing issues

Install method

Invoke's Launcher

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

No response

GPU VRAM

No response

Version number

v6.10.0

Browser

No response

System Information

No response

What happened

I ran several generations and looked at the server log output. At the end of each generation, the system prints performance information for each executed node, including how long the node took and how much VRAM it used.

The first time I ran a generation, I got a display like this:

                          Node   Calls   Seconds  VRAM Used
                        string       1    0.003s     0.000G 
                       integer       1    0.001s     0.000G 
                 core_metadata       1    0.000s     0.000G
          z_image_model_loader       1    0.001s     0.000G
          z_image_text_encoder       1   12.996s     4.594G
                       collect       1    0.001s     4.288G 
               z_image_denoise       1   21.837s    10.370G 
                   z_image_l2i       1    0.850s    12.190G 

This looks correct. The string, integer, core_metadata, and z_image_model_loader nodes do not use the GPU, so I expect them to report zero VRAM usage.

However, the second and subsequent times I ran a generation, I got displays like this:

                          Node   Calls   Seconds  VRAM Used
                        string       1    0.000s     9.920G 
                       integer       1    0.000s     9.920G 
                 core_metadata       1    0.000s     9.920G 
          z_image_model_loader       1    0.000s     9.920G 
          z_image_text_encoder       1    0.000s     9.920G 
                       collect       1    0.000s     9.920G 
               z_image_denoise       1    6.219s    10.389G 
                   z_image_l2i       1    0.499s    12.047G                                                                         

This is not right. It looks like the routine that calculates per-node VRAM usage is carrying over values from the previous run.
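
For what it's worth, one plausible mechanism (an assumption on my part; I have not checked the actual statistics code): if the per-node figure is read from torch.cuda.max_memory_allocated() and the peak counter is only reset once at startup rather than before each node, then every lightweight node in a later run would simply inherit the peak left behind by the previous run's denoise and decode steps. A minimal sketch of the suspected pattern and the corrected one, using a hypothetical run_node helper in place of real node execution:

import torch

def run_node(node):
    # Hypothetical stand-in for executing one graph node on the GPU.
    node()

def report_without_reset(nodes):
    # Suspected buggy pattern: the peak counter is never cleared, so a
    # CPU-only node executed after a heavy run still appears to use ~10G.
    for name, node in nodes:
        run_node(node)
        peak_gb = torch.cuda.max_memory_allocated() / (1024 ** 3)
        print(f"{name:>30} {peak_gb:10.3f}G")

def report_with_reset(nodes):
    # Expected pattern: clear the peak counter before each node so the
    # figure reflects only allocations made while that node ran.
    for name, node in nodes:
        torch.cuda.reset_peak_memory_stats()
        run_node(node)
        peak_gb = torch.cuda.max_memory_allocated() / (1024 ** 3)
        print(f"{name:>30} {peak_gb:10.3f}G")

If the implementation instead computes a delta against a baseline captured once per session, the same symptom would appear; either way, resetting (or re-capturing the baseline) per node would restore the first-run behavior.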

What you expected to happen

See above: nodes that do not use the GPU should report zero VRAM usage on every run, just as they do in the first run.

How to reproduce the problem

Run two or more generations and compare the per-node VRAM statistics in the log output.

Additional context

No response

Discord username

No response

Metadata

Labels

bug (Something isn't working)
