Is it normal for an M1 Mac Mini to max out CPU during enquiry #308
Replies: 2 comments
-
Just realised that the GPU also participates. I don't know whether that's because I changed the model from a chat type to a generic type (are those the right terms?) or whether it has something to do with the query itself.
-
In my experience running localGPT on a Linux machine with an NVIDIA GPU, there is a balance of CPU/GPU usage during ingestion. Usually it's a lot of CPU usage at first (presumably while parsing the documents) and then the GPU kicks in (presumably while computing the embeddings). By "presumably" I mean that I haven't profiled or analyzed the code; that's just what I assume is happening, so take it with a grain of salt. But in my experience it's normal for both the CPU and GPU to be used. When querying the database, however, it's almost all GPU usage.
-
I am an absolute newbie, so please excuse my ignorance.
When I run ingest.py, the GPU maxes out throughout the process, so I think my MPS setup is working. But when I run localGPT.py, only the CPU seems to be working hard, not the GPU. Is that normal?
Thank you very much.
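For what it's worth, PyTorch-based projects like this one usually pick the compute device with a CUDA → MPS → CPU fallback chain; if that chain never reaches `mps`, the M1 GPU sits idle while the CPU does all the work. The sketch below shows that fallback logic as a plain function (the flag names mirror `torch.cuda.is_available()` and `torch.backends.mps.is_available()`, but the helper itself is hypothetical and torch-free so the logic is easy to read). This is just an illustration of the common pattern, not the actual localGPT code.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the device string a typical PyTorch app would choose.

    Mirrors the usual fallback chain:
      CUDA GPU -> Apple Silicon GPU (MPS) -> CPU
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Apple Silicon (M1/M2) GPU via Metal Performance Shaders
    return "cpu"


# On an M1 Mac Mini with a working MPS build of PyTorch, the real checks
# would be torch.cuda.is_available() (False) and
# torch.backends.mps.is_available() (True):
print(pick_device(False, True))  # → mps
```

If a script hard-codes `"cuda"` as its device and falls back to CPU when CUDA is missing, it will never use MPS, which would explain GPU activity in one script (ingest.py) but not the other.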