Replies: 2 comments
-
Try running the same commands under PowerShell; I think it worked there but not in CMD. Then remember to use: `python run_localGPT.py --device_type cuda`
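One pitfall when switching shells: CMD's `set NAME=value` syntax is not valid in PowerShell, so the build flags have to be re-exported with `$env:` before reinstalling. A sketch of the PowerShell equivalents (assuming `llama-cpp-python` is the package being rebuilt, as elsewhere in this thread):

```powershell
# PowerShell equivalents of CMD's `set` lines
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
$env:FORCE_CMAKE = "1"
pip install llama-cpp-python --no-cache-dir --force-reinstall
python run_localGPT.py --device_type cuda
```

Without `--force-reinstall --no-cache-dir`, pip may reuse a previously built CPU-only wheel even after the flags change.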
-
I'll try!
-
Using Anaconda in a virtual environment (Win11), I installed llama-cpp-python with the following procedure, with no errors.
The NVIDIA drivers are detected in the environment (`nvcc --version`), but when running locally, it runs on the CPU. Why?
```
(D:\LLM\LocalGPT\localGPT) D:\LLM\LocalGPT\localGPT>set CMAKE_ARGS=-DLLAMA_CUBLAS=on

(D:\LLM\LocalGPT\localGPT) D:\LLM\LocalGPT\localGPT>set FORCE_CMAKE=1

(D:\LLM\LocalGPT\localGPT) D:\LLM\LocalGPT\localGPT>pip install llama-cpp-python==0.1.83 --no-cache-dir
Successfully installed diskcache-5.6.3 llama-cpp-python-0.1.83

(D:\LLM\LocalGPT\localGPT) D:\LLM\LocalGPT\localGPT>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:09:35_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0

(D:\LLM\LocalGPT\localGPT) D:\LLM\LocalGPT\localGPT>python run_localGPT.py
2023-09-24 18:54:06,090 - INFO - run_localGPT.py:221 - Running on: cpu
```
Since the drivers were already visible, I did not set the PATH to them before installing llama-cpp-python:
REM set PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin;%PATH%
What did I do wrong, and how can I correct it? The entire installation completed with zero errors.
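One possible explanation (an assumption, not confirmed by this thread): `run_localGPT.py` was invoked without `--device_type`, and the script picks its default device from `torch.cuda.is_available()`. A CPU-only PyTorch wheel would then log `Running on: cpu` even with the CUDA toolkit installed and a correct llama-cpp build. A minimal sketch of that fallback logic, with `pick_device` as a hypothetical stand-in for the script's actual selection code:

```python
# Hypothetical sketch of default-device selection (not localGPT's actual code).
# If torch is a CPU-only build, torch.cuda.is_available() returns False and
# the script falls back to "cpu" even though nvcc sees the CUDA toolkit.
def pick_device(requested=None, cuda_available=False):
    if requested:                      # an explicit --device_type wins
        return requested
    return "cuda" if cuda_available else "cpu"

print(pick_device(None, False))    # no flag + CPU-only torch -> cpu
print(pick_device("cuda", True))   # explicit flag + working CUDA -> cuda
```

To check which case applies, `python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"` prints `False None` on a CPU-only wheel; if so, reinstalling a CUDA-enabled PyTorch build is the likely fix.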