Replies: 2 comments
-
I have not been able to replicate this on my end, but it seems to be a common issue. Can you share your PC configuration? I can suggest two possible things to test.
Let me know if any of this helps.
-
Same issue here, running on WSL 2. I followed every step, but ingest.py gave me this error, and running it again ended with the same error. I'm on a 3700X with a Vega 64 and 16 GB of RAM; I tried both the CPU and GPU versions with no success.
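One thing worth checking on WSL 2 specifically: by default the WSL 2 VM is capped at roughly half of the host's RAM, so a 16 GB machine may leave only about 8 GB for the model. A possible thing to test (the exact values below are illustrative, not a verified fix) is raising the cap in `.wslconfig` on the Windows side:

```ini
; %UserProfile%\.wslconfig (create the file if it does not exist)
[wsl2]
memory=14GB   ; upper limit for the WSL 2 VM; leave some RAM for Windows
swap=16GB     ; extra swap headroom for loading large model shards
```

After saving, run `wsl --shutdown` from PowerShell and reopen the distro so the new limits take effect.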
-
All the steps work fine, but at this last stage:
python3 run_localGPT.py
it always gets "Killed". It doesn't matter whether I use the GPU or CPU version. Any advice on this?
Thanks
--
Running on: cuda
load INSTRUCTOR_Transformer
max_seq_length 512
Using embedded DuckDB with persistence: data will be stored in: /home/achillez/devel/localGPT/DB
Downloading tokenizer.model: 100%|██████████████████████████████████████████████████| 500k/500k [00:00<00:00, 27.8MB/s]
Downloading (…)cial_tokens_map.json: 100%|████████████████████████████████████████████| 411/411 [00:00<00:00, 4.74MB/s]
Downloading (…)okenizer_config.json: 100%|████████████████████████████████████████████| 715/715 [00:00<00:00, 9.55MB/s]
Downloading (…)lve/main/config.json: 100%|████████████████████████████████████████████| 582/582 [00:00<00:00, 6.48MB/s]
Downloading (…)model.bin.index.json: 100%|█████████████████████████████████████████| 26.8k/26.8k [00:00<00:00, 174MB/s]
Downloading (…)l-00001-of-00002.bin: 100%|████████████████████████████████████████| 9.98G/9.98G [01:44<00:00, 95.7MB/s]
Downloading (…)l-00002-of-00002.bin: 100%|████████████████████████████████████████| 3.50G/3.50G [00:36<00:00, 96.8MB/s]
Downloading shards: 100%|████████████████████████████████████████████████████████████████| 2/2 [02:20<00:00, 70.32s/it]
Killed
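An abrupt "Killed" with no Python traceback is the classic signature of the Linux OOM killer: the two shards downloaded above total roughly 13.5 GB, and loading them on a 16 GB machine (or a default WSL 2 VM with half of that) can exhaust RAM. A quick sanity check before launching is to read `MemAvailable` from `/proc/meminfo`; this is a sketch, and the 14 GiB threshold is a rough assumption based on the shard sizes above, not a measured requirement:

```python
# Rough pre-flight check: is there plausibly enough free memory to load
# a model whose shards total ~13.5 GB? (Linux only; reads /proc/meminfo.)

def available_gib(meminfo_path="/proc/meminfo"):
    """Return MemAvailable in GiB, or None if the field is missing."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                # /proc/meminfo reports the value in kB
                return int(line.split()[1]) / (1024 ** 2)
    return None

if __name__ == "__main__":
    avail = available_gib()
    if avail is None:
        print("Could not read MemAvailable from /proc/meminfo")
    elif avail < 14:  # assumed threshold for a ~13.5 GB model load
        print(f"Only {avail:.1f} GiB available; the OOM killer is likely")
    else:
        print(f"{avail:.1f} GiB available; memory is probably not the issue")
```

If memory does come up short, adding swap or switching to a smaller or quantized model is the usual workaround; `dmesg` should also show a "Killed process" entry when the OOM killer fired, which confirms the diagnosis.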