Replies: 6 comments
Same for ~/.ollama/ or /path/to/.ollama 😁
Thanks for the feature request! Just to clarify -- are you trying to avoid re-downloading models that you already have in your llama.cpp directory, or are you looking to use models that aren't in LlamaBarn's catalog?
I want to avoid re-downloading models, but I'd also like separate access so I can symlink a name to the models for another tool, like Ollama, to use easily. Some models only work with llama.cpp, though, such as the LFM2*.gguf models; I might be doing something wrong in my Ollama setup, but they work fine with llama.cpp. I have my own tool in the making named ggufy that symlinks all of my llama.cpp and Ollama models, auto-wraps the cloud models under Ollama's storage, and tries to do all of this. It would be cool to see a better version of this from the ggml-org team, though.
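The symlinking approach described above could be sketched roughly like this (a minimal illustration only; the function name, paths, and flat naming scheme are assumptions for the example, not ggufy's actual implementation):

```shell
# Hypothetical sketch: link every .gguf found under a source directory
# into one shared directory that other tools can be pointed at, so the
# model files themselves are never duplicated.
link_ggufs() {
  src="$1"   # e.g. a llama.cpp models directory
  dest="$2"  # shared directory of symlinks
  mkdir -p "$dest"
  find "$src" -name '*.gguf' | while read -r f; do
    # -s: symbolic link; -f: replace an existing link with the same name
    ln -sf "$f" "$dest/$(basename "$f")"
  done
}
```

A real tool would also need to handle name collisions and Ollama's blob-store layout, which this sketch ignores.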
@erusev It would be great to be able to use any .gguf model, so either have a "+" button somewhere where we can paste an HF repo URL, or be able to point to a local folder to make the model available.
@andrecardozo-work yes, this is a must-have. I have my own curated model files on my disk, and I don't want to re-download or symlink them. Maybe something like a "default models folder" setting would solve it?
I'm moving this to Discussions to gather feedback while we keep the issue tracker focused on our current priorities. |
Current Setup ✓
- .llamabarn directory location: /path/to/.llamabarn (on the external drive)
- ~/.llamabarn symlink: points to /path/to/.llamabarn (just a pointer in your home directory)
- Model symlinks: all stored in /path/to/.llamabarn/ and point to models in /path/to/llama.cpp/
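The first two points of the setup above can be reproduced with a couple of commands (a sketch only; the paths are placeholders exactly as in the post, and the helper name is made up for illustration):

```shell
# Sketch of the setup described above: the real .llamabarn directory
# lives on an external drive, and ~/.llamabarn is only a symlink to it.
relocate_llamabarn() {
  real_dir="$1"   # e.g. /path/to/.llamabarn on the external drive
  home_link="$2"  # e.g. $HOME/.llamabarn
  mkdir -p "$real_dir"
  # -n: if home_link is already a symlink to a directory, replace the
  # link itself instead of creating a new link inside the target
  ln -sfn "$real_dir" "$home_link"
}
```

Apps that resolve ~/.llamabarn then transparently read and write on the external drive, while the individual model symlinks inside it can still point back into /path/to/llama.cpp/.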