Manage completion models
Completion model configurations are stored and can be reused. For simplicity, the term "completion models" is used as a synonym for completion model configurations. Completion models can describe local models (run by llama-vscode) or externally run servers. They have the following properties: name, local start command (a llama-server command to start a server with this model locally), ai model (the model name as required by the provider), endpoint, and is key required.
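The exact schema is not documented here, but below is a minimal sketch of such a configuration, in the spirit of the exported .json format (all field names and values are illustrative assumptions, not the exact schema used by llama-vscode):

```json
{
  "name": "Qwen2.5 Coder 1.5B (local)",
  "localStartCommand": "llama-server -m ~/models/qwen2.5-coder-1.5b-q8_0.gguf --port 8012",
  "aiModel": "",
  "endpoint": "http://127.0.0.1:8012",
  "isKeyRequired": false
}
```

For an externally run server, the local start command would be left empty, the endpoint would point at the provider, the ai model would carry the provider's model name, and is key required would typically be true.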
Completion model configurations can be added, deleted, viewed, selected, deselected, added from Hugging Face, exported, and imported.
Select "Completion models..." from the llama-vscode menu.
Add models
Enter the requested properties.
For local models, the name, local start command, and endpoint are required.
For external servers, the name and endpoint are required.
Use models that support FIM (Fill-In-the-Middle), for example Qwen2.5-Coder-1.5B-Q8_0-GGUF.
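A minimal sketch of a local start command, assuming llama.cpp's llama-server is installed and the GGUF file has already been downloaded (the model path and port are placeholders):

```sh
# Serve a FIM-capable model locally for llama-vscode (example values)
llama-server -m ~/models/qwen2.5-coder-1.5b-q8_0.gguf --port 8012
```

The matching endpoint property would then be http://127.0.0.1:8012.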
Delete models
Select the model you want to delete from the list and delete it.
View
Select a model from the list to view all of its details.
Select
Select a model from the list to make it the active one. If the model is local (it has a command in local start command), a llama.cpp server with this model will be started. Only one completion model can be selected at a time.
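To verify that the started server is reachable, you can query llama-server's health endpoint (the port is whatever your local start command uses; 8012 is an example):

```sh
# llama-server answers with a small JSON status object when it is up
curl http://127.0.0.1:8012/health
```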
Deselect
Deselect the currently selected model. If the model is local, the llama.cpp server will be stopped.
Add model from huggingface
Enter search words to find a model on Hugging Face. If the model is then selected, it will be downloaded automatically (if not already done) and a llama.cpp server will be started with it.
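Independently of this menu, llama.cpp's llama-server can itself download a model from Hugging Face via its -hf flag; whether llama-vscode uses this mechanism internally is an assumption, and the repository name below is just an example:

```sh
# Download (and cache) the model from Hugging Face, then serve it (example values)
llama-server -hf ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF --port 8012
```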
Export
A model can be exported as a .json file. This file can be shared with other users, modified if needed, and imported again. Select a model to export it.
Import
A model can be imported from a .json file. Select a file to import it.