The VectorCode 0.7.16 update introduced a new feature: the prompt library builder. It uses the prompt library
feature in codecompanion.nvim to help you quickly build a RAG-enabled chatbot, letting you talk to an LLM that has access to the files in a specific local directory.
With some configuration, the VectorCode extension will create an embedding collection in the VectorCode database. It'll also set up some basic (and generic) prompts that tell the LLM to call the `@{vectorcode_query}` tool with appropriate parameters and use the tool results as context to answer your questions with citations.
This is particularly useful if you're working with something non-standard. For example, you may be running a custom build of Neovim (built from the source of a particular commit), and an ordinary web search may not give you accurate information: Neovim iterates very fast, and it's likely that neither the official website nor the latest master branch contains the same code/documentation that you're running.
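Conceptually, the tool call follows the usual RAG pattern: retrieve the relevant chunks, number them, and hand them to the LLM as citable context. Here is a minimal sketch of that pattern in Python (the function and the retriever are hypothetical illustrations, not VectorCode's actual implementation):

```python
def build_cited_prompt(question, retrieve):
    """Retrieve chunks for `question` and format them as numbered,
    citable context for the LLM.

    `retrieve` stands in for whatever the query tool returns: a list of
    dicts with "path" and "text" keys (a hypothetical shape for this sketch).
    """
    chunks = retrieve(question)
    context = "\n".join(
        f"[{i + 1}] ({c['path']}) {c['text']}" for i, c in enumerate(chunks)
    )
    return (
        "Answer using ONLY the context below; cite sources as [n].\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

The numbered `[n]` markers are what let the model produce citations that you can trace back to a specific file.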
It might be worth noting that it's normal for a directory to take longer to vectorise the first time you use VectorCode on it, because a lot of embeddings need to be computed. If the directory is updated later, re-vectorisation is usually faster because VectorCode compares file hashes and skips unchanged files. There will be notifications (and LSP progress, if you're using the LSP backend) when the vectorisation is done.
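The skip-unchanged-files behaviour boils down to comparing content hashes between runs. A minimal sketch of the idea in Python (my own illustration of the technique, not VectorCode's actual code):

```python
import hashlib
from pathlib import Path


def file_hash(path: Path) -> str:
    """Content hash used to detect whether a file changed."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def files_to_revectorise(paths, seen_hashes):
    """Return only the files whose content changed since the last run.

    `seen_hashes` maps file path -> hash from the previous run and is
    updated in place, mimicking how an indexer would persist its state.
    """
    changed = []
    for p in paths:
        h = file_hash(p)
        if seen_hashes.get(str(p)) != h:
            seen_hashes[str(p)] = h
            changed.append(p)
    return changed
```

On a fresh directory everything is "changed" (hence the slow first run); on subsequent runs only edited files need new embeddings.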
Showcase
Out of the box, VectorCode provides a preset called Neovim Tutor that vectorises the Neovim Lua runtime files and the help documentation. You can select it from the :CodeCompanionActions menu and ask questions. However, for demonstration purposes, I'll show you how to make a new vectorisation-enabled prompt so that you can define your own RAG chatbot. In this demo, I'll use the Kitty terminal emulator as an example.
First, let's declare the files that we want to add to the database:
```lua
require("codecompanion").setup({
  extensions = {
    vectorcode = {
      ---@type VectorCode.CodeCompanion.ExtensionOpts
      opts = {
        prompt_library = {
          ["Kitty Assistant"] = {
            -- This is where the kitty documentation lives on my system
            project_root = "/usr/share/doc/kitty/",
            -- Matches all *.txt files under the project root.
            -- You can also use absolute paths here.
            file_patterns = { "**/*.txt" },
          },
        },
      },
    },
  },
})
```
Then, you'll see a new action called Kitty Assistant in the action menu when you type :CodeCompanionActions:
Press Enter, and you'll see a chat buffer with some default prompts already filled in:
At the same time, a background job will start to add the files to the database.
Once it's done, you can ask your questions as usual and the LLM will try to call the @{vectorcode_query} tool and provide truth-grounded answers:
Extra Tricks
If you're using lazy.nvim, you can use it to create an AI-enabled helper to learn how to work with CodeCompanion.nvim itself:
```lua
{
  "olimorris/codecompanion.nvim",
  opts = function(plugin, opts)
    return {
      extensions = {
        vectorcode = {
          ---@type VectorCode.CodeCompanion.ExtensionOpts
          opts = {
            prompt_library = {
              ["CodeCompanion Assistant"] = {
                -- This points to wherever `codecompanion` lives
                -- in your plugin directory
                project_root = plugin.dir,
                -- Matches all *.lua and *.md files under the project root.
                file_patterns = { "lua/codecompanion/**.lua", "doc/**/*.md" },
              },
            },
          },
        },
      },
    }
  end,
}
```