igardev commented on May 4, 2025

Use Ctrl+Shift+; or select "Ask with AI with project context" from the llama.vscode menu, enter the question, and press Enter.
The extension searches for chunks of text that are close to the query and sends the top 5 to the AI together with the query.

The chunks are created when the project is opened; they are kept in memory and are lost when VS Code is closed.
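For illustration, a minimal sketch of what such in-memory chunking could look like; the chunk shape, size, and overlap below are illustrative, not the extension's actual values:

```typescript
// Hypothetical sketch: split a file's text into fixed-size, overlapping
// chunks kept in an in-memory array (rebuilt on the next project open).
interface Chunk {
  uri: string;    // source file
  offset: number; // start position within the file
  text: string;   // chunk content
}

const CHUNK_SIZE = 1024;   // characters per chunk (illustrative)
const CHUNK_OVERLAP = 128; // overlap between consecutive chunks (illustrative)

function chunkFile(uri: string, content: string): Chunk[] {
  const chunks: Chunk[] = [];
  for (let offset = 0; offset < content.length; offset += CHUNK_SIZE - CHUNK_OVERLAP) {
    chunks.push({ uri, offset, text: content.slice(offset, offset + CHUNK_SIZE) });
  }
  return chunks;
}

// Held in memory only, never persisted to disk.
const index: Chunk[] = [];
```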
The chunks are filtered in two steps: first with BM25 (the keywords are extracted via a REST request to the chat model), and then the result is re-ranked by comparing the embeddings of the chunks with the embedding of the query.
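A sketch of that two-step filter, with placeholder helpers standing in for the extension's real keyword scoring and embedding calls (the `Chunk` type is re-declared here so the sketch stands alone; the candidate-pool size is illustrative):

```typescript
type Chunk = { uri: string; offset: number; text: string };

declare function bm25Score(keywords: string[], text: string): number; // stage-1 scorer (placeholder)
declare function embed(text: string): Promise<number[]>;              // embeddings server call (placeholder)

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

async function topChunks(query: string, keywords: string[], index: Chunk[]): Promise<Chunk[]> {
  // Step 1: BM25 over the extracted keywords narrows the in-memory index
  // down to a small candidate pool.
  const candidates = index
    .map(chunk => ({ chunk, score: bm25Score(keywords, chunk.text) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 50) // pool size is illustrative
    .map(({ chunk }) => chunk);

  // Step 2: embedding similarity against the query picks the final top 5.
  const queryVec = await embed(query);
  const reranked = await Promise.all(
    candidates.map(async chunk => ({ chunk, sim: cosine(queryVec, await embed(chunk.text)) }))
  );
  return reranked.sort((a, b) => b.sim - a.sim).slice(0, 5).map(({ chunk }) => chunk);
}
```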

An embeddings server (property endpoint_embeddings, default http://127.0.0.1:8010) and a chat server (property endpoint_chat, default http://127.0.0.1:8011) must be running to use this functionality. Tested with all-MiniLM-L6-v2, a very small embedding model.
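For reference, one way the `embed` placeholder above could call the embeddings server, assuming it exposes an OpenAI-compatible /v1/embeddings endpoint (as llama-server does when run with embeddings enabled) and an OpenAI-style response shape:

```typescript
// Minimal sketch, assuming an OpenAI-compatible /v1/embeddings endpoint at
// the default endpoint_embeddings address. Requires a runtime with global
// fetch (Node 18+ / recent VS Code extension hosts).
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://127.0.0.1:8010/v1/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: text }),
  });
  if (!res.ok) throw new Error(`embeddings server returned HTTP ${res.status}`);
  const body = await res.json() as { data: { embedding: number[] }[] };
  return body.data[0].embedding; // OpenAI-style response shape
}
```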

The rag_* properties can be configured to fine-tune the RAG search.
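A hypothetical sketch of how such settings could be read inside the extension; the configuration section name and property keys below are illustrative, not the extension's actual keys:

```typescript
import * as vscode from "vscode";

// Illustrative rag_* settings lookup using the standard VS Code
// configuration API; defaults shown are placeholders.
const cfg = vscode.workspace.getConfiguration("llama-vscode");
const ragTopChunks = cfg.get<number>("rag_max_context_chunks", 5);
const ragChunkSize = cfg.get<number>("rag_chunk_size", 1024);
```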

A follow-up pull request for the webui will improve the user experience (the request will be sent immediately, with no need to click the Send button in the webui).

ggerganov merged commit d375aed into master on May 9, 2025
ggerganov deleted the chat-with-project-context branch on May 9, 2025 at 09:35