Labels: enhancement (New feature or request), idea (New ideas that are not well-defined)
Description
Is your feature request related to a problem? Please describe.
lgtm-ai is currently a CLI tool. Its stateless, local nature makes it easy and fast to use, but also limits its functionality. Specifically:
- Advanced strategies for context retrieval (beyond the PR files) are not possible or hard to implement (see Improve code context given to the LLM, #62).
- Real-time or automated webhook-driven interactivity (e.g., updating context on new commits, responding to comments; see Add command to explain piece of code in a PR, #48) cannot be supported.
- Near-real-time feedback on reviews and comments (e.g., 👍 / 👎 reactions) cannot be collected.
Describe the solution you'd like
Introduce an optional backend service for lgtm-ai, while keeping the CLI as the frontend. The backend would provide:
- Project onboarding – allow repositories to be registered with the backend.
- Vector DB indexing – index repository content (code, docs, PRs/issues) for retrieval.
- Webhook support – listen for events like commits, PR merges, and comments, and update the index or trigger actions.
- Context retrieval endpoint – given a PR diff, return the top-K relevant context chunks from the vector DB using RAG (see the sketch after this list).
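A minimal sketch of what the context retrieval endpoint could look like. Everything here is an assumption for illustration: lgtm-ai has no backend service today, and the FastAPI framework, endpoint path, request/response models, and `VectorIndex` stand-in are all hypothetical.

```python
# Hypothetical backend sketch; not part of lgtm-ai today.
# FastAPI, the route, and all models below are assumptions.
from dataclasses import dataclass

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class ContextRequest(BaseModel):
    diff: str        # raw PR diff to match against the index
    top_k: int = 5   # number of context chunks to return


class ContextChunk(BaseModel):
    path: str        # file the chunk was indexed from
    content: str
    score: float     # similarity score from the vector search


@dataclass
class _Hit:
    path: str
    content: str
    score: float


class VectorIndex:
    """Stand-in for a real vector DB (e.g. pgvector, Qdrant)."""

    def search(self, project_id: str, query: str, limit: int) -> list[_Hit]:
        # A real implementation would embed `query` and run a
        # similarity search over the project's index; this stub
        # returns nothing.
        return []


index = VectorIndex()


@app.post("/projects/{project_id}/context")
def retrieve_context(project_id: str, req: ContextRequest) -> list[ContextChunk]:
    # Embed the diff, query the vector DB, return the top-K chunks.
    hits = index.search(project_id, req.diff, limit=req.top_k)
    return [ContextChunk(path=h.path, content=h.content, score=h.score) for h in hits]
```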
The CLI could then optionally query this backend when configured, while still working fully offline in local-only mode.
Possible lgtm config

```toml
[context]
use_backend_service = true
project_id = "xx"
```

- `use_backend_service = false` (default) – CLI works as today (local, PR files only).
- `use_backend_service = true` – CLI fetches enriched context from the backend for RAG-powered reviews (a possible client-side lookup is sketched below).
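On the CLI side, the enriched-context path could be a single HTTP call when the flag is enabled. Again a sketch under assumptions: the `backend_url` parameter, endpoint path, and payload shape mirror the hypothetical backend above and are not an existing lgtm-ai API.

```python
# Hypothetical CLI-side lookup, gated on use_backend_service.
# The endpoint and payload match the backend sketch above.
import httpx


def fetch_context(backend_url: str, project_id: str, diff: str, top_k: int = 5) -> list[dict]:
    resp = httpx.post(
        f"{backend_url}/projects/{project_id}/context",
        json={"diff": diff, "top_k": top_k},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

If `use_backend_service` is false (or the request fails), the review would simply proceed with the PR files alone, preserving today's local-only behavior.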
Describe alternatives you've considered
#62