
Support optional backend service for RAG-powered context retrieval #124

@scastlara

Description

Is your feature request related to a problem? Please describe.
lgtm-ai is currently a CLI tool. Its stateless, local nature makes it easy and fast to use, but it also limits its functionality: reviews can only draw on the files in the PR itself, with no way to pull in broader repository context such as related code, docs, or past PRs/issues.

Describe the solution you'd like
Introduce an optional backend service for lgtm-ai, while keeping the CLI as the frontend. The backend would provide:

  1. Project onboarding – allow repositories to be registered with the backend.
  2. Vector DB indexing – index repository content (code, docs, PRs/issues) for retrieval.
  3. Webhook support – listen for events like commits, PR merges, comments, and update the index or trigger actions.
  4. Context retrieval endpoint – given a PR diff, return the top-K relevant context chunks from the vector DB using RAG.

The CLI could then optionally query this backend when configured, while still working fully offline in local-only mode.
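
To make item 4 concrete, here is a minimal sketch of what the retrieval endpoint could look like, assuming a FastAPI backend. The route, models, and field names are illustrative assumptions, not an existing lgtm-ai API:

```python
# Hypothetical sketch of the context retrieval endpoint (item 4 above).
# Route and model names are assumptions, not an existing lgtm-ai API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ContextRequest(BaseModel):
    diff: str       # raw PR diff sent by the CLI
    top_k: int = 5  # number of context chunks to return

class ContextChunk(BaseModel):
    source: str     # file, doc, or PR/issue the chunk came from
    content: str    # retrieved text
    score: float    # similarity score from the vector DB

@app.post("/projects/{project_id}/context")
def retrieve_context(project_id: str, req: ContextRequest) -> list[ContextChunk]:
    # A real implementation would (1) embed the diff, (2) query the
    # project's vector index, and (3) return the top-K nearest chunks.
    return []
```

The CLI would inject the returned chunks into the review prompt alongside the PR diff itself.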

Possible lgtm config:

```toml
[context]
use_backend_service = true
project_id = "xx"
```

- `use_backend_service = false` (default) → the CLI works as today (local, PR files only).
- `use_backend_service = true` → the CLI fetches enriched context from the backend for RAG-powered reviews.
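
On the CLI side, the setting would gate a single extra HTTP call. A rough sketch, reusing the hypothetical endpoint and response shape from above:

```python
# Rough sketch of CLI-side behavior; the endpoint URL and response shape
# are assumptions matching the sketch above, not current lgtm-ai code.
import httpx

def gather_context(diff: str, config: dict) -> list[str]:
    if not config.get("use_backend_service", False):
        # Default: local-only mode, exactly as today (PR files only).
        return [diff]
    resp = httpx.post(
        f"https://lgtm-backend.example/projects/{config['project_id']}/context",
        json={"diff": diff, "top_k": 5},
        timeout=30.0,
    )
    resp.raise_for_status()
    # Prepend the diff so the review prompt always contains the PR itself.
    return [diff] + [chunk["content"] for chunk in resp.json()]
```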

Describe alternatives you've considered
#62
