A minimalistic approach to Retrieval-Augmented Generation (RAG) that prevents hallucination by ensuring all generated content is explicitly derived from source documents.
Traditional RAG systems retrieve relevant documents and then allow an LLM to freely generate responses based on that context. This can lead to hallucinations where the model invents facts not present in the source material.
Verbatim RAG solves this by extracting verbatim text spans from documents and composing responses entirely from these exact passages, with direct citations linking back to sources.
For extraction, we can use LLM-based span extractors or fine-tuned encoder-based models such as ModernBERT. We've trained our own ModernBERT model for this purpose on the RAGBench dataset; it is available on HuggingFace.
With this approach, the whole RAG pipeline can run without any use of LLMs, and with SPLADE embeddings it can run entirely on CPU, making it lightweight and efficient.
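As a rough, library-independent illustration of the idea (this toy sketch is not the verbatim_rag API), an answer is assembled only from exact source sentences, each carrying a citation back to its document:
# Toy sketch of the verbatim principle, not the library API
source_sentences = {
    "doc_1": "Verbatim RAG composes responses from exact passages of the source documents.",
    "doc_2": "Each passage is cited, so every claim can be traced back to its source.",
}
# Pretend an LLM-based or ModernBERT extractor flagged these sentences as relevant
relevant_ids = ["doc_1", "doc_2"]
# The answer contains only exact source text plus citations; nothing is generated freely
answer = "\n".join(f"- {source_sentences[i]} [{i}]" for i in relevant_ids)
print(answer)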
# Install the package
pip install verbatim-rag
from verbatim_rag import VerbatimIndex, VerbatimRAG
from verbatim_rag.ingestion import DocumentProcessor
from verbatim_rag.vector_stores import LocalMilvusStore
from verbatim_rag.embedding_providers import SpladeProvider
# Process documents with intelligent chunking
processor = DocumentProcessor()
# Process PDFs from URLs
document = processor.process_url(
    url="https://aclanthology.org/2025.bionlp-share.8.pdf",
    title="KR Labs at ArchEHR-QA 2025: A Verbatim Approach for Evidence-Based Question Answering",
    metadata={"authors": ["Adam Kovacs", "Paul Schmitt", "Gabor Recski"]}
)
# Create embedding provider and vector store
sparse_provider = SpladeProvider(
    model_name="opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill",
    device="cpu"
)
vector_store = LocalMilvusStore(
    db_path="./index.db",
    collection_name="verbatim_rag",
    enable_dense=False,
    enable_sparse=True,
)
# Create index with providers
index = VerbatimIndex(
    vector_store=vector_store,
    sparse_provider=sparse_provider
)
index.add_documents([document])
# Then query the index
rag = VerbatimRAG(index)
response = rag.query("What is the main contribution of the paper?")
print(response.answer)
Set your OpenAI API key before using the system:
export OPENAI_API_KEY=your_api_key_here
- Document Processing: Documents are processed using docling for format conversion and chonkie for chunking
- Document Indexing: Documents are indexed using vector embeddings (both dense and sparse)
- Template Management: Response templates are created and stored for common question types
- Query Processing:
  - Relevant documents are retrieved
  - Key passages are extracted verbatim using either LLM-based or fine-tuned span extractors
  - Responses are structured using templates
  - Citations link back to source documents
This ensures all responses are grounded in the source material, preventing hallucinations.
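For intuition, here is a minimal sketch of the template-filling step described above (illustrative only, not the library's actual template classes): a question-specific template contains a placeholder that is filled exclusively with the extracted verbatim sentences and their citations.
# Illustrative sketch: filling a drafted template with verbatim spans only
template = (
    "The paper makes the following contribution:\n"
    "{verbatim_spans}\n"
    "Every statement above is quoted verbatim from the cited document."
)
# (text, document id) pairs produced by the span extraction step
extracted_spans = [
    ("We present a lightweight, domain-agnostic verbatim pipeline for "
     "evidence-grounded question answering.", "doc_1"),
]
filled = template.format(
    verbatim_spans="\n".join(f'- "{text}" [{doc_id}]' for text, doc_id in extracted_spans)
)
print(filled)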
- VerbatimRAG (verbatim_rag/core.py): Main orchestrator that coordinates document retrieval, span extraction, and response generation
- VerbatimIndex (verbatim_rag/index.py): Vector-based document indexing and retrieval
- SpanExtractor (verbatim_rag/extractors.py): Abstract interface for extracting relevant text spans from documents
  - LLMSpanExtractor: Uses OpenAI models to identify relevant spans
  - ModelSpanExtractor: Uses fine-tuned BERT-based models for span classification
- DocumentProcessor (verbatim_rag/ingestion/): Docling + Chonkie integration for intelligent document processing
- Document (verbatim_rag/document.py): Core document representation with metadata
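Since SpanExtractor is an abstract interface, extraction strategies can in principle be swapped out. The sketch below is hypothetical: the method name extract_spans, its signature, and the doc.content attribute are assumptions for illustration only; see verbatim_rag/extractors.py for the actual interface.
from verbatim_rag.extractors import SpanExtractor

# Hypothetical custom extractor; method name, signature, and doc.content are assumed
class KeywordSpanExtractor(SpanExtractor):
    def extract_spans(self, question: str, documents: list) -> list[str]:
        # Keep sentences that share at least one word with the question
        question_words = set(question.lower().split())
        spans = []
        for doc in documents:
            for sentence in doc.content.split(". "):
                if question_words & set(sentence.lower().split()):
                    spans.append(sentence)
        return spans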
- Documents are processed and chunked using docling and chonkie
- Documents are indexed using vector embeddings
- User queries retrieve relevant documents
- Span extractors identify verbatim passages that answer the question
- Response templates structure the final answer with citations
- All responses include exact text spans and document references
The package includes a full web interface with React frontend and FastAPI backend:
# Start API server
python api/app.py
# Start React frontend (in another terminal)
cd frontend/
npm install
npm start
We've trained our own encoder model based on ModernBERT for sentence classification. This model is designed to classify text spans as relevant or not, providing a robust alternative to LLM-based extractors.
You can find our model on HuggingFace: KRLabsOrg/verbatim-rag-modern-bert-v1.
You can use it with the index defined above as follows:
from verbatim_rag.core import VerbatimRAG
from verbatim_rag.index import VerbatimIndex
from verbatim_rag.extractors import ModelSpanExtractor
from verbatim_rag.vector_stores import LocalMilvusStore
from verbatim_rag.embedding_providers import SpladeProvider
# Load your trained extractor
extractor = ModelSpanExtractor("KRLabsOrg/verbatim-rag-modern-bert-v1")
# Create embedding provider and vector store
sparse_provider = SpladeProvider(
    model_name="opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill",
    device="cpu"
)
vector_store = LocalMilvusStore(
    db_path="./index.db",
    collection_name="verbatim_rag",
    enable_dense=False,
    enable_sparse=True,
)
# Create index with providers
# (Assuming you have already populated the index)
index = VerbatimIndex(
    vector_store=vector_store,
    sparse_provider=sparse_provider
)
# Create VerbatimRAG system with custom extractor
rag_system = VerbatimRAG(
    index=index,
    extractor=extractor,
    k=5
)
# Query the system
response = rag_system.query("Main findings of the paper?")
print(response.answer)
If you use Verbatim RAG in your research, please cite our paper:
@inproceedings{kovacs-etal-2025-kr,
title = "{KR} Labs at {A}rch{EHR}-{QA} 2025: A Verbatim Approach for Evidence-Based Question Answering",
author = "Kovacs, Adam and
Schmitt, Paul and
Recski, Gabor",
editor = "Soni, Sarvesh and
Demner-Fushman, Dina",
booktitle = "Proceedings of the 24th Workshop on Biomedical Language Processing (Shared Tasks)",
month = aug,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.bionlp-share.8/",
pages = "69--74",
ISBN = "979-8-89176-276-3",
abstract = "We present a lightweight, domain{-}agnostic verbatim pipeline for evidence{-}grounded question answering. Our pipeline operates in two steps: first, a sentence-level extractor flags relevant note sentences using either zero-shot LLM prompts or supervised ModernBERT classifiers. Next, an LLM drafts a question-specific template, which is filled verbatim with sentences from the extraction step. This prevents hallucinations and ensures traceability. In the ArchEHR{-}QA 2025 shared task, our system scored 42.01{\%}, ranking top{-}10 in core metrics and outperforming the organiser{'}s 70B{-}parameter Llama{-}3.3 baseline. We publicly release our code and inference scripts under an MIT license."
}