Enhanced Retrieval-Augmented Generation (RAG) System

Advanced document Q&A with semantic search, reranking, and hybrid retrieval.

Features

  • Hybrid dense/sparse retrieval
  • Semantic chunking
  • Query expansion
  • Reranking
  • Document compression
  • GPU/CPU support
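The README does not spell out how the dense and sparse result lists are merged; a common technique for this kind of hybrid retrieval is reciprocal rank fusion (RRF). The sketch below is illustrative only (the doc IDs and retriever outputs are hypothetical, not the repo's actual code):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Combine ranked lists of doc IDs via Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) over the lists it appears
    in, so documents ranked highly by either retriever rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Hypothetical ranked outputs from two retrievers:
dense_hits = ["d3", "d1", "d2"]    # embedding-similarity order
sparse_hits = ["d1", "d4", "d3"]   # keyword (e.g. BM25) order
fused = reciprocal_rank_fusion([dense_hits, sparse_hits])
# "d1" wins: it ranks well in both lists.
```

RRF needs only rank positions, not raw scores, which makes it robust when the dense and sparse scorers use incomparable scales.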

Quick Start

  1. Clone the repository

    git clone https://github.com/feyzollahi/SimpleRAG.git
    cd SimpleRAG
  2. Install dependencies

    poetry install
  3. Configure environment variables

Create a .env file in the project root with the following contents:

    HF_TOKEN=your_huggingface_token
    MODEL_NAME=google/gemma-2b-it
    EMBEDDING_MODEL_NAME=BAAI/bge-small-en-v1.5
    RERANKER_MODEL_NAME=BAAI/bge-reranker-base
    DOC_DIR=docs
    CACHE_DIR=.cache
    
    • HF_TOKEN is required for gated Hugging Face models such as google/gemma-2b-it; create one in your Hugging Face account settings.
    • Adjust other variables as needed.
  4. Add your documents

    Place .txt, .md, or .pdf files in the docs directory.

  5. Run the app

    streamlit run app.py
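The .env values are typically read into the app at startup. Here is a hypothetical sketch using only the standard library (the function name and structure are illustrative, not the repo's actual code; the defaults mirror the .env example):

```python
import os


def load_config():
    """Read the RAG settings from environment variables,
    falling back to the defaults from the .env example."""
    return {
        "model_name": os.getenv("MODEL_NAME", "google/gemma-2b-it"),
        "embedding_model": os.getenv("EMBEDDING_MODEL_NAME", "BAAI/bge-small-en-v1.5"),
        "reranker_model": os.getenv("RERANKER_MODEL_NAME", "BAAI/bge-reranker-base"),
        "doc_dir": os.getenv("DOC_DIR", "docs"),
        "cache_dir": os.getenv("CACHE_DIR", ".cache"),
    }


cfg = load_config()
```

In practice a package such as python-dotenv is often used to load the .env file into the process environment before calling a reader like this.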


Usage

  • Enter your question in the app UI.
  • Adjust retrieval and generation settings in the sidebar.
  • View sources and context for each answer.

Notes

  • For best results, use clean, well-formatted documents; retrieval quality depends directly on the source text.
  • GPU is recommended for faster inference.
  • All configuration is managed via .env.

License

MIT
