The backend powering a RAG (retrieval-augmented generation) system for querying LangChain documentation
This project provides the backend for LangChainDoc.com, which lets users query the LangChain documentation. It uses:
- LangGraph for orchestrating the retrieval and response generation
- Vector database for storing and retrieving documentation content
- LLMs for generating responses with developer insights
- Semantic Search: Find relevant documentation based on meaning
- Context-Aware Responses: Responses consider multiple documentation sources
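The flow described above boils down to two steps: retrieve relevant documentation, then generate a response from that context. Below is a minimal, dependency-free sketch of that pipeline; the function names, state fields, and keyword-matching "retrieval" are purely illustrative stand-ins, not this project's actual API (the real system wires these steps together with LangGraph, a vector store, and an LLM):

```python
# Illustrative sketch of a retrieve -> generate pipeline.
# All names are hypothetical; real retrieval uses embedding similarity
# against a vector database, and generation calls an LLM.

def retrieve(state: dict) -> dict:
    """Find documentation chunks relevant to the question
    (naive keyword match standing in for vector similarity search)."""
    docs = [d for d in state["corpus"] if state["question"].lower() in d.lower()]
    return {**state, "context": docs}

def generate(state: dict) -> dict:
    """Produce an answer from the retrieved context
    (string formatting standing in for an LLM call)."""
    context = " | ".join(state["context"]) or "no matching docs"
    return {**state, "answer": f"Based on: {context}"}

def run_pipeline(question: str, corpus: list[str]) -> str:
    """Run the nodes in order: a linear graph, retrieve -> generate."""
    state = {"question": question, "corpus": corpus}
    for node in (retrieve, generate):
        state = node(state)
    return state["answer"]
```

Passing state between nodes as a plain dictionary mirrors the graph-of-nodes style that LangGraph formalizes, where each node receives the accumulated state and returns an updated copy.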
This project has been tested with:
- Vector Database: Pinecone
- LLM: OpenAI
The system is structured so that other providers can be swapped in, but implementations for those alternatives would need to be added.
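One common way to keep a backend provider-agnostic is to code against a small interface that each provider implements. The sketch below shows what such a seam could look like; the `VectorStore` protocol, its `search` signature, and the toy in-memory implementation are assumptions for illustration, not the project's actual abstraction:

```python
from typing import Protocol

class VectorStore(Protocol):
    """Hypothetical interface an alternative vector-database backend
    would implement (the project itself has been tested with Pinecone)."""
    def search(self, query: str, k: int) -> list[str]: ...

class InMemoryStore:
    """Toy stand-in showing the shape of an alternative implementation."""
    def __init__(self, docs: list[str]):
        self.docs = docs

    def search(self, query: str, k: int) -> list[str]:
        # Naive substring match instead of real embedding similarity.
        hits = [d for d in self.docs if query.lower() in d.lower()]
        return hits[:k]
```

With this pattern, adding support for another vector database means writing one new class that satisfies the protocol, leaving the retrieval and generation code untouched.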
Before you begin, ensure you have API keys for your vector database and LLM provider (Pinecone and OpenAI in the tested configuration).
- Copy `.env.example` to `.env`:

  ```bash
  cp .env.example .env
  ```

- Add your API keys and configuration to `.env`.

- Install the LangGraph CLI:

  ```bash
  pip install --upgrade "langgraph-cli[inmem]"
  ```

- Launch the LangGraph server by starting the API server locally:

  ```bash
  langgraph dev
  ```
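After copying the template, your `.env` will contain entries along these lines. The variable names below are assumptions for illustration; the authoritative list is whatever `.env.example` in the repository defines:

```bash
# Hypothetical variable names -- check .env.example for the real ones
PINECONE_API_KEY=your-pinecone-key
OPENAI_API_KEY=your-openai-key
```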
This backend system works with the LangChainDoc Client to provide a complete user experience.
This project is maintained by Luc Ebert, a LangChain developer.
Contributions are welcome! Please feel free to submit a Pull Request.
For questions and support, please open an issue in the repository.