AI in Education (EdTech) focused on syllabus-bound, hallucination-free answers with adaptive study modes.
Acadrix is a RAG-powered academic assistant that lets students upload their study materials and ask questions — getting precise answers grounded strictly in their documents, with hallucination kept to a minimum by design. Built for students who want to study smarter, not harder.
The backend is hosted on Render's free tier, which spins down after 15 minutes of inactivity. The first request (login/register) may take up to ~45 seconds while the server wakes up. This is expected behaviour — subsequent requests will be fast.
Please be patient on first load. The app is fully functional once the server is awake.
| Link | URL |
|---|---|
| Frontend | acadrix.vercel.app |
| Backend API Docs | acadrix.onrender.com/docs |
💡 Tip: Open the backend API docs link first to wake the server before navigating to the frontend.
📸 Screenshots above show the app running after the backend has warmed up.
- Document-Grounded Answers — Responses are strictly sourced from uploaded materials. If it's not in your documents, Acadrix won't make it up.
- Adaptive Study Modes — Switch between Direct Answer mode for quick answers and Socratic mode for guided learning through questions.
- Conversation Memory — Follow-up questions like "explain again" or "I didn't understand" are handled intelligently by reusing previous context.
- Persistent Indexes — FAISS indexes are stored in MongoDB GridFS, surviving server restarts and redeployments.
- Multi-Document Support — Upload multiple documents and query across all of them simultaneously.
- Query History — Every question and answer is saved and accessible for review.
- JWT Authentication — Secure user accounts with token-based authentication.
- Source Citations — Every answer includes the source file and chunk it was derived from.
Frontend
- React + Vite
- React Router
- Axios
- React Markdown
Backend
- FastAPI
- MongoDB Atlas — database + GridFS for FAISS index persistence
- FAISS — vector similarity search
- Hugging Face Inference API — document embeddings (`all-MiniLM-L6-v2`)
- Groq (LLaMA 3.3 70B) — LLM inference
- JWT — authentication
- pdfplumber / python-pptx — document parsing
```
acadrix/
├── backend/
│   ├── main.py
│   ├── auth.py
│   ├── config.py
│   ├── database.py
│   ├── models.py
│   ├── routers/
│   │   ├── auth.py
│   │   ├── documents.py
│   │   ├── query.py
│   │   └── history.py
│   └── pipeline/
│       ├── ingest.py
│       ├── embeddings.py
│       ├── query.py
│       └── vector_store.py
└── frontend/
    └── src/
        ├── api.js
        ├── App.jsx
        ├── index.css
        ├── components/
        │   ├── Sidebar.jsx
        │   └── ProtectedRoute.jsx
        ├── context/
        │   └── AuthContext.jsx
        └── pages/
            ├── Login.jsx
            ├── Register.jsx
            ├── Dashboard.jsx
            ├── Query.jsx
            └── History.jsx
```
- Python 3.10+
- Node.js 18+
- MongoDB Atlas account
- Groq API key
- Hugging Face account + API token
```
cd backend
python -m venv venv
venv\Scripts\activate       # Windows
source venv/bin/activate    # Mac/Linux
pip install -r requirements.txt
```

Create a `.env` file in the `backend/` folder (see `.env.example`), then start the server:

```
uvicorn main:app --reload
```

Backend runs at http://127.0.0.1:8000 — API docs at http://127.0.0.1:8000/docs
```
cd frontend
npm install
```

Create a `.env` file in the `frontend/` folder:

```
VITE_API_URL=http://127.0.0.1:8000
```

Then start the dev server:

```
npm run dev
```

Frontend runs at http://localhost:5173
| Variable | Description |
|---|---|
| `MONGODB_URI` | MongoDB Atlas connection string |
| `DATABASE_NAME` | Database name (e.g. `acadrix`) |
| `JWT_SECRET_KEY` | Secret key for JWT token signing |
| `GROQ_API_KEY` | Groq API key for LLM inference |
| `HF_API_TOKEN` | Hugging Face API token for embeddings |
| `UPLOAD_DIR` | Directory for uploaded files (e.g. `uploads`) |
| `FAISS_INDEX_PATH` | Directory for local FAISS indexes (e.g. `faiss_index`) |
| `DEBUG` | `true` for development, `false` for production |
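Put together, a backend `.env` might look like the following — every value here is a placeholder, not a real credential:

```
MONGODB_URI=mongodb+srv://<user>:<password>@<cluster-host>/
DATABASE_NAME=acadrix
JWT_SECRET_KEY=replace-with-a-long-random-string
GROQ_API_KEY=your-groq-api-key
HF_API_TOKEN=your-huggingface-token
UPLOAD_DIR=uploads
FAISS_INDEX_PATH=faiss_index
DEBUG=true
```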
| Variable | Description |
|---|---|
| `VITE_API_URL` | Backend API URL |
- User uploads a PDF, PPTX, or TXT document
- Backend parses and chunks the document into smaller pieces
- Chunks are sent to the Hugging Face Inference API, which converts them into vectors using `all-MiniLM-L6-v2`
- Vectors are stored in a FAISS index, saved to MongoDB GridFS for persistence across server restarts
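The chunking step above can be sketched as a simple overlapping window split. The chunk size, overlap, and function name here are illustrative assumptions, not the project's actual parameters:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows so that content
    spanning a chunk boundary still appears intact in at least one chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "A" * 1200
chunks = chunk_text(doc)
print([len(c) for c in chunks])  # → [500, 500, 400]
```

Each chunk is then embedded independently; the overlap trades a little index size for better recall on questions whose answer straddles two chunks.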
- User asks a question
- If it's a follow-up ("explain again", "I didn't understand" etc.), the previous answer is reused directly — no FAISS search needed
- For new questions, the question is converted to a vector via HF API and FAISS finds the most semantically similar chunks
- Top chunks + question are sent to LLaMA 3.3 70B via Groq API
- LLM generates a grounded answer strictly from the retrieved chunks
- Answer + source citations returned to the user
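The follow-up shortcut in the flow above is keyword-based (see Limitations). A minimal sketch of that check — the phrase list is a guess for illustration, not the backend's actual list:

```python
FOLLOWUP_PHRASES = (
    "explain again",
    "didn't understand",
    "did not understand",
    "what do you mean",
    "simplify",
    "elaborate",
)

def is_followup(question: str) -> bool:
    """Cheap keyword check: if the question matches a known follow-up
    phrase, the previous answer's context is reused and the FAISS
    search is skipped entirely."""
    q = question.lower().strip()
    return any(phrase in q for phrase in FOLLOWUP_PHRASES)

print(is_followup("Explain again, please"))    # → True
print(is_followup("What is backpropagation?")) # → False
```

This is why highly ambiguous one-word replies can slip through: anything not matching a listed phrase is treated as a fresh question and routed through retrieval.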
Study Modes:
- Direct — straight answer from your documents
- Socratic — guiding questions to help you discover the answer yourself
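One natural way to realize the two modes is to swap the system prompt sent to the LLM while keeping the retrieved context identical. The prompt wording and helper below are hypothetical, not the project's actual prompts:

```python
SYSTEM_PROMPTS = {
    "direct": (
        "Answer strictly from the provided context. "
        "If the answer is not in the context, say you don't know."
    ),
    "socratic": (
        "Do not state the answer directly. Using only the provided "
        "context, ask short guiding questions that lead the student "
        "to discover the answer themselves."
    ),
}

def build_messages(mode: str, context: str, question: str) -> list[dict]:
    """Assemble a chat-completion payload: the system turn selects the
    study mode, the user turn carries retrieved chunks plus the question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPTS[mode]},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

msgs = build_messages("socratic", "Newton's second law: F = ma.", "What is force?")
print(msgs[0]["role"], "/", msgs[1]["role"])  # → system / user
```

Because only the system prompt changes, both modes stay grounded in the same retrieved chunks — Socratic mode guides without pulling in material outside the documents.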
- The free-tier backend (Render) has a cold-start delay of up to ~45 seconds after inactivity — see the notice at the top.
- Follow-up detection is keyword-based. Single-letter or highly ambiguous replies may not be recognized as follow-ups.
Built by Jashruth K A