Master your next interview with AI — role-specific questions, model answers, real-time evaluation, and performance analytics.
Topics: interview-preparation · coding-challenges · competitive-programming · data-structures-algorithms · deep-learning · education-ai · large-language-models · machine-learning · python · system-design
Interview Prep Master is a comprehensive AI-powered interview preparation platform that adapts to your target role, seniority level, and interview type. Whether preparing for a software engineering technical screen, a system design round, a behavioural interview, or a data science case study, the platform generates relevant, realistic questions and provides structured feedback on your answers.
The question generation engine uses an LLM with role-specific context: for a Senior Software Engineer at a FAANG company, it generates system design questions (design YouTube, design a rate limiter), LeetCode-style algorithmic problems with increasing difficulty, and behavioural questions aligned to specific engineering competencies (technical leadership, cross-functional collaboration, dealing with ambiguity). For data science roles, it generates case study questions, statistics/probability problems, and ML system design scenarios.
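As an illustrative sketch of how role-specific context might be assembled (the function name and prompt template below are hypothetical, not the repo's actual code):

```python
# Hypothetical sketch of role-specific prompt assembly; the real app's
# prompt template and function names may differ.
def build_question_prompt(role: str, company: str, level: str,
                          interview_type: str, count: int = 5) -> str:
    """Compose an LLM prompt that pins down role, seniority, and format."""
    return (
        f"You are an experienced {company} interviewer.\n"
        f"Generate {count} {interview_type} interview questions for a "
        f"{level} {role}.\n"
        "Calibrate difficulty to the seniority level and reflect the "
        "company's known interview style.\n"
        "Return one question per line, no numbering."
    )

prompt = build_question_prompt("Software Engineer", "Google", "senior",
                               "system design")
```

The key design point is that role, company, and seniority all land in the prompt, so the same engine serves a FAANG L6 system design round and a startup generalist screen without separate question banks.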
The answer evaluation module uses the STAR framework (Situation, Task, Action, Result) for behavioural questions and a custom technical rubric for system design and coding questions. Feedback includes a score (1–10), identification of missing elements, and a model answer that demonstrates best practice.
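One way to turn the evaluator's reply into the score / missing-elements / model-answer triple is to ask the LLM for a fixed layout and parse it. The layout and parser below are an assumption for illustration, not the app's actual response format:

```python
import re

# Hypothetical parser for a structured evaluator reply of the form:
#   SCORE: 8
#   MISSING: Result, quantified impact
#   MODEL ANSWER: <text...>
# The actual format used by the app may differ.
def parse_evaluation(reply: str) -> dict:
    score = re.search(r"SCORE:\s*(\d+)", reply)
    missing = re.search(r"MISSING:\s*(.*)", reply)
    model = re.search(r"MODEL ANSWER:\s*(.*)", reply, re.DOTALL)
    return {
        "score": int(score.group(1)) if score else None,
        "missing": [m.strip() for m in missing.group(1).split(",")] if missing else [],
        "model_answer": model.group(1).strip() if model else "",
    }

feedback = parse_evaluation(
    "SCORE: 7\nMISSING: Result\nMODEL ANSWER: In my last role..."
)
```

Constraining the reply format keeps the 1–10 score machine-readable for the session history and weakness tracker, rather than buried in free-form prose.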
Interview performance is a skill that can be systematically improved with deliberate practice. Most candidates under-prepare because practice resources are generic, scattered, and lack immediate feedback. This platform was built to provide the kind of focused, role-specific, feedback-rich preparation that was previously only available through expensive coaching.
```
User inputs: role, company, seniority, interview type
        │
Question Generator (LLM, role-specific context)
        │
User Answer Input
        │
Answer Evaluator (LLM rubric: STAR / technical)
        ├── Score (1–10)
        ├── Missing elements
        └── Model answer
        │
Session History + Weakness Tracker Dashboard
```
Generate questions tailored to exact role title (SWE, DS, PM, DE), seniority level (L3–L7), and company culture — questions for Google system design differ from startup engineering culture questions.
Separate tabs for Technical (algorithms, system design, SQL, ML), Behavioural (leadership, conflict, failure, growth), Case Study (for consulting/DS), and HR (compensation, culture fit).
Paste your answer and receive instant LLM evaluation: STAR completeness for behavioural, technical correctness for coding, architecture quality for system design — with a 1–10 score.
Every question can generate a model answer demonstrating best practice — a concrete benchmark to compare your approach against.
Start with easier questions and unlock harder ones as you score above threshold, creating an adaptive difficulty ladder that builds confidence before stretching capability.
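The adaptive ladder above can be sketched as a simple unlock rule (names and the rolling-window policy are hypothetical, for illustration only):

```python
# Illustrative unlock rule: advance one difficulty tier once the rolling
# average of the last few scores clears the configured threshold.
DIFFICULTIES = ["easy", "medium", "hard"]

def next_difficulty(current: str, recent_scores: list,
                    threshold: float = 7.0, window: int = 3) -> str:
    recent = recent_scores[-window:]
    if len(recent) == window and sum(recent) / window >= threshold:
        i = DIFFICULTIES.index(current)
        return DIFFICULTIES[min(i + 1, len(DIFFICULTIES) - 1)]
    return current
```

Requiring a full window of scores (rather than one lucky answer) is what makes the ladder build confidence before stretching capability.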
Session history visualised as a radar chart of performance across question categories, highlighting consistent weak areas for targeted practice.
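The data behind that radar chart is just a per-category mean over the session log. A minimal sketch, assuming the history is stored as a pandas DataFrame of category/score rows as the tech stack table suggests (the column names here are hypothetical):

```python
import pandas as pd

# Hypothetical session log of scored answers.
history = pd.DataFrame({
    "category": ["algorithms", "system_design", "behavioural",
                 "algorithms", "system_design", "behavioural"],
    "score": [8, 5, 7, 6, 4, 9],
})

# Mean score per category — the radial values a Plotly radar chart
# (go.Scatterpolar) would plot, with categories on the angular axis.
radar = history.groupby("category")["score"].mean()
```

The lowest spokes on the chart (here, `system_design`) are exactly the "consistent weak areas" the tracker surfaces for targeted practice.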
Timed mock interview sessions (30/45/60 minutes) with randomised question selection, minimal feedback during the session, and a comprehensive debrief report at the end.
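Setting up such a session reduces to fixing a duration and drawing a non-repeating random sample from the question bank. A hedged sketch (function and field names are hypothetical):

```python
import random

# Hypothetical mock-session setup: a fixed duration plus a random,
# non-repeating draw from the question bank. A seed makes runs reproducible.
def build_mock_session(question_bank, duration_min=45, n_questions=5, seed=None):
    rng = random.Random(seed)
    return {
        "duration_min": duration_min,
        "questions": rng.sample(question_bank,
                                k=min(n_questions, len(question_bank))),
    }

session = build_mock_session([f"Q{i}" for i in range(20)],
                             duration_min=30, n_questions=5, seed=42)
```

`random.sample` guarantees no question repeats within a session, which matters for a debrief report that scores each question once.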
Toggle company profiles (Google, Meta, Amazon, Stripe, etc.) to receive questions styled after known interview formats and values for that company.
| Library / Tool | Role | Why This Choice |
|---|---|---|
| Streamlit | Application UI | Multi-tab layout, timer, radar chart |
| OpenAI GPT-4o / Gemini | Question generation + answer evaluation | Role-specific prompts, STAR rubric, model answers |
| pandas | Session history | Question/answer/score DataFrame storage and analysis |
| Plotly | Performance visualisation | Radar chart, score histograms, progress timeline |
| python-dotenv | Config | API key management |
Key packages detected in this repo: `streamlit`, `google-generativeai`, `markdown2`, `weasyprint`
- Python 3.9+
- `pip` package manager
- Relevant API keys (see Configuration section)
```bash
git clone https://github.com/Devanik21/Interview-Prep-Master.git
cd Interview-Prep-Master
python -m venv venv && source venv/bin/activate
pip install streamlit openai google-generativeai pandas plotly python-dotenv
echo 'OPENAI_API_KEY=sk-...' > .env

# Launch
streamlit run app.py
```
```bash
# CLI question generation
python generate_questions.py \
  --role 'Senior Software Engineer' \
  --level senior \
  --type system_design \
  --count 10
```

| Variable | Default | Description |
|---|---|---|
| `OPENAI_API_KEY` | (required) | LLM API key |
| `DEFAULT_ROLE` | `Software Engineer` | Pre-selected role on load |
| `MOCK_DURATION_MIN` | `45` | Default mock interview duration in minutes |
| `DIFFICULTY_THRESHOLD` | `7.0` | Score needed to unlock next difficulty level |
Copy `.env.example` to `.env` and populate all required values before running.
```
Interview-Prep-Master/
├── README.md
├── requirements.txt
├── app.py
└── ...
```
- Voice interview mode with Whisper speech-to-text and TTS for realistic spoken practice
- Pair programming simulation with a live code editor and AI pair programmer
- Community question bank where users contribute and rate questions
- Company-specific insider prep packs with known question patterns
- Offer letter negotiation simulator using historical compensation data
Contributions, issues, and feature requests are welcome. Please:

- Fork the repository
- Create a feature branch (`git checkout -b feature/your-feature`)
- Commit your changes (`git commit -m 'feat: add your feature'`)
- Push to your branch (`git push origin feature/your-feature`)
- Open a Pull Request
Please follow conventional commit messages and ensure any new code is documented.
LLM evaluation is a strong approximation but not a substitute for real interview practice with human interviewers. Use this tool for preparation and pattern recognition, and supplement with mock interviews from peers or professional coaches.
Devanik Debnath
B.Tech, Electronics & Communication Engineering
National Institute of Technology Agartala
This project is open source and available under the MIT License.
Crafted with curiosity, precision, and a belief that good software is worth building well.