
Interview Prep Master


Master your next interview with AI — role-specific questions, model answers, real-time evaluation, and performance analytics.


Topics: interview-preparation · coding-challenges · competitive-programming · data-structures-algorithms · deep-learning · education-ai · large-language-models · machine-learning · python · system-design

Overview

Interview Prep Master is a comprehensive AI-powered interview preparation platform that adapts to your target role, seniority level, and interview type. Whether preparing for a software engineering technical screen, a system design round, a behavioural interview, or a data science case study, the platform generates relevant, realistic questions and provides structured feedback on your answers.

The question generation engine uses an LLM with role-specific context: for a Senior Software Engineer at a FAANG company, it generates system design questions (design YouTube, design a rate limiter), LeetCode-style algorithmic problems with increasing difficulty, and behavioural questions aligned to specific engineering competencies (technical leadership, cross-functional collaboration, dealing with ambiguity). For data science roles, it generates case study questions, statistics/probability problems, and ML system design scenarios.

The answer evaluation module uses the STAR framework (Situation, Task, Action, Result) for behavioural questions and a custom technical rubric for system design and coding questions. Feedback includes a score (1–10), identification of missing elements, and a model answer that demonstrates best practice.
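The rubric-driven evaluation described above can be sketched as a prompt builder. This is a minimal illustration, not the repo's actual prompt; the function name and rubric wording are assumptions:

```python
# Sketch of a STAR-rubric evaluation prompt, as described above.
# The function name and exact wording are illustrative assumptions,
# not the repository's real prompts.

STAR_ELEMENTS = ["Situation", "Task", "Action", "Result"]

def build_star_eval_prompt(question: str, answer: str) -> str:
    """Ask the LLM for a 1-10 score, missing STAR elements, and a model answer."""
    rubric = ", ".join(STAR_ELEMENTS)
    return (
        "You are an interview coach. Evaluate the candidate's answer "
        f"against the STAR framework ({rubric}).\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Respond with:\n"
        "1. Score (1-10)\n"
        "2. Missing STAR elements\n"
        "3. A model answer demonstrating best practice."
    )
```

The resulting string would be sent to the configured LLM (the repo's detected package is google-generativeai), which returns the structured feedback.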


Motivation

Interview performance is a skill that can be systematically improved with deliberate practice. Most candidates under-prepare because practice resources are generic, scattered, and lack immediate feedback. This platform was built to provide the kind of focused, role-specific, feedback-rich preparation that was previously only available through expensive coaching.


Architecture

User inputs: role, company, seniority, interview type
        │
  Question Generator (LLM, role-specific context)
        │
  User Answer Input
        │
  Answer Evaluator (LLM rubric: STAR / technical)
  ├── Score (1–10)
  ├── Missing elements
  └── Model answer
        │
  Session History + Weakness Tracker Dashboard
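Each pass through the pipeline above produces one record that the session history and dashboard consume. One plausible shape for that record (the field names are assumptions, not the repo's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class InterviewRound:
    """One pass through the pipeline: generated question, user answer,
    and the evaluator's structured feedback. Field names are illustrative."""
    role: str
    category: str            # e.g. "behavioural", "system_design"
    question: str
    answer: str = ""
    score: float = 0.0       # evaluator score, 1-10
    missing: list = field(default_factory=list)  # missing STAR/rubric elements
```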

Features

Role-Specific Question Generation

Generate questions tailored to exact role title (SWE, DS, PM, DE), seniority level (L3–L7), and company culture — questions for Google system design differ from startup engineering culture questions.

Multi-Category Question Sets

Separate tabs for Technical (algorithms, system design, SQL, ML), Behavioural (leadership, conflict, failure, growth), Case Study (for consulting/DS), and HR (compensation, culture fit).

Real-Time Answer Evaluation

Paste your answer and receive instant LLM evaluation: STAR completeness for behavioural, technical correctness for coding, architecture quality for system design — with a 1–10 score.

Model Answer Generation

Every question can generate a model answer demonstrating best practice — a concrete benchmark to compare your approach against.

Difficulty Progression

Start with easier questions and unlock harder ones as you score above threshold, creating an adaptive difficulty ladder that builds confidence before stretching capability.
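A minimal sketch of that unlock logic, using the DIFFICULTY_THRESHOLD default of 7.0 from the Configuration section (the level names and averaging window are assumptions):

```python
# Adaptive difficulty ladder: advance one rung once the average of the
# user's recent scores clears the threshold. Level names are illustrative.
DIFFICULTY_LEVELS = ["easy", "medium", "hard"]
THRESHOLD = 7.0  # DIFFICULTY_THRESHOLD default

def next_level(current: str, recent_scores: list) -> str:
    """Return the next unlocked level given recent 1-10 scores."""
    idx = DIFFICULTY_LEVELS.index(current)
    if recent_scores and sum(recent_scores) / len(recent_scores) >= THRESHOLD:
        idx = min(idx + 1, len(DIFFICULTY_LEVELS) - 1)
    return DIFFICULTY_LEVELS[idx]
```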

Weakness Tracking Dashboard

Session history visualised as a radar chart of performance across question categories, highlighting consistent weak areas for targeted practice.
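The per-category averages that such a radar chart would plot can be computed from the session history. A sketch assuming history rows carry `category` and `score` keys (the actual storage uses a pandas DataFrame):

```python
from collections import defaultdict

def category_averages(history):
    """Average score per question category -- the radius of each spoke
    on the radar chart. Assumes rows with 'category' and 'score' keys."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in history:
        totals[row["category"]] += row["score"]
        counts[row["category"]] += 1
    return {cat: totals[cat] / counts[cat] for cat in totals}
```

The consistently lowest spokes identify the weak areas to target.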

Mock Interview Mode

Timed mock interview sessions (30/45/60 minutes) with randomised question selection, minimal feedback during the session, and a comprehensive debrief report at the end.
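The randomised selection step can be sketched as drawing enough distinct questions to fill the chosen duration (the per-question time budget is an assumption):

```python
import random

def build_mock_session(bank, duration_min, mins_per_q=5, seed=None):
    """Randomly draw enough distinct questions from the bank to fill a
    30/45/60-minute session. mins_per_q is an assumed time budget."""
    rng = random.Random(seed)
    n = min(duration_min // mins_per_q, len(bank))
    return rng.sample(bank, n)
```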

Company-Specific Mode

Toggle company profiles (Google, Meta, Amazon, Stripe, etc.) to receive questions styled after known interview formats and values for that company.


Tech Stack

| Library / Tool | Role | Why This Choice |
|---|---|---|
| Streamlit | Application UI | Multi-tab layout, timer, radar chart |
| OpenAI GPT-4o / Gemini | Question + answer evaluation | Role-specific prompts, STAR rubric, model answers |
| pandas | Session history | Question/answer/score DataFrame storage and analysis |
| Plotly | Performance visualisation | Radar chart, score histograms, progress timeline |
| python-dotenv | Config | API key management |

Key packages detected in this repo: streamlit · google-generativeai · markdown2 · weasyprint


Getting Started

Prerequisites

  • Python 3.9+
  • pip or npm package manager
  • Relevant API keys (see Configuration section)

Installation

git clone https://github.com/Devanik21/Interview-Prep-Master.git
cd Interview-Prep-Master
python -m venv venv && source venv/bin/activate
pip install streamlit openai google-generativeai pandas plotly python-dotenv
echo 'OPENAI_API_KEY=sk-...' > .env
streamlit run app.py

Usage

# Launch
streamlit run app.py

# CLI question generation
python generate_questions.py \
  --role 'Senior Software Engineer' \
  --level senior \
  --type system_design \
  --count 10
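A parser mirroring the CLI flags shown above could look like this (the real parser in generate_questions.py may differ):

```python
import argparse

def make_parser():
    """Argument parser matching the flags shown in the Usage example.
    Defaults here are illustrative assumptions."""
    p = argparse.ArgumentParser(description="Generate interview questions")
    p.add_argument("--role", required=True, help="Target role title")
    p.add_argument("--level", default="senior", help="Seniority level")
    p.add_argument("--type", dest="qtype", default="system_design",
                   help="Question category")
    p.add_argument("--count", type=int, default=10, help="Number of questions")
    return p
```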

Configuration

| Variable | Default | Description |
|---|---|---|
| OPENAI_API_KEY | (required) | LLM API key |
| DEFAULT_ROLE | Software Engineer | Pre-selected role on load |
| MOCK_DURATION_MIN | 45 | Default mock interview duration in minutes |
| DIFFICULTY_THRESHOLD | 7.0 | Score needed to unlock next difficulty level |

Copy .env.example to .env and populate all required values before running.
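With the environment populated, the variables from the table can be read with their documented defaults. A stdlib sketch (the app itself loads .env via python-dotenv; the function name is illustrative):

```python
import os

def load_config():
    """Read the documented environment variables, applying the defaults
    from the Configuration table. OPENAI_API_KEY has no default."""
    return {
        "openai_api_key": os.environ["OPENAI_API_KEY"],  # required
        "default_role": os.environ.get("DEFAULT_ROLE", "Software Engineer"),
        "mock_duration_min": int(os.environ.get("MOCK_DURATION_MIN", "45")),
        "difficulty_threshold": float(os.environ.get("DIFFICULTY_THRESHOLD", "7.0")),
    }
```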


Project Structure

Interview-Prep-Master/
├── README.md
├── requirements.txt
├── app.py
└── ...

Roadmap

  • Voice interview mode with Whisper speech-to-text and TTS for realistic spoken practice
  • Pair programming simulation with a live code editor and AI pair programmer
  • Community question bank where users contribute and rate questions
  • Company-specific insider prep packs with known question patterns
  • Offer letter negotiation simulator using historical compensation data

Contributing

Contributions, issues, and feature requests are welcome. Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/your-feature)
  3. Commit your changes (git commit -m 'feat: add your feature')
  4. Push to your branch (git push origin feature/your-feature)
  5. Open a Pull Request

Please follow conventional commit messages and ensure any new code is documented.


Notes

LLM evaluation is a strong approximation but not a substitute for real interview practice with human interviewers. Use this tool for preparation and pattern recognition, and supplement with mock interviews from peers or professional coaches.


Author

Devanik Debnath
B.Tech, Electronics & Communication Engineering
National Institute of Technology Agartala

GitHub LinkedIn


License

This project is open source and available under the MIT License.


Crafted with curiosity, precision, and a belief that good software is worth building well.
