A local-run LeetCode-style coding practice system that lets you browse, code, and test problems 100% offline—perfect for planes, cruises, or any no-internet scenario.
- Node.js 16+ (download from https://nodejs.org/)
- Any modern web browser
Note: Internet is only required for the initial setup and build. Once built, the application works completely offline.
**Windows**

```bat
# Double-click or run in terminal
start-local.bat
```

Non-interactive usage (CI or automation):

```bat
# Accept defaults and copy from .env.example if present
start-local.bat --yes

# Or set an environment variable (Command Prompt)
set START_LOCAL_NONINTERACTIVE=1 && start-local.bat
```

**macOS / Linux**

```bash
# Make executable (first time only)
chmod +x start-local.sh

# Run the startup script
./start-local.sh
```

The scripts will automatically:
- Check Node.js installation
- Install dependencies (npm install) - Requires internet
- Build the application (npm run build) - Requires internet
- Start the local server
Then open http://localhost:3000 in your browser!
Note: After the initial build, you can use the application offline without rebuilding.
```bash
# Clone the repository
git clone https://github.com/yourusername/OfflineLeetPractice.git
cd OfflineLeetPractice

# Install dependencies - Requires internet
npm install

# Build for production - Requires internet
npm run build

# Start the server (works offline)
npm start
```

- Local Problem Library: 10+ classic algorithm problems included
- AI Problem Generator: Generate unlimited custom problems with various AI providers
- Multi-Language Support: Code and test in JavaScript, Python, Java, C++, or C
- Monaco Code Editor: VS Code-like editing experience
- Instant Testing: Run tests immediately with detailed results
- Performance Metrics: Execution time and memory usage tracking
- Dynamic Problem Management: Add/edit problems without rebuilding
- Custom Problem Creation: Describe what you want to practice
- Complete Solutions: Each problem includes working reference solutions
- Comprehensive Testing: Auto-generated test cases including edge cases
- Instant Integration: Problems automatically added to your local library
- Browse Problems: View the problem list with difficulty and tags
- Select a Problem: Click on any problem to open the detail page
- Code Your Solution: Use the Monaco editor (supports autocomplete, syntax highlighting)
- Run Tests: Click "Submit & Run Tests" to execute your code
- View Results: See test results with performance metrics
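Under the hood, the "Submit & Run Tests" step posts your code to the local `pages/api/run.ts` route. The sketch below shows what such a call could look like; the request and response field names are assumptions for illustration, not the project's documented contract:

```ts
// Hypothetical request shape for /api/run -- field names are assumed.
interface RunRequest {
  problemId: string; // e.g. "reverse-string"
  language: string;  // e.g. "js"
  code: string;      // the solution source from the editor
}

async function runTests(req: RunRequest): Promise<unknown> {
  const res = await fetch('/api/run', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Test run failed: ${res.status}`);
  return res.json(); // per-test pass/fail plus timing/memory metrics
}
```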
- Access AI Generator: Click the "AI Generator" button on the homepage
- Describe Your Need: Enter what type of problem you want
- Generate Problem: AI creates a complete problem with test cases and solutions
- Practice Immediately: Generated problem is auto-added to your library
To use the AI problem generator, you can configure any of these AI providers (or multiple):
If no `.env` file exists when you run the provided startup scripts (`start-local.sh` or `start-local.bat`), the script will detect this as a first-time startup and offer to interactively configure AI features for you. In non-interactive mode (use `--yes` or `START_LOCAL_NONINTERACTIVE=1`), the script will try to copy `.env.example` to `.env` if present; otherwise it will create a minimal `.env` with default model names and empty API keys. The interactive flow will:
- Ask whether you want to enable AI features.
- For each provider (OpenAI, DeepSeek, Qwen, Claude, Ollama) ask whether to enable it, then prompt for model name and API key (for Ollama it will ask for endpoint and model).
- Provide sensible defaults if you just press Enter:
  - OpenAI model: `gpt-4-turbo`
  - DeepSeek model: `deepseek-chat`
  - Qwen model: `qwen-turbo`
  - Claude model: `claude-3-haiku-20240307`
  - Ollama endpoint: `http://localhost:11434`, model: `llama3`
The script will write your choices into a `.env` file in the project root. If a `.env` file already exists, the scripts will skip configuration. To change AI settings later, edit the `.env` file directly.
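For reference, a freshly written `.env` might look like the sketch below. The `*_API_KEY` names match the per-provider instructions that follow, but the `*_MODEL` variable names are illustrative assumptions; check `.env.example` for the exact keys:

```bash
# Sketch of a script-generated .env (model variable names are assumed)
OPENAI_API_KEY=
OPENAI_MODEL=gpt-4-turbo
DEEPSEEK_API_KEY=
DEEPSEEK_MODEL=deepseek-chat
QWEN_API_KEY=
QWEN_MODEL=qwen-turbo
CLAUDE_API_KEY=
CLAUDE_MODEL=claude-3-haiku-20240307
OLLAMA_ENDPOINT=http://localhost:11434
OLLAMA_MODEL=llama3
```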
- Get API Key: Obtain an API key from the DeepSeek Platform
- Configure Key: Add your API key to the application using one of these methods:
  - Create a `.env.local` file in the project root with:
    ```
    DEEPSEEK_API_KEY=your_deepseek_api_key_here
    ```
  - Or set it as an environment variable in your system:
    ```bash
    # Windows (PowerShell)
    $env:DEEPSEEK_API_KEY="your_deepseek_api_key_here"

    # macOS/Linux (Bash)
    export DEEPSEEK_API_KEY="your_deepseek_api_key_here"
    ```
- Get API Key: Obtain an API key from the OpenAI Platform
- Configure Key: Add your API key to the application using one of these methods:
  - Create a `.env.local` file in the project root with:
    ```
    OPENAI_API_KEY=your_openai_api_key_here
    ```
  - Or set it as an environment variable in your system:
    ```bash
    # Windows (PowerShell)
    $env:OPENAI_API_KEY="your_openai_api_key_here"

    # macOS/Linux (Bash)
    export OPENAI_API_KEY="your_openai_api_key_here"
    ```
- Get API Key: Obtain an API key from the Qwen Platform
- Configure Key: Add your API key to the application using one of these methods:
  - Create a `.env.local` file in the project root with:
    ```
    QWEN_API_KEY=your_qwen_api_key_here
    ```
  - Or set it as an environment variable in your system:
    ```bash
    # Windows (PowerShell)
    $env:QWEN_API_KEY="your_qwen_api_key_here"

    # macOS/Linux (Bash)
    export QWEN_API_KEY="your_qwen_api_key_here"
    ```
- Get API Key: Obtain an API key from the Claude Platform
- Configure Key: Add your API key to the application using one of these methods:
  - Create a `.env.local` file in the project root with:
    ```
    CLAUDE_API_KEY=your_claude_api_key_here
    ```
  - Or set it as an environment variable in your system:
    ```bash
    # Windows (PowerShell)
    $env:CLAUDE_API_KEY="your_claude_api_key_here"

    # macOS/Linux (Bash)
    export CLAUDE_API_KEY="your_claude_api_key_here"
    ```
- Install Ollama: Download and install Ollama from https://ollama.com/
- Download Model: Run `ollama pull llama3` to download the recommended model
- Configure Ollama: Configure Ollama using one of these methods:
  - Create a `.env.local` file in the project root with:
    ```
    # Optional: Set Ollama endpoint (default: http://localhost:11434)
    # OLLAMA_ENDPOINT=http://localhost:11434
    # Optional: Set Ollama model (default: llama3)
    # OLLAMA_MODEL=llama3
    ```
  - Or set them as environment variables in your system:
    ```bash
    # Windows (PowerShell)
    $env:OLLAMA_ENDPOINT="http://localhost:11434"  # Optional
    $env:OLLAMA_MODEL="llama3"                     # Optional

    # macOS/Linux (Bash)
    export OLLAMA_ENDPOINT=http://localhost:11434  # Optional
    export OLLAMA_MODEL=llama3                     # Optional
    ```
- Start Ollama: Ensure the Ollama service is running (it usually starts automatically)
If you have multiple AI providers configured, the system will automatically prefer them in this order:
1. Ollama (local)
2. OpenAI
3. Claude
4. Qwen
5. DeepSeek
You can switch between providers using the UI controls on the AI Generator page.
The system automatically detects which providers are configured through a server-side check. The frontend fetches this configuration via the `/api/ai-providers` endpoint, so API keys stay on the server (Next.js only exposes environment variables to the browser when they are explicitly made public).
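To illustrate this detection-plus-priority flow, here is a minimal client-side sketch. The response shape and field names are assumptions, not the documented payload of `/api/ai-providers`:

```ts
// Hypothetical response shape for /api/ai-providers -- field names assumed.
type ProviderId = 'ollama' | 'openai' | 'claude' | 'qwen' | 'deepseek';

interface ProvidersResponse {
  configured: ProviderId[]; // providers whose keys/endpoints are set server-side
}

// Preference order documented above: local Ollama first, DeepSeek last.
const PRIORITY: ProviderId[] = ['ollama', 'openai', 'claude', 'qwen', 'deepseek'];

async function pickDefaultProvider(): Promise<ProviderId | null> {
  const res = await fetch('/api/ai-providers');
  if (!res.ok) return null;
  const { configured }: ProvidersResponse = await res.json();
  // Choose the highest-priority provider that is actually configured.
  return PRIORITY.find((p) => configured.includes(p)) ?? null;
}
```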
- Navigate to the AI Generator page by clicking the "🤖 AI Generator" button on the homepage
- Enter your problem request in English or Chinese, for example:
- "Generate a medium difficulty array manipulation problem"
- "我想做一道动态规划题目"
- "Create a binary search problem with edge cases"
- Click "Generate Problem" and wait for the AI to create your custom problem
- The generated problem will automatically be added to your local problem library
- Click "Try Last Generated Problem" to immediately start solving it
See `AI_GENERATOR_README.md` for detailed configuration instructions and a troubleshooting guide!
- Manual Addition: Use the "Add Problem" page for custom problems
- JSON Import: Upload or paste problem data in JSON format
- Direct Edit: Modify `public/problems.json` for immediate changes (no rebuild needed)
- Frontend: React 18 + Next.js 13 + TypeScript
- UI Framework: Mantine v7 (Modern React components)
- Code Editor: Monaco Editor (VS Code engine)
- Code Execution: vm2 (Secure JavaScript sandbox)
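To illustrate the vm2 piece: a sandbox evaluates a submitted JavaScript solution in isolation. The following is a sketch under assumptions, not the project's actual `pages/api/run.ts`:

```ts
import { NodeVM } from 'vm2';

// Assumed runner shape: problem templates export the solution function
// via module.exports, so NodeVM.run() hands it back to the caller.
function runSubmission(code: string, args: unknown[]): unknown {
  const vm = new NodeVM({
    console: 'off',                            // silence console output
    sandbox: {},                               // no extra globals
    require: { external: false, builtin: [] }, // no module access
  });
  const solution = vm.run(code, 'solution.js');
  return solution(...args);
}

// Example with the reverse-string template from this README:
const code = `function reverseString(s) { return s.reverse(); }
module.exports = reverseString;`;
console.log(runSubmission(code, [['h', 'e', 'l', 'l', 'o']]));
// -> [ 'o', 'l', 'l', 'e', 'h' ]
```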
```
OfflineLeetPractice/
├── pages/                       # Next.js pages and API routes
│   ├── api/
│   │   ├── problems.ts          # Problem data API
│   │   ├── run.ts               # Code execution API
│   │   ├── generate-problem.ts  # AI problem generation API
│   │   └── add-problem.ts       # Manual problem addition API
│   ├── problems/[id].tsx        # Problem detail page
│   ├── generator.tsx            # AI Generator page
│   ├── add-problem.tsx          # Manual problem addition page
│   └── index.tsx                # Homepage
├── problems/
│   └── problems.json            # Local problem database
├── src/
│   ├── components/              # React components
│   │   ├── ProblemGenerator.tsx       # AI Generator component
│   │   ├── ProblemForm.tsx            # Manual problem form
│   │   └── LanguageThemeControls.tsx  # Language/theme switcher
│   └── styles/                  # Global styles
├── start-local.bat              # Windows startup script
├── start-local.sh               # Unix startup script
└── AI_GENERATOR_README.md       # AI Generator detailed docs
```
The application supports adding/modifying problems in offline environments without rebuilding!
- Edit the Problem Database: Open `public/problems.json` in your built application folder
- Add Your Problem: Follow the JSON format (see `MODIFY-PROBLEMS-GUIDE.md` for details)
- Save and Refresh: Changes take effect immediately!
Example: Add a new problem by editing `public/problems.json`:

```json
{
"id": "reverse-string",
"title": {
"en": "Reverse String",
"zh": "反转字符串"
},
"difficulty": "Easy",
"tags": ["string"],
"description": {
"en": "Write a function that reverses a string.",
"zh": "编写一个函数来反转字符串。"
},
"template": {
"js": "function reverseString(s) {\n // Your code here\n}\nmodule.exports = reverseString;"
},
"tests": [
{ "input": "[\"h\",\"e\",\"l\",\"l\",\"o\"]", "output": "[\"o\",\"l\",\"l\",\"e\",\"h\"]" }
]
}
```

See `MODIFY-PROBLEMS-GUIDE.md` for complete instructions!
Currently supports multiple programming languages for problem solving:
- JavaScript - Full support with VM sandbox execution
- Python - Full support with interpreter execution
- Java - Full support with compilation and execution
- C++ - Full support with compilation and execution
- C - Full support with compilation and execution
All languages are supported in the AI problem generator with appropriate templates and test cases.
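For the non-JavaScript languages, execution plausibly shells out to the local toolchain: the interpreter for Python, a compile step plus binary for Java, C++, and C. A minimal Node-side sketch for the Python case, under those assumptions (not the project's actual runner):

```ts
import { execFile } from 'node:child_process';
import { writeFile } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Assumed approach: write the solution to a temp file and invoke the
// local interpreter. Compiled languages would add a compile step before
// executing the resulting binary.
async function runPython(code: string, stdinInput: string): Promise<string> {
  const file = join(tmpdir(), `solution-${Date.now()}.py`);
  await writeFile(file, code, 'utf8');
  return new Promise((resolve, reject) => {
    const child = execFile('python3', [file], { timeout: 5000 }, (err, stdout) =>
      err ? reject(err) : resolve(stdout.trim())
    );
    child.stdin?.write(stdinInput); // feed the test input on stdin
    child.stdin?.end();
  });
}
```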
We welcome contributions! Areas for improvement:
- More Problems: Add classic algorithm challenges
- More Languages: Support for languages beyond the current JavaScript, Python, Java, C++, and C
- Enhanced Features: Better performance analytics
MIT License - Feel free to use, modify, and distribute!
Happy Coding at 30,000 feet!