A Model Context Protocol (MCP) server for LLM-agnostic programming setup and context optimization. Automatically detects project types, generates universal coding standards, and optimizes token usage across all LLM platforms.
- Context Setup Hell: Manual creation of context files for each LLM
- Token Waste: Verbose, unoptimized context eating your LLM budget
- LLM Lock-in: Different file formats for each LLM platform
- Inconsistent Standards: No unified coding practices across projects
- ~40% token reduction through optimized context generation
- Universal compatibility with Claude, ChatGPT, Gemini, and others
- One-click setup replaces hours of manual configuration
- Consistent standards across all your projects and LLMs
```bash
# Clone the repository
git clone <repository-url>
cd llm-programming-setup

# Install dependencies
pip install -e .
```
Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "llm-programming-setup": {
      "command": "python",
      "args": ["-m", "llm_programming_setup_mcp.server"]
    }
  }
}
```
In Claude (or any MCP-compatible LLM):

```
Use scan_project to detect my project type
Use generate_context to create my LLM context file
Use show_copy_instructions to see how to use it with other LLMs
```
| Tool | Description | Usage |
|---|---|---|
| `scan_project` | Detect programming language and project type | Auto-scans current directory |
| `generate_context` | Create universal `LLM_CONTEXT.md` file | Works with any detected project |
| `show_copy_instructions` | Display setup guide for all LLMs | Copy-paste instructions |
| `estimate_tokens` | Calculate token count and costs | Optimize before sending to LLM |
| `optimize_context` | Suggest token-saving improvements | Actionable optimization tips |
- Python: FastAPI, Django, Flask, Data Science
- JavaScript/TypeScript: React, Node.js, Vue, Angular
- C#/.NET: ASP.NET Core, WPF, Console apps
- C/C++: CMake, Embedded systems
- Flutter/Dart: Mobile and Desktop apps
- Mixed Projects: Multi-language detection with priority
The generated `LLM_CONTEXT.md` works with all LLMs:

```markdown
# LLM Context - my-python-app
*Generated on 2025-06-15 by llm-programming-setup-mcp*

## Project Detection Results
**Detected Language:** python (confidence: 95%)
**Project Type:** Python application project

## Applicable Coding Standards
[Universal coding principles that work with any LLM]

## How to Use This Context
### Claude: Rename to CLAUDE.md
### ChatGPT: Copy to Project Custom Instructions
### Gemini: Copy to .idx/airules.md
[Detailed instructions for each platform]
```
- Smart Detection: Only includes relevant standards
- Structured Format: Hierarchical organization for better parsing
- Cost Estimation: Shows token count and LLM costs upfront
- Optimization Suggestions: Identifies verbose sections to trim
- Multi-LLM Efficiency: One context file serves all platforms
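Identifying verbose sections can be as simple as estimating tokens per `##` section and flagging those over a budget. This is a hedged sketch using a rough chars-per-token heuristic; the actual `optimize_context` heuristics may differ:

```python
import re

def flag_verbose_sections(markdown: str, budget: int = 200) -> list:
    """Return (heading, estimated_tokens) for ## sections over the token budget."""
    # re.split with a captured heading yields [preamble, heading, body, ...]
    parts = re.split(r"^(## .+)$", markdown, flags=re.MULTILINE)
    flagged = []
    for heading, body in zip(parts[1::2], parts[2::2]):
        tokens = len(body) // 4  # rough ~4 chars/token heuristic
        if tokens > budget:
            flagged.append((heading.strip(), tokens))
    return flagged
```

Sections that come back flagged are the natural candidates for trimming before the context is pasted into an LLM.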
```
llm-programming-setup/
├── src/llm_programming_setup_mcp/   # MCP Server Code
│   ├── server.py                    # Main MCP server
│   ├── tools/                       # MCP tool implementations
│   │   ├── project_scanner.py       # Language detection
│   │   ├── context_generator.py     # Universal context creation
│   │   ├── template_processor.py    # Copy instructions
│   │   └── token_optimizer.py       # Token analysis & optimization
│   └── templates/                   # File templates
├── rules/                           # Configuration & Standards
│   ├── goto.yaml                    # Language detection rules
│   ├── coding-standards/            # Language-specific standards
│   ├── user-preferences.yaml        # Customization options
│   └── repository-standards.md      # Repository organization
├── docs/                            # Documentation
│   ├── claude-code-integration.md   # Integration guide
│   ├── demo-workspace-tutorial.md   # Demo usage guide
│   └── examples/                    # Configuration examples
├── demo/                            # Test Environment
│   ├── sample-project/              # Incomplete project for testing
│   └── sample-output.md             # Example generated context
├── pyproject.toml                   # Python package configuration
├── CLAUDE.md                        # Original project configuration
└── README.md                        # This documentation
```
```yaml
# Modify rules/goto.yaml to add new languages
new_language:
  files: ["*.newext", "config.new"]
  standards: ["coding-standards/general-principles.md", "coding-standards/new-language.md"]
  description: "New Language Project"
```
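Once a language is matched, its entry resolves to the standards documents that get concatenated into the generated context. Roughly, in a sketch with the YAML entry mirrored as a Python dict to stay dependency-free (the server itself parses the YAML):

```python
# Mirror of the rules/goto.yaml entry above (illustrative only).
RULES = {
    "new_language": {
        "files": ["*.newext", "config.new"],
        "standards": [
            "coding-standards/general-principles.md",
            "coding-standards/new-language.md",
        ],
        "description": "New Language Project",
    },
}

def standards_for(language: str) -> list:
    """Return the standards documents to include for a detected language."""
    entry = RULES.get(language)
    return entry["standards"] if entry else []
```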
```
# Check token usage before sending to LLM
Use estimate_tokens with context_file="LLM_CONTEXT.md"

# Get optimization suggestions
Use optimize_context with context_file="LLM_CONTEXT.md"
```
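A rough version of what `estimate_tokens` reports can be sketched in a few lines. The ~4 chars/token ratio and the per-million-token price below are illustrative assumptions, not the server's actual tables:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: English text averages about 4 characters/token."""
    return max(1, round(len(text) / chars_per_token))

def estimate_cost(tokens: int, price_per_million: float) -> float:
    """Input cost in dollars at a given per-million-token price."""
    return tokens * price_per_million / 1_000_000

context = "# LLM Context\n" + "Standard guidance. " * 100
tokens = estimate_tokens(context)
print(tokens, f"${estimate_cost(tokens, 3.00):.4f}")  # assumed $3/M input price
```

Running the estimate before every paste keeps the cost of a growing context file visible rather than silently accumulating.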
- Generate once: run `generate_context`
- Use everywhere: follow `show_copy_instructions`
- Optimize regularly: run `optimize_context` when the context grows
The `demo/` directory provides a complete test environment for trying the MCP server workflow:

- `demo/sample-project/` - Intentionally incomplete Python project for testing
- `demo/sample-output.md` - Example of a generated universal LLM context
Try the full workflow:

```
cd demo/sample-project/
Use scan_project            # Detect Python project, identify missing files
Use generate_context        # Create universal LLM context
Use show_copy_instructions  # Learn how to use with any LLM
```
For detailed instructions, see the Demo Workspace Tutorial (`docs/demo-workspace-tutorial.md`).
This isn't just another framework - it's THE token optimization solution:
- Token-First Design: Every feature reduces LLM costs
- True LLM Independence: Never get locked into one vendor
- Measurable ROI: Concrete metrics show savings
- Zero Bloat: Lightweight, efficient, purpose-built
- Cost Transparency: Always know what you're spending
"While other frameworks add complexity, we subtract cost. Every feature is designed to make your LLM workflows more efficient and affordable."
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Add your improvements (new languages, optimizations, etc.)
- Test with `python -m pytest tests/`
- Submit a pull request
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: see the `docs/` folder for detailed guides
MIT License - see LICENSE for details.
Token-optimized • LLM-agnostic • MCP-powered • Built for efficiency