LLM Programming Setup MCP Server


A Model Context Protocol (MCP) server for LLM-agnostic programming setup and context optimization. Automatically detects project types, generates universal coding standards, and optimizes token usage across all LLM platforms.

What Problem Does This Solve?

  • Context Setup Hell: Manual creation of context files for each LLM
  • Token Waste: Verbose, unoptimized context eating your LLM budget
  • LLM Lock-in: Different file formats for each LLM platform
  • Inconsistent Standards: No unified coding practices across projects

Proven Results

  • ~40% token reduction through optimized context generation
  • Universal compatibility with Claude, ChatGPT, Gemini, and others
  • One-click setup replaces hours of manual configuration
  • Consistent standards across all your projects and LLMs

Quick Start

1. Install the MCP Server

# Clone the repository
git clone <repository-url>
cd llm-programming-setup

# Install dependencies
pip install -e .

2. Configure with Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "llm-programming-setup": {
      "command": "python",
      "args": ["-m", "llm_programming_setup_mcp.server"]
    }
  }
}
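
Note: the location of claude_desktop_config.json depends on your OS; on macOS it is typically ~/Library/Application Support/Claude/claude_desktop_config.json and on Windows %APPDATA%\Claude\claude_desktop_config.json. Restart Claude Desktop after editing so the server is picked up.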

3. Use the Tools

In Claude (or any MCP-compatible LLM):

Use scan_project to detect my project type
Use generate_context to create my LLM context file  
Use show_copy_instructions to see how to use it with other LLMs

Available MCP Tools

| Tool | Description | Usage |
| --- | --- | --- |
| scan_project | Detect programming language and project type | Auto-scans current directory |
| generate_context | Create universal LLM_CONTEXT.md file | Works with any detected project |
| show_copy_instructions | Display setup guide for all LLMs | Copy-paste instructions |
| estimate_tokens | Calculate token count and costs | Optimize before sending to LLM |
| optimize_context | Suggest token-saving improvements | Actionable optimization tips |
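
For orientation, here is a minimal sketch of how such a tool could be registered with the official MCP Python SDK (the mcp package); the estimate_tokens body below is illustrative, not the project's actual implementation from server.py:

from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("llm-programming-setup")

@mcp.tool()
def estimate_tokens(context_file: str) -> str:
    """Roughly estimate the token count of a context file."""
    text = Path(context_file).read_text(encoding="utf-8")
    approx = len(text) // 4  # rough rule of thumb: ~4 characters per token
    return f"{context_file}: ~{approx} tokens"

if __name__ == "__main__":
    mcp.run()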

Supported Project Types

  • Python: FastAPI, Django, Flask, Data Science
  • JavaScript/TypeScript: React, Node.js, Vue, Angular
  • C#/.NET: ASP.NET Core, WPF, Console apps
  • C/C++: CMake, Embedded systems
  • Flutter/Dart: Mobile and Desktop apps
  • Mixed Projects: Multi-language detection with priority (see the detection sketch below)
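
Detection keys off filename patterns like those configured in rules/goto.yaml. As a rough sketch, with hypothetical marker files standing in for the project's actual rules, scoring could look like this:

from pathlib import Path

# Hypothetical marker patterns per language; the real rules live in rules/goto.yaml.
MARKERS = {
    "python": ["pyproject.toml", "requirements.txt", "setup.py"],
    "javascript": ["package.json", "tsconfig.json"],
    "csharp": ["*.csproj", "*.sln"],
}

def detect_language(root: str = ".") -> tuple[str, float]:
    """Return the best-matching language and a naive confidence score."""
    base = Path(root)
    scores = {
        lang: sum(1 for pattern in patterns if any(base.glob(pattern)))
        for lang, patterns in MARKERS.items()
    }
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1  # avoid division by zero on empty dirs
    return best, scores[best] / total

print(detect_language())  # e.g. ('python', 0.6)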

Universal Context Output

Generated LLM_CONTEXT.md works with all LLMs:

# LLM Context - my-python-app
*Generated on 2025-06-15 by llm-programming-setup-mcp*

## Project Detection Results
**Detected Language:** python (confidence: 95%)
**Project Type:** Python application project

## Applicable Coding Standards
[Universal coding principles that work with any LLM]

## How to Use This Context
### Claude: Rename to CLAUDE.md
### ChatGPT: Copy to Project Custom Instructions  
### Gemini: Copy to .idx/airules.md
[Detailed instructions for each platform]

Token Optimization Features

  • Smart Detection: Only includes relevant standards
  • Structured Format: Hierarchical organization for better parsing
  • Cost Estimation: Shows token count and LLM costs upfront (sketched below)
  • Optimization Suggestions: Identifies verbose sections to trim
  • Multi-LLM Efficiency: One context file serves all platforms
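
As a hedged sketch of the cost-estimation feature, assuming the tiktoken tokenizer and an illustrative price per million tokens (actual tokenizers and prices vary by model and provider):

import tiktoken  # pip install tiktoken

def estimate_cost(text: str, usd_per_mtok: float = 3.00) -> tuple[int, float]:
    """Count tokens with the cl100k_base encoding and project input cost in USD."""
    enc = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(enc.encode(text))
    return n_tokens, n_tokens / 1_000_000 * usd_per_mtok  # price is an assumed example

with open("LLM_CONTEXT.md", encoding="utf-8") as fh:
    tokens, usd = estimate_cost(fh.read())
print(f"~{tokens} tokens, ~${usd:.4f} estimated input cost")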

Project Structure

llm-programming-setup/
├── src/llm_programming_setup_mcp/  # MCP Server Code
│   ├── server.py                   # Main MCP server
│   ├── tools/                      # MCP tool implementations
│   │   ├── project_scanner.py      # Language detection
│   │   ├── context_generator.py    # Universal context creation
│   │   ├── template_processor.py   # Copy instructions
│   │   └── token_optimizer.py      # Token analysis & optimization
│   └── templates/                  # File templates
├── rules/                          # Configuration & Standards
│   ├── goto.yaml                   # Language detection rules
│   ├── coding-standards/           # Language-specific standards
│   ├── user-preferences.yaml       # Customization options
│   └── repository-standards.md     # Repository organization
├── docs/                           # Documentation
│   ├── claude-code-integration.md  # Integration guide
│   ├── demo-workspace-tutorial.md  # Demo usage guide
│   └── examples/                   # Configuration examples
├── demo/                           # Test Environment
│   ├── sample-project/             # Incomplete project for testing
│   └── sample-output.md            # Example generated context
├── pyproject.toml                  # Python package configuration
├── CLAUDE.md                       # Original project configuration
└── README.md                       # This documentation

Advanced Usage

Custom Project Detection

# Modify rules/goto.yaml to add new languages
new_language:
  files: ["*.newext", "config.new"]
  standards: ["coding-standards/general-principles.md", "coding-standards/new-language.md"]
  description: "New Language Project"
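
A minimal sketch of how a scanner could load these rules, assuming PyYAML and the key names shown in the example above:

import yaml  # pip install pyyaml

with open("rules/goto.yaml", encoding="utf-8") as fh:
    rules = yaml.safe_load(fh)

for language, spec in rules.items():
    # Keys mirror the example entry above; adjust if the real schema differs.
    print(language, spec.get("files", []), "->", spec.get("description", ""))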

Token Budget Management

# Check token usage before sending to LLM
Use estimate_tokens with context_file="LLM_CONTEXT.md"

# Get optimization suggestions
Use optimize_context with context_file="LLM_CONTEXT.md"

Multi-LLM Workflow

  1. Generate once: generate_context
  2. Use everywhere: Follow show_copy_instructions
  3. Optimize regularly: optimize_context when context grows

Demo & Testing

The demo/ directory provides a complete test environment to experience the MCP server workflow:

  • demo/sample-project/ - Intentionally incomplete Python project for testing
  • demo/sample-output.md - Example of generated universal LLM context

Try the full workflow:

cd demo/sample-project/
Use scan_project    # Detect Python project, identify missing files
Use generate_context    # Create universal LLM context
Use show_copy_instructions    # Learn how to use with any LLM

For detailed instructions, see Demo Workspace Tutorial.

Unique Value Proposition

This isn't just another framework; it's THE token optimization solution:

  1. Token-First Design: Every feature reduces LLM costs
  2. True LLM Independence: Never get locked into one vendor
  3. Measurable ROI: Concrete metrics show savings
  4. Zero Bloat: Lightweight, efficient, purpose-built
  5. Cost Transparency: Always know what you're spending

"While other frameworks add complexity, we subtract cost. Every feature is designed to make your LLM workflows more efficient and affordable."

Contributing

  1. Fork the repository
  2. Create feature branch: git checkout -b feature/amazing-feature
  3. Add your improvements (new languages, optimizations, etc.)
  4. Test with: python -m pytest tests/
  5. Submit pull request

Support & Feedback

Questions, bug reports, and feature requests are welcome via the repository's issue tracker.

License

MIT License - see LICENSE for details.


Token-optimized • LLM-agnostic • MCP-powered • Built for efficiency
