
Master Prompt Creator

A web-based tool designed to help users construct high-quality, well-structured, and effective prompts for Large Language Models (LLMs) through a guided, step-by-step process with model-specific optimization.

Overview

The Master Prompt Creator transforms a user's basic idea into a detailed, optimized prompt using official best practices from OpenAI, Anthropic, and Google. It employs a wizard-like interface to ask clarifying questions based on established prompt engineering principles, then generates both a structured prompt and an AI-enhanced version optimized for the user's target LLM.

🎯 Key Features

Core Functionality

  • Guided Prompt Construction: Multi-step questionnaire covering Role, Directive, Context, Constraints, Output Format, Tone, Examples, Creativity Level, and Error Handling
  • Model-Specific Optimization: Tailored enhancement based on target LLM (GPT-4, Claude, Gemini)
  • AI-Powered Example Generation: Gemini API generates relevant input-output pairs
  • Automated Prompt Enhancement: Expert-level prompt optimization using model-specific best practices
  • Quality Assessment: Real-time prompt scoring (0-100%) with improvement recommendations
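The real-time quality score can be thought of as a weighted checklist over the prompt components. A minimal illustrative sketch — the weights and field names here are assumptions, not the app's actual values:

```javascript
// Hypothetical scoring sketch: each filled-in prompt component contributes
// a weighted share of the 0-100% quality score. Weights are illustrative.
const WEIGHTS = {
  directive: 25, role: 10, context: 15, tone: 5,
  outputFormat: 15, constraints: 10, examples: 15, errorHandling: 5,
};

function scorePrompt(components) {
  return Object.entries(WEIGHTS).reduce(
    (score, [key, weight]) =>
      score + (components[key] && components[key].trim() ? weight : 0),
    0
  );
}
```

Under these weights, a prompt with only a directive and context would score 40%, well below a production-ready threshold.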

Advanced Features

  • Official Resource Integration: Direct links to OpenAI, Anthropic, and Google prompting guides
  • Model-Specific Techniques: Applies platform-specific optimization (XML tags for Claude, system messages for GPT-4, etc.)
  • Quality Indicators: Visual scoring with green/yellow/red quality badges
  • Side-by-Side Comparison: Raw vs. enhanced prompts with copy functionality
  • Zero Build Dependencies: Single HTML file with no frameworks to install (Tailwind CSS is loaded via CDN)
  • Responsive Design: Modern UI with Tailwind CSS

🚀 How It Works

Step-by-Step Process

  1. Initial Task: User enters the core task they want the LLM to perform

  2. Guided Questionnaire: Multi-step wizard covering all essential prompt components:

    • Directive (specific action/command)
    • Role/Persona (expert identity)
    • Context (background information)
    • Tone & Audience (communication style)
    • Output Format (structure requirements)
    • Constraints (limitations/rules)
    • Examples (few-shot demonstrations)
    • Creativity Level (factual vs. creative approach)
    • Error Handling (uncertainty management)
    • Target LLM (optimization preference)
  3. AI-Powered Assistance:

    • Generate examples automatically using Gemini API
    • Refine text with AI enhancement at any step
  4. Quality Assessment: Real-time prompt scoring with specific recommendations

  5. Model-Specific Enhancement:

    • GPT-4: System messages, few-shot prompting, chain-of-thought
    • Claude: XML structure, Constitutional AI principles
    • Gemini: System instructions, structured output, multimodal support
  6. Final Output: Side-by-side comparison of raw and enhanced prompts with quality indicators
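The raw prompt produced by the wizard is essentially the answers concatenated into labeled sections. A minimal sketch of that assembly step — section labels and field names are illustrative, not the app's exact ones:

```javascript
// Assemble a raw structured prompt from the wizard's answers.
// Section labels and field names are illustrative.
function buildRawPrompt(answers) {
  const sections = [
    ["Role", answers.role],
    ["Directive", answers.directive],
    ["Context", answers.context],
    ["Tone & Audience", answers.toneAudience],
    ["Output Format", answers.outputFormat],
    ["Constraints", answers.constraints],
    ["Examples", answers.examples],
    ["Error Handling", answers.errorHandling],
  ];
  return sections
    .filter(([, value]) => value && value.trim())
    .map(([label, value]) => `## ${label}\n${value.trim()}`)
    .join("\n\n");
}
```

Empty sections are skipped, so a partially completed wizard still yields a clean prompt.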

Technology Stack

  • HTML: For the application structure.
  • Tailwind CSS: For all styling (included via CDN).
  • Vanilla JavaScript: For all application logic, state management, and interactivity.
  • Google Gemini API: For AI-powered example generation and prompt enhancement.
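The Gemini calls can be made with a plain fetch. A sketch of the request shape — the model name gemini-1.5-flash and the v1beta endpoint are assumptions; check the current Gemini REST documentation:

```javascript
// Build the URL and request options for a Gemini generateContent call.
// Endpoint shape follows the public REST API; the model name is an assumption.
function buildGeminiRequest(apiKey, promptText) {
  return {
    url:
      "https://generativelanguage.googleapis.com/v1beta/models/" +
      "gemini-1.5-flash:generateContent?key=" + encodeURIComponent(apiKey),
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contents: [{ parts: [{ text: promptText }] }] }),
    },
  };
}

async function callGemini(apiKey, promptText) {
  const { url, options } = buildGeminiRequest(apiKey, promptText);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Gemini API error: ${res.status}`);
  const data = await res.json();
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}
```

Separating request construction from the network call keeps the body shape testable without hitting the API.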

Getting Started

Prerequisites

  • A modern web browser (e.g., Chrome, Firefox, Safari, Edge).
  • A Google Gemini API key to enable the AI-powered features.

Running the Application

  1. Clone or download the index.html file.
  2. Open the index.html file directly in your web browser. No web server is required.

Configuration: Providing Your API Key

This application does not ship with a pre-loaded API key. To use the AI-powered features ("Generate Examples" and "AI-Enhanced Master Prompt"), you must provide your own Google Gemini API key.

  1. Obtain a key: If you don't have one, get a Gemini API key from Google AI Studio.
  2. Enter the key in the app: The first time you click a feature that requires the API, a pop-up window will appear asking for your key.
  3. Save the key: Paste your key into the input field and click "Save Key".

Your API key is stored only in your browser's localStorage; it is sent to the Gemini API when you use the AI features, but is never transmitted to or stored on any other server. You only need to enter it once per browser.
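The once-per-browser key flow can be sketched as below; the storage key name and the injected storage/prompt functions are illustrative (in the browser the app would pass localStorage and window.prompt):

```javascript
// Illustrative key-handling flow: read the cached key, or ask once and
// cache it. "gemini_api_key" is an assumed storage key name.
const STORAGE_KEY = "gemini_api_key";

function getApiKey(storage, promptFn) {
  let key = storage.getItem(STORAGE_KEY);
  if (!key) {
    const entered = promptFn("Enter your Google Gemini API key:");
    key = entered ? entered.trim() : null;
    if (key) storage.setItem(STORAGE_KEY, key);
  }
  return key;
}

// In the browser: getApiKey(localStorage, window.prompt)
```

Injecting the storage and prompt functions keeps the flow testable outside a browser.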

📚 Documentation

Model-Specific Optimization

The application automatically applies prompt-engineering best practices from the official OpenAI, Anthropic, and Google prompting guides.

Quality Standards

  • 80%+ Quality Score: Production-ready prompts with comprehensive components
  • Model-Specific Techniques: Automatic application of platform-optimized structures
  • Safety Guidelines: Built-in bias prevention and content safety measures
  • Official Compliance: All techniques sourced from official documentation

🎯 Usage Examples

For GPT-4 Optimization

Set Target LLM to "GPT-4" to automatically apply:

  • System message structure
  • Few-shot prompting with examples
  • Chain-of-thought reasoning
  • Temperature control guidance
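In practice, a system-message structure with few-shot examples means turning the enhanced prompt into a messages array; an illustrative sketch:

```javascript
// Illustrative: wrap the role/context as a system message, interleave
// few-shot examples as user/assistant turns, then append the real request.
function toGpt4Messages(systemText, examples, userText) {
  return [
    { role: "system", content: systemText },
    ...examples.flatMap(({ input, output }) => [
      { role: "user", content: input },
      { role: "assistant", content: output },
    ]),
    { role: "user", content: userText },
  ];
}
```

With one example pair this yields four messages: system, user (example input), assistant (example output), then the real user request.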

For Claude Optimization

Set Target LLM to "Claude" to automatically apply:

  • XML tag structure (`<thinking>`, `<context>`, `<output>`)
  • Constitutional AI principles
  • Direct, explicit instructions
  • Long context utilization
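An illustrative Claude-style transformation wraps the prompt sections in the XML tags listed above and asks for reasoning before the final answer:

```javascript
// Illustrative Claude-style wrapper: put each section in an XML tag and
// ask the model to reason inside <thinking> before the final <output>.
function toClaudePrompt(context, task) {
  return [
    "<context>", context, "</context>",
    "<task>", task, "</task>",
    "Think through the task step by step inside <thinking> tags,",
    "then give your final answer inside <output> tags.",
  ].join("\n");
}
```

Tagged sections also make it easy to extract only the `<output>` portion from the response.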

For Gemini Optimization

Set Target LLM to "Gemini" to automatically apply:

  • System instruction format
  • Structured output formatting
  • Multimodal considerations
  • Safety settings integration
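For Gemini, the same enhancement moves the role/context into the request's system-instruction field. A sketch of the request body — field names follow the public Gemini REST API shape, but verify against the current docs:

```javascript
// Illustrative Gemini request body with a system instruction and a
// JSON-output hint. Field names follow the public REST API shape.
function toGeminiRequest(systemText, userText) {
  return {
    systemInstruction: { parts: [{ text: systemText }] },
    contents: [{ role: "user", parts: [{ text: userText }] }],
    generationConfig: { responseMimeType: "application/json" },
  };
}
```

Setting responseMimeType to application/json is one way the structured-output formatting above can be enforced at the API level.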

License

This project is licensed under the MIT License - see the LICENSE file for details.