
Commit 86af815

feat: Add research-backed model-specific prompt optimization
- Implement flexible LLM detection (OpenAI/GPT/ChatGPT → GPT-4, etc.)
- Add philosophy-driven optimization based on official documentation
- Integrate advanced techniques: agentic control, XML structure, PTCF framework
- Add real-time quality scoring and production-ready templates
- Create comprehensive documentation suite with comparative analysis

Based on deep research of OpenAI, Anthropic, and Google prompting methodologies.
1 parent f741676 commit 86af815

9 files changed (+1705, -105 lines)
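The "flexible LLM detection" named in the commit message (OpenAI/GPT/ChatGPT → GPT-4, etc.) could work along these lines. This is a hedged sketch: the `detectTargetModel` name and the exact alias lists are assumptions, not the commit's actual code.

```javascript
// Hypothetical sketch of flexible model-name detection: map loose user
// input to one of the three supported optimization targets.
function detectTargetModel(input) {
  const name = input.toLowerCase();
  if (/(openai|gpt|chatgpt)/.test(name)) return "GPT-4";
  if (/(anthropic|claude)/.test(name)) return "Claude";
  if (/(google|gemini|bard)/.test(name)) return "Gemini";
  return null; // unknown model: fall back to generic enhancement
}
```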
Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
# Prompting Best Practices Knowledge Base

## Master Prompt Generation Rules

### Universal Structure Requirements
All generated prompts MUST include these components:
- [ROLE]: Specific persona or expertise level
- [DIRECTIVE]: Clear, actionable instruction using specific verbs
- [CONTEXT]: Essential background information and constraints
- [OUTPUT_FORMAT]: Exact structure, length, and formatting requirements
- [TONE]: Communication style and target audience
- [CONSTRAINTS]: Limitations, rules, and what to avoid
- [EXAMPLES]: Input-output demonstrations (2-3 high-quality examples)
- [CREATIVITY_LEVEL]: Factual precision vs. creative exploration guidance
- [ERROR_HANDLING]: Instructions for uncertainty and missing information
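The component checklist above lends itself to mechanical assembly. A minimal sketch, assuming a bracket-tag layout; the `buildPrompt` function is illustrative, not the tool's actual code:

```javascript
// Hypothetical sketch: assemble the universal components into one prompt.
// Section names mirror the checklist above; missing parts are skipped.
function buildPrompt(parts) {
  const order = [
    "ROLE", "DIRECTIVE", "CONTEXT", "OUTPUT_FORMAT", "TONE",
    "CONSTRAINTS", "EXAMPLES", "CREATIVITY_LEVEL", "ERROR_HANDLING",
  ];
  return order
    .filter((key) => parts[key])
    .map((key) => `[${key}]\n${parts[key]}`)
    .join("\n\n");
}

const prompt = buildPrompt({
  ROLE: "Senior technical editor",
  DIRECTIVE: "Summarize the attached report in five bullet points.",
});
```

Keeping a fixed section order makes the raw prompt deterministic, so the raw/enhanced comparison stays stable across runs.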
### Model-Specific Optimization Rules

#### OpenAI GPT-4 - "Configurable Agent" Philosophy
**Source**: https://platform.openai.com/docs/guides/prompt-engineering
- Place instructions at beginning with delimiters (###, """)
- Use positive framing - say what TO do, not what NOT to do
- Implement task decomposition for complex problems
- Add <self_reflection> tags for agentic control
- Configure reasoning_effort (low/medium/high) for computational budget
- Separate instructions from context with clear structural boundaries
- Temperature: 0.0-0.3 factual, 0.7-1.0 creative, 1.0-2.0 experimental
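The first and last rules above can be sketched concretely. This is a hedged illustration: `###` is the delimiter style the OpenAI guide suggests, while the function names and the exact temperature values chosen here are assumptions.

```javascript
// Sketch: instruction first, then a ### delimiter, then the context,
// per the "instructions at beginning with delimiters" rule above.
function formatGpt4Prompt(instruction, context) {
  return `${instruction}\n\n###\n\n${context}`;
}

// Illustrative picks from within the temperature bands listed above.
function temperatureFor(mode) {
  return { factual: 0.2, creative: 0.8, experimental: 1.5 }[mode];
}

const gpt4Prompt = formatGpt4Prompt(
  "Summarize the text below as three bullet points.",
  "Large language models are trained on large text corpora."
);
```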
#### Anthropic Claude - "Auditable Processor" Philosophy
**Source**: https://docs.anthropic.com/claude/docs/prompt-engineering
- Use semantic XML tags: <instructions>, <thinking>, <answer>, <document>
- Be explicit and direct - avoid all ambiguity
- Place long documents at TOP, query at END for attention optimization
- Use response prefilling (start with "{") to force JSON format
- Implement prompt chaining for complex multi-step workflows
- Mandatory <thinking> tags for transparent, auditable reasoning
- Apply Constitutional AI principles for safety-first approach
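A minimal sketch of the Claude layout rules above: long document at the top, semantic XML tags, query at the end. The function name and tag wording are assumptions; note that response prefilling (e.g. starting the assistant turn with `{`) is sent as the beginning of the model's reply, not inside this prompt string.

```javascript
// Sketch: document at the TOP, query at the END, semantic XML tags
// throughout, per the Claude rules above.
function formatClaudePrompt(doc, query) {
  return [
    `<document>\n${doc}\n</document>`,
    "<instructions>\nReason step by step in <thinking> tags, then give the final result in <answer> tags.\n</instructions>",
    query,
  ].join("\n\n");
}

const claudePrompt = formatClaudePrompt(
  "Q1 revenue was 4M. Q2 revenue was 6M.",
  "Which quarter had the higher revenue?"
);
```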
#### Google Gemini - "Creative Collaborator" Philosophy
**Source**: https://ai.google.dev/gemini-api/docs/prompting-strategies
- Use PTCF framework: Persona, Task, Context, Format
- Make it conversational - treat as collaborative partner
- Leverage native multimodality (text + images + audio)
- Use iterative refinement through follow-up prompts
- Describe scenes narratively, don't just list keywords
- Configure Temperature/topK/topP for creative control
- Start good, refine through conversational iteration
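The PTCF framework above can be sketched as a single conversational request. A minimal illustration, assuming a plain-sentence concatenation; the function name and sample values are invented:

```javascript
// Sketch of PTCF: Persona, Task, Context, Format, phrased as one
// conversational request per the Gemini rules above.
function buildPtcfPrompt({ persona, task, context, format }) {
  return `You are ${persona}. ${task} ${context} ${format}`;
}

const ptcfPrompt = buildPtcfPrompt({
  persona: "a travel writer with a warm, vivid style",
  task: "Draft a short introduction about Kyoto in autumn.",
  context: "The piece is for first-time visitors planning a week-long trip.",
  format: "Keep it under 120 words of plain prose.",
});
```

In practice the result would then be refined through follow-up turns, per the iterative-refinement rule.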
### Quality Assessment Criteria
- Role Definition (10 points): Clear, specific persona
- Directive Clarity (15 points): Unambiguous, actionable instruction
- Context Completeness (10 points): Sufficient background information
- Output Format (10 points): Specific structure requirements
- Examples Quality (15 points): Relevant, diverse demonstrations
- Constraints (10 points): Clear limitations and rules
- Error Handling (10 points): Uncertainty management
- Tone Specification (5 points): Appropriate communication style
- Creativity Guidance (5 points): Factual vs. creative balance
- Overall Coherence (10 points): Logical flow and completeness
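The rubric above sums to 100 points. A minimal scoring sketch, assuming all-or-nothing credit per criterion (a real scorer would grade partial quality); the names here are illustrative:

```javascript
// Sketch of the 100-point rubric above: each criterion contributes its
// full weight when present in the prompt.
const RUBRIC = {
  role: 10, directive: 15, context: 10, outputFormat: 10, examples: 15,
  constraints: 10, errorHandling: 10, tone: 5, creativity: 5, coherence: 10,
};

function qualityScore(presentCriteria) {
  return Object.entries(RUBRIC)
    .filter(([criterion]) => presentCriteria.includes(criterion))
    .reduce((sum, [, weight]) => sum + weight, 0);
}

const score = qualityScore(["role", "directive", "context", "examples"]);
// → 50
```

Under this weighting, the 80%+ production-ready threshold mentioned below requires most criteria to be present, with directive and examples carrying the most weight.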
### Safety and Ethics Requirements
- Include bias prevention measures
- Specify appropriate content boundaries
- Add harm prevention guidelines
- Request fact-checking for factual claims
- Use inclusive language and examples
- Respect privacy and confidentiality

### Enhancement Process Rules
When enhancing prompts:
1. Preserve ALL original requirements and constraints
2. Apply model-specific optimization techniques from official sources
3. Ensure production-ready quality and reliability (80%+ quality score)
4. Include appropriate safety and quality controls
5. Use model-preferred structure and formatting
6. Validate that enhancement improves clarity without changing intent

.vscode/settings.json

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
{
}

README.md

Lines changed: 94 additions & 20 deletions
@@ -1,28 +1,56 @@
 # Master Prompt Creator
 
-A web-based tool designed to help users construct high-quality, well-structured, and effective prompts for Large Language Models (LLMs) through a guided, step-by-step process.
+A web-based tool designed to help users construct high-quality, well-structured, and effective prompts for Large Language Models (LLMs) through a guided, step-by-step process with model-specific optimization.
 
 ## Overview
 
-The Master Prompt Creator transforms a user's basic idea into a detailed, optimized prompt. It employs a wizard-like interface to ask clarifying questions based on established prompt engineering principles. The final output provides both a raw, structured prompt and an AI-enhanced version, leveraging the Gemini API for automated optimization.
-
-## Key Features
-
-- **Guided Prompt Construction**: A multi-step questionnaire guides users through defining the core components of a good prompt: Role, Directive, Context, Constraints, Output Format, Tone, and Examples.
-- **AI-Powered Example Generation**: If a user needs help creating few-shot examples, the tool can call the Gemini API to generate relevant input/output pairs based on the task description.
-- **Automated Prompt Enhancement**: After the user builds their "raw" prompt, the tool makes a second API call to have an "expert prompt engineer" (simulated by another LLM) refine and optimize the prompt for clarity, conciseness, and effectiveness.
-- **Side-by-Side Comparison**: The final screen displays the user-generated raw prompt and the AI-enhanced version, allowing for easy comparison and copying.
-- **Zero Dependencies**: The entire application is a single `index.html` file with no external frameworks, making it extremely portable and easy to run.
-- **Clean, Responsive UI**: Styled with Tailwind CSS for a modern and intuitive user experience on all devices.
-
-## How It Works
-
-1. **Initial Task**: The user starts by entering the core task they want the LLM to perform.
-2. **Clarification Wizard**: The application presents a series of questions, one by one. The user's answers are saved as they progress through the steps.
-3. **Generate Examples (Optional)**: On the "Examples" step, the user can click a button to have the Gemini API generate examples for them.
-4. **Generate Master Prompt**: Once all questions are answered, the tool assembles the answers into a structured "Raw Prompt."
-5. **Enhance with AI**: Concurrently, the raw prompt is sent to the Gemini API with a meta-prompt asking it to act as an expert and improve the prompt.
-6. **Display & Use**: Both prompts are displayed. The user can copy their preferred version with a single click.
+The Master Prompt Creator transforms a user's basic idea into a detailed, optimized prompt using official best practices from OpenAI, Anthropic, and Google. It employs a wizard-like interface to ask clarifying questions based on established prompt engineering principles, then generates both a structured prompt and an AI-enhanced version optimized for the user's target LLM.
+
+## 🎯 Key Features
+
+### **Core Functionality**
+- **Guided Prompt Construction**: Multi-step questionnaire covering Role, Directive, Context, Constraints, Output Format, Tone, Examples, Creativity Level, and Error Handling
+- **Model-Specific Optimization**: Tailored enhancement based on target LLM (GPT-4, Claude, Gemini)
+- **AI-Powered Example Generation**: Gemini API generates relevant input-output pairs
+- **Automated Prompt Enhancement**: Expert-level prompt optimization using model-specific best practices
+- **Quality Assessment**: Real-time prompt scoring (0-100%) with improvement recommendations
+
+### **Advanced Features**
+- **Official Resource Integration**: Direct links to OpenAI, Anthropic, and Google prompting guides
+- **Model-Specific Techniques**: Applies platform-specific optimization (XML tags for Claude, system messages for GPT-4, etc.)
+- **Quality Indicators**: Visual scoring with green/yellow/red quality badges
+- **Side-by-Side Comparison**: Raw vs. enhanced prompts with copy functionality
+- **Zero Dependencies**: Single HTML file with no external frameworks
+- **Responsive Design**: Modern UI with Tailwind CSS
+
+## 🚀 How It Works
+
+### **Step-by-Step Process**
+1. **Initial Task**: User enters the core task they want the LLM to perform
+2. **Guided Questionnaire**: 8-step wizard covering all essential prompt components:
+   - Directive (specific action/command)
+   - Role/Persona (expert identity)
+   - Context (background information)
+   - Tone & Audience (communication style)
+   - Output Format (structure requirements)
+   - Constraints (limitations/rules)
+   - Examples (few-shot demonstrations)
+   - Creativity Level (factual vs. creative approach)
+   - Error Handling (uncertainty management)
+   - Target LLM (optimization preference)
+
+3. **AI-Powered Assistance**:
+   - Generate examples automatically using Gemini API
+   - Refine text with AI enhancement at any step
+
+4. **Quality Assessment**: Real-time prompt scoring with specific recommendations
+
+5. **Model-Specific Enhancement**:
+   - GPT-4: System messages, few-shot prompting, chain-of-thought
+   - Claude: XML structure, Constitutional AI principles
+   - Gemini: System instructions, structured output, multimodal support
+
+6. **Final Output**: Side-by-side comparison of raw and enhanced prompts with quality indicators
 
 ## Technology Stack

@@ -53,6 +81,52 @@ This application is designed to be secure and does not come with a pre-loaded AP
 
 Your API key is saved securely in your browser's `localStorage` and is **never shared or stored outside of your machine**. You only need to do this once per browser.
 
+## 📚 Documentation
+
+### **Core Documentation**
+- **[Master Prompt Generation Rules](master-prompt-generation-rules.md)**: Comprehensive rules and structure for generating high-quality prompts
+- **[Official Best Practices Summary](official-prompting-best-practices-summary.md)**: Key principles from OpenAI, Anthropic, and Google documentation
+- **[Official Resources](official-prompting-resources.md)**: Direct links to all official prompting guides and examples
+
+### **Technical Guides**
+- **[Comprehensive Guidelines](prompting-guidelines-comprehensive.md)**: Detailed prompting techniques and strategies
+- **[Steering Rules](.kiro/steering/prompting-best-practices.md)**: Active rules used by the application for prompt generation
+
+### **Model-Specific Optimization**
+The application automatically applies best practices from:
+- **OpenAI GPT-4**: https://platform.openai.com/docs/guides/prompt-engineering
+- **Anthropic Claude**: https://docs.anthropic.com/claude/docs/prompt-engineering
+- **Google Gemini**: https://ai.google.dev/gemini-api/docs/prompting-strategies
+
+### **Quality Standards**
+- **80%+ Quality Score**: Production-ready prompts with comprehensive components
+- **Model-Specific Techniques**: Automatic application of platform-optimized structures
+- **Safety Guidelines**: Built-in bias prevention and content safety measures
+- **Official Compliance**: All techniques sourced from official documentation
+
+## 🎯 Usage Examples
+
+### **For GPT-4 Optimization**
+Set Target LLM to "GPT-4" to automatically apply:
+- System message structure
+- Few-shot prompting with examples
+- Chain-of-thought reasoning
+- Temperature control guidance
+
+### **For Claude Optimization**
+Set Target LLM to "Claude" to automatically apply:
+- XML tag structure (`<thinking>`, `<context>`, `<output>`)
+- Constitutional AI principles
+- Direct, explicit instructions
+- Long context utilization
+
+### **For Gemini Optimization**
+Set Target LLM to "Gemini" to automatically apply:
+- System instruction format
+- Structured output formatting
+- Multimodal considerations
+- Safety settings integration
+
 ## License
 
 This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
