Overview
Implement model-specific prompting guidance for the expert and generate commands to optimize prompt structure based on the AI model being used. This will help users craft better prompts that leverage each model's unique capabilities.
Reference Documentation
- GPT-5 Prompting Guide: https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide
- Additional guides for Claude, Gemini, and Grok models (to be researched)
Requirements
1. Model-Specific Prompt Templates
Create optimized prompt templates for each supported model:
GPT-5 Specific Guidelines
Based on the OpenAI cookbook guide (a sketch of how these guidelines could shape a prompt follows the list below):
- Reasoning Control: Provide guidance on the reasoning_effort parameter
- Verbosity Management: Use the verbosity API parameter effectively
- Structured Instructions: Use XML-like sections for clarity
- Tool Call Budgets: Set explicit limits for agent workflows
- Contradiction Avoidance: Scan for and flag contradictory instructions
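To make this concrete, here is a minimal sketch of how the guidelines could translate into a prompt. The buildGpt5Prompt helper and the section names are assumptions for illustration, not part of the cookbook guide or the final design:

```typescript
// Illustrative sketch: XML-like sections plus an explicit tool call budget,
// following the GPT-5 guidelines above. All names here are hypothetical.
interface Gpt5PromptInput {
  task: string;
  context: string;
  maxToolCalls: number;
}

function buildGpt5Prompt({ task, context, maxToolCalls }: Gpt5PromptInput): string {
  return [
    `<task>\n${task}\n</task>`,
    `<context>\n${context}\n</context>`,
    // An explicit budget keeps agentic workflows from over-calling tools.
    `<constraints>\nUse at most ${maxToolCalls} tool calls. If the budget is reached, stop and report.\n</constraints>`,
  ].join('\n\n');
}
```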
Other Models (To Research)
- Claude (Opus/Sonnet): Context window optimization, constitutional AI principles
- Gemini: Multi-modal capabilities, context caching
- Grok: Real-time information access, reasoning chains
2. Implementation Areas
A. Expert Command Enhancement
```typescript
// packages/cli/src/commands/expert.ts
interface ModelPromptingStrategy {
  model: string;
  // How prompts should be laid out for this model.
  structureGuidelines: {
    instructionFormat: 'xml' | 'markdown' | 'plain';
    sectionOrder: string[];
    requiredSections: string[];
  };
  // Patterns to prefer or avoid when writing instructions for this model.
  optimizationRules: {
    maxInstructionLength?: number;
    avoidPatterns?: string[];
    preferredPatterns?: string[];
  };
  // Model-specific API parameters, when the provider supports them.
  apiParameters?: {
    reasoning_effort?: 'minimal' | 'medium' | 'high';
    verbosity?: 'concise' | 'normal' | 'verbose';
  };
}
```
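For illustration, a GPT-5 entry using this interface might look like the following; the concrete values are placeholders rather than proposed defaults:

```typescript
// Hypothetical GPT-5 strategy; all values are illustrative placeholders.
const gpt5Strategy: ModelPromptingStrategy = {
  model: 'gpt-5',
  structureGuidelines: {
    instructionFormat: 'xml',
    sectionOrder: ['task', 'context', 'constraints', 'output'],
    requiredSections: ['task', 'constraints'],
  },
  optimizationRules: {
    avoidPatterns: ['contradictory instructions', 'duplicated constraints'],
    preferredPatterns: ['explicit tool call budgets'],
  },
  apiParameters: {
    reasoning_effort: 'medium',
    verbosity: 'concise',
  },
};
```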
B. Generate Command Templates
Add model-aware prompt structuring:
```typescript
// packages/cli/src/commands/generate.ts
function structurePromptForModel(
  model: string,
  files: string[],
  instructions: string
): string {
  const strategy = getModelStrategy(model);
  // Assumes the strategy also exposes a formatPrompt() helper (not shown in the interface above).
  return strategy.formatPrompt(files, instructions);
}
```
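getModelStrategy is referenced but not defined in this issue. A minimal sketch, assuming strategies are registered in a map keyed by model-name prefix (gpt5Strategy is the illustrative object from section A above):

```typescript
// Sketch only: look up a strategy by model-name prefix; the registry is hypothetical.
const strategyRegistry = new Map<string, ModelPromptingStrategy>([
  ['gpt-5', gpt5Strategy],
  // Claude, Gemini, and Grok strategies would be registered here once researched.
]);

function getModelStrategy(model: string): ModelPromptingStrategy {
  for (const [prefix, strategy] of strategyRegistry) {
    if (model.startsWith(prefix)) return strategy; // e.g. "gpt-5-mini" matches "gpt-5"
  }
  throw new Error(`No prompting strategy registered for model "${model}"`);
}
```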
C. CLAUDE.md Updates
Add a new section for model-specific prompting:
```markdown
## Model-Specific Prompting Guidelines

### GPT-5
- Use structured XML sections for complex tasks
- Set reasoning_effort based on task complexity
- Avoid contradictory instructions
- Leverage verbosity parameter for output control

### Claude
- [To be added based on Anthropic documentation]

### Gemini
- [To be added based on Google documentation]
```
3. User-Facing Features
A. Prompt Validation
```bash
# Validate prompt for contradictions and ambiguity
promptcode expert "task" --validate-prompt --model gpt-5

# Output:
# ⚠️ Potential issues found:
# - Contradictory instructions on lines 3 and 7
# - Ambiguous format specification
# - Suggested improvements: ...
```
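A rough sketch of what the validation result could look like, with a naive rule-based contradiction check. The validatePrompt function, the result shape, and the rule list are assumptions for illustration, not the final design:

```typescript
// Sketch only: possible shape of the --validate-prompt result and a naive check.
interface PromptIssue {
  severity: 'warning' | 'error';
  message: string;
}

interface PromptValidationResult {
  valid: boolean;
  issues: PromptIssue[];
}

// Pairs of phrases that usually signal contradictory instructions.
const CONTRADICTION_RULES: Array<[RegExp, RegExp, string]> = [
  [/be concise/i, /explain in detail/i, 'asks for both concise and detailed output'],
  [/do not use tools/i, /use the .* tool/i, 'both forbids and requires tool use'],
];

function validatePrompt(prompt: string): PromptValidationResult {
  const issues: PromptIssue[] = [];
  for (const [a, b, message] of CONTRADICTION_RULES) {
    if (a.test(prompt) && b.test(prompt)) {
      issues.push({ severity: 'warning', message: `Contradiction: ${message}` });
    }
  }
  return { valid: issues.length === 0, issues };
}
```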
B. Model-Specific Help
```bash
# Show model-specific prompting tips
promptcode expert --model gpt-5 --prompting-guide

# Output: GPT-5-specific guidelines
```
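How the guide text is sourced is not specified here. One option, assuming per-model markdown files under the .promptcode/prompting-guides/ directory proposed in the implementation steps below (the <model>.md naming is an assumption), is a simple file lookup:

```typescript
// Sketch only: resolve --prompting-guide output from a per-model markdown file.
import { readFile } from 'node:fs/promises';
import { join } from 'node:path';

async function loadPromptingGuide(projectRoot: string, model: string): Promise<string> {
  const guidePath = join(projectRoot, '.promptcode', 'prompting-guides', `${model}.md`);
  try {
    return await readFile(guidePath, 'utf8');
  } catch {
    // Fall back to a short generic message when no guide file exists for the model.
    return `No prompting guide found for ${model}.`;
  }
}
```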
C. Auto-Optimization
```bash
# Auto-optimize prompt for target model
promptcode generate -f "src/**/*.ts" \
  --instructions "Review code" \
  --optimize-for gpt-5
```
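As a rough illustration of what --optimize-for could do with the optimizationRules defined in section 2A (the helper name and behavior are assumptions):

```typescript
// Sketch only: apply a strategy's optimizationRules to raw instructions.
function optimizeInstructions(
  instructions: string,
  strategy: ModelPromptingStrategy
): { text: string; warnings: string[] } {
  const warnings: string[] = [];
  const { maxInstructionLength, avoidPatterns = [] } = strategy.optimizationRules;

  // Warn about patterns the target model handles poorly instead of silently rewriting them.
  for (const pattern of avoidPatterns) {
    if (instructions.toLowerCase().includes(pattern.toLowerCase())) {
      warnings.push(`Instructions contain a discouraged pattern: "${pattern}"`);
    }
  }

  // Trim overly long instructions if the strategy sets a cap.
  const text =
    maxInstructionLength && instructions.length > maxInstructionLength
      ? instructions.slice(0, maxInstructionLength)
      : instructions;

  return { text, warnings };
}
```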
4. Implementation Steps
- Research Phase
  - Compile prompting guides for all supported models
  - Document key differences and optimization strategies
- Core Implementation
  - Create `ModelPromptingStrategy` interface
  - Implement strategies for each model
  - Add prompt validation logic
  - Create
- CLI Integration
  - Add `--validate-prompt` flag
  - Add `--prompting-guide` flag
  - Implement `--optimize-for` flag
  - Add
- Documentation
  - Update CLAUDE.md with model-specific sections
  - Add examples to README
  - Create `.promptcode/prompting-guides/` directory with detailed guides
- Testing
  - Unit tests for prompt validation (see the test sketch below)
  - Integration tests for each model strategy
  - Performance benchmarks for optimized vs non-optimized prompts
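A hypothetical unit test for the validatePrompt sketch from section 3A, assuming Vitest; the repo's actual test runner, module layout, and import path may differ:

```typescript
import { describe, it, expect } from 'vitest';
import { validatePrompt } from '../src/prompt-validation'; // illustrative path, not an existing module

describe('validatePrompt', () => {
  it('flags contradictory instructions', () => {
    const result = validatePrompt('Be concise. Also, explain in detail every step.');
    expect(result.valid).toBe(false);
    expect(result.issues[0].message).toMatch(/Contradiction/);
  });

  it('accepts a consistent prompt', () => {
    const result = validatePrompt('Review the attached TypeScript files for bugs.');
    expect(result.valid).toBe(true);
  });
});
```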
5. Success Metrics
- Improved token efficiency (10-20% reduction in prompt tokens)
- Better model response quality (measured via user feedback)
- Reduced prompt errors and contradictions
- Faster prompt generation with auto-optimization
6. Future Enhancements
- Prompt caching based on model + task combination
- A/B testing framework for prompt variations
- Integration with OpenAI's Prompt Optimizer API
- Custom prompt strategies via `.promptcode/strategies/`
Priority
Medium - nice-to-have enhancement that improves the user experience
Estimated Effort
- Research: 2-3 hours
- Implementation: 8-10 hours
- Testing: 3-4 hours
- Documentation: 2-3 hours
Dependencies
- Access to model documentation (OpenAI, Anthropic, Google, xAI)
- Updates to the CLI and potentially the VS Code extension
- May require API version updates for new parameters (verbosity, reasoning_effort)