Analyze and improve text by identifying and removing common AI-generated writing patterns.
Transform AI-sounding content into more natural, confident, human writing while preserving technical accuracy and meaning.
Business Value:
- Improve readability and engagement in documentation
- Create more authentic-sounding content
- Reduce "AI tell" patterns in published materials
- Enhance professional credibility
Use Cases:
- Blog posts and technical articles
- Product documentation
- Marketing content
- Technical writing
Complexity: 🟡 Intermediate
| Provider | File | Key Features | Best For | Cost Range |
|---|---|---|---|---|
| Base | prompt.md | Universal compatibility | Any provider, fallback | Varies |
| Claude | prompt.claude.md | XML tags, chain-of-thought | Complex reasoning, accuracy | $1-15 per 1M tokens |
| OpenAI | prompt.openai.md | Function calling, JSON mode | Structured output, integration | $0.15-10 per 1M tokens |
| Gemini | prompt.gemini.md | 2M context, caching | High volume, batch processing | $0.038-5 per 1M tokens |
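The price ranges in the table can be turned into a rough per-call cost estimate. The sketch below treats the low end of each range as the input-token price and the high end as the output-token price; that split is an assumption for illustration, since the table only gives a single range per provider, and real pricing varies by model.

```python
# Illustrative per-1M-token prices taken from the table above.
# ASSUMPTION: low end of each range = input price, high end = output price.
PRICE_PER_M_TOKENS = {
    "claude": (1.00, 15.00),   # (input $/1M, output $/1M)
    "openai": (0.15, 10.00),
    "gemini": (0.038, 5.00),
}

def estimate_cost(provider, input_tokens, output_tokens):
    """Return an approximate dollar cost for a single call."""
    in_price, out_price = PRICE_PER_M_TOKENS[provider]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# e.g. rewriting a 2,000-token draft with a 2,000-token response on Gemini
print(round(estimate_cost("gemini", 2000, 2000), 4))
```

Estimates like this are only useful for comparing providers at a given volume; actual bills depend on the specific model and usage pattern.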
```python
from ai_models import get_prompt, get_model

# Auto-select the best variant based on the model
model = get_model("gpt-4o")
prompt = get_prompt("stakeholder-communication/remove-ai-writing-patterns", model=model.id)

# Use the prompt
result = model.generate(prompt.format(**your_variables))

# Explicit provider selection
claude_prompt = get_prompt("stakeholder-communication/remove-ai-writing-patterns", provider="claude")
openai_prompt = get_prompt("stakeholder-communication/remove-ai-writing-patterns", provider="openai")
gemini_prompt = get_prompt("stakeholder-communication/remove-ai-writing-patterns", provider="gemini")
```

When to use Claude:
- ✅ Accuracy is critical
- ✅ Complex reasoning required
- ✅ Need detailed explanations
- ✅ Can leverage prompt caching (90% savings)
When to use OpenAI:
- ✅ Need strict JSON schema validation
- ✅ Function calling for integration
- ✅ Batch processing with parallel tools
- ✅ Reproducible results required
When to use Gemini:
- ✅ Ultra-high volume (10K+ operations/day)
- ✅ Cost is primary concern
- ✅ Can batch operations together
- ✅ Need large context window (2M tokens)
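The checklists above can be expressed as a simple decision rule. The helper below is a hypothetical sketch of that logic; the parameter names are assumptions for illustration and are not part of the `ai_models` API.

```python
# Hypothetical helper mapping the selection criteria above to a provider.
# Parameter names are illustrative assumptions, not a real API.
def pick_provider(accuracy_critical=False, needs_json_schema=False,
                  high_volume=False, cost_sensitive=False):
    if accuracy_critical:
        return "claude"   # complex reasoning, detailed explanations
    if needs_json_schema:
        return "openai"   # strict JSON mode, function calling
    if high_volume or cost_sensitive:
        return "gemini"   # ultra-low cost, 2M-token context
    return "base"         # universal fallback

print(pick_provider(high_volume=True))  # gemini
```

In practice the criteria overlap (e.g. a high-volume workload that also needs strict JSON), so treat the ordering here as one reasonable priority, not a fixed rule.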
See the individual prompt files for detailed usage examples:
- Base Prompt - Universal examples
- Claude Examples - XML format, caching
- OpenAI Examples - Function calling, batch processing
- Gemini Examples - Context window, ultra-low cost
Browse more prompts in the prompts directory.
Notes:
- All variants return compatible output formats
- Choose a provider based on your specific use-case requirements
- Cost estimates are approximate and vary with usage patterns
- See provider-specific-prompts.md for a detailed optimization guide