
Remove AI Writing Patterns

🎯 Purpose

Analyze and improve text by identifying and removing common AI-generated writing patterns.

Transform AI-sounding content into more natural, confident, human writing while preserving technical accuracy and meaning.

Business Value:

  • Improve readability and engagement in documentation
  • Create more authentic-sounding content
  • Reduce "AI tell" patterns in published materials
  • Enhance professional credibility

Use Cases:

  • Blog posts and technical articles
  • Product documentation
  • Marketing content
  • Technical writing

Complexity: 🟡 Intermediate

📊 Provider Variants

| Provider | File | Key Features | Best For | Cost Range |
|----------|------|--------------|----------|------------|
| Base | `prompt.md` | Universal compatibility | Any provider, fallback | Varies |
| Claude | `prompt.claude.md` | XML tags, chain-of-thought | Complex reasoning, accuracy | $1-15 per 1M tokens |
| OpenAI | `prompt.openai.md` | Function calling, JSON mode | Structured output, integration | $0.15-10 per 1M tokens |
| Gemini | `prompt.gemini.md` | 2M context, caching | High volume, batch processing | $0.038-5 per 1M tokens |
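As a rough illustration of the cost ranges above, the per-1M-token figures translate into batch costs like this (a sketch only; actual pricing varies by model tier and input/output token split):

```python
# Rough batch-cost estimate from per-1M-token prices.
# Prices and token counts here are illustrative, not quotes.
def estimate_batch_cost(calls, tokens_per_call, price_per_million):
    total_tokens = calls * tokens_per_call
    return total_tokens / 1_000_000 * price_per_million

# 10,000 rewrites at ~2,000 tokens each, across the table's extremes:
low = estimate_batch_cost(10_000, 2_000, 0.038)   # Gemini low end
high = estimate_batch_cost(10_000, 2_000, 15.0)   # Claude high end
print(f"${low:.2f} - ${high:.2f}")
```

The spread is large (well under a dollar at the Gemini low end versus hundreds of dollars at the Claude high end for the same volume), which is why volume and accuracy requirements drive provider choice.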

🚀 Quick Start

Automatic Provider Selection

```python
from ai_models import get_prompt, get_model

# Auto-select the best variant based on the model
model = get_model("gpt-4o")
prompt = get_prompt("stakeholder-communication/remove-ai-writing-patterns", model=model.id)

# Use the prompt
result = model.generate(prompt.format(**your_variables))
```

Manual Provider Selection

```python
# Explicit provider selection
claude_prompt = get_prompt("stakeholder-communication/remove-ai-writing-patterns", provider="claude")
openai_prompt = get_prompt("stakeholder-communication/remove-ai-writing-patterns", provider="openai")
gemini_prompt = get_prompt("stakeholder-communication/remove-ai-writing-patterns", provider="gemini")
```

🎯 When to Use Each Provider

Use Claude when:

  • ✅ Accuracy is critical
  • ✅ Complex reasoning required
  • ✅ Need detailed explanations
  • ✅ Can leverage prompt caching (90% savings)

Use OpenAI when:

  • ✅ Need strict JSON schema validation
  • ✅ Function calling for integration
  • ✅ Batch processing with parallel tools
  • ✅ Reproducible results required

Use Gemini when:

  • ✅ Ultra-high volume (10K+ operations/day)
  • ✅ Cost is primary concern
  • ✅ Can batch operations together
  • ✅ Need large context window (2M tokens)
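The decision criteria above can be sketched as a small selector. This helper is hypothetical (it is not part of the `ai_models` API), and the 10K operations/day threshold is taken from the Gemini criteria above:

```python
# Illustrative provider chooser based on the criteria listed above.
# Hypothetical helper -- not part of the ai_models library.
def choose_provider(accuracy_critical=False, needs_json_schema=False,
                    ops_per_day=0, cost_sensitive=False):
    if accuracy_critical:
        return "claude"   # complex reasoning, detailed explanations
    if needs_json_schema:
        return "openai"   # strict JSON mode / function calling
    if ops_per_day >= 10_000 or cost_sensitive:
        return "gemini"   # ultra-high volume, lowest cost range
    return "base"         # universal fallback

print(choose_provider(ops_per_day=50_000))
```

In practice the criteria overlap (e.g. a high-volume job that also needs strict JSON), so treat the ordering here as one reasonable priority, not a rule.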

📚 Examples

See the individual prompt files (`prompt.md`, `prompt.claude.md`, `prompt.openai.md`, `prompt.gemini.md`) for detailed usage examples.

🔗 Related Prompts

Browse more prompts in the prompts directory.

📝 Notes

  • All variants return compatible output formats
  • Choose a provider based on your specific use-case requirements
  • Cost estimates are approximate and vary with usage patterns
  • See provider-specific-prompts.md for a detailed optimization guide