---
allowed-tools: Bash(promptcode expert:*), Bash(promptcode preset list:*), Bash(promptcode generate:*), Bash(open -a Cursor:*), Read(/tmp/expert-*:*), Write(/tmp/expert-consultation-*.md), Task
description: Consult AI expert (O3/O3-pro) for complex problems with code context - supports ensemble mode for multiple models
---

Consult an expert about: $ARGUMENTS

## Instructions:

1. Analyze the request in $ARGUMENTS:
   - Extract the main question/problem
   - Identify if code context would help (look for keywords matching our presets)
   - Check for multiple model requests (e.g., "compare using o3 and gpt-5", "ask o3, gpt-5, and gemini")
   - Available models from our MODELS list: o3, o3-pro, o3-mini, gpt-5, gpt-5-mini, gpt-5-nano, sonnet-4, opus-4, gemini-2.5-pro, gemini-2.5-flash, grok-4
   - If 2+ models detected → use ensemble mode
   - For single model: determine preference (if user mentions "o3-pro" or "o3 pro", use o3-pro)
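
   A rough shell sketch of this detection step (illustrative only; the agent normally does this reasoning in-context, and treating $ARGUMENTS as a shell variable is an assumption):
   ```bash
   MODELS="o3 o3-pro o3-mini gpt-5 gpt-5-mini gpt-5-nano sonnet-4 opus-4 gemini-2.5-pro gemini-2.5-flash grok-4"
   REQUESTED=""
   for m in $MODELS; do
     # Naive word matching: "o3" also matches inside "o3-pro", so refine in practice
     echo "$ARGUMENTS" | grep -qiw -- "$m" && REQUESTED="$REQUESTED $m"
   done
   COUNT=$(echo "$REQUESTED" | wc -w)
   if [ "$COUNT" -ge 2 ]; then
     echo "Ensemble mode:$REQUESTED"
   else
     echo "Single model:${REQUESTED:- o3}"   # default to o3 when none named
   fi
   ```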

2. If code context needed, list available presets:
   ```bash
   promptcode preset list
   ```
   Choose relevant preset(s) based on the question.

3. Prepare consultation file for review:
   - Create a consultation file at `/tmp/expert-consultation-{timestamp}.md`
   - Structure the file with:
     ```markdown
     # Expert Consultation

     ## Question
     {user's question}

     ## Context
     {any relevant context or background}
     ```
   - If a preset would help, append the code context:
     ```bash
     echo -e "\n## Code Context\n" >> "/tmp/expert-consultation-{timestamp}.md"
     promptcode generate --preset "{preset_name}" >> "/tmp/expert-consultation-{timestamp}.md"
     ```
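
   A minimal end-to-end sketch of this step (the `date` timestamp format is an assumption, and the placeholders would be filled with real text):
   ```bash
   TS=$(date +%Y%m%d-%H%M%S)
   FILE="/tmp/expert-consultation-${TS}.md"
   printf '# Expert Consultation\n\n## Question\n%s\n\n## Context\n%s\n' \
     "{user's question}" "{any relevant context or background}" > "$FILE"
   ```
   The later sketches reuse `$TS` and `$FILE` for consistent file naming.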

4. Open consultation for user review (if Cursor is available):
   ```bash
   open -a Cursor "/tmp/expert-consultation-{timestamp}.md"
   ```
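
   `open -a` is macOS-specific and exits non-zero when the app cannot be found, so a guarded variant (a sketch, not required behavior) degrades gracefully:
   ```bash
   if ! open -a Cursor "$FILE" 2>/dev/null; then
     echo "Cursor not available; please review $FILE manually."
   fi
   ```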

5. Estimate cost and get approval:
   - Model costs (from our pricing):
     - O3: $2/$8 per million tokens (input/output)
     - O3-pro: $20/$80 per million tokens (input/output)
     - GPT-5: $1.25/$10 per million tokens
     - GPT-5-mini: $0.25/$2 per million tokens
     - Sonnet-4: $5/$20 per million tokens
     - Opus-4: $25/$100 per million tokens
     - Gemini-2.5-pro: $3/$12 per million tokens
     - Grok-4: $5/$15 per million tokens
   - Calculate based on file size (roughly: file_size_bytes / 4 = tokens; see the sketch after this step)

   **For single model:**
   - Say: "I've prepared the expert consultation (~{tokens} tokens). Model: {model}. You can edit the file to refine your question. Reply 'yes' to send to the expert (estimated cost: ${cost})."

   **For ensemble mode (multiple models):**
   - Calculate total cost across all models
   - Say: "I've prepared an ensemble consultation (~{tokens} tokens) with {models}. Total estimated cost: ${total_cost} ({model1}: ${cost1}, {model2}: ${cost2}, ...). Reply 'yes' to proceed with all models in parallel."
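
   A worked estimate in shell (a sketch; it uses the rough 4-bytes-per-token heuristic above and computes input cost only, since output length is unknown in advance):
   ```bash
   BYTES=$(wc -c < "$FILE")
   TOKENS=$((BYTES / 4))
   # Example with O3 input pricing ($2 per million tokens):
   # a 200 KB file is ~51,000 tokens, i.e. roughly $0.10 of input cost.
   INPUT_COST=$(awk -v t="$TOKENS" 'BEGIN { printf "%.4f", t * 2 / 1000000 }')
   echo "~${TOKENS} tokens, estimated O3 input cost: \$${INPUT_COST}"
   ```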

6. Execute based on mode:

   **Single Model Mode:**
   ```bash
   promptcode expert --prompt-file "/tmp/expert-consultation-{timestamp}.md" --model {model} --yes
   ```

   **Ensemble Mode (Parallel Execution):**
   - Use the Task tool to run multiple models in parallel
   - Each task runs the same consultation file with a different model
   - Store each result in a separate file: `/tmp/expert-{model}-{timestamp}.txt`
   - Example for 3 models (run these in PARALLEL using the Task tool):
     ```
     Task 1: promptcode expert --prompt-file "/tmp/expert-consultation-{timestamp}.md" --model o3 --yes > /tmp/expert-o3-{timestamp}.txt
     Task 2: promptcode expert --prompt-file "/tmp/expert-consultation-{timestamp}.md" --model gpt-5 --yes > /tmp/expert-gpt5-{timestamp}.txt
     Task 3: promptcode expert --prompt-file "/tmp/expert-consultation-{timestamp}.md" --model gemini-2.5-pro --yes > /tmp/expert-gemini-{timestamp}.txt
     ```
   - IMPORTANT: Launch all tasks at once for true parallel execution
   - Wait for all tasks to complete
   - Note: The --yes flag confirms we have user approval for the cost
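
   If the Task tool is unavailable, backgrounded shell jobs plus `wait` give equivalent parallelism (a sketch, not the prescribed mechanism; `$FILE` and `$TS` come from the step 3 sketch):
   ```bash
   promptcode expert --prompt-file "$FILE" --model o3 --yes > "/tmp/expert-o3-${TS}.txt" &
   promptcode expert --prompt-file "$FILE" --model gpt-5 --yes > "/tmp/expert-gpt5-${TS}.txt" &
   promptcode expert --prompt-file "$FILE" --model gemini-2.5-pro --yes > "/tmp/expert-gemini-${TS}.txt" &
   wait  # block until all three consultations have finished
   ```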

7. Handle the response:

   **Single Model Mode:**
   - If successful: Open response in Cursor (if available) and summarize key insights
   - If API key missing: Show appropriate setup instructions

   **Ensemble Mode (Synthesis):**
   - Read all response text files (see the collection sketch below)
   - Extract key insights from each model's response
   - Create synthesis report in `/tmp/expert-ensemble-synthesis-{timestamp}.md`:

   ```markdown
   # Ensemble Expert Consultation Results

   ## Question
   {original_question}

   ## Expert Responses

   ### {Model1} - ${actual_cost}, {response_time}s
   **Key Points:**
   - {key_point_1}
   - {key_point_2}
   - {key_point_3}

   ### {Model2} - ${actual_cost}, {response_time}s
   **Key Points:**
   - {key_point_1}
   - {key_point_2}
   - {key_point_3}

   ## Synthesis

   **Consensus Points:**
   - {point_agreed_by_multiple_models}
   - {another_consensus_point}

   **Best Comprehensive Answer:** {Model} provided the most thorough analysis, particularly strong on {specific_aspect}

   **Unique Insights:**
   - {Model1}: {unique_insight_from_model1}
   - {Model2}: {unique_insight_from_model2}

   **🏆 WINNER:** {winning_model} - {clear_reason_why_this_model_won}
   (If tie: "TIE - Both models provided equally valuable but complementary insights")

   **Performance Summary:**
   - Total Cost: ${total_actual_cost}
   - Total Time: {total_time}s
   - Best Value: {model_with_best_cost_to_quality_ratio}
   ```

   - Open synthesis in Cursor if available
   - IMPORTANT: Always declare a clear winner (or explicitly state if it's a tie)
   - Provide a brief summary of which model performed best and why it won
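
   A sketch of collecting the parallel outputs before writing the synthesis (file names follow the pattern from step 6; `$TS` is the shared timestamp):
   ```bash
   for f in /tmp/expert-*-"${TS}".txt; do
     if [ -s "$f" ]; then
       echo "=== ${f} ==="
       cat "$f"    # feed each successful response into the synthesis
     else
       echo "WARN: empty or missing response: $f" >&2   # failed model; continue with the rest
     fi
   done
   ```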

   **Error Handling:**
   - If any model fails in ensemble mode, continue with successful ones
   - Report which models succeeded/failed
   - If OPENAI_API_KEY missing:
     ```
     To use expert consultation, set your OpenAI API key:
     export OPENAI_API_KEY=sk-...
     Get your key from: https://platform.openai.com/api-keys
     ```
   - For other errors: Report exact error message

## Important:
- Default to O3 model unless O3-pro explicitly requested or needed for complex reasoning
- For ensemble mode: limit to maximum 4 models to prevent resource exhaustion
- Always show cost estimate before sending
- Keep questions clear and specific
- Include relevant code context when asking about specific functionality
- NEVER automatically add --yes without user approval
- Reasoning effort defaults to 'high' (set in CLI) - no need to specify