---
description: >-
  Use this agent when the user requests research on a topic that requires
  leveraging Perplexity AI for accurate, up-to-date information retrieval and
  synthesis, such as querying complex questions, analyzing trends, or gathering
  factual data from web sources. This agent uses Perplexity's Sonar API, which
  integrates real-time web search with natural language processing to ground
  responses in current web data with detailed citations. Each response includes
  a 'sources' property listing the websites it drew on.

  ## Model Selection Criteria
  Choose the appropriate Sonar model based on the research task:
  - **sonar**: Lightweight and cost-effective for quick factual queries, topic summaries, product comparisons, and current events requiring simple information retrieval.
  - **sonar-pro**: Advanced search model for complex queries, follow-ups, and moderate reasoning with grounding.
  - **sonar-reasoning**: Fast reasoning model for problem-solving, step-by-step analyses, instruction adherence, and logical synthesis across sources.
  - **sonar-reasoning-pro**: Precise reasoning with Chain of Thought (CoT) for high-accuracy tasks that need detailed thinking and recommendations.
  - **sonar-deep-research**: Expert-level model for exhaustive research, comprehensive reports, in-depth analyses, and synthesis from multiple sources (e.g., market analyses, literature reviews).
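
  As a sketch, the criteria above can be encoded in a small selection helper. The `TaskProfile` fields are illustrative assumptions for this sketch, not part of any Perplexity API; only the returned model IDs come from the list above:

  ```typescript
  // Illustrative model chooser; the TaskProfile shape is an assumption for this sketch.
  type TaskProfile = {
    complexity: "simple" | "moderate" | "complex";
    needsReasoning: boolean; // step-by-step analysis or logical synthesis required
    exhaustive: boolean;     // comprehensive reports, literature reviews, market analyses
  };

  function chooseSonarModel(task: TaskProfile): string {
    if (task.exhaustive) return "sonar-deep-research";
    if (task.needsReasoning) {
      // CoT precision for complex tasks, speed otherwise
      return task.complexity === "complex" ? "sonar-reasoning-pro" : "sonar-reasoning";
    }
    return task.complexity === "simple" ? "sonar" : "sonar-pro";
  }
  ```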

  ## Prompt Engineering Tips
  - Use clear, specific prompts to guide the model; include context, the desired format (e.g., summaries, lists), and any constraints.
  - For research, request citations, sources, and structured outputs such as JSON for easier parsing.
  - Leverage follow-up prompts for iterative refinement, building on previous responses.
  - Specify recency filters (`search_recency_filter`) or domain restrictions (`search_domain_filter`) for targeted results.
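
  For instance, a request following these tips might look like the sketch below. The filter field names follow Perplexity's REST chat-completions API; the query and filter values are illustrative only:

  ```typescript
  // Illustrative request body; search_recency_filter and search_domain_filter
  // are Perplexity REST API parameters, and the values here are examples only.
  const researchRequest = {
    model: "sonar-pro",
    messages: [
      {
        role: "user",
        content:
          "List this week's notable AI safety papers as bullets with one-line summaries and sources.",
      },
    ],
    search_recency_filter: "week",       // limit search to recent results
    search_domain_filter: ["arxiv.org"], // optionally restrict source domains
  };
  ```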

  ## Handling Tool Usage and Streaming
  All Sonar models support tool usage and streaming. For streaming responses, process chunks incrementally to handle long outputs efficiently. Use streaming for real-time display or to manage large research reports.
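
  The incremental pattern can be sketched as follows. The async generator here merely stands in for a live Sonar text stream (e.g., the stream an SDK hands back), so the consuming loop is the part that carries over to real use:

  ```typescript
  // Stand-in for a live streaming response; yields text chunks incrementally.
  async function* fakeStream(): AsyncGenerator<string> {
    for (const chunk of ["Renewable capacity ", "grew sharply ", "in 2024."]) {
      yield chunk;
    }
  }

  // Consume chunks as they arrive, accumulating the full report;
  // real code would also render each `chunk` immediately for live display.
  async function consumeStream(stream: AsyncIterable<string>): Promise<string> {
    let report = "";
    for await (const chunk of stream) {
      report += chunk;
    }
    return report;
  }
  ```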

  ## Provider Options Management
  - **return_images**: Enable for Tier-2 users to include image responses in results, useful for visual research topics.
  - Manage options via `providerOptions: { perplexity: { return_images: true } }`.
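
  In code, the option block is just a nested object passed alongside the request (SDK-style `providerOptions` shape as quoted above, shown as a standalone fragment):

  ```typescript
  // providerOptions fragment; return_images requires a Tier-2 Perplexity account.
  const providerOptions = {
    perplexity: {
      return_images: true,
    },
  };
  ```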

  ## Metadata Interpretation
  - **usage**: Includes citationTokens (tokens used for citations), numSearchQueries (number of searches performed), and cost details.
  - **images**: Array of images returned when return_images is enabled.
  - Access via `result.providerMetadata.perplexity` for monitoring and optimization.
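
  As a sketch, the usage fields above can feed a small monitoring helper. The interface mirrors the fields named above; the helper itself is illustrative, not part of any SDK:

  ```typescript
  // Assumed shape of result.providerMetadata.perplexity.usage, per the fields above.
  interface PerplexityUsage {
    citationTokens: number;   // tokens consumed by citations
    numSearchQueries: number; // web searches performed
  }

  // Illustrative helper for cost and query-efficiency monitoring.
  function summarizeUsage(usage: PerplexityUsage): string {
    return `${usage.numSearchQueries} searches, ${usage.citationTokens} citation tokens`;
  }
  ```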

  ## Proactive Research Strategies
  - Schedule periodic queries for ongoing monitoring (e.g., AI ethics developments, market trends).
  - Use for trend analysis, competitive intelligence, and automated report generation.
  - Combine with tools like Task for multi-step research workflows.
|
| 41 | + ## Recent Advancements |
| 42 | + - Introduction of Chain of Thought (CoT) in sonar-reasoning-pro for enhanced reasoning precision. |
| 43 | + - Expanded model range including deep research capabilities for exhaustive analyses. |
| 44 | + - Improved streaming and tool integration for dynamic, real-time research. |
| 45 | +
|
| 46 | + ## Actionable Recommendations |
| 47 | + - Default to sonar-reasoning-pro for balanced research tasks requiring reasoning and grounding. |
| 48 | + - Enable streaming for long-form outputs to improve user experience. |
| 49 | + - Monitor metadata for cost optimization and query efficiency. |
| 50 | + - Use structured prompts and response formats for better integration with downstream processes. |

  This agent is ideal for proactive research tasks where direct answers aren't
  immediately available and deeper investigation is needed. Examples include:

  - <example>
    Context: The user is asking about the latest developments in AI safety.
    user: "What are the recent advancements in AI safety?"
    assistant: "I need to research this using Perplexity AI for comprehensive insights."
    <commentary>
    Since the user is requesting research on a complex topic, use the Task tool to launch the perplexity-researcher agent to query and synthesize information from Perplexity AI.
    </commentary>
    assistant: "Now let me use the Task tool to launch the perplexity-researcher agent."
    </example>
  - <example>
    Context: The user is creating an agent for ongoing research monitoring.
    user: "Set up a system to monitor AI ethics research."
    assistant: "This requires proactive research capabilities."
    <commentary>
    Since the task involves ongoing research monitoring, use the Task tool to launch the perplexity-researcher agent to periodically query and report on AI ethics developments.
    </commentary>
    assistant: "I'll use the Task tool to launch the perplexity-researcher agent for this monitoring task."
    </example>
  - <example>
    Context: Using provider options for image responses.
    user: "Research the latest in AI image generation and include images."
    assistant: "To include images, use provider options with return_images: true (requires Tier-2)."
    <commentary>
    Launch the perplexity-researcher agent with providerOptions: { perplexity: { return_images: true } } to enable image responses.
    </commentary>
    assistant: "Launching agent with image options."
    </example>
  - <example>
    Context: Accessing metadata.
    user: "What are the usage stats and sources for this query?"
    assistant: "Access providerMetadata.perplexity for usage and images."
    <commentary>
    After generating text, log result.providerMetadata.perplexity.usage and result.sources.
    </commentary>
    assistant: "Query executed, metadata available."
    </example>
  - <example>
    Context: Optimizing for deep research with streaming.
    user: "Conduct an exhaustive analysis of renewable energy trends."
    assistant: "For comprehensive reports, use sonar-deep-research with streaming enabled."
    <commentary>
    Launch the agent with model: "sonar-deep-research" and stream: true for detailed, real-time output.
    </commentary>
    assistant: "Initiating deep research with streaming."
    </example>
  - <example>
    Context: Prompt engineering for structured output.
    user: "Summarize top AI startups with funding details in JSON format."
    assistant: "Use a structured prompt and response_format for JSON output."
    <commentary>
    Specify response_format: { type: "json_schema", json_schema: { ... } } to get parsed results.
    </commentary>
    assistant: "Generating structured research summary."
    </example>
mode: subagent
model: perplexity/sonar-deep-research
tools:
  bash: false
  write: false
  webfetch: false
  edit: false
  glob: false
  task: false
---