diff --git a/fern/assistants.mdx b/fern/assistants.mdx index 115942388..44c1aa57b 100644 --- a/fern/assistants.mdx +++ b/fern/assistants.mdx @@ -1,11 +1,12 @@ --- -title: Introduction +title: Introduction to Assistants subtitle: The core building-block of voice agents on Vapi. slug: assistants --- -**Assistant** is a fancy word for an AI configuration that can be used across phone calls and Vapi clients. Your voice assistant can augment your customer support and -experience for call centers, business websites, mobile apps, and much more. +[**Assistant**](/api-reference/assistants/create) is a fancy word for an AI configuration that can be used across phone calls and Vapi clients. Your voice assistant can augment your customer support and experience for call centers, business websites, mobile apps, and much more. + + ## Core Components @@ -18,7 +19,7 @@ There are three core components that make up an assistant: These components can be configured, mixed, and matched for your specific use case. - View all configurable properties in the [API Reference](/api-reference/assistants/create-assistant) + View all configurable properties in the [API Reference](/api-reference/assistants/create-assistant). ## Key Features diff --git a/fern/blocks.mdx b/fern/blocks.mdx index 1f00745ed..f19bf5860 100644 --- a/fern/blocks.mdx +++ b/fern/blocks.mdx @@ -1,12 +1,14 @@ --- -title: Introduction +title: Introduction to Blocks subtitle: Breaking down bot conversations into smaller, more manageable prompts slug: blocks --- + + **Blocks** is being deprecated in favor of [Workflows](/workflows). We recommend using Workflows for all new development as it provides a more powerful and flexible way to structure conversational AI. We're working on migration tools to help transition existing Blocks implementations to Workflows. + - -We're currently running a beta for **Blocks**, an upcoming feature from [Vapi.ai](http://vapi.ai/) aimed at improving bot conversations. The problem we've noticed is that single LLM prompts are prone to hallucinations, unreliable tool calls, and can’t handle many-step complex instructions. +We're currently running a beta for [**Blocks**](/api-reference/blocks/create), an upcoming feature from [Vapi.ai](http://vapi.ai/) aimed at improving bot conversations. The problem we've noticed is that single LLM prompts are prone to hallucinations, unreliable tool calls, and can’t handle many-step complex instructions. **By breaking the conversation into smaller, more manageable prompts**, we can guarantee the bot will do this, then that, or if this happens, then that happens. It’s like having a checklist for conversations — less room for error, more room for getting things right. diff --git a/fern/blocks/block-types.mdx b/fern/blocks/block-types.mdx index f3563afdb..5a948ac4c 100644 --- a/fern/blocks/block-types.mdx +++ b/fern/blocks/block-types.mdx @@ -4,6 +4,9 @@ subtitle: 'Building the Logic and Actions for Each Step in Your Conversation ' slug: blocks/block-types --- + + **Blocks** is being deprecated in favor of [Workflows](/workflows). We recommend using Workflows for all new development as it provides a more powerful and flexible way to structure conversational AI. We're working on migration tools to help transition existing Blocks implementations to Workflows. + [**Blocks**](https://api.vapi.ai/api#/Blocks/BlockController_create) are the functional units within a Step, defining what action happens at each stage of a conversation. 
Each Step can contain only one Block, and there are three main types of Blocks, each designed to handle different aspects of conversation flow. diff --git a/fern/blocks/steps.mdx b/fern/blocks/steps.mdx index b5391bf3e..918893c2f 100644 --- a/fern/blocks/steps.mdx +++ b/fern/blocks/steps.mdx @@ -4,13 +4,12 @@ subtitle: Building and Controlling Conversation Flow for Your Assistants slug: blocks/steps --- + + **Blocks** is being deprecated in favor of [Workflows](/workflows). We recommend using Workflows for all new development as it provides a more powerful and flexible way to structure conversational AI. We're working on migration tools to help transition existing Blocks implementations to Workflows. + [**Steps**](https://api.vapi.ai/api#:~:text=HandoffStep) are the core building blocks that dictate how conversations progress in a bot interaction. Each Step represents a distinct point in the conversation where the bot performs an action, gathers information, or decides where to go next. Think of Steps as checkpoints in a conversation that guide the flow, manage user inputs, and determine outcomes. - - Blocks is currently in beta. We're excited to have you try this new feature and welcome your [feedback](https://discord.com/invite/pUFNcf2WmH) as we continue to refine and improve the experience. - - #### Features - **Output:** The data or response expected from the step, as outlined in the block's `outputSchema`. diff --git a/fern/docs.yml b/fern/docs.yml index 9da1ef49e..d52ebe7e1 100644 --- a/fern/docs.yml +++ b/fern/docs.yml @@ -91,15 +91,8 @@ navigation: layout: - section: Getting Started contents: - - page: Introduction - path: introduction.mdx - - section: How Vapi Works - contents: - - page: Core Models - path: quickstart.mdx - - page: Orchestration Models - path: how-vapi-works.mdx - section: Quickstart + path: introduction.mdx contents: - page: Dashboard Quickstart path: quickstart/dashboard.mdx @@ -109,6 +102,12 @@ navigation: path: quickstart/outbound.mdx - page: Web Call Quickstart path: quickstart/web.mdx + - section: How Vapi Works + contents: + - page: Core Models + path: quickstart.mdx + - page: Orchestration Models + path: how-vapi-works.mdx - section: Use Cases contents: - page: Outbound Sales @@ -122,9 +121,8 @@ navigation: - section: Build contents: - section: Assistants + path: assistants.mdx contents: - - page: Introduction - path: assistants.mdx - page: Voice AI Prompting Guide path: prompting-guide.mdx - page: Persistent Assistants @@ -137,18 +135,37 @@ navigation: path: assistants/background-messages.mdx - page: Voice Formatting Plan path: assistants/voice-formatting-plan.mdx + - section: Workflows + path: workflows.mdx + contents: + - section: Verbs + contents: + - page: Say + path: workflows/verbs/say.mdx + - page: Gather + path: workflows/verbs/gather.mdx + - page: API Request + path: workflows/verbs/api-request.mdx + - page: Transfer + path: workflows/verbs/transfer.mdx + - page: Hangup + path: workflows/verbs/hangup.mdx + - section: Conditions + contents: + - page: Logical Conditions + path: workflows/logical-conditions.mdx + - page: AI Conditions + path: workflows/ai-conditions.mdx - section: Blocks + path: blocks.mdx contents: - - page: Introduction - path: blocks.mdx - page: Steps path: blocks/steps.mdx - page: Block Types path: blocks/block-types.mdx - section: Tools + path: tools/introduction.mdx contents: - - page: Introduction - path: tools/introduction.mdx - page: Default Tools path: tools/default-tools.mdx - page: Custom Tools @@ -156,29 +173,29 @@ 
navigation: - page: Make & GHL Tools path: GHL.mdx - section: Knowledge Base + path: knowledge-base/knowledge-base.mdx contents: - - page: Introduction - path: knowledge-base/knowledge-base.mdx - page: Integrating with Trieve path: knowledge-base/integrating-with-trieve.mdx - section: Squads + path: squads.mdx contents: - - page: Introduction - path: squads.mdx - page: Example path: squads-example.mdx - page: Silent Transfers path: squads/silent-transfers.mdx - # - section: Test - # contents: - # - page: Voice Testing - # path: support.mdx + - section: Test + contents: + - page: Manual Testing + hidden: true + path: test/manual-testing.mdx + - page: Voice AI Testing + path: test/voice-testing.mdx - section: Deploy contents: - section: Calls + path: phone-calling.mdx contents: - - page: Introduction - path: phone-calling.mdx - page: Call Forwarding path: call-forwarding.mdx - page: Dynamic Call Transfers @@ -192,9 +209,8 @@ navigation: - page: Voice Mail Detection path: calls/voice-mail-detection.mdx - section: Vapi SDKs + path: sdks.mdx contents: - - page: Overview - path: sdks.mdx - section: Client SDKs contents: - page: Web SDK @@ -206,9 +222,8 @@ navigation: - page: Code Resources path: resources.mdx - section: Server URLs + path: server-url.mdx contents: - - page: Introduction - path: server-url.mdx - page: Setting Server URLs path: server-url/setting-server-urls.mdx - page: Server Events @@ -216,9 +231,8 @@ navigation: - page: Developing Locally path: server-url/developing-locally.mdx - section: SIP Telephony + path: advanced/sip/sip.mdx contents: - - page: SIP Introduction - path: advanced/sip/sip.mdx - page: Telnyx Integration path: advanced/sip/sip-telnyx.mdx - section: Advanced Concepts diff --git a/fern/introduction.mdx b/fern/introduction.mdx index aa46ab21e..d94cf1cad 100644 --- a/fern/introduction.mdx +++ b/fern/introduction.mdx @@ -1,5 +1,5 @@ --- -title: Introduction +title: Introduction to Vapi subtitle: Vapi is the Voice AI platform for developers. slug: introduction --- diff --git a/fern/knowledge-base/knowledge-base.mdx b/fern/knowledge-base/knowledge-base.mdx index 5867b1bea..0740ed4a1 100644 --- a/fern/knowledge-base/knowledge-base.mdx +++ b/fern/knowledge-base/knowledge-base.mdx @@ -1,5 +1,5 @@ --- -title: Creating Custom Knowledge Bases for Your Voice AI Assistants +title: Introduction to Knowledge Bases subtitle: >- Learn how to create and integrate custom knowledge bases into your voice AI assistants. @@ -8,7 +8,7 @@ slug: knowledge-base ## **What is Vapi's Knowledge Base?** -A Knowledge Base is a collection of custom files that contain information on specific topics or domains. By integrating a Knowledge Base into your voice AI assistant, you can enable it to provide more accurate and informative responses to user queries. This is currently available in Vapi via the API, and will be on the dashboard soon. +A [**Knowledge Base**](/api-reference/knowledge-bases/create) is a collection of custom files that contain information on specific topics or domains. By integrating a Knowledge Base into your voice AI assistant, you can enable it to provide more accurate and informative responses to user queries. This is currently available in Vapi via the API, and will be on the dashboard soon. ### **Why Use a Knowledge Base?** @@ -18,6 +18,10 @@ Using a Knowledge Base with your voice AI assistant offers several benefits: - **Enhanced capabilities**: A Knowledge Base enables your assistant to answer complex queries and provide detailed responses to user inquiries. 
- **Customization**: With a Knowledge Base, you can tailor your assistant's responses to specific domains or topics, making it more effective and informative. + + Knowledge Bases are configured through the API, view all configurable properties in the [API Reference](/api-reference/knowledge-bases/create-knowledge-base). + + ## **How to Create a Knowledge Base** To create a Knowledge Base, follow these steps: diff --git a/fern/prompting-guide.mdx b/fern/prompting-guide.mdx index af1b75229..496d3bc07 100644 --- a/fern/prompting-guide.mdx +++ b/fern/prompting-guide.mdx @@ -44,7 +44,7 @@ To enhance clarity and maintainability, it's recommended to break down system pr **Example:** -``` +```md wordWrap [Identity] You are a helpful and knowledgeable virtual assistant for a travel booking platform. @@ -69,7 +69,7 @@ You are a helpful and knowledgeable virtual assistant for a travel booking platf For complex interactions, breaking down the task into a sequence of steps enhances the agent's understanding and ensures a structured conversation flow. Incorporate conditional logic to guide the agent's responses based on user input. Example: -``` +```md wordWrap [Task] 1. Welcome the user to the technical support service. 2. Inquire about the nature of the technical issue. @@ -82,7 +82,7 @@ Example: To prevent the agent from rushing through the conversation, explicitly indicate when to wait for the user's response before proceeding to the next step. -``` +```md wordWrap [Task] 1. Inform the user about the purpose of the call. 2. Ask for the user's name and account information. @@ -95,7 +95,7 @@ To prevent the agent from rushing through the conversation, explicitly indicate Specify when and how the agent should utilize external tools or APIs. Reference the tools by their designated names and describe their functions to ensure accurate invocation. Example: -``` +```md wordWrap [Task] ... 3. If the user wants to know about something, use the get_data function with the parameter 'query', which will contain the user's question to initiate the process. @@ -108,7 +108,7 @@ If the AI determines that the user needs to be transferred, do not send any text Always include fallback options and error-handling mechanisms in your prompts. This ensures that the Agent can gracefully handle unexpected user inputs or system errors. -``` +```md wordWrap [Error Handling] If the customer's response is unclear, ask clarifying questions. If you encounter any issues, inform the customer politely and ask to repeat. ``` @@ -134,7 +134,7 @@ When working with numbers that need to be expressed in writing, it's often prefe Add personality and provide tone to the assistant to make it sound more like a human being. Here are some examples: -``` +```md wordWrap - Tone: conversational, spartan, use less corporate jargon - Act as Iva, a sales agent who is a human being and has real feelings. ``` @@ -143,7 +143,7 @@ Add personality and provide tone to the assistant to make it sound more like a h ### Appointment Setter -``` +```md wordWrap [Role] You're Susan, an AI assistant for xxx. Your primary task is to interact with the customer, ask questions, and gather information for appointment booking. diff --git a/fern/squads.mdx b/fern/squads.mdx index 319c56a23..c8e6eb7f2 100644 --- a/fern/squads.mdx +++ b/fern/squads.mdx @@ -1,10 +1,9 @@ --- -title: Squads +title: Introduction to Squads (Multi-Assistant Conversations) subtitle: Use Squads to handle complex workflows and tasks. 
slug: squads --- - Sometimes, complex workflows are easier to manage with multiple assistants. You can think of each assistant in a Squad as a leg of a conversation tree. For example, you might have one assistant for lead qualification, which transfers to another for booking an appointment if they’re qualified. @@ -12,6 +11,10 @@ For example, you might have one assistant for lead qualification, which transfer Prior to Squads you would put all functionality in one assistant, but Squads were added to break up the complexity of larger prompts into smaller specialized assistants with specific tools and fewer goals. Squads enable calls to transfer assistants mid-conversation, while maintaining full conversation context. + + View all configurable properties in the [API Reference](/api-reference/squads/create-squad). + + ## Usage To use Squads, you can create a `squad` when starting a call and specify `members` as a list of assistants and destinations. @@ -45,7 +48,7 @@ Transfers are specified by assistant name and are used when the model recognizes ``` -## Best practices +## Best Practices The following are some best practices for using Squads to reduce errors: diff --git a/fern/static/images/.DS_Store b/fern/static/images/.DS_Store index ec5497fdf..462fd53e5 100644 Binary files a/fern/static/images/.DS_Store and b/fern/static/images/.DS_Store differ diff --git a/fern/static/images/tests/voice-testing-page.png b/fern/static/images/tests/voice-testing-page.png new file mode 100644 index 000000000..d6e508049 Binary files /dev/null and b/fern/static/images/tests/voice-testing-page.png differ diff --git a/fern/static/images/workflows/workflow-builder-example.png b/fern/static/images/workflows/workflow-builder-example.png new file mode 100644 index 000000000..2658a78fe Binary files /dev/null and b/fern/static/images/workflows/workflow-builder-example.png differ diff --git a/fern/support.mdx b/fern/support.mdx index bce82f815..0a05770a2 100644 --- a/fern/support.mdx +++ b/fern/support.mdx @@ -48,7 +48,7 @@ We welcome feature requests and feedback from our users to help improve Vapi. Yo Report any bugs or issues you encounter while using Vapi to help us improve the platform. diff --git a/fern/test/manual-testing.mdx b/fern/test/manual-testing.mdx new file mode 100644 index 000000000..e69de29bb diff --git a/fern/test/voice-testing.mdx b/fern/test/voice-testing.mdx new file mode 100644 index 000000000..b11798ace --- /dev/null +++ b/fern/test/voice-testing.mdx @@ -0,0 +1,140 @@ +--- +title: Voice AI Testing +subtitle: End-to-end test automation for AI voice agents +slug: /test/voice-testing +--- + +## Overview + +Voice Testing is an end-to-end feature that automates testing of your AI voice agents. Our platform simulates a call from an AI tester that interacts with your voice agent by following a pre-defined call script. After the call, the transcript is sent to a language model (LLM) along with your success criteria. The LLM then determines if the call met the defined objectives. + +## Creating a Test Suite + +Begin by creating a Test Suite that organizes and executes multiple test cases. + + + ### Step 1: Create a New Test Suite + - Navigate to the **Test** tab in your dashboard and select **Voice Testing**. + - Click the **Create Test Suite** button. + + ### Step 2: Define Test Suite Details + - Enter a title for your Test Suite. + - Select a phone number from your organization using the dropdown. 
+ + ### Step 3: Add Test Cases + - Once your Test Suite is created, you will see a table where you can add test cases. + - Click **Add Test Case** to add a new test case (up to 50 can be added). + + ### Step 4: Configure Each Test Case + - **Caller Behavior:** Define how the testing agent should behave, including a detailed multi-step prompt that outlines the customer's intent, emotions, and interaction style. + - **Success Criteria:** List one or more questions that an LLM will use to evaluate if the call was successful. + - **Attempts:** Choose the number of times (up to 5) the test case should be executed each time the Test Suite is run. + + ### Step 5: Run and Review Tests + - Click **Run Tests** to execute all test cases one by one. + - While tests are running, you will see a loading state. + - Upon completion, a table displays the outcomes with check marks (success) or x-marks (failure). + - Click on a test row to view detailed results: a dropdown shows each attempt, the LLM's reasoning, the transcript of the call, the defined caller behavior, and the success criteria. + + ### Step 6: Export Results + - You can export the test results as a CSV file for further analysis. + + +## Test Execution and Evaluation + +When you run a Test Suite, the following steps occur: + +- **Call Simulation:** An AI voice agent dials your voice agent, executing the pre-defined script. +- **Transcript Capture:** The entire conversation is transcribed, capturing both the caller's behavior and your voice agent's responses. +- **Automated Evaluation:** The transcript, along with your Success Criteria, is processed by an LLM to determine if the call was successful. +- **Results Display:** Each test case outcome is shown with details. Clicking on a test case reveals: + - The number of attempts made. + - The LLM's reasoning for each attempt. + - The complete call transcript. + - The configured caller behavior and success criteria. +- **CSV Export:** Results can be exported for additional review or compliance purposes. + +## Example Test Cases + +Below are three example test cases to illustrate how you can configure detailed caller behavior and success criteria. + +### Example 1: Account Inquiry + +**Caller Behavior:** +Simulate a customer inquiring about their account status with growing concern as unexplained charges appear in their statement. + +**Example Prompt:** +```md title="Example Prompt" wordWrap +[Identity] +You are a long-time bank customer with a keen eye for your financial details. + +[Personality] +Normally calm and polite, you become increasingly anxious when you notice discrepancies on your account statement. Your tone shifts from supportive to urgent as the conversation progresses. + +[Goals] +Your primary objective is to clarify several unexplained charges by requesting a detailed breakdown of your recent transactions and ensuring your account balance is accurate. + +[Interaction Style] +Begin the call by stating your name and expressing concern over unexpected charges. Ask straightforward questions and press for more details if the explanation is not satisfactory. +``` + +**Success Criteria:** +```md title="Success Criteria" wordWrap +1. The voice agent clearly presents the current account balance. +2. The voice agent provides a detailed breakdown of recent transactions. +3. The response addresses the customer's concerns in a calm and informative manner. 
+``` + +### Example 2: Billing Support + +**Caller Behavior:** +Simulate a customer who is frustrated and calling about a billing discrepancy. + +**Example Prompt:** +```txt title="Example Prompt" wordWrap +[Identity] +You are a loyal customer who has always trusted the billing process, but the current bill appears unusually high. + +[Personality] +Frustrated and assertive, you express anger over an unexpected charge while remaining focused on obtaining clarification. + +[Goals] +Your goal is to understand the discrepancy in your bill by obtaining a detailed explanation, confirming whether an overcharge occurred, and understanding the steps for resolution. + +[Interaction Style] +Start the call by clearly stating your billing concern, describing the specific overcharge, and requesting a comprehensive explanation with resolution options. +``` + +**Success Criteria:** +```md title="Success Criteria" wordWrap +1. The voice agent acknowledges the billing discrepancy respectfully without dismissing the concern. +2. The agent provides a clear explanation of the charges, detailing possible reasons for the discrepancy. +3. The conversation concludes with a proposed solution or a clear escalation plan to address the overcharge. +``` + +### Example 3: Appointment Scheduling + +**Caller Behavior:** +Simulate a customer trying to schedule an appointment with a hint of urgency due to previous delays. + +**Example Prompt:** +```md title="Example Prompt" wordWrap +[Identity] +You are an organized customer who values efficiency and punctuality. + +[Personality] +While generally courteous and friendly, you are anxious due to previous delays in scheduling appointments, and your tone conveys urgency. + +[Goals] +Your goal is to secure an appointment at your preferred time, while remaining flexible enough to consider alternative timings if your desired slot is unavailable. + +[Interaction Style] +Begin the call by stating your need for an appointment, specifying a preferred date and time (e.g., next Monday at 3 PM). Request clear confirmation of your slot, and if unavailable, ask for suitable alternatives. +``` + +**Success Criteria:** +```md title="Success Criteria" wordWrap +1. The voice agent confirms the requested appointment time clearly and accurately. +2. The agent reiterates the appointment details to ensure clarity. +3. The scheduling process ends with a definitive confirmation message of the booked appointment. +``` \ No newline at end of file diff --git a/fern/tools/introduction.mdx b/fern/tools/introduction.mdx index 6c96b52a4..abcd8eb66 100644 --- a/fern/tools/introduction.mdx +++ b/fern/tools/introduction.mdx @@ -1,10 +1,10 @@ --- -title: Introduction +title: Introduction to Tools subtitle: Extend your assistant's capabilities with powerful function calling tools. slug: tools --- -**Tools** allow your assistant to take actions beyond just conversation. They enable your assistant to perform tasks like transferring calls, accessing external data, or triggering actions in your application. Tools can be either built-in default tools provided by Vapi or custom tools that you create. +[**Tools**](/api-reference/tools/create) allow your assistant to take actions beyond just conversation. They enable your assistant to perform tasks like transferring calls, accessing external data, or triggering actions in your application. Tools can be either built-in default tools provided by Vapi or custom tools that you create. 
There are three types of tools available: @@ -13,7 +13,7 @@ There are three types of tools available: 3. **Integration Tools**: Pre-built integrations with platforms like [Make](https://www.make.com/en/integrations/vapi) and GoHighLevel (GHL) that let you trigger automated workflows via voice. - Tools are configured as part of your assistant's model configuration. You can find the complete API reference [here](/api-reference/assistants/create-assistant). + Tools are configured as part of your assistant's model configuration. You can find the complete API reference [here](/api-reference/tools/create-tool). ## Available Tools diff --git a/fern/workflows.mdx b/fern/workflows.mdx new file mode 100644 index 000000000..8fe80a5c4 --- /dev/null +++ b/fern/workflows.mdx @@ -0,0 +1,87 @@ +--- +title: Introduction to Workflows +subtitle: Break down AI conversations into a visual workflow made up of discrete steps ("verbs") and branches between them ("conditions"). +slug: workflows +--- + + + Workflows is now available to all Vapi users in Open Beta on [the dashboard here](https://dashboard.vapi.ai/workflows). Start building more reliable and structured conversational AI today. + + +Workflows is a new way to build conversational AI. It allows you to break down AI conversations into discrete steps, and then orchestrate those steps in a way that is easy to manage and modify. + +## Creating Your First Workflow + +Begin by creating an assistant on the Assistants page and providing the required information, such as the assistant's name and capabilities. Once your assistant is set up, switch the model provider to VAPI and click "Create Workflow" when prompted. A modal will appear offering you the option to create a new workflow or attach to an existing one. Choose the appropriate option to proceed to the Workflow Builder. + + + ### Step 1: Create an Assistant + Visit the Assistants page. Create a new assistant, give it a name, and select a voice and transcription model of your choice. + + ### Step 2: Switch Provider to "vapi" + Under the "Model" section, switch the "Provider" field to "vapi". + + ### Step 3: Create a New Workflow or Attach an Existing One + Click the "Create Workflow" button. A prompt will appear asking you to create a new workflow by entering a unique title, or attach to an existing workflow. + + ### Step 4: Build Your Workflow + In the Workflow Builder, you will see a "Start" call node. Click the button at the bottom of this node to select your first verb. Use the button to add further steps as needed. + + + Workflow Builder Interface + + + ### Step 5: Create Connections + To create new connections between nodes, drag a line from one step's top connection dot to another step's bottom dot, forming the logical flow of the conversation. + + +## Tips for Building Workflows + +- **Deleting Nodes and Edges:** Click on any node or edge and press Backspace to delete it. +- **Attaching Nodes:** Attach a node to another by drawing a line from the top of one node to the bottom of another node. +- **Save Requirements:** A workflow cannot be saved until every node is connected and configured. The system will not allow saving with any dangling nodes. +- **Creating Conditionals:** To create conditionals, first add a condition node. Then, attach nodes for each branch by clicking the "Logic" tag on the connecting edges to set up the conditions. + + + Please let us know about any bugs you find by [submitting a bug report](https://roadmap.vapi.ai/bug-reports). 
We also welcome feature requests and suggestions - you can [submit those here](https://roadmap.vapi.ai/feature-requests). For discussions about workflows and our product roadmap, please [join our Discord community](https://discord.com/invite/pUFNcf2WmH) to connect with our team. + + +## Available Verbs + +Workflows break down your AI voice agent's behavior into discrete, manageable actions called verbs. Each verb encapsulates a specific function within the conversation flow. Detailed configuration options let you tailor each step to your requirements. The available verbs are: + + + + Outputs a message to the user without expecting a response. Configure this verb by specifying static text or providing a prompt for the LLM to generate dynamic text. + + + Collects input from the user. Define the variables by specifying a name, a detailed description of the expected input, and the data type (string, number, or boolean). Mark each variable as required or optional. + + + Asks a question and uses AI conditions to evaluate the response and determine the next step. This enables natural conversational flow without needing to explicitly gather variables, as the AI interprets the semantic meaning of responses. + + + Makes calls to external APIs using GET or POST methods. Configure request headers and body, and define extraction rules to capture specific data from the JSON response. Optionally, enable asynchronous execution so that the workflow proceeds while awaiting the API response. + + + Transfers the active call to an external phone number. Ensure you provide a valid phone number in the configuration. + + + Terminates the call, signaling the end of the conversation. + + + +## Conditions + +Conditions allow you to create branching paths in your workflow based on different types of logic: + + + + Introduces branching logic based on conditions. Set up logical comparisons using data previously gathered or returned from API requests. This node allows you to define different paths for the conversation. Future updates will support AI-driven branching decisions. + + + Introduces AI-driven branching logic. The AI will evaluate the conversation context and select the most appropriate branch. + + + +For detailed configuration instructions and advanced settings, please refer to our dedicated documentation pages for each verb. diff --git a/fern/workflows/ai-conditions.mdx b/fern/workflows/ai-conditions.mdx new file mode 100644 index 000000000..7f926d6d9 --- /dev/null +++ b/fern/workflows/ai-conditions.mdx @@ -0,0 +1,32 @@ +--- +title: AI Conditions +subtitle: Dynamic AI-driven branching in workflows +slug: /workflows/ai-conditions +--- + +## Overview + +The **AI Conditions** feature leverages artificial intelligence to determine the next step in your workflow based on conversation context. Unlike traditional logical conditions—which rely on explicit rules—AI Conditions allow your voice agent to evaluate complex or ambiguous scenarios, making branching decisions dynamically. + +## How It Works + +- **Contextual Evaluation:** The AI considers data from previous steps (e.g., user input, API responses) to gauge the conversation context. +- **Adaptive Decision-Making:** It uses its judgment to choose the most appropriate branch without relying solely on fixed comparisons. +- **Seamless Integration:** AI Conditions can complement existing logical conditions, offering a balance between predictable rules and adaptive behavior. 
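Conceptually, an AI Condition behaves like asking a language model to pick the next branch given the conversation so far. The sketch below is not Vapi's implementation or API; it is a minimal illustration, under the assumption that branch selection reduces to asking "given this transcript and these labeled branches, which one applies?"

```typescript
// Conceptual illustration only; this is not Vapi's implementation or API.
// An AI condition can be thought of as an LLM classifying the conversation
// into one of the available branches.
type Branch = { label: string; description: string };

async function pickBranch(
  transcript: string,
  branches: Branch[], // assumed non-empty
  askLlm: (prompt: string) => Promise<string> // LLM call supplied by the caller
): Promise<string> {
  const prompt = [
    "Given the conversation below, choose the branch that best applies.",
    `Conversation:\n${transcript}`,
    ["Branches:", ...branches.map((b) => `- ${b.label}: ${b.description}`)].join("\n"),
    "Reply with the branch label only.",
  ].join("\n\n");

  const answer = (await askLlm(prompt)).trim();
  // Fall back to the first branch if the reply doesn't match a known label.
  return branches.find((b) => b.label === answer)?.label ?? branches[0].label;
}
```

In Workflows you never write this logic yourself; you only describe each branch, and the model makes the selection during the call.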
+ +## Configuration + +- **Activation:** Enable AI Conditions on a condition node where you want the AI to drive the branching logic. +- **Context Input:** The AI will utilize variables collected from Gather verbs and data returned from API requests. +- **Decision Logic:** No manual rules are required—the AI interprets context in real time to select the optimal branch. +- **Fallback:** You can combine AI Conditions with traditional logical conditions for added control. + +## Usage + +Deploy AI Conditions when your workflow requires flexibility and context-sensitive decision-making, such as: + +- Handling ambiguous or multi-faceted user responses. +- Addressing scenarios where strict rules may not capture the conversation's nuances. +- Enhancing the user experience by providing more natural, human-like interactions. + +For detailed configuration instructions and best practices, please refer to our dedicated documentation on AI-driven workflows. \ No newline at end of file diff --git a/fern/workflows/logical-conditions.mdx b/fern/workflows/logical-conditions.mdx new file mode 100644 index 000000000..c7f77aa99 --- /dev/null +++ b/fern/workflows/logical-conditions.mdx @@ -0,0 +1,21 @@ +--- +title: Logical Conditions +subtitle: Branching logic for dynamic workflows +slug: /workflows/logical-conditions +--- + +## Overview + +Logical Conditions enable you to create branching paths within your workflow. This feature allows your voice agent to decide the next steps based on data gathered earlier or retrieved via API calls. + +## Configuration + +- **Condition Node:** Start by inserting a condition node into your workflow. +- **Branch Setup:** Attach one or more nodes to the condition node. +- **Logic Tag:** Click the "Logic" tag on each connecting edge to define rules or comparisons (e.g., equals, greater than) using variables collected from previous steps. + +## Usage + +Implement Logical Conditions to guide your conversation dynamically. They allow your workflow to adjust its path based on real-time data, ensuring more personalized and responsive interactions. + +For detailed configuration instructions and advanced usage, please refer to our dedicated documentation on condition configuration. \ No newline at end of file diff --git a/fern/workflows/verbs/api-request.mdx b/fern/workflows/verbs/api-request.mdx new file mode 100644 index 000000000..70a6a3307 --- /dev/null +++ b/fern/workflows/verbs/api-request.mdx @@ -0,0 +1,25 @@ +--- +title: API Request Verb +subtitle: Interface with external APIs +slug: /workflows/verbs/api-request +--- + +## Overview + +The **API Request** verb enables your workflow to interact with external APIs. It supports both GET and POST methods, allowing the integration of external data and services. + +## Configuration + +- **URL:** Enter the endpoint to request. +- **Method:** Specify the HTTP method (GET or POST). +- **Headers:** Define each header with a key, value, and type. +- **Body Values:** For POST requests, provide key, value, and type for each entry. +- **Output Values:** Extract data from the API's JSON response: + - **Key:** The key within the JSON payload to extract. + - **Target:** The name of the output variable for the extracted value. + - **Type:** The data type of the extracted value. +- **Mode:** Toggle asynchronous execution with "run in the background" on or off. + +## Usage + +Use the API Request verb to fetch information from an external API to use in your workflow, or to update information in your CRM or database. 
API Requests take on the functionality of [tool calls](/tools/introduction) from single-prompt assistants. \ No newline at end of file diff --git a/fern/workflows/verbs/gather.mdx b/fern/workflows/verbs/gather.mdx new file mode 100644 index 000000000..e6ef75845 --- /dev/null +++ b/fern/workflows/verbs/gather.mdx @@ -0,0 +1,21 @@ +--- +title: Gather Verb +subtitle: Collect input from users +slug: /workflows/verbs/gather +--- + +## Overview + +The **Gather** verb collects input from users during an interaction. It is used to capture variables that will be referenced later in your workflow. + +## Configuration + +Define one or more variables to gather from the user with: +- **Name:** A unique identifier. +- **Description:** Details about the expected input to help the LLM get the right information from the caller. +- **Data Type:** String, number, or boolean. +- **Required:** Mark whether the variable is required or optional. + +## Usage + +Use the Gather verb when you need to prompt users for information—such as their name, email, or ZIP code—that will drive subsequent conversation steps. \ No newline at end of file diff --git a/fern/workflows/verbs/hangup.mdx b/fern/workflows/verbs/hangup.mdx new file mode 100644 index 000000000..7f34499e3 --- /dev/null +++ b/fern/workflows/verbs/hangup.mdx @@ -0,0 +1,17 @@ +--- +title: Hangup Verb +subtitle: Terminate the call +slug: /workflows/verbs/hangup +--- + +## Overview + +The **Hangup** verb ends an active call, marking the conclusion of a conversation. It is typically used as the final step in your workflow. + +## Configuration + +This verb requires little to no configuration, as its purpose is solely to terminate the call. + +## Usage + +Place the Hangup verb at the end of your workflow to close the conversation gracefully. \ No newline at end of file diff --git a/fern/workflows/verbs/say.mdx b/fern/workflows/verbs/say.mdx new file mode 100644 index 000000000..072a467c4 --- /dev/null +++ b/fern/workflows/verbs/say.mdx @@ -0,0 +1,18 @@ +--- +title: Say Verb +subtitle: Output a message to the user +slug: /workflows/verbs/say +--- + +## Overview + +The **Say** verb outputs a spoken message to the user without expecting a response. Use this verb to provide instructions, notifications, or other information during a call. + +## Configuration + +- **Exact Message:** Specify the exact text that should be spoken. +- **Prompt for LLM Generated Message:** Provide a prompt for the language model to generate the message dynamically. + +## Usage + +Add the Say verb when you need to deliver clear, concise information to your user. It works well as an initial greeting or when confirming actions. \ No newline at end of file diff --git a/fern/workflows/verbs/transfer.mdx b/fern/workflows/verbs/transfer.mdx new file mode 100644 index 000000000..693cd0db3 --- /dev/null +++ b/fern/workflows/verbs/transfer.mdx @@ -0,0 +1,17 @@ +--- +title: Transfer Call Verb +subtitle: Redirect calls to an external number +slug: /workflows/verbs/transfer +--- + +## Overview + +The **Transfer Call** verb transfers an active call to a designated external phone number. This enables routing calls to other departments or external contacts. + +## Configuration + +- **Phone Number:** Enter a valid destination number for the call transfer. + +## Usage + +Use the Transfer Call verb to escalate or redirect a call as needed. Ensure the provided phone number is formatted correctly for a smooth transfer experience. \ No newline at end of file
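Telephony destinations are commonly written in E.164 format: a leading plus sign, the country code, and up to fifteen digits in total. If you want a quick sanity check before saving a workflow, a generic pattern like the one below can catch obvious mistakes; this is a formatting aid, not a Vapi-specific validation rule.

```typescript
// Generic E.164 sanity check; a formatting aid, not a Vapi-specific rule.
// E.164 numbers start with "+", then a country code, with at most 15 digits total.
const E164_PATTERN = /^\+[1-9]\d{1,14}$/;

const destination = "+14155550123"; // example US number in E.164 format

if (!E164_PATTERN.test(destination)) {
  throw new Error(`"${destination}" is not a valid E.164 phone number`);
}
```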