diff --git a/.github/workflows/preview-docs.yml b/.github/workflows/preview-docs.yml
index 4b9c83bf6..ce6eadf4f 100644
--- a/.github/workflows/preview-docs.yml
+++ b/.github/workflows/preview-docs.yml
@@ -17,6 +17,7 @@ jobs:
         id: generate-docs
         env:
           FERN_TOKEN: ${{ secrets.FERN_TOKEN }}
+          POSTHOG_PROJECT_API_KEY: ${{ secrets.POSTHOG_PROJECT_API_KEY }}
         run: |
           OUTPUT=$(fern generate --docs --preview --log-level debug 2>&1) || true
           echo "$OUTPUT"
diff --git a/.github/workflows/publish-docs.yml b/.github/workflows/publish-docs.yml
index fee02a10e..21735321b 100644
--- a/.github/workflows/publish-docs.yml
+++ b/.github/workflows/publish-docs.yml
@@ -19,4 +19,5 @@ jobs:
       - name: Publish Docs
         env:
           FERN_TOKEN: ${{ secrets.FERN_TOKEN }}
+          POSTHOG_PROJECT_API_KEY: ${{ secrets.POSTHOG_PROJECT_API_KEY }}
         run: fern generate --docs --log-level debug
\ No newline at end of file
diff --git a/fern/GHL.mdx b/fern/GHL.mdx
index 1cd63534d..90e0e5165 100644
--- a/fern/GHL.mdx
+++ b/fern/GHL.mdx
@@ -1,6 +1,6 @@
 ---
 title: How to Connect Vapi with Make & GHL
-slug: GHL
+slug: tools/GHL
 ---
diff --git a/fern/advanced/sip/sip-telnyx.mdx b/fern/advanced/sip/sip-telnyx.mdx
new file mode 100644
index 000000000..6756fb014
--- /dev/null
+++ b/fern/advanced/sip/sip-telnyx.mdx
@@ -0,0 +1,167 @@
---
title: Telnyx SIP Integration
subtitle: How to integrate Telnyx SIP trunking with Vapi
slug: advanced/sip/telnyx
---

## Inbound

### On Vapi

First, we will create a personalized origination SIP URI via the Vapi API:

```bash
curl --location 'https://api.vapi.ai/phone-number' \
  --header 'Authorization: Bearer your-vapi-private-api-key' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "provider": "vapi",
    "sipUri": "sip:username@sip.vapi.ai",
    "assistantId": "your-assistant-id"
  }'
```

- `provider`: This is set to `"vapi"`.
- `sipUri`: Replace `username` with your desired SIP username.
- `assistantId`: Provide the assistant ID associated with your Vapi account.

Next, send a PATCH request to `/phone-number/your_phone_id`:

```bash
curl --location --request PATCH 'https://api.vapi.ai/phone-number/your_phone_id' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer your-vapi-private-api-key' \
  --data '{
    "assistantId": null,
    "serverUrl": "https://your_server_url"
  }'
```

- `your_server_url` is the webhook endpoint that will receive the assistant request.
- `your_phone_id` is the ID of the origination SIP URI you just created.

Now, every time a call reaches this origination URI (i.e., from a number on your SIP trunk assigned to it), your server receives a webhook event requesting an assistant.
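Your server's reply to that event selects which assistant takes the call. A minimal sketch of a response, assuming you want to reuse a saved assistant (you can also return an inline `assistant` object instead; see the server URL documentation for the full schema):

```json
{
  "assistantId": "your-assistant-id"
}
```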
### On Telnyx

1. Go to **Voice / SIP Trunking / Create**
2. Select **FQDN**
3. Select **Add FQDN**
4. Select record type **A**
5. Add the SIP URI you created
6. FQDN: `sip.vapi.ai`
7. Leave the port at its default, `5060`

Set as follows:

Go to the Numbers tab and assign your number to the trunk.

Modify the SIP INVITE so your Vapi and Telnyx accounts are matched correctly:

1. Go to Numbers and edit the number you will be using
2. Navigate to Voice
3. Scroll down to the end to find **Translated Number**

*This setting modifies the SIP INVITE sent to the Vapi platform so invites are routed to your Vapi SIP URI. Its value is whatever username you set when you created the URI.*

4. If the SIP URI you chose in the previous step is `username@sip.vapi.ai`, this should be `username`
5. Done! You should now be receiving calls!

## Outbound

### On Telnyx

1. Go to **Voice / SIP Trunking / Authentication and routing**
2. Scroll down to **Outbound calls authentication** and:
   - Add the two fixed Vapi IPs, select **Tech Prefix**, and create a unique 4-digit tech prefix (for example `1234`; pick your own value, since it must be unique to your account)

Then:

1. Go to **Voice / Outbound Voice Profiles**
2. Create a profile
3. Name it as you like (1. Details)
4. Allow your desired destinations (2. Destinations)
5. Leave the next screen as is (3. Configuration)
6. Assign the desired SIP trunk (4. …)
7. Complete

Alternatively, go to **SIP Trunking / your SIP trunk / Outbound** and select the outbound voice profile you just created.

Set as follows, choosing the country you will be making most of your calls to (for example, Brazil).

*We recommend creating a separate SIP trunk for each country you plan to call most often.*

### On Vapi

```bash
curl -X POST https://api.vapi.ai/credential \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-vapi-private-api-key" \
  -d '{
    "provider": "byo-sip-trunk",
    "name": "Telnyx Trunk",
    "gateways": [
      {
        "ip": "sip.telnyx.com"
      }
    ]
  }'
```

```bash
curl -X POST https://api.vapi.ai/phone-number \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-vapi-private-api-key" \
  -d '{
    "provider": "byo-phone-number",
    "name": "Telnyx SIP Number",
    "number": "your-sip-phone-number",
    "numberE164CheckEnabled": false,
    "credentialId": "your-trunk-credential-id-from-the-previous-step"
  }'
```

Use this cURL to trigger calls with the tech prefix:

```bash
curl --location 'https://api.vapi.ai/call/phone' \
  --header 'Authorization: Bearer your-vapi-private-api-key' \
  --header 'Content-Type: application/json' \
  --data '{
    "assistantId": "your-assistant-id",
    "customer": {
      "number": "tech-prefix-plus-phone-number-without-plus-sign",
      "numberE164CheckEnabled": false
    },
    "phoneNumberId": "your-phone-id"
  }'
```

Example of `tech-prefix-plus-phone-number-without-plus-sign`:
- Phone number: +6699999999
- Tech prefix: 1234
- Result: 12346699999999
- Note that the leading + is dropped

Done! Outbound should now be working!
\ No newline at end of file
diff --git a/fern/advanced/sip/sip-trunk.mdx b/fern/advanced/sip/sip-trunk.mdx
new file mode 100644
index 000000000..e012220ce
--- /dev/null
+++ b/fern/advanced/sip/sip-trunk.mdx
@@ -0,0 +1,94 @@
---
title: SIP Trunking Guide for Vapi
subtitle: How to integrate your SIP provider with Vapi
slug: advanced/sip/sip-trunk
---

SIP trunking replaces traditional phone lines with a virtual connection over the internet, allowing your business to make and receive calls via a broadband connection. It connects your internal PBX or VoIP system to a SIP provider, which then routes calls to the Public Switched Telephone Network (PSTN). This setup simplifies your communications infrastructure and often reduces costs.

## 1. Vapi SIP Trunking Options

Vapi supports multiple SIP trunk configurations, including:

- **Telnyx**: Uses a SIP gateway domain (e.g., sip.telnyx.com) with IP-based authentication. May require a tech prefix for outbound calls.
- **Zadarma**: Uses SIP credentials (username/password) with its SIP server (e.g., sip.zadarma.com).
- **Custom "BYO" SIP Trunk**: Allows integration with any SIP provider. You simply provide the SIP gateway address and the necessary authentication details.

## 2. Setup Process Overview

To set up a SIP trunk in Vapi, follow these steps:

### Obtain Provider Details

Gather the SIP server address, authentication credentials (username/password or IP-based), and at least one phone number (DID) from your provider.

### Create a SIP Trunk Credential in Vapi

Use the Vapi API to create a new credential (type: byo-sip-trunk) with your provider's details. This tells Vapi how to connect to your SIP network.

**Example (using Zadarma):**

```bash
curl -X POST "https://api.vapi.ai/credential" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VAPI_PRIVATE_KEY" \
  -d '{
    "provider": "byo-sip-trunk",
    "name": "Zadarma Trunk",
    "gateways": [{
      "ip": "sip.zadarma.com"
    }],
    "outboundLeadingPlusEnabled": true,
    "outboundAuthenticationPlan": {
      "authUsername": "YOUR_SIP_NUMBER",
      "authPassword": "YOUR_SIP_PASSWORD"
    }
  }'
```

Save the returned Credential ID for later use.
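For orientation, the create-credential call returns the new credential as JSON; its `id` field is the Credential ID referenced below. A trimmed, illustrative sketch of such a response (real responses carry additional fields, such as timestamps):

```json
{
  "id": "YOUR_CREDENTIAL_ID",
  "provider": "byo-sip-trunk",
  "name": "Zadarma Trunk"
}
```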
### Associate a Phone Number with the SIP Trunk

Link your external phone number (DID) to the SIP trunk credential in Vapi by creating a Phone Number resource.

**Example:**

```bash
curl -X POST "https://api.vapi.ai/phone-number" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VAPI_PRIVATE_KEY" \
  -d '{
    "provider": "byo-phone-number",
    "name": "Zadarma Number",
    "number": "15551234567",
    "numberE164CheckEnabled": false,
    "credentialId": "YOUR_CREDENTIAL_ID"
  }'
```

Note the returned Phone Number ID for use in test calls.

### Test Your SIP Trunk

#### Outbound Call Test

Initiate a call through the Vapi dashboard or API to ensure outbound calls are properly routed.

**API Example:**

```bash
curl -X POST "https://api.vapi.ai/call/phone" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VAPI_PRIVATE_KEY" \
  -d '{
    "assistantId": "YOUR_ASSISTANT_ID",
    "customer": {
      "number": "15557654321",
      "numberE164CheckEnabled": false
    },
    "phoneNumberId": "YOUR_PHONE_NUMBER_ID"
  }'
```

#### Inbound Call Test

If inbound routing is configured, call your phone number from an external line. Ensure your provider forwards calls to the correct SIP URI (e.g., `{phoneNumber}@sip.vapi.ai` for Zadarma).
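To double-check that a test call actually reached Vapi, you can list recent calls via the API; this sketch assumes the standard paginated list endpoint and your private key:

```bash
# The inbound test call should appear at the top of the list
curl "https://api.vapi.ai/call?limit=5" \
  -H "Authorization: Bearer YOUR_VAPI_PRIVATE_KEY"
```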
diff --git a/fern/advanced/sip/sip-zadarma.mdx b/fern/advanced/sip/sip-zadarma.mdx
new file mode 100644
index 000000000..6bb306b28
--- /dev/null
+++ b/fern/advanced/sip/sip-zadarma.mdx
@@ -0,0 +1,94 @@
---
title: Zadarma SIP Integration
subtitle: How to integrate Zadarma SIP trunking with Vapi
slug: advanced/sip/zadarma
---

Integrate your Zadarma SIP trunk with Vapi.ai to enable your AI voice assistants to handle calls efficiently. Follow the steps below to set up this integration.

## 1. Retrieve Your Vapi.ai Private Key

- Log in to your Vapi.ai account.
- Navigate to **Organization Settings**.
- In the **API Keys** section, copy your **Private Key**.

## 2. Add Your Zadarma SIP Credentials to Vapi.ai

You'll need to send a `curl` request to Vapi.ai's API to add your SIP credentials:

- **Private Key**: Your Vapi.ai private key.
- **Trunk Name**: A name for your SIP trunk (e.g., "Zadarma Trunk").
- **Server Address**: The server address provided by Zadarma (e.g., "sip.zadarma.com").
- **SIP Number**: Your Zadarma SIP number.
- **SIP Password**: The password for your Zadarma SIP number.

Here's the `curl` command to execute:

```bash
curl -L 'https://api.vapi.ai/credential' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_PRIVATE_KEY' \
-d '{
  "provider": "byo-sip-trunk",
  "name": "Zadarma Trunk",
  "gateways": [
    { "ip": "sip.zadarma.com" }
  ],
  "outboundLeadingPlusEnabled": true,
  "outboundAuthenticationPlan": {
    "authUsername": "YOUR_SIP_NUMBER",
    "authPassword": "YOUR_SIP_PASSWORD"
  }
}'
```

Replace `YOUR_PRIVATE_KEY`, `YOUR_SIP_NUMBER`, and `YOUR_SIP_PASSWORD` with your actual credentials.

If successful, the response will include an `id` for the created credential, which you'll use in the next step.

## 3. Add Your Virtual Number to Vapi.ai

Next, associate your virtual number with the SIP trunk in Vapi.ai:

- **Private Key**: Your Vapi.ai private key.
- **Number Name**: A name for your virtual number (e.g., "Zadarma Number").
- **Virtual Number**: Your Zadarma virtual number in international format (e.g., "15551111111").
- **Credential ID**: The `id` from the previous step.

Use the following `curl` command:

```bash
curl -L 'https://api.vapi.ai/phone-number' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_PRIVATE_KEY' \
-d '{
  "provider": "byo-phone-number",
  "name": "Zadarma Number",
  "number": "YOUR_VIRTUAL_NUMBER",
  "numberE164CheckEnabled": false,
  "credentialId": "YOUR_CREDENTIAL_ID"
}'
```

Replace `YOUR_PRIVATE_KEY`, `YOUR_VIRTUAL_NUMBER`, and `YOUR_CREDENTIAL_ID` with your actual details.

## 4. Assign Your Voice Assistant to Handle Calls

- In your Vapi.ai dashboard, go to the **Build** section and select **Phone Numbers**.
- Click your **Zadarma Number**.
- In the **Inbound Settings** section, assign your voice assistant to handle incoming calls.
- In the **Outbound Form** section, assign your voice assistant to handle outgoing calls.

## 5. Configure Incoming Call Reception in Zadarma

To forward incoming calls from your Zadarma virtual number to Vapi.ai:

- Log in to your Zadarma account.
- Navigate to **Settings** → **Virtual phone numbers**.
- Click the ⚙ (gear) icon next to your number.
- Open the **External server** tab.
- Enable **External server (SIP URI)**.
- Enter the address: `YOUR_VIRTUAL_NUMBER@sip.vapi.ai` (replace `YOUR_VIRTUAL_NUMBER` with your number in international format).
- Click **Save**.

By following these steps, your Zadarma SIP trunk will be integrated with Vapi.ai, allowing your AI voice assistants to manage calls effectively.
diff --git a/fern/advanced/calls/sip.mdx b/fern/advanced/sip/sip.mdx
similarity index 97%
rename from fern/advanced/calls/sip.mdx
rename to fern/advanced/sip/sip.mdx
index dcf8f17a2..a09e2c43e 100644
--- a/fern/advanced/calls/sip.mdx
+++ b/fern/advanced/sip/sip.mdx
@@ -1,7 +1,7 @@
 ---
-title: SIP
+title: SIP Introduction
 subtitle: You can make SIP calls to Vapi Assistants.
-slug: advanced/calls/sip +slug: advanced/sip --- diff --git a/fern/apis/api/generators.yml b/fern/apis/api/generators.yml index 4836db268..d61fda1ed 100644 --- a/fern/apis/api/generators.yml +++ b/fern/apis/api/generators.yml @@ -10,7 +10,7 @@ groups: python-sdk: generators: - name: fernapi/fern-python-sdk - version: 4.3.9 + version: 4.3.14 disable-examples: true api: settings: @@ -47,7 +47,8 @@ groups: java-sdk: generators: - name: fernapi/fern-java-sdk - version: 2.3.1 + version: 2.16.0 + disable-examples: true output: location: maven coordinate: dev.vapi:server-sdk @@ -60,7 +61,8 @@ groups: go-sdk: generators: - name: fernapi/fern-go-sdk - version: 0.33.0 + version: 0.36.5 + disable-examples: true api: settings: unions: v1 @@ -72,6 +74,7 @@ groups: generators: - name: fernapi/fern-ruby-sdk version: 0.8.2 + disable-examples: true github: repository: VapiAI/server-sdk-ruby output: @@ -83,7 +86,8 @@ groups: csharp-sdk: generators: - name: fernapi/fern-csharp-sdk - version: 1.9.11 + version: 1.9.31 + disable-examples: true github: repository: VapiAI/server-sdk-csharp output: diff --git a/fern/apis/api/openapi-overrides.yml b/fern/apis/api/openapi-overrides.yml index 0053cf20e..4fbd65180 100644 --- a/fern/apis/api/openapi-overrides.yml +++ b/fern/apis/api/openapi-overrides.yml @@ -2,6 +2,9 @@ x-fern-pagination: offset: $request.page results: $response.results paths: + /enterprise: + post: + x-fern-ignore: true /call: get: x-fern-sdk-group-name: @@ -193,6 +196,178 @@ paths: x-fern-sdk-method-name: get components: schemas: + CreateAssistantDTO: + properties: + serverMessages: + enum: + - conversation-update + - end-of-call-report + - function-call + - hang + - language-changed + - language-change-detected + - model-output + - phone-call-control + - speech-update + - status-update + - transcript + - transcript[transcriptType='final'] + - tool-calls + - transfer-destination-request + - transfer-update + - user-interrupted + - voice-input + items: + enum: + - conversation-update + - end-of-call-report + - function-call + - hang + - language-changed + - language-change-detected + - model-output + - phone-call-control + - speech-update + - status-update + - transcript + - transcript[transcriptType='final'] + - tool-calls + - transfer-destination-request + - transfer-update + - user-interrupted + - voice-input + AssistantOverrides: + properties: + serverMessages: + enum: + - conversation-update + - end-of-call-report + - function-call + - hang + - language-changed + - language-change-detected + - model-output + - phone-call-control + - speech-update + - status-update + - transcript + - transcript[transcriptType='final'] + - tool-calls + - transfer-destination-request + - transfer-update + - user-interrupted + - voice-input + items: + enum: + - conversation-update + - end-of-call-report + - function-call + - hang + - language-changed + - language-change-detected + - model-output + - phone-call-control + - speech-update + - status-update + - transcript + - transcript[transcriptType='final'] + - tool-calls + - transfer-destination-request + - transfer-update + - user-interrupted + - voice-input + Assistant: + properties: + serverMessages: + enum: + - conversation-update + - end-of-call-report + - function-call + - hang + - language-changed + - language-change-detected + - model-output + - phone-call-control + - speech-update + - status-update + - transcript + - transcript[transcriptType='final'] + - tool-calls + - transfer-destination-request + - transfer-update + - user-interrupted + - voice-input + 
items: + enum: + - conversation-update + - end-of-call-report + - function-call + - hang + - language-changed + - language-change-detected + - model-output + - phone-call-control + - speech-update + - status-update + - transcript + - transcript[transcriptType='final'] + - tool-calls + - transfer-destination-request + - transfer-update + - user-interrupted + - voice-input + UpdateAssistantDTO: + properties: + serverMessages: + enum: + - conversation-update + - end-of-call-report + - function-call + - hang + - language-changed + - language-change-detected + - model-output + - phone-call-control + - speech-update + - status-update + - transcript + - transcript[transcriptType='final'] + - tool-calls + - transfer-destination-request + - transfer-update + - user-interrupted + - voice-input + items: + enum: + - conversation-update + - end-of-call-report + - function-call + - hang + - language-changed + - language-change-detected + - model-output + - phone-call-control + - speech-update + - status-update + - transcript + - transcript[transcriptType='final'] + - tool-calls + - transfer-destination-request + - transfer-update + - user-interrupted + - voice-input + ClientMessageTranscript: + properties: + type: + enum: + - transcript + - transcript[transcriptType='final'] + ServerMessageTranscript: + properties: + type: + enum: + - transcript + - transcript[transcriptType='final'] FallbackAzureVoice: properties: voiceId: @@ -243,6 +418,12 @@ components: x-fern-type-name: FallbackNeetsVoiceId oneOf: - x-fern-type-name: FallbackNeetsVoiceIdEnum + FallbackSmallestAIVoice: + properties: + voiceId: + x-fern-type-name: FallbackSmallestAIVoiceId + oneOf: + - x-fern-type-name: FallbackSmallestAIVoiceIdEnum AzureVoice: properties: voiceId: @@ -263,6 +444,12 @@ components: - x-fern-type-name: ElevenLabsVoiceIdEnum provider: x-fern-type: literal<"11labs"> + SmallestAIVoice: + properties: + voiceId: + x-fern-type-name: SmallestAIVoiceId + oneOf: + - x-fern-type-name: SmallestAIVoiceIdEnum ElevenLabsCredential: properties: provider: diff --git a/fern/apis/api/openapi.json b/fern/apis/api/openapi.json index 98d78d568..eb949620c 100644 --- a/fern/apis/api/openapi.json +++ b/fern/apis/api/openapi.json @@ -887,7 +887,33 @@ "content": { "application/json": { "schema": { - "$ref": "#/components/schemas/UpdatePhoneNumberDTO" + "oneOf": [ + { + "$ref": "#/components/schemas/UpdateByoPhoneNumberDTO", + "title": "ByoPhoneNumber" + }, + { + "$ref": "#/components/schemas/UpdateTwilioPhoneNumberDTO", + "title": "TwilioPhoneNumber" + }, + { + "$ref": "#/components/schemas/UpdateVonagePhoneNumberDTO", + "title": "VonagePhoneNumber" + }, + { + "$ref": "#/components/schemas/UpdateVapiPhoneNumberDTO", + "title": "VapiPhoneNumber" + } + ], + "discriminator": { + "propertyName": "provider", + "mapping": { + "byo-phone-number": "#/components/schemas/UpdateByoPhoneNumberDTO", + "twilio": "#/components/schemas/UpdateTwilioPhoneNumberDTO", + "vonage": "#/components/schemas/UpdateVonagePhoneNumberDTO", + "vapi": "#/components/schemas/UpdateVapiPhoneNumberDTO" + } + } } } } @@ -1542,6 +1568,32 @@ } } ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "oneOf": [ + { + "$ref": "#/components/schemas/UpdateTrieveKnowledgeBaseDTO", + "title": "UpdateTrieveKnowledgeBaseDTO" + }, + { + "$ref": "#/components/schemas/UpdateCustomKnowledgeBaseDTO", + "title": "UpdateCustomKnowledgeBaseDTO" + } + ], + "discriminator": { + "propertyName": "provider", + "mapping": { + "trieve": 
"#/components/schemas/UpdateTrieveKnowledgeBaseDTO", + "custom-knowledge-base": "#/components/schemas/UpdateCustomKnowledgeBaseDTO" + } + } + } + } + } + }, "responses": { "200": { "description": "", @@ -1924,7 +1976,28 @@ "content": { "application/json": { "schema": { - "$ref": "#/components/schemas/UpdateBlockDTO" + "oneOf": [ + { + "$ref": "#/components/schemas/UpdateConversationBlockDTO", + "title": "ConversationBlock" + }, + { + "$ref": "#/components/schemas/UpdateToolCallBlockDTO", + "title": "ToolCallBlock" + }, + { + "$ref": "#/components/schemas/UpdateWorkflowBlockDTO", + "title": "WorkflowBlock" + } + ], + "discriminator": { + "propertyName": "type", + "mapping": { + "conversation": "#/components/schemas/UpdateConversationBlockDTO", + "tool-call": "#/components/schemas/UpdateToolCallBlockDTO", + "workflow": "#/components/schemas/UpdateWorkflowBlockDTO" + } + } } } } @@ -2065,6 +2138,18 @@ { "$ref": "#/components/schemas/CreateOutputToolDTO", "title": "OutputTool" + }, + { + "$ref": "#/components/schemas/CreateBashToolDTO", + "title": "BashTool" + }, + { + "$ref": "#/components/schemas/CreateComputerToolDTO", + "title": "ComputerTool" + }, + { + "$ref": "#/components/schemas/CreateTextEditorToolDTO", + "title": "TextEditorTool" } ], "discriminator": { @@ -2076,7 +2161,10 @@ "ghl": "#/components/schemas/CreateGhlToolDTO", "make": "#/components/schemas/CreateMakeToolDTO", "transferCall": "#/components/schemas/CreateTransferCallToolDTO", - "output": "#/components/schemas/CreateOutputToolDTO" + "output": "#/components/schemas/CreateOutputToolDTO", + "bash": "#/components/schemas/CreateBashToolDTO", + "computer": "#/components/schemas/CreateComputerToolDTO", + "textEditor": "#/components/schemas/CreateTextEditorToolDTO" } } } @@ -2117,6 +2205,18 @@ { "$ref": "#/components/schemas/OutputTool", "title": "OutputTool" + }, + { + "$ref": "#/components/schemas/BashTool", + "title": "BashTool" + }, + { + "$ref": "#/components/schemas/ComputerTool", + "title": "ComputerTool" + }, + { + "$ref": "#/components/schemas/TextEditorTool", + "title": "TextEditorTool" } ], "discriminator": { @@ -2128,7 +2228,10 @@ "ghl": "#/components/schemas/GhlTool", "make": "#/components/schemas/MakeTool", "transferCall": "#/components/schemas/TransferCallTool", - "output": "#/components/schemas/OutputTool" + "output": "#/components/schemas/OutputTool", + "bash": "#/components/schemas/BashTool", + "computer": "#/components/schemas/ComputerTool", + "textEditor": "#/components/schemas/TextEditorTool" } } } @@ -2277,6 +2380,18 @@ { "$ref": "#/components/schemas/OutputTool", "title": "OutputTool" + }, + { + "$ref": "#/components/schemas/BashTool", + "title": "BashTool" + }, + { + "$ref": "#/components/schemas/ComputerTool", + "title": "ComputerTool" + }, + { + "$ref": "#/components/schemas/TextEditorTool", + "title": "TextEditorTool" } ], "discriminator": { @@ -2288,7 +2403,10 @@ "ghl": "#/components/schemas/GhlTool", "make": "#/components/schemas/MakeTool", "transferCall": "#/components/schemas/TransferCallTool", - "output": "#/components/schemas/OutputTool" + "output": "#/components/schemas/OutputTool", + "bash": "#/components/schemas/BashTool", + "computer": "#/components/schemas/ComputerTool", + "textEditor": "#/components/schemas/TextEditorTool" } } } @@ -2355,6 +2473,18 @@ { "$ref": "#/components/schemas/OutputTool", "title": "OutputTool" + }, + { + "$ref": "#/components/schemas/BashTool", + "title": "BashTool" + }, + { + "$ref": "#/components/schemas/ComputerTool", + "title": "ComputerTool" + }, + { + "$ref": 
"#/components/schemas/TextEditorTool", + "title": "TextEditorTool" } ], "discriminator": { @@ -2366,7 +2496,10 @@ "ghl": "#/components/schemas/GhlTool", "make": "#/components/schemas/MakeTool", "transferCall": "#/components/schemas/TransferCallTool", - "output": "#/components/schemas/OutputTool" + "output": "#/components/schemas/OutputTool", + "bash": "#/components/schemas/BashTool", + "computer": "#/components/schemas/ComputerTool", + "textEditor": "#/components/schemas/TextEditorTool" } } } @@ -2401,7 +2534,63 @@ "content": { "application/json": { "schema": { - "$ref": "#/components/schemas/UpdateToolDTO" + "oneOf": [ + { + "$ref": "#/components/schemas/UpdateDtmfToolDTO", + "title": "DtmfTool" + }, + { + "$ref": "#/components/schemas/UpdateEndCallToolDTO", + "title": "EndCallTool" + }, + { + "$ref": "#/components/schemas/UpdateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/UpdateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/UpdateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/UpdateTransferCallToolDTO", + "title": "TransferCallTool" + }, + { + "$ref": "#/components/schemas/UpdateOutputToolDTO", + "title": "OutputTool" + }, + { + "$ref": "#/components/schemas/UpdateBashToolDTO", + "title": "BashTool" + }, + { + "$ref": "#/components/schemas/UpdateComputerToolDTO", + "title": "ComputerTool" + }, + { + "$ref": "#/components/schemas/UpdateTextEditorToolDTO", + "title": "TextEditorTool" + } + ], + "discriminator": { + "propertyName": "type", + "mapping": { + "dtmf": "#/components/schemas/UpdateDtmfToolDTO", + "endCall": "#/components/schemas/UpdateEndCallToolDTO", + "function": "#/components/schemas/UpdateFunctionToolDTO", + "ghl": "#/components/schemas/UpdateGhlToolDTO", + "make": "#/components/schemas/UpdateMakeToolDTO", + "transferCall": "#/components/schemas/UpdateTransferCallToolDTO", + "output": "#/components/schemas/UpdateOutputToolDTO", + "bash": "#/components/schemas/UpdateBashToolDTO", + "computer": "#/components/schemas/UpdateComputerToolDTO", + "textEditor": "#/components/schemas/UpdateTextEditorToolDTO" + } + } } } } @@ -2440,6 +2629,18 @@ { "$ref": "#/components/schemas/OutputTool", "title": "OutputTool" + }, + { + "$ref": "#/components/schemas/BashTool", + "title": "BashTool" + }, + { + "$ref": "#/components/schemas/ComputerTool", + "title": "ComputerTool" + }, + { + "$ref": "#/components/schemas/TextEditorTool", + "title": "TextEditorTool" } ], "discriminator": { @@ -2451,7 +2652,10 @@ "ghl": "#/components/schemas/GhlTool", "make": "#/components/schemas/MakeTool", "transferCall": "#/components/schemas/TransferCallTool", - "output": "#/components/schemas/OutputTool" + "output": "#/components/schemas/OutputTool", + "bash": "#/components/schemas/BashTool", + "computer": "#/components/schemas/ComputerTool", + "textEditor": "#/components/schemas/TextEditorTool" } } } @@ -2515,6 +2719,18 @@ { "$ref": "#/components/schemas/OutputTool", "title": "OutputTool" + }, + { + "$ref": "#/components/schemas/BashTool", + "title": "BashTool" + }, + { + "$ref": "#/components/schemas/ComputerTool", + "title": "ComputerTool" + }, + { + "$ref": "#/components/schemas/TextEditorTool", + "title": "TextEditorTool" } ], "discriminator": { @@ -2526,7 +2742,10 @@ "ghl": "#/components/schemas/GhlTool", "make": "#/components/schemas/MakeTool", "transferCall": "#/components/schemas/TransferCallTool", - "output": "#/components/schemas/OutputTool" + "output": "#/components/schemas/OutputTool", + "bash": 
"#/components/schemas/BashTool", + "computer": "#/components/schemas/ComputerTool", + "textEditor": "#/components/schemas/TextEditorTool" } } } @@ -2727,9 +2946,9 @@ } }, "/analytics": { - "get": { - "operationId": "AnalyticsController_getQuery", - "summary": "Get Analytics", + "post": { + "operationId": "AnalyticsController_query", + "summary": "Create Analytics Queries", "parameters": [], "requestBody": { "required": true, @@ -2754,6 +2973,9 @@ } } } + }, + "201": { + "description": "" } }, "tags": [ @@ -2768,18 +2990,10 @@ }, "/logs": { "get": { - "operationId": "LoggingController_queryLogs", + "operationId": "LoggingController_logsQuery", "summary": "Get Logs", + "deprecated": true, "parameters": [ - { - "name": "orgId", - "required": false, - "in": "query", - "description": "This is the unique identifier for the org that this log belongs to.", - "schema": { - "type": "string" - } - }, { "name": "type", "required": false, @@ -2863,7 +3077,7 @@ "name": "sortOrder", "required": false, "in": "query", - "description": "This is the sort order for pagination. Defaults to 'ASC'.", + "description": "This is the sort order for pagination. Defaults to 'DESC'.", "schema": { "enum": [ "ASC", @@ -2984,808 +3198,2808 @@ "bearer": [] } ] - } - } - }, - "info": { - "title": "Vapi API", - "description": "API for building voice assistants", - "version": "1.0", - "contact": {} - }, - "tags": [], - "servers": [ - { - "url": "https://api.vapi.ai" - } - ], - "components": { - "securitySchemes": { - "bearer": { - "scheme": "bearer", - "bearerFormat": "Bearer", - "type": "http", - "description": "Retrieve your API Key from [Dashboard](dashboard.vapi.ai)." - } - }, - "schemas": { - "AssemblyAITranscriber": { - "type": "object", - "properties": { - "provider": { - "type": "string", - "description": "This is the transcription provider that will be used.", - "enum": [ - "assembly-ai" - ] + }, + "delete": { + "operationId": "LoggingController_logsDeleteQuery", + "summary": "Delete Logs", + "deprecated": true, + "parameters": [ + { + "name": "type", + "required": false, + "in": "query", + "description": "This is the type of the log.", + "schema": { + "enum": [ + "API", + "Webhook", + "Call", + "Provider" + ], + "type": "string" + } }, - "language": { - "type": "string", - "description": "This is the language that will be set for the transcription.", - "enum": [ - "en" - ] + { + "name": "assistantId", + "required": false, + "in": "query", + "schema": { + "type": "string" + } }, - "realtimeUrl": { - "type": "string", - "description": "The WebSocket URL that the transcriber connects to." + { + "name": "phoneNumberId", + "required": false, + "in": "query", + "description": "This is the ID of the phone number.", + "schema": { + "type": "string" + } }, - "wordBoost": { - "description": "Add up to 2500 characters of custom vocabulary.", - "type": "array", - "items": { - "type": "string", - "maxLength": 2500 + { + "name": "customerId", + "required": false, + "in": "query", + "description": "This is the ID of the customer.", + "schema": { + "type": "string" } }, - "endUtteranceSilenceThreshold": { - "type": "number", - "description": "The duration of the end utterance silence threshold in milliseconds." + { + "name": "squadId", + "required": false, + "in": "query", + "description": "This is the ID of the squad.", + "schema": { + "type": "string" + } }, - "disablePartialTranscripts": { - "type": "boolean", - "description": "Disable partial transcripts.\nSet to `true` to not receive partial transcripts. Defaults to `false`." 
+ { + "name": "callId", + "required": false, + "in": "query", + "description": "This is the ID of the call.", + "schema": { + "type": "string" + } + } + ], + "responses": { + "200": { + "description": "" } }, - "required": [ - "provider" - ] - }, - "Server": { - "type": "object", + "tags": [ + "Logs" + ], + "security": [ + { + "bearer": [] + } + ] + } + }, + "/test-suite": { + "get": { + "operationId": "TestSuiteController_findAllPaginated", + "summary": "List Test Suites", + "parameters": [ + { + "name": "page", + "required": false, + "in": "query", + "description": "This is the page number to return. Defaults to 1.", + "schema": { + "minimum": 1, + "type": "number" + } + }, + { + "name": "sortOrder", + "required": false, + "in": "query", + "description": "This is the sort order for pagination. Defaults to 'DESC'.", + "schema": { + "enum": [ + "ASC", + "DESC" + ], + "type": "string" + } + }, + { + "name": "limit", + "required": false, + "in": "query", + "description": "This is the maximum number of items to return. Defaults to 100.", + "schema": { + "minimum": 0, + "maximum": 1000, + "type": "number" + } + }, + { + "name": "createdAtGt", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is greater than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "createdAtLt", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is less than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "createdAtGe", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is greater than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "createdAtLe", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is less than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtGt", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is greater than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtLt", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is less than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtGe", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is greater than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtLe", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is less than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + } + ], + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuitesPaginatedResponse" + } + } + } + } + }, + "tags": [ + "Test Suites" + ], + "security": [ + { + "bearer": [] + } + ] + }, + "post": { + "operationId": "TestSuiteController_create", + "summary": "Create Test Suite", + "parameters": [], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": 
"#/components/schemas/CreateTestSuiteDto" + } + } + } + }, + "responses": { + "201": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuite" + } + } + } + } + }, + "tags": [ + "Test Suites" + ], + "security": [ + { + "bearer": [] + } + ] + } + }, + "/test-suite/{id}": { + "get": { + "operationId": "TestSuiteController_findOne", + "summary": "Get Test Suite", + "parameters": [ + { + "name": "id", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuite" + } + } + } + } + }, + "tags": [ + "Test Suites" + ], + "security": [ + { + "bearer": [] + } + ] + }, + "patch": { + "operationId": "TestSuiteController_update", + "summary": "Update Test Suite", + "parameters": [ + { + "name": "id", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/UpdateTestSuiteDto" + } + } + } + }, + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuite" + } + } + } + } + }, + "tags": [ + "Test Suites" + ], + "security": [ + { + "bearer": [] + } + ] + }, + "delete": { + "operationId": "TestSuiteController_remove", + "summary": "Delete Test Suite", + "parameters": [ + { + "name": "id", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuite" + } + } + } + } + }, + "tags": [ + "Test Suites" + ], + "security": [ + { + "bearer": [] + } + ] + } + }, + "/test-suite/{testSuiteId}/test": { + "get": { + "operationId": "TestSuiteTestController_findAllPaginated", + "summary": "List Tests", + "parameters": [ + { + "name": "testSuiteId", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + }, + { + "name": "page", + "required": false, + "in": "query", + "description": "This is the page number to return. Defaults to 1.", + "schema": { + "minimum": 1, + "type": "number" + } + }, + { + "name": "sortOrder", + "required": false, + "in": "query", + "description": "This is the sort order for pagination. Defaults to 'DESC'.", + "schema": { + "enum": [ + "ASC", + "DESC" + ], + "type": "string" + } + }, + { + "name": "limit", + "required": false, + "in": "query", + "description": "This is the maximum number of items to return. 
Defaults to 100.", + "schema": { + "minimum": 0, + "maximum": 1000, + "type": "number" + } + }, + { + "name": "createdAtGt", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is greater than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "createdAtLt", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is less than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "createdAtGe", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is greater than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "createdAtLe", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is less than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtGt", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is greater than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtLt", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is less than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtGe", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is greater than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtLe", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is less than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + } + ], + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuiteTestsPaginatedResponse" + } + } + } + } + }, + "tags": [ + "Test Suite Tests" + ], + "security": [ + { + "bearer": [] + } + ] + }, + "post": { + "operationId": "TestSuiteTestController_create", + "summary": "Create Test", + "parameters": [ + { + "name": "testSuiteId", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "oneOf": [ + { + "$ref": "#/components/schemas/CreateTestSuiteTestVoiceDto", + "title": "TestSuiteTestVoice" + } + ], + "discriminator": { + "propertyName": "type", + "mapping": { + "voice": "#/components/schemas/CreateTestSuiteTestVoiceDto" + } + } + } + } + } + }, + "responses": { + "201": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuiteTestVoice" + } + } + } + }, + "default": { + "description": "", + "content": { + "application/json": { + "schema": { + "oneOf": [ + { + "$ref": "#/components/schemas/TestSuiteTestVoice", + "title": "Voice" + } + ], + "discriminator": { + "propertyName": "type", + "mapping": { + "voice": "#/components/schemas/TestSuiteTestVoice" + } + } + } + } + } + } + }, + "tags": [ + "Test Suite Tests" + ], + "security": [ + { + "bearer": [] + } + ] + } + }, + "/test-suite/{testSuiteId}/test/{id}": { + "get": { + "operationId": 
"TestSuiteTestController_findOne", + "summary": "Get Test", + "parameters": [ + { + "name": "testSuiteId", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + }, + { + "name": "id", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuiteTestVoice" + } + } + } + }, + "default": { + "description": "", + "content": { + "application/json": { + "schema": { + "oneOf": [ + { + "$ref": "#/components/schemas/TestSuiteTestVoice", + "title": "Voice" + } + ], + "discriminator": { + "propertyName": "type", + "mapping": { + "voice": "#/components/schemas/TestSuiteTestVoice" + } + } + } + } + } + } + }, + "tags": [ + "Test Suite Tests" + ], + "security": [ + { + "bearer": [] + } + ] + }, + "patch": { + "operationId": "TestSuiteTestController_update", + "summary": "Update Test", + "parameters": [ + { + "name": "testSuiteId", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + }, + { + "name": "id", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "oneOf": [ + { + "$ref": "#/components/schemas/UpdateTestSuiteTestVoiceDto", + "title": "TestSuiteTestVoice" + } + ], + "discriminator": { + "propertyName": "type", + "mapping": { + "voice": "#/components/schemas/UpdateTestSuiteTestVoiceDto" + } + } + } + } + } + }, + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuiteTestVoice" + } + } + } + }, + "default": { + "description": "", + "content": { + "application/json": { + "schema": { + "oneOf": [ + { + "$ref": "#/components/schemas/TestSuiteTestVoice", + "title": "Voice" + } + ], + "discriminator": { + "propertyName": "type", + "mapping": { + "voice": "#/components/schemas/TestSuiteTestVoice" + } + } + } + } + } + } + }, + "tags": [ + "Test Suite Tests" + ], + "security": [ + { + "bearer": [] + } + ] + }, + "delete": { + "operationId": "TestSuiteTestController_remove", + "summary": "Delete Test", + "parameters": [ + { + "name": "testSuiteId", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + }, + { + "name": "id", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "oneOf": [ + { + "$ref": "#/components/schemas/TestSuiteTestVoice", + "title": "Voice" + } + ], + "discriminator": { + "propertyName": "type", + "mapping": { + "voice": "#/components/schemas/TestSuiteTestVoice" + } + } + } + } + } + } + }, + "tags": [ + "Test Suite Tests" + ], + "security": [ + { + "bearer": [] + } + ] + } + }, + "/test-suite/{testSuiteId}/run": { + "get": { + "operationId": "TestSuiteRunController_findAllPaginated", + "summary": "List Test Suite Runs", + "parameters": [ + { + "name": "testSuiteId", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + }, + { + "name": "page", + "required": false, + "in": "query", + "description": "This is the page number to return. Defaults to 1.", + "schema": { + "minimum": 1, + "type": "number" + } + }, + { + "name": "sortOrder", + "required": false, + "in": "query", + "description": "This is the sort order for pagination. 
Defaults to 'DESC'.", + "schema": { + "enum": [ + "ASC", + "DESC" + ], + "type": "string" + } + }, + { + "name": "limit", + "required": false, + "in": "query", + "description": "This is the maximum number of items to return. Defaults to 100.", + "schema": { + "minimum": 0, + "maximum": 1000, + "type": "number" + } + }, + { + "name": "createdAtGt", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is greater than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "createdAtLt", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is less than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "createdAtGe", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is greater than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "createdAtLe", + "required": false, + "in": "query", + "description": "This will return items where the createdAt is less than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtGt", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is greater than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtLt", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is less than the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtGe", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is greater than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + }, + { + "name": "updatedAtLe", + "required": false, + "in": "query", + "description": "This will return items where the updatedAt is less than or equal to the specified value.", + "schema": { + "format": "date-time", + "type": "string" + } + } + ], + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuiteRunsPaginatedResponse" + } + } + } + } + }, + "tags": [ + "Test Suite Runs" + ], + "security": [ + { + "bearer": [] + } + ] + }, + "post": { + "operationId": "TestSuiteRunController_create", + "summary": "Create Test Suite Run", + "parameters": [ + { + "name": "testSuiteId", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/CreateTestSuiteRunDto" + } + } + } + }, + "responses": { + "201": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuiteRun" + } + } + } + } + }, + "tags": [ + "Test Suite Runs" + ], + "security": [ + { + "bearer": [] + } + ] + } + }, + "/test-suite/{testSuiteId}/run/{id}": { + "get": { + "operationId": "TestSuiteRunController_findOne", + "summary": "Get Test Suite Run", + "parameters": [ + { + "name": "testSuiteId", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + }, + { + "name": "id", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "responses": { + "200": { + 
"description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuiteRun" + } + } + } + } + }, + "tags": [ + "Test Suite Runs" + ], + "security": [ + { + "bearer": [] + } + ] + }, + "patch": { + "operationId": "TestSuiteRunController_update", + "summary": "Update Test Suite Run", + "parameters": [ + { + "name": "testSuiteId", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + }, + { + "name": "id", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/UpdateTestSuiteRunDto" + } + } + } + }, + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuiteRun" + } + } + } + } + }, + "tags": [ + "Test Suite Runs" + ], + "security": [ + { + "bearer": [] + } + ] + }, + "delete": { + "operationId": "TestSuiteRunController_remove", + "summary": "Delete Test Suite Run", + "parameters": [ + { + "name": "testSuiteId", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + }, + { + "name": "id", + "required": true, + "in": "path", + "schema": { + "type": "string" + } + } + ], + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/TestSuiteRun" + } + } + } + } + }, + "tags": [ + "Test Suite Runs" + ], + "security": [ + { + "bearer": [] + } + ] + } + } + }, + "info": { + "title": "Vapi API", + "description": "API for building voice assistants", + "version": "1.0", + "contact": {} + }, + "tags": [], + "servers": [ + { + "url": "https://api.vapi.ai" + } + ], + "components": { + "securitySchemes": { + "bearer": { + "scheme": "bearer", + "bearerFormat": "Bearer", + "type": "http", + "description": "Retrieve your API Key from [Dashboard](dashboard.vapi.ai)." + } + }, + "schemas": { + "AssemblyAITranscriber": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This is the transcription provider that will be used.", + "enum": [ + "assembly-ai" + ] + }, + "language": { + "type": "string", + "description": "This is the language that will be set for the transcription.", + "enum": [ + "en" + ] + }, + "realtimeUrl": { + "type": "string", + "description": "The WebSocket URL that the transcriber connects to." + }, + "wordBoost": { + "description": "Add up to 2500 characters of custom vocabulary.", + "type": "array", + "items": { + "type": "string", + "maxLength": 2500 + } + }, + "endUtteranceSilenceThreshold": { + "type": "number", + "description": "The duration of the end utterance silence threshold in milliseconds." + }, + "disablePartialTranscripts": { + "type": "boolean", + "description": "Disable partial transcripts.\nSet to `true` to not receive partial transcripts. Defaults to `false`." + } + }, + "required": [ + "provider" + ] + }, + "BackoffPlan": { + "type": "object", + "properties": { + "maxRetries": { + "type": "number", + "description": "This is the maximum number of retries to attempt if the request fails. Defaults to 0 (no retries).\n\n@default 0", + "minimum": 0, + "maximum": 10, + "example": 0 + }, + "type": { + "type": "object", + "description": "This is the type of backoff plan to use. 
Defaults to fixed.\n\n@default fixed", + "enum": [ + "fixed", + "exponential" + ], + "example": "fixed" + }, + "baseDelaySeconds": { + "type": "number", + "description": "This is the base delay in seconds. For linear backoff, this is the delay between each retry. For exponential backoff, this is the initial delay.", + "minimum": 0, + "maximum": 10, + "example": 1 + } + }, + "required": [ + "maxRetries", + "type", + "baseDelaySeconds" + ] + }, + "Server": { + "type": "object", + "properties": { + "timeoutSeconds": { + "type": "number", + "description": "This is the timeout in seconds for the request to your server. Defaults to 20 seconds.\n\n@default 20", + "minimum": 1, + "maximum": 120, + "example": 20 + }, + "url": { + "type": "string", + "description": "API endpoint to send requests to." + }, + "secret": { + "type": "string", + "description": "This is the secret you can set that Vapi will send with every request to your server. Will be sent as a header called x-vapi-secret.\n\nSame precedence logic as server." + }, + "headers": { + "type": "object", + "description": "These are the custom headers to include in the request sent to your server.\n\nEach key-value pair represents a header name and its value." + }, + "backoffPlan": { + "description": "This is the backoff plan to use if the request fails.", + "allOf": [ + { + "$ref": "#/components/schemas/BackoffPlan" + } + ] + } + }, + "required": [ + "url" + ] + }, + "CustomTranscriber": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This is the transcription provider that will be used. Use `custom-transcriber` for providers that are not natively supported.", + "enum": [ + "custom-transcriber" + ] + }, + "server": { + "description": "This is where the transcription request will be sent.\n\nUsage:\n1. Vapi will initiate a websocket connection with `server.url`.\n\n2. Vapi will send an initial text frame with the sample rate. Format:\n```\n {\n \"type\": \"start\",\n \"encoding\": \"linear16\", // 16-bit raw PCM format\n \"container\": \"raw\",\n \"sampleRate\": {{sampleRate}},\n \"channels\": 2 // customer is channel 0, assistant is channel 1\n }\n```\n\n3. Vapi will send the audio data in 16-bit raw PCM format as binary frames.\n\n4. You can read the messages something like this:\n```\nws.on('message', (data, isBinary) => {\n if (isBinary) {\n pcmBuffer = Buffer.concat([pcmBuffer, data]);\n console.log(`Received PCM data, buffer size: ${pcmBuffer.length}`);\n } else {\n console.log('Received message:', JSON.parse(data.toString()));\n }\n});\n```\n\n5. You will respond with transcriptions as you have them. Format:\n```\n {\n \"type\": \"transcriber-response\",\n \"transcription\": \"Hello, world!\",\n \"channel\": \"customer\" | \"assistant\"\n }\n```", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + } + }, + "required": [ + "provider", + "server" + ] + }, + "DeepgramTranscriber": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This is the transcription provider that will be used.", + "enum": [ + "deepgram" + ] + }, + "model": { + "description": "This is the Deepgram model that will be used. 
A list of models can be found here: https://developers.deepgram.com/docs/models-languages-overview", + "oneOf": [ + { + "type": "string", + "enum": [ + "nova-3", + "nova-3-general", + "nova-2", + "nova-2-general", + "nova-2-meeting", + "nova-2-phonecall", + "nova-2-finance", + "nova-2-conversationalai", + "nova-2-voicemail", + "nova-2-video", + "nova-2-medical", + "nova-2-drivethru", + "nova-2-automotive", + "nova", + "nova-general", + "nova-phonecall", + "nova-medical", + "enhanced", + "enhanced-general", + "enhanced-meeting", + "enhanced-phonecall", + "enhanced-finance", + "base", + "base-general", + "base-meeting", + "base-phonecall", + "base-finance", + "base-conversationalai", + "base-voicemail", + "base-video" + ] + }, + { + "type": "string" + } + ] + }, + "language": { + "type": "string", + "description": "This is the language that will be set for the transcription. The list of languages Deepgram supports can be found here: https://developers.deepgram.com/docs/models-languages-overview", + "enum": [ + "bg", + "ca", + "cs", + "da", + "da-DK", + "de", + "de-CH", + "el", + "en", + "en-AU", + "en-GB", + "en-IN", + "en-NZ", + "en-US", + "es", + "es-419", + "es-LATAM", + "et", + "fi", + "fr", + "fr-CA", + "hi", + "hi-Latn", + "hu", + "id", + "it", + "ja", + "ko", + "ko-KR", + "lt", + "lv", + "ms", + "multi", + "nl", + "nl-BE", + "no", + "pl", + "pt", + "pt-BR", + "ro", + "ru", + "sk", + "sv", + "sv-SE", + "ta", + "taq", + "th", + "th-TH", + "tr", + "uk", + "vi", + "zh", + "zh-CN", + "zh-HK", + "zh-Hans", + "zh-Hant", + "zh-TW" + ] + }, + "smartFormat": { + "type": "boolean", + "description": "This will be use smart format option provided by Deepgram. It's default disabled because it can sometimes format numbers as times but it's getting better.", + "example": false + }, + "codeSwitchingEnabled": { + "type": "boolean", + "description": "This automatically switches the transcriber's language when the customer's language changes. Defaults to false.\n\nUsage:\n- If your customers switch languages mid-call, you can set this to true.\n\nNote:\n- To detect language changes, Vapi uses a custom trained model. Languages supported (X = limited support):\n 1. Arabic\n 2. Bengali\n 3. Cantonese\n 4. Chinese\n 5. Chinese Simplified (X)\n 6. Chinese Traditional (X)\n 7. English\n 8. Farsi (X)\n 9. French\n 10. German\n 11. Haitian Creole (X)\n 12. Hindi\n 13. Italian\n 14. Japanese\n 15. Korean\n 16. Portuguese\n 17. Russian\n 18. Spanish\n 19. Thai\n 20. Urdu\n 21. Vietnamese\n- To receive `language-change-detected` webhook events, add it to `assistant.serverMessages`.\n\n@default false", + "example": false + }, + "mipOptOut": { + "type": "boolean", + "description": "If set to true, this will add mip_opt_out=true as a query parameter of all API requests. See https://developers.deepgram.com/docs/the-deepgram-model-improvement-partnership-program#want-to-opt-out\n\nThis will only be used if you are using your own Deepgram API key.\n\n@default false", + "example": false, + "default": false + }, + "keywords": { + "description": "These keywords are passed to the transcription model to help it pick up use-case specific words. 
Anything that may not be a common word, like your company name, should be added here.", + "type": "array", + "items": { + "type": "string", + "pattern": "/^\\p{L}[\\p{L}\\d]*(?::[+-]?\\d+)?$/u" + } + }, + "keyterm": { + "description": "Keyterm Prompting allows you to improve Keyword Recall Rate (KRR) for important keyterms or phrases by up to 90%.", + "type": "array", + "items": { + "type": "string" + } + }, + "endpointing": { + "type": "number", + "description": "This is the timeout after which Deepgram will send transcription on user silence. You can read in-depth documentation here: https://developers.deepgram.com/docs/endpointing.\n\nHere are the most important bits:\n- Defaults to 10. This is recommended for most use cases to optimize for latency.\n- 10 can cause some missed transcriptions because of the shorter context. This mostly happens for one-word utterances. For those use cases, it's recommended to try 300. It will add a bit of latency but the quality and reliability of the experience will be better.\n- If neither 10 nor 300 work, contact support@vapi.ai and we'll find another solution.\n\n@default 10", + "minimum": 10, + "maximum": 500 + } + }, + "required": [ + "provider" + ] + }, + "TalkscriberTranscriber": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This is the transcription provider that will be used.", + "enum": [ + "talkscriber" + ] + }, + "model": { + "type": "string", + "description": "This is the model that will be used for the transcription.", + "enum": [ + "whisper" + ] + }, + "language": { + "type": "string", + "description": "This is the language that will be set for the transcription. The list of languages Whisper supports can be found here: https://github.com/openai/whisper/blob/main/whisper/tokenizer.py", + "enum": [ + "en", + "zh", + "de", + "es", + "ru", + "ko", + "fr", + "ja", + "pt", + "tr", + "pl", + "ca", + "nl", + "ar", + "sv", + "it", + "id", + "hi", + "fi", + "vi", + "he", + "uk", + "el", + "ms", + "cs", + "ro", + "da", + "hu", + "ta", + "no", + "th", + "ur", + "hr", + "bg", + "lt", + "la", + "mi", + "ml", + "cy", + "sk", + "te", + "fa", + "lv", + "bn", + "sr", + "az", + "sl", + "kn", + "et", + "mk", + "br", + "eu", + "is", + "hy", + "ne", + "mn", + "bs", + "kk", + "sq", + "sw", + "gl", + "mr", + "pa", + "si", + "km", + "sn", + "yo", + "so", + "af", + "oc", + "ka", + "be", + "tg", + "sd", + "gu", + "am", + "yi", + "lo", + "uz", + "fo", + "ht", + "ps", + "tk", + "nn", + "mt", + "sa", + "lb", + "my", + "bo", + "tl", + "mg", + "as", + "tt", + "haw", + "ln", + "ha", + "ba", + "jw", + "su", + "yue" + ] + } + }, + "required": [ + "provider" + ] + }, + "GladiaTranscriber": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This is the transcription provider that will be used.", + "enum": [ + "gladia" + ] + }, + "model": { + "description": "This is the Gladia model that will be used. Default is 'fast'", + "oneOf": [ + { + "enum": [ + "fast", + "accurate" + ] + } + ] + }, + "languageBehaviour": { + "description": "Defines how the transcription model detects the audio language. Default value is 'automatic single language'.", + "oneOf": [ + { + "type": "string", + "enum": [ + "manual", + "automatic single language", + "automatic multiple languages" + ] + } + ] + }, + "language": { + "type": "string", + "description": "Defines the language to use for the transcription.
Required when languageBehaviour is 'manual'.", + "enum": [ + "af", + "sq", + "am", + "ar", + "hy", + "as", + "az", + "ba", + "eu", + "be", + "bn", + "bs", + "br", + "bg", + "ca", + "zh", + "hr", + "cs", + "da", + "nl", + "en", + "et", + "fo", + "fi", + "fr", + "gl", + "ka", + "de", + "el", + "gu", + "ht", + "ha", + "haw", + "he", + "hi", + "hu", + "is", + "id", + "it", + "ja", + "jp", + "jv", + "kn", + "kk", + "km", + "ko", + "lo", + "la", + "lv", + "ln", + "lt", + "lb", + "mk", + "mg", + "ms", + "ml", + "mt", + "mi", + "mr", + "mn", + "mymr", + "ne", + "no", + "nn", + "oc", + "ps", + "fa", + "pl", + "pt", + "pa", + "ro", + "ru", + "sa", + "sr", + "sn", + "sd", + "si", + "sk", + "sl", + "so", + "es", + "su", + "sw", + "sv", + "tl", + "tg", + "ta", + "tt", + "te", + "th", + "bo", + "tr", + "tk", + "uk", + "ur", + "uz", + "vi", + "cy", + "yi", + "yo" + ] + }, + "transcriptionHint": { + "type": "string", + "description": "Provides a custom vocabulary to the model to improve accuracy of transcribing context specific words, technical terms, names, etc. If empty, this argument is ignored.\n⚠️ Warning ⚠️: Please be aware that the transcription_hint field has a character limit of 600. If you provide a transcription_hint longer than 600 characters, it will be automatically truncated to meet this limit.", + "maxLength": 600, + "example": "custom vocabulary" + }, + "prosody": { + "type": "boolean", + "description": "If prosody is true, you will get a transcription that can contain prosodies i.e. (laugh) (giggles) (malefic laugh) (toss) (music)… Default value is false.", + "example": false + }, + "audioEnhancer": { + "type": "boolean", + "description": "If true, audio will be pre-processed to improve accuracy but latency will increase. Default value is false.", + "example": false + } + }, + "required": [ + "provider" + ] + }, + "AzureSpeechTranscriber": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This is the transcription provider that will be used.", + "enum": [ + "azure" + ] + }, + "language": { + "type": "string", + "description": "This is the language that will be set for the transcription. 
The list of languages Azure supports can be found here: https://learn.microsoft.com/en-us/azure/ai-services/speech-service/language-support?tabs=stt", + "enum": [ + "af-ZA", + "am-ET", + "ar-AE", + "ar-BH", + "ar-DZ", + "ar-EG", + "ar-IL", + "ar-IQ", + "ar-JO", + "ar-KW", + "ar-LB", + "ar-LY", + "ar-MA", + "ar-OM", + "ar-PS", + "ar-QA", + "ar-SA", + "ar-SY", + "ar-TN", + "ar-YE", + "az-AZ", + "bg-BG", + "bn-IN", + "bs-BA", + "ca-ES", + "cs-CZ", + "cy-GB", + "da-DK", + "de-AT", + "de-CH", + "de-DE", + "el-GR", + "en-AU", + "en-CA", + "en-GB", + "en-GH", + "en-HK", + "en-IE", + "en-IN", + "en-KE", + "en-NG", + "en-NZ", + "en-PH", + "en-SG", + "en-TZ", + "en-US", + "en-ZA", + "es-AR", + "es-BO", + "es-CL", + "es-CO", + "es-CR", + "es-CU", + "es-DO", + "es-EC", + "es-ES", + "es-GQ", + "es-GT", + "es-HN", + "es-MX", + "es-NI", + "es-PA", + "es-PE", + "es-PR", + "es-PY", + "es-SV", + "es-US", + "es-UY", + "es-VE", + "et-EE", + "eu-ES", + "fa-IR", + "fi-FI", + "fil-PH", + "fr-BE", + "fr-CA", + "fr-CH", + "fr-FR", + "ga-IE", + "gl-ES", + "gu-IN", + "he-IL", + "hi-IN", + "hr-HR", + "hu-HU", + "hy-AM", + "id-ID", + "is-IS", + "it-CH", + "it-IT", + "ja-JP", + "jv-ID", + "ka-GE", + "kk-KZ", + "km-KH", + "kn-IN", + "ko-KR", + "lo-LA", + "lt-LT", + "lv-LV", + "mk-MK", + "ml-IN", + "mn-MN", + "mr-IN", + "ms-MY", + "mt-MT", + "my-MM", + "nb-NO", + "ne-NP", + "nl-BE", + "nl-NL", + "pa-IN", + "pl-PL", + "ps-AF", + "pt-BR", + "pt-PT", + "ro-RO", + "ru-RU", + "si-LK", + "sk-SK", + "sl-SI", + "so-SO", + "sq-AL", + "sr-RS", + "sv-SE", + "sw-KE", + "sw-TZ", + "ta-IN", + "te-IN", + "th-TH", + "tr-TR", + "uk-UA", + "ur-IN", + "uz-UZ", + "vi-VN", + "wuu-CN", + "yue-CN", + "zh-CN", + "zh-CN-shandong", + "zh-CN-sichuan", + "zh-HK", + "zh-TW", + "zu-ZA" + ] + } + }, + "required": [ + "provider" + ] + }, + "TextContent": { + "type": "object", + "properties": { + "type": { + "type": "string", + "enum": [ + "text" + ] + }, + "text": { + "type": "string" + }, + "language": { + "type": "string", + "enum": [ + "aa", + "ab", + "ae", + "af", + "ak", + "am", + "an", + "ar", + "as", + "av", + "ay", + "az", + "ba", + "be", + "bg", + "bh", + "bi", + "bm", + "bn", + "bo", + "br", + "bs", + "ca", + "ce", + "ch", + "co", + "cr", + "cs", + "cu", + "cv", + "cy", + "da", + "de", + "dv", + "dz", + "ee", + "el", + "en", + "eo", + "es", + "et", + "eu", + "fa", + "ff", + "fi", + "fj", + "fo", + "fr", + "fy", + "ga", + "gd", + "gl", + "gn", + "gu", + "gv", + "ha", + "he", + "hi", + "ho", + "hr", + "ht", + "hu", + "hy", + "hz", + "ia", + "id", + "ie", + "ig", + "ii", + "ik", + "io", + "is", + "it", + "iu", + "ja", + "jv", + "ka", + "kg", + "ki", + "kj", + "kk", + "kl", + "km", + "kn", + "ko", + "kr", + "ks", + "ku", + "kv", + "kw", + "ky", + "la", + "lb", + "lg", + "li", + "ln", + "lo", + "lt", + "lu", + "lv", + "mg", + "mh", + "mi", + "mk", + "ml", + "mn", + "mr", + "ms", + "mt", + "my", + "na", + "nb", + "nd", + "ne", + "ng", + "nl", + "nn", + "no", + "nr", + "nv", + "ny", + "oc", + "oj", + "om", + "or", + "os", + "pa", + "pi", + "pl", + "ps", + "pt", + "qu", + "rm", + "rn", + "ro", + "ru", + "rw", + "sa", + "sc", + "sd", + "se", + "sg", + "si", + "sk", + "sl", + "sm", + "sn", + "so", + "sq", + "sr", + "ss", + "st", + "su", + "sv", + "sw", + "ta", + "te", + "tg", + "th", + "ti", + "tk", + "tl", + "tn", + "to", + "tr", + "ts", + "tt", + "tw", + "ty", + "ug", + "uk", + "ur", + "uz", + "ve", + "vi", + "vo", + "wa", + "wo", + "xh", + "yi", + "yue", + "yo", + "za", + "zh", + "zu" + ] + } + }, + "required": [ + "type", + "text", + "language" + ] 
+ }, + "Condition": { + "type": "object", + "properties": { + "operator": { + "type": "string", + "description": "This is the operator you want to use to compare the parameter and value.", + "enum": [ + "eq", + "neq", + "gt", + "gte", + "lt", + "lte" + ] + }, + "param": { + "type": "string", + "description": "This is the name of the parameter that you want to check.", + "maxLength": 1000 + }, + "value": { + "type": "object", + "description": "This is the value you want to compare against the parameter.", + "maxLength": 1000 + } + }, + "required": [ + "operator", + "param", + "value" + ] + }, + "ToolMessageStart": { + "type": "object", "properties": { - "timeoutSeconds": { - "type": "number", - "description": "This is the timeout in seconds for the request to your server. Defaults to 20 seconds.\n\n@default 20", - "minimum": 1, - "maximum": 120, - "example": 20 + "contents": { + "type": "array", + "description": "This is an alternative to the `content` property. It allows to specify variants of the same content, one per language.\n\nUsage:\n- If your assistants are multilingual, you can provide content for each language.\n- If you don't provide content for a language, the first item in the array will be automatically translated to the active language at that moment.\n\nThis will override the `content` property.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/TextContent", + "title": "Text" + } + ] + } }, - "url": { + "type": { "type": "string", - "description": "API endpoint to send requests to." + "enum": [ + "request-start" + ], + "description": "This message is triggered when the tool call starts.\n\nThis message is never triggered for async tools.\n\nIf this message is not provided, one of the default filler messages \"Hold on a sec\", \"One moment\", \"Just a sec\", \"Give me a moment\" or \"This'll just take a sec\" will be used." }, - "secret": { + "blocking": { + "type": "boolean", + "description": "This is an optional boolean that if true, the tool call will only trigger after the message is spoken. Default is false.\n\n@default false", + "example": false, + "default": false + }, + "content": { "type": "string", - "description": "This is the secret you can set that Vapi will send with every request to your server. Will be sent as a header called x-vapi-secret.\n\nSame precedence logic as server." + "description": "This is the content that the assistant says when this message is triggered.", + "maxLength": 1000 }, - "headers": { - "type": "object", - "description": "These are the custom headers to include in the request sent to your server.\n\nEach key-value pair represents a header name and its value." + "conditions": { + "description": "This is an optional array of conditions that the tool call arguments must meet in order for this message to be triggered.", + "type": "array", + "items": { + "$ref": "#/components/schemas/Condition" + } } }, "required": [ - "url" + "type" ] }, - "CustomTranscriber": { + "ToolMessageComplete": { "type": "object", "properties": { - "provider": { + "contents": { + "type": "array", + "description": "This is an alternative to the `content` property. 
It allows to specify variants of the same content, one per language.\n\nUsage:\n- If your assistants are multilingual, you can provide content for each language.\n- If you don't provide content for a language, the first item in the array will be automatically translated to the active language at that moment.\n\nThis will override the `content` property.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/TextContent", + "title": "Text" + } + ] + } + }, + "type": { "type": "string", - "description": "This is the transcription provider that will be used. Use `custom-transcriber` for providers that are not natively supported.", + "description": "This message is triggered when the tool call is complete.\n\nThis message is triggered immediately without waiting for your server to respond for async tool calls.\n\nIf this message is not provided, the model will be requested to respond.\n\nIf this message is provided, only this message will be spoken and the model will not be requested to come up with a response. It's an exclusive OR.", "enum": [ - "custom-transcriber" + "request-complete" ] }, - "server": { - "description": "This is where the transcription request will be sent.\n\nUsage:\n1. Vapi will initiate a websocket connection with `server.url`.\n\n2. Vapi will send an initial text frame with the sample rate. Format:\n```\n {\n \"type\": \"start\",\n \"encoding\": \"linear16\", // 16-bit raw PCM format\n \"container\": \"raw\",\n \"sampleRate\": {{sampleRate}},\n \"channels\": 2 // customer is channel 0, assistant is channel 1\n }\n```\n\n3. Vapi will send the audio data in 16-bit raw PCM format as binary frames.\n\n4. You can read the messages something like this:\n```\nws.on('message', (data, isBinary) => {\n if (isBinary) {\n pcmBuffer = Buffer.concat([pcmBuffer, data]);\n console.log(`Received PCM data, buffer size: ${pcmBuffer.length}`);\n } else {\n console.log('Received message:', JSON.parse(data.toString()));\n }\n});\n```\n\n5. You will respond with transcriptions as you have them. Format:\n```\n {\n \"type\": \"transcriber-response\",\n \"transcription\": \"Hello, world!\",\n \"channel\": \"customer\" | \"assistant\"\n }\n```", - "allOf": [ - { - "$ref": "#/components/schemas/Server" - } + "role": { + "type": "string", + "description": "This is optional and defaults to \"assistant\".\n\nWhen role=assistant, `content` is said out loud.\n\nWhen role=system, `content` is passed to the model in a system message. Example:\n system: default one\n assistant:\n user:\n assistant:\n user:\n assistant:\n user:\n assistant: tool called\n tool: your server response\n <--- system prompt as hint\n ---> model generates response which is spoken\nThis is useful when you want to provide a hint to the model about what to say next.", + "enum": [ + "assistant", + "system" ] + }, + "endCallAfterSpokenEnabled": { + "type": "boolean", + "description": "This is an optional boolean that if true, the call will end after the message is spoken. 
Default is false.\n\nThis is ignored if `role` is set to `system`.\n\n@default false", + "example": false + }, + "content": { + "type": "string", + "description": "This is the content that the assistant says when this message is triggered.", + "maxLength": 1000 + }, + "conditions": { + "description": "This is an optional array of conditions that the tool call arguments must meet in order for this message to be triggered.", + "type": "array", + "items": { + "$ref": "#/components/schemas/Condition" + } } }, "required": [ - "provider", - "server" + "type" ] }, - "DeepgramTranscriber": { + "ToolMessageFailed": { "type": "object", "properties": { - "provider": { + "contents": { + "type": "array", + "description": "This is an alternative to the `content` property. It allows to specify variants of the same content, one per language.\n\nUsage:\n- If your assistants are multilingual, you can provide content for each language.\n- If you don't provide content for a language, the first item in the array will be automatically translated to the active language at that moment.\n\nThis will override the `content` property.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/TextContent", + "title": "Text" + } + ] + } + }, + "type": { "type": "string", - "description": "This is the transcription provider that will be used.", + "description": "This message is triggered when the tool call fails.\n\nThis message is never triggered for async tool calls.\n\nIf this message is not provided, the model will be requested to respond.\n\nIf this message is provided, only this message will be spoken and the model will not be requested to come up with a response. It's an exclusive OR.", "enum": [ - "deepgram" + "request-failed" ] }, - "model": { - "description": "This is the Deepgram model that will be used. A list of models can be found here: https://developers.deepgram.com/docs/models-languages-overview", - "oneOf": [ - { - "type": "string", - "enum": [ - "nova-2", - "nova-2-general", - "nova-2-meeting", - "nova-2-phonecall", - "nova-2-finance", - "nova-2-conversationalai", - "nova-2-voicemail", - "nova-2-video", - "nova-2-medical", - "nova-2-drivethru", - "nova-2-automotive", - "nova", - "nova-general", - "nova-phonecall", - "nova-medical", - "enhanced", - "enhanced-general", - "enhanced-meeting", - "enhanced-phonecall", - "enhanced-finance", - "base", - "base-general", - "base-meeting", - "base-phonecall", - "base-finance", - "base-conversationalai", - "base-voicemail", - "base-video" - ] - }, - { - "type": "string" - } - ] + "endCallAfterSpokenEnabled": { + "type": "boolean", + "description": "This is an optional boolean that if true, the call will end after the message is spoken. Default is false.\n\n@default false", + "example": false }, - "language": { + "content": { "type": "string", - "description": "This is the language that will be set for the transcription. 
The list of languages Deepgram supports can be found here: https://developers.deepgram.com/docs/models-languages-overview", - "enum": [ - "bg", - "ca", - "cs", - "da", - "da-DK", - "de", - "de-CH", - "el", - "en", - "en-AU", - "en-GB", - "en-IN", - "en-NZ", - "en-US", - "es", - "es-419", - "es-LATAM", - "et", - "fi", - "fr", - "fr-CA", - "hi", - "hi-Latn", - "hu", - "id", - "it", - "ja", - "ko", - "ko-KR", - "lt", - "lv", - "ms", - "multi", - "nl", - "nl-BE", - "no", - "pl", - "pt", - "pt-BR", - "ro", - "ru", - "sk", - "sv", - "sv-SE", - "ta", - "taq", - "th", - "th-TH", - "tr", - "uk", - "vi", - "zh", - "zh-CN", - "zh-HK", - "zh-Hans", - "zh-Hant", - "zh-TW" + "description": "This is the content that the assistant says when this message is triggered.", + "maxLength": 1000 + }, + "conditions": { + "description": "This is an optional array of conditions that the tool call arguments must meet in order for this message to be triggered.", + "type": "array", + "items": { + "$ref": "#/components/schemas/Condition" + } + } + }, + "required": [ + "type" + ] + }, + "ToolMessageDelayed": { + "type": "object", + "properties": { + "contents": { + "type": "array", + "description": "This is an alternative to the `content` property. It allows to specify variants of the same content, one per language.\n\nUsage:\n- If your assistants are multilingual, you can provide content for each language.\n- If you don't provide content for a language, the first item in the array will be automatically translated to the active language at that moment.\n\nThis will override the `content` property.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/TextContent", + "title": "Text" + } + ] + } + }, + "type": { + "type": "string", + "description": "This message is triggered when the tool call is delayed.\n\nThere are the two things that can trigger this message:\n1. The user talks with the assistant while your server is processing the request. Default is \"Sorry, a few more seconds.\"\n2. The server doesn't respond within `timingMilliseconds`.\n\nThis message is never triggered for async tool calls.", + "enum": [ + "request-response-delayed" ] }, - "smartFormat": { - "type": "boolean", - "description": "This will be use smart format option provided by Deepgram. It's default disabled because it can sometimes format numbers as times but it's getting better.", - "example": false + "timingMilliseconds": { + "type": "number", + "minimum": 100, + "maximum": 120000, + "example": 1000, + "description": "The number of milliseconds to wait for the server response before saying this message." }, - "codeSwitchingEnabled": { - "type": "boolean", - "description": "This automatically switches the transcriber's language when the customer's language changes. Defaults to false.\n\nUsage:\n- If your customers switch languages mid-call, you can set this to true.\n\nNote:\n- To detect language changes, Vapi uses a custom trained model. Languages supported (X = limited support):\n 1. Arabic\n 2. Bengali\n 3. Cantonese\n 4. Chinese\n 5. Chinese Simplified (X)\n 6. Chinese Traditional (X)\n 7. English\n 8. Farsi (X)\n 9. French\n 10. German\n 11. Haitian Creole (X)\n 12. Hindi\n 13. Italian\n 14. Japanese\n 15. Korean\n 16. Portuguese\n 17. Russian\n 18. Spanish\n 19. Thai\n 20. Urdu\n 21. 
Vietnamese\n- To receive `language-change-detected` webhook events, add it to `assistant.serverMessages`.\n\n@default false", - "example": false + "content": { + "type": "string", + "description": "This is the content that the assistant says when this message is triggered.", + "maxLength": 1000 }, - "keywords": { - "description": "These keywords are passed to the transcription model to help it pick up use-case specific words. Anything that may not be a common word, like your company name, should be added here.", + "conditions": { + "description": "This is an optional array of conditions that the tool call arguments must meet in order for this message to be triggered.", "type": "array", "items": { - "type": "string", - "pattern": "/^\\p{L}[\\p{L}\\d]*(?::[+-]?\\d+)?$/u" + "$ref": "#/components/schemas/Condition" } - }, - "endpointing": { - "type": "number", - "description": "This is the timeout after which Deepgram will send transcription on user silence. You can read in-depth documentation here: https://developers.deepgram.com/docs/endpointing.\n\nHere are the most important bits:\n- Defaults to 10. This is recommended for most use cases to optimize for latency.\n- 10 can cause some missing transcriptions since because of the shorter context. This mostly happens for one-word utterances. For those uses cases, it's recommended to try 300. It will add a bit of latency but the quality and reliability of the experience will be better.\n- If neither 10 nor 300 work, contact support@vapi.ai and we'll find another solution.\n\n@default 10", - "minimum": 10, - "maximum": 500 } }, "required": [ - "provider" + "type" ] }, - "TalkscriberTranscriber": { + "JsonSchema": { "type": "object", "properties": { - "provider": { + "type": { "type": "string", - "description": "This is the transcription provider that will be used.", + "description": "This is the type of output you'd like.\n\n`string`, `number`, `integer`, `boolean` are the primitive types and should be obvious.\n\n`array` and `object` are more interesting and quite powerful. They allow you to define nested structures.\n\nFor `array`, you can define the schema of the items in the array using the `items` property.\n\nFor `object`, you can define the properties of the object using the `properties` property.", "enum": [ - "talkscriber" + "string", + "number", + "integer", + "boolean", + "array", + "object" ] }, - "model": { + "items": { + "type": "object", + "description": "This is required if the type is \"array\". This is the schema of the items in the array.\n\nThis is of type JsonSchema. However, Swagger doesn't support circular references." + }, + "properties": { + "type": "object", + "description": "This is required if the type is \"object\". This specifies the properties of the object.\n\nThis is a map of string to JsonSchema. However, Swagger doesn't support circular references." + }, + "description": { "type": "string", - "description": "This is the model that will be used for the transcription.", + "description": "This is the description to help the model understand what it needs to output." + }, + "required": { + "description": "This is a list of properties that are required.\n\nThis only makes sense if the type is \"object\".", + "type": "array", + "items": { + "type": "string" + } + }, + "regex": { + "type": "string", + "description": "This is a regex that will be used to validate the data in question." + }, + "value": { + "type": "string", + "description": "This is the value that will be used in filling the property."
+ }, + "target": { + "type": "string", + "description": "This is the target variable that will be filled with the value of this property." + }, + "enum": { + "description": "This array specifies the allowed values that can be used to restrict the output of the model.", + "type": "array", + "items": { + "type": "string" + } + } + }, + "required": [ + "type" + ] + }, + "OpenAIFunctionParameters": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This must be set to 'object'. It instructs the model to return a JSON object containing the function call properties.", "enum": [ - "whisper" + "object" ] }, + "properties": { + "type": "object", + "description": "This provides a description of the properties required by the function.\nJSON Schema can be used to specify expectations for each property.\nRefer to [this doc](https://ajv.js.org/json-schema.html#json-data-type) for a comprehensive guide on JSON Schema.", + "additionalProperties": { + "$ref": "#/components/schemas/JsonSchema" + } + }, + "required": { + "description": "This specifies the properties that are required by the function.", + "type": "array", + "items": { + "type": "string" + } + } + }, + "required": [ + "type", + "properties" + ] + }, + "OpenAIFunction": { + "type": "object", + "properties": { + "strict": { + "type": "boolean", + "description": "This is a boolean that controls whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the [OpenAI guide](https://openai.com/index/introducing-structured-outputs-in-the-api/).\n\n@default false", + "default": false + }, + "name": { "type": "string", - "description": "This is the language that will be set for the transcription.
The list of languages Whisper supports can be found here: https://github.com/openai/whisper/blob/main/whisper/tokenizer.py", - "enum": [ - "en", - "zh", - "de", - "es", - "ru", - "ko", - "fr", - "ja", - "pt", - "tr", - "pl", - "ca", - "nl", - "ar", - "sv", - "it", - "id", - "hi", - "fi", - "vi", - "he", - "uk", - "el", - "ms", - "cs", - "ro", - "da", - "hu", - "ta", - "no", - "th", - "ur", - "hr", - "bg", - "lt", - "la", - "mi", - "ml", - "cy", - "sk", - "te", - "fa", - "lv", - "bn", - "sr", - "az", - "sl", - "kn", - "et", - "mk", - "br", - "eu", - "is", - "hy", - "ne", - "mn", - "bs", - "kk", - "sq", - "sw", - "gl", - "mr", - "pa", - "si", - "km", - "sn", - "yo", - "so", - "af", - "oc", - "ka", - "be", - "tg", - "sd", - "gu", - "am", - "yi", - "lo", - "uz", - "fo", - "ht", - "ps", - "tk", - "nn", - "mt", - "sa", - "lb", - "my", - "bo", - "tl", - "mg", - "as", - "tt", - "haw", - "ln", - "ha", - "ba", - "jw", - "su", - "yue" + "description": "This is the name of the function to be called.\n\nMust be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.", + "maxLength": 64, + "pattern": "/^[a-zA-Z0-9_-]{1,64}$/" + }, + "description": { + "type": "string", + "description": "This is the description of what the function does, used by the AI to choose when and how to call the function.", + "maxLength": 1000 + }, + "parameters": { + "description": "These are the parameters the function accepts, described as a JSON Schema object.\n\nSee the [OpenAI guide](https://platform.openai.com/docs/guides/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema) for documentation about the format.\n\nOmitting parameters defines a function with an empty parameter list.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunctionParameters" + } ] } }, "required": [ - "provider" + "name" ] }, - "GladiaTranscriber": { + "CreateDtmfToolDTO": { "type": "object", "properties": { - "provider": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if you want the assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { "type": "string", - "description": "This is the transcription provider that will be used.", "enum": [ - "gladia" - ] + "dtmf" + ], + "description": "The type of tool. \"dtmf\" for DTMF tool." }, - "model": { - "description": "This is the Gladia model that will be used.
Default is 'fast'", - "oneOf": [ + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ { - "enum": [ - "fast", - "accurate" - ] + "$ref": "#/components/schemas/OpenAIFunction" } ] }, - "languageBehaviour": { - "description": "Defines how the transcription model detects the audio language. Default value is 'automatic single language'.", - "oneOf": [ + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ { - "type": "string", - "enum": [ - "manual", - "automatic single language", - "automatic multiple languages" - ] + "$ref": "#/components/schemas/Server" } ] + } + }, + "required": [ + "type" + ] + }, + "CreateEndCallToolDTO": { + "type": "object", + "properties": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "language": { + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { "type": "string", - "description": "Defines the language to use for the transcription. 
Required when languageBehaviour is 'manual'.", "enum": [ - "af", - "sq", - "am", - "ar", - "hy", - "as", - "az", - "ba", - "eu", - "be", - "bn", - "bs", - "br", - "bg", - "ca", - "zh", - "hr", - "cs", - "da", - "nl", - "en", - "et", - "fo", - "fi", - "fr", - "gl", - "ka", - "de", - "el", - "gu", - "ht", - "ha", - "haw", - "he", - "hi", - "hu", - "is", - "id", - "it", - "ja", - "jp", - "jv", - "kn", - "kk", - "km", - "ko", - "lo", - "la", - "lv", - "ln", - "lt", - "lb", - "mk", - "mg", - "ms", - "ml", - "mt", - "mi", - "mr", - "mn", - "mymr", - "ne", - "no", - "nn", - "oc", - "ps", - "fa", - "pl", - "pt", - "pa", - "ro", - "ru", - "sa", - "sr", - "sn", - "sd", - "si", - "sk", - "sl", - "so", - "es", - "su", - "sw", - "sv", - "tl", - "tg", - "ta", - "tt", - "te", - "th", - "bo", - "tr", - "tk", - "uk", - "ur", - "uz", - "vi", - "cy", - "yi", - "yo" + "endCall" + ], + "description": "The type of tool. \"endCall\" for End Call tool." + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } ] + } + }, + "required": [ + "type" + ] + }, + "CreateVoicemailToolDTO": { + "type": "object", + "properties": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. 
For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } }, - "transcriptionHint": { + "type": { "type": "string", - "description": "Provides a custom vocabulary to the model to improve accuracy of transcribing context specific words, technical terms, names, etc. If empty, this argument is ignored.\n⚠️ Warning ⚠️: Please be aware that the transcription_hint field has a character limit of 600. If you provide a transcription_hint longer than 600 characters, it will be automatically truncated to meet this limit.", - "maxLength": 600, - "example": "custom vocabulary" + "enum": [ + "voicemail" + ], + "description": "The type of tool. \"voicemail\". This uses the model itself to determine if a voicemail was reached. Can be used as an alternative to, or alongside, TwilioVoicemailDetection." }, - "prosody": { - "type": "boolean", - "description": "If prosody is true, you will get a transcription that can contain prosodies i.e. (laugh) (giggles) (malefic laugh) (toss) (music)… Default value is false.", - "example": false + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] }, - "audioEnhancer": { - "type": "boolean", - "description": "If true, audio will be pre-processed to improve accuracy but latency will increase. Default value is false.", - "example": false + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ - "provider" + "type" ] }, - "TextContent": { + "CreateFunctionToolDTO": { "type": "object", "properties": { - "type": { - "type": "string", - "enum": [ - "text" - ] - }, - "text": { - "type": "string" + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond.
This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "language": { - "type": "string", - "enum": [ - "aa", - "ab", - "ae", - "af", - "ak", - "am", - "an", - "ar", - "as", - "av", - "ay", - "az", - "ba", - "be", - "bg", - "bh", - "bi", - "bm", - "bn", - "bo", - "br", - "bs", - "ca", - "ce", - "ch", - "co", - "cr", - "cs", - "cu", - "cv", - "cy", - "da", - "de", - "dv", - "dz", - "ee", - "el", - "en", - "eo", - "es", - "et", - "eu", - "fa", - "ff", - "fi", - "fj", - "fo", - "fr", - "fy", - "ga", - "gd", - "gl", - "gn", - "gu", - "gv", - "ha", - "he", - "hi", - "ho", - "hr", - "ht", - "hu", - "hy", - "hz", - "ia", - "id", - "ie", - "ig", - "ii", - "ik", - "io", - "is", - "it", - "iu", - "ja", - "jv", - "ka", - "kg", - "ki", - "kj", - "kk", - "kl", - "km", - "kn", - "ko", - "kr", - "ks", - "ku", - "kv", - "kw", - "ky", - "la", - "lb", - "lg", - "li", - "ln", - "lo", - "lt", - "lu", - "lv", - "mg", - "mh", - "mi", - "mk", - "ml", - "mn", - "mr", - "ms", - "mt", - "my", - "na", - "nb", - "nd", - "ne", - "ng", - "nl", - "nn", - "no", - "nr", - "nv", - "ny", - "oc", - "oj", - "om", - "or", - "os", - "pa", - "pi", - "pl", - "ps", - "pt", - "qu", - "rm", - "rn", - "ro", - "ru", - "rw", - "sa", - "sc", - "sd", - "se", - "sg", - "si", - "sk", - "sl", - "sm", - "sn", - "so", - "sq", - "sr", - "ss", - "st", - "su", - "sv", - "sw", - "ta", - "te", - "tg", - "th", - "ti", - "tk", - "tl", - "tn", - "to", - "tr", - "ts", - "tt", - "tw", - "ty", - "ug", - "uk", - "ur", - "uz", - "ve", - "vi", - "vo", - "wa", - "wo", - "xh", - "yi", - "yue", - "yo", - "za", - "zh", - "zu" + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { + "type": "string", + "enum": [ + "function" + ], + "description": "The type of tool. \"function\" for Function tool." + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. 
Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } ] } }, "required": [ - "type", - "text", - "language" + "type" ] }, - "Condition": { + "GhlToolMetadata": { "type": "object", "properties": { - "operator": { + "workflowId": { + "type": "string" + }, + "locationId": { + "type": "string" + } + } + }, + "CreateGhlToolDTO": { + "type": "object", + "properties": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { "type": "string", - "description": "This is the operator you want to use to compare the parameter and value.", "enum": [ - "eq", - "neq", - "gt", - "gte", - "lt", - "lte" - ] + "ghl" + ], + "description": "The type of tool. \"ghl\" for GHL tool." }, - "param": { - "type": "string", - "description": "This is the name of the parameter that you want to check.", - "maxLength": 1000 + "metadata": { + "$ref": "#/components/schemas/GhlToolMetadata" }, - "value": { - "type": "object", - "description": "This is the value you want to compare against the parameter.", - "maxLength": 1000 + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. 
Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ - "operator", - "param", - "value" + "type", + "metadata" ] }, - "ToolMessageStart": { + "MakeToolMetadata": { "type": "object", "properties": { - "contents": { + "scenarioId": { + "type": "number" + }, + "triggerHookId": { + "type": "number" + } + } + }, + "CreateMakeToolDTO": { + "type": "object", + "properties": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { "type": "array", - "description": "This is an alternative to the `content` property. It allows to specify variants of the same content, one per language.\n\nUsage:\n- If your assistants are multilingual, you can provide content for each language.\n- If you don't provide content for a language, the first item in the array will be automatically translated to the active language at that moment.\n\nThis will override the `content` property.", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/TextContent", - "title": "Text" + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" } ] } @@ -3793,28 +6007,36 @@ "type": { "type": "string", "enum": [ - "request-start" + "make" ], - "description": "This message is triggered when the tool call starts.\n\nThis message is never triggered for async tools.\n\nIf this message is not provided, one of the default filler messages \"Hold on a sec\", \"One moment\", \"Just a sec\", \"Give me a moment\" or \"This'll just take a sec\" will be used." + "description": "The type of tool. \"make\" for Make tool." }, - "content": { - "type": "string", - "description": "This is the content that the assistant says when this message is triggered.", - "maxLength": 1000 + "metadata": { + "$ref": "#/components/schemas/MakeToolMetadata" }, - "conditions": { - "description": "This is an optional array of conditions that the tool call arguments must meet in order for this message to be triggered.", - "type": "array", - "items": { - "$ref": "#/components/schemas/Condition" - } + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. 
You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ - "type" + "type", + "metadata" ] }, - "ToolMessageComplete": { + "CustomMessage": { "type": "object", "properties": { "contents": { @@ -3831,233 +6053,282 @@ }, "type": { "type": "string", - "description": "This message is triggered when the tool call is complete.\n\nThis message is triggered immediately without waiting for your server to respond for async tool calls.\n\nIf this message is not provided, the model will be requested to respond.\n\nIf this message is provided, only this message will be spoken and the model will not be requested to come up with a response. It's an exclusive OR.", + "description": "This is a custom message.", "enum": [ - "request-complete" + "custom-message" ] }, - "role": { + "content": { + "type": "string", + "description": "This is the content that the assistant will say when this message is triggered.", + "maxLength": 1000 + } + }, + "required": [ + "type" + ] + }, + "TransferDestinationAssistant": { + "type": "object", + "properties": { + "message": { + "description": "This is spoken to the customer before connecting them to the destination.\n\nUsage:\n- If this is not provided and transfer tool messages is not provided, default is \"Transferring the call now\".\n- If set to \"\", nothing is spoken. This is useful when you want to silently transfer. This is especially useful when transferring between assistants in a squad. In this scenario, you likely also want to set `assistant.firstMessageMode=assistant-speaks-first-with-model-generated-message` for the destination assistant.\n\nThis accepts a string or a ToolMessageStart class. Latter is useful if you want to specify multiple messages for different languages through the `contents` field.", + "oneOf": [ + { + "type": "string" + }, + { + "$ref": "#/components/schemas/CustomMessage" + } + ] + }, + "type": { "type": "string", - "description": "This is optional and defaults to \"assistant\".\n\nWhen role=assistant, `content` is said out loud.\n\nWhen role=system, `content` is passed to the model in a system message. Example:\n system: default one\n assistant:\n user:\n assistant:\n user:\n assistant:\n user:\n assistant: tool called\n tool: your server response\n <--- system prompt as hint\n ---> model generates response which is spoken\nThis is useful when you want to provide a hint to the model about what to say next.", "enum": [ - "assistant", - "system" + "assistant" ] }, - "endCallAfterSpokenEnabled": { - "type": "boolean", - "description": "This is an optional boolean that if true, the call will end after the message is spoken. 
Default is false.\n\nThis is ignored if `role` is set to `system`.\n\n@default false", - "example": false + "transferMode": { + "type": "string", + "description": "This is the mode to use for the transfer. Defaults to `rolling-history`.\n\n- `rolling-history`: This is the default mode. It keeps the entire conversation history and appends the new assistant's system message on transfer.\n\n Example:\n\n Pre-transfer:\n system: assistant1 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n\n Post-transfer:\n system: assistant1 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n system: assistant2 system message\n assistant: assistant2 first message (or model generated if firstMessageMode is set to `assistant-speaks-first-with-model-generated-message`)\n\n- `swap-system-message-in-history`: This replaces the original system message with the new assistant's system message on transfer.\n\n Example:\n\n Pre-transfer:\n system: assistant1 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n\n Post-transfer:\n system: assistant2 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n assistant: assistant2 first message (or model generated if firstMessageMode is set to `assistant-speaks-first-with-model-generated-message`)\n\n- `delete-history`: This deletes the entire conversation history on transfer.\n\n Example:\n\n Pre-transfer:\n system: assistant1 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n\n Post-transfer:\n system: assistant2 system message\n assistant: assistant2 first message\n user: Yes, please\n assistant: how can i help?\n user: i need help with my account\n\n- `swap-system-message-in-history-and-remove-transfer-tool-messages`: This replaces the original system message with the new assistant's system message on transfer and removes transfer tool messages from conversation history sent to the LLM.\n\n Example:\n\n Pre-transfer:\n system: assistant1 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n transfer-tool\n transfer-tool-result\n assistant: (destination.message)\n\n Post-transfer:\n system: assistant2 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n assistant: assistant2 first message (or model generated if firstMessageMode is set to `assistant-speaks-first-with-model-generated-message`)\n\n@default 'rolling-history'", + "enum": [ + "rolling-history", + "swap-system-message-in-history", + "swap-system-message-in-history-and-remove-transfer-tool-messages", + "delete-history" + ] }, - "content": { + "assistantName": { "type": "string", - "description": "This is the content that the assistant says when this message is triggered.", - "maxLength": 1000 + "description": "This is the assistant to transfer the call to." 
}, - "conditions": { - "description": "This is an optional array of conditions that the tool call arguments must meet in order for this message to be triggered.", - "type": "array", - "items": { - "$ref": "#/components/schemas/Condition" - } + "description": { + "type": "string", + "description": "This is the description of the destination, used by the AI to choose when and how to transfer the call." } }, "required": [ - "type" + "type", + "assistantName" ] }, - "ToolMessageFailed": { + "TransferDestinationStep": { "type": "object", "properties": { - "contents": { - "type": "array", - "description": "This is an alternative to the `content` property. It allows to specify variants of the same content, one per language.\n\nUsage:\n- If your assistants are multilingual, you can provide content for each language.\n- If you don't provide content for a language, the first item in the array will be automatically translated to the active language at that moment.\n\nThis will override the `content` property.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/TextContent", - "title": "Text" - } - ] - } + "message": { + "description": "This is spoken to the customer before connecting them to the destination.\n\nUsage:\n- If this is not provided and transfer tool messages is not provided, default is \"Transferring the call now\".\n- If set to \"\", nothing is spoken. This is useful when you want to silently transfer. This is especially useful when transferring between assistants in a squad. In this scenario, you likely also want to set `assistant.firstMessageMode=assistant-speaks-first-with-model-generated-message` for the destination assistant.\n\nThis accepts a string or a ToolMessageStart class. Latter is useful if you want to specify multiple messages for different languages through the `contents` field.", + "oneOf": [ + { + "type": "string" + }, + { + "$ref": "#/components/schemas/CustomMessage" + } + ] }, "type": { "type": "string", - "description": "This message is triggered when the tool call fails.\n\nThis message is never triggered for async tool calls.\n\nIf this message is not provided, the model will be requested to respond.\n\nIf this message is provided, only this message will be spoken and the model will not be requested to come up with a response. It's an exclusive OR.", "enum": [ - "request-failed" + "step" ] }, - "endCallAfterSpokenEnabled": { - "type": "boolean", - "description": "This is an optional boolean that if true, the call will end after the message is spoken. Default is false.\n\n@default false", - "example": false - }, - "content": { + "stepName": { "type": "string", - "description": "This is the content that the assistant says when this message is triggered.", - "maxLength": 1000 + "description": "This is the step to transfer to." }, - "conditions": { - "description": "This is an optional array of conditions that the tool call arguments must meet in order for this message to be triggered.", - "type": "array", - "items": { - "$ref": "#/components/schemas/Condition" - } + "description": { + "type": "string", + "description": "This is the description of the destination, used by the AI to choose when and how to transfer the call." } }, "required": [ - "type" + "type", + "stepName" ] }, - "ToolMessageDelayed": { + "SummaryPlan": { "type": "object", "properties": { - "contents": { + "messages": { + "description": "These are the messages used to generate the summary.\n\n@default: ```\n[\n {\n \"role\": \"system\",\n \"content\": \"You are an expert note-taker. 
You will be given a transcript of a call. Summarize the call in 2-3 sentences. DO NOT return anything except the summary.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Here is the transcript:\\n\\n{{transcript}}\\n\\n\"\n }\n]```\n\nYou can customize by providing any messages you want.\n\nHere are the template variables available:\n- {{transcript}}: The transcript of the call from `call.artifact.transcript`- {{systemPrompt}}: The system prompt of the call from `assistant.model.messages[type=system].content`", "type": "array", - "description": "This is an alternative to the `content` property. It allows to specify variants of the same content, one per language.\n\nUsage:\n- If your assistants are multilingual, you can provide content for each language.\n- If you don't provide content for a language, the first item in the array will be automatically translated to the active language at that moment.\n\nThis will override the `content` property.", "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/TextContent", - "title": "Text" - } - ] + "type": "object" } }, - "type": { - "type": "string", - "description": "This message is triggered when the tool call is delayed.\n\nThere are the two things that can trigger this message:\n1. The user talks with the assistant while your server is processing the request. Default is \"Sorry, a few more seconds.\"\n2. The server doesn't respond within `timingMilliseconds`.\n\nThis message is never triggered for async tool calls.", - "enum": [ - "request-response-delayed" - ] + "enabled": { + "type": "boolean", + "description": "This determines whether a summary is generated and stored in `call.analysis.summary`. Defaults to true.\n\nUsage:\n- If you want to disable the summary, set this to false.\n\n@default true" }, - "timingMilliseconds": { + "timeoutSeconds": { "type": "number", - "minimum": 100, - "maximum": 120000, - "example": 1000, - "description": "The number of milliseconds to wait for the server response before saying this message." - }, - "content": { - "type": "string", - "description": "This is the content that the assistant says when this message is triggered.", - "maxLength": 1000 - }, - "conditions": { - "description": "This is an optional array of conditions that the tool call arguments must meet in order for this message to be triggered.", - "type": "array", - "items": { - "$ref": "#/components/schemas/Condition" - } + "description": "This is how long the request is tried before giving up. When request times out, `call.analysis.summary` will be empty.\n\nUsage:\n- To guarantee the summary is generated, set this value high. Note, this will delay the end of call report in cases where model is slow to respond.\n\n@default 5 seconds", + "minimum": 1, + "maximum": 60 } - }, - "required": [ - "type" - ] + } }, - "JsonSchema": { + "TransferPlan": { "type": "object", "properties": { - "type": { + "mode": { "type": "string", - "description": "This is the type of output you'd like.\n\n`string`, `number`, `integer`, `boolean` are the primitive types and should be obvious.\n\n`array` and `object` are more interesting and quite powerful. 
They allow you to define nested structures.\n\nFor `array`, you can define the schema of the items in the array using the `items` property.\n\nFor `object`, you can define the properties of the object using the `properties` property.", + "description": "This configures how transfer is executed and the experience of the destination party receiving the call.\n\nUsage:\n- `blind-transfer`: The assistant forwards the call to the destination without any message or summary.\n- `blind-transfer-add-summary-to-sip-header`: The assistant forwards the call to the destination and adds a SIP header X-Transfer-Summary to the call to include the summary.\n- `warm-transfer-say-message`: The assistant dials the destination, delivers the `message` to the destination party, connects the customer, and leaves the call.\n- `warm-transfer-say-summary`: The assistant dials the destination, provides a summary of the call to the destination party, connects the customer, and leaves the call.\n- `warm-transfer-wait-for-operator-to-speak-first-and-then-say-message`: The assistant dials the destination, waits for the operator to speak, delivers the `message` to the destination party, and then connects the customer.\n- `warm-transfer-wait-for-operator-to-speak-first-and-then-say-summary`: The assistant dials the destination, waits for the operator to speak, provides a summary of the call to the destination party, and then connects the customer.\n- `warm-transfer-twiml`: The assistant dials the destination, executes the twiml instructions on the destination call leg, connects the customer, and leaves the call.\n\n@default 'blind-transfer'", "enum": [ - "string", - "number", - "integer", - "boolean", - "array", - "object" + "blind-transfer", + "blind-transfer-add-summary-to-sip-header", + "warm-transfer-say-message", + "warm-transfer-say-summary", + "warm-transfer-twiml", + "warm-transfer-wait-for-operator-to-speak-first-and-then-say-message", + "warm-transfer-wait-for-operator-to-speak-first-and-then-say-summary" ] }, - "items": { - "type": "object", - "description": "This is required if the type is \"array\". This is the schema of the items in the array.\n\nThis is of type JsonSchema. However, Swagger doesn't support circular references." + "message": { + "description": "This is the message the assistant will deliver to the destination party before connecting the customer.\n\nUsage:\n- Used only when `mode` is `blind-transfer-add-summary-to-sip-header`, `warm-transfer-say-message` or `warm-transfer-wait-for-operator-to-speak-first-and-then-say-message`.", + "oneOf": [ + { + "type": "string" + }, + { + "$ref": "#/components/schemas/CustomMessage" + } + ] }, - "properties": { + "sipVerb": { "type": "object", - "description": "This is required if the type is \"object\". This specifies the properties of the object.\n\nThis is a map of string to JsonSchema. However, Swagger doesn't support circular references." + "description": "This specifies the SIP verb to use while transferring the call.\n- 'refer': Uses SIP REFER to transfer the call (default)\n- 'bye': Ends current call with SIP BYE", + "default": "refer", + "enum": [ + "refer", + "bye" + ] }, - "description": { + "twiml": { "type": "string", - "description": "This is the description to help the model understand what it needs to output." 
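> As a usage sketch of the `SummaryPlan` schema defined above: the message contents below are illustrative (only the `{{transcript}}` template variable comes from the schema's own description), and the timeout is an arbitrary choice within the documented 1–60 second range:

```json
{
  "enabled": true,
  "timeoutSeconds": 10,
  "messages": [
    {
      "role": "system",
      "content": "You are an expert note-taker. Summarize the call in one sentence for the receiving agent."
    },
    {
      "role": "user",
      "content": "Here is the transcript:\n\n{{transcript}}\n\n"
    }
  ]
}
```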
+ "description": "This is the TwiML instructions to execute on the destination call leg before connecting the customer.\n\nUsage:\n- Used only when `mode` is `warm-transfer-twiml`.\n- Supports only `Play`, `Say`, `Gather`, `Hangup` and `Pause` verbs.\n- Maximum length is 4096 characters.\n\nExample:\n```\nHello, transferring a customer to you.\n\nThey called about billing questions.\n```", + "maxLength": 4096 }, - "required": { - "description": "This is a list of properties that are required.\n\nThis only makes sense if the type is \"object\".", - "type": "array", - "items": { - "type": "string" - } + "summaryPlan": { + "description": "This is the plan for generating a summary of the call to present to the destination party.\n\nUsage:\n- Used only when `mode` is `blind-transfer-add-summary-to-sip-header` or `warm-transfer-say-summary` or `warm-transfer-wait-for-operator-to-speak-first-and-then-say-summary`.", + "allOf": [ + { + "$ref": "#/components/schemas/SummaryPlan" + } + ] } }, "required": [ - "type" + "mode" ] }, - "OpenAIFunctionParameters": { + "TransferDestinationNumber": { "type": "object", "properties": { + "message": { + "description": "This is spoken to the customer before connecting them to the destination.\n\nUsage:\n- If this is not provided and transfer tool messages is not provided, default is \"Transferring the call now\".\n- If set to \"\", nothing is spoken. This is useful when you want to silently transfer. This is especially useful when transferring between assistants in a squad. In this scenario, you likely also want to set `assistant.firstMessageMode=assistant-speaks-first-with-model-generated-message` for the destination assistant.\n\nThis accepts a string or a ToolMessageStart class. Latter is useful if you want to specify multiple messages for different languages through the `contents` field.", + "oneOf": [ + { + "type": "string" + }, + { + "$ref": "#/components/schemas/CustomMessage" + } + ] + }, "type": { "type": "string", - "description": "This must be set to 'object'. It instructs the model to return a JSON object containing the function call properties.", "enum": [ - "object" + "number" ] }, - "properties": { - "type": "object", - "description": "This provides a description of the properties required by the function.\nJSON Schema can be used to specify expectations for each property.\nRefer to [this doc](https://ajv.js.org/json-schema.html#json-data-type) for a comprehensive guide on JSON Schema.", - "additionalProperties": { - "$ref": "#/components/schemas/JsonSchema" - } + "numberE164CheckEnabled": { + "type": "boolean", + "description": "This is the flag to toggle the E164 check for the `number` field. This is an advanced property which should be used if you know your use case requires it.\n\nUse cases:\n- `false`: To allow non-E164 numbers like `+001234567890`, `1234`, or `abc`. This is useful for dialing out to non-E164 numbers on your SIP trunks.\n- `true` (default): To allow only E164 numbers like `+14155551234`. 
This is standard for PSTN calls.\n\nIf `false`, the `number` is still required to only contain alphanumeric characters (regex: `/^\\+?[a-zA-Z0-9]+$/`).\n\n@default true (E164 check is enabled)", + "default": true }, - "required": { - "description": "This specifies the properties that are required by the function.", - "type": "array", - "items": { - "type": "string" - } + "number": { + "type": "string", + "description": "This is the phone number to transfer the call to.", + "minLength": 3, + "maxLength": 40 + }, + "extension": { + "type": "string", + "description": "This is the extension to dial after transferring the call to the `number`.", + "minLength": 1, + "maxLength": 10 + }, + "callerId": { + "type": "string", + "description": "This is the caller ID to use when transferring the call to the `number`.\n\nUsage:\n- If not provided, the caller ID will be the number the call is coming from. Example, +14151111111 calls in to and the assistant transfers out to +16470000000. +16470000000 will see +14151111111 as the caller.\n- To change this behavior, provide a `callerId`.\n- Set to '{{customer.number}}' to always use the customer's number as the caller ID.\n- Set to '{{phoneNumber.number}}' to always use the phone number of the assistant as the caller ID.\n- Set to any E164 number to always use that number as the caller ID. This needs to be a number that is owned or verified by your Transport provider like Twilio.\n\nFor Twilio, you can read up more here: https://www.twilio.com/docs/voice/twiml/dial#callerid", + "maxLength": 40 + }, + "transferPlan": { + "description": "This configures how transfer is executed and the experience of the destination party receiving the call. Defaults to `blind-transfer`.\n\n@default `transferPlan.mode='blind-transfer'`", + "allOf": [ + { + "$ref": "#/components/schemas/TransferPlan" + } + ] + }, + "description": { + "type": "string", + "description": "This is the description of the destination, used by the AI to choose when and how to transfer the call." } }, "required": [ "type", - "properties" + "number" ] }, - "OpenAIFunction": { + "TransferDestinationSip": { "type": "object", "properties": { - "strict": { - "type": "boolean", - "description": "This is a boolean that controls whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the [OpenAI guide](https://openai.com/index/introducing-structured-outputs-in-the-api/).\n\n@default false", - "default": false + "message": { + "description": "This is spoken to the customer before connecting them to the destination.\n\nUsage:\n- If this is not provided and transfer tool messages is not provided, default is \"Transferring the call now\".\n- If set to \"\", nothing is spoken. This is useful when you want to silently transfer. This is especially useful when transferring between assistants in a squad. In this scenario, you likely also want to set `assistant.firstMessageMode=assistant-speaks-first-with-model-generated-message` for the destination assistant.\n\nThis accepts a string or a ToolMessageStart class. 
Latter is useful if you want to specify multiple messages for different languages through the `contents` field.", + "oneOf": [ + { + "type": "string" + }, + { + "$ref": "#/components/schemas/CustomMessage" + } + ] }, - "name": { + "type": { "type": "string", - "description": "This is the the name of the function to be called.\n\nMust be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.", - "maxLength": 64, - "pattern": "/^[a-zA-Z0-9_-]{1,64}$/" + "enum": [ + "sip" + ] }, - "description": { + "sipUri": { "type": "string", - "description": "This is the description of what the function does, used by the AI to choose when and how to call the function.", - "maxLength": 1000 + "description": "This is the SIP URI to transfer the call to." }, - "parameters": { - "description": "These are the parameters the functions accepts, described as a JSON Schema object.\n\nSee the [OpenAI guide](https://platform.openai.com/docs/guides/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema) for documentation about the format.\n\nOmitting parameters defines a function with an empty parameter list.", + "transferPlan": { + "description": "This configures how transfer is executed and the experience of the destination party receiving the call. Defaults to `blind-transfer`.\n\n@default `transferPlan.mode='blind-transfer'`", "allOf": [ { - "$ref": "#/components/schemas/OpenAIFunctionParameters" + "$ref": "#/components/schemas/TransferPlan" } ] + }, + "sipHeaders": { + "type": "object", + "description": "These are custom headers to be added to SIP refer during transfer call." + }, + "description": { + "type": "string", + "description": "This is the description of the destination, used by the AI to choose when and how to transfer the call." } }, "required": [ - "name" + "type", + "sipUri" ] }, - "CreateDtmfToolDTO": { + "CreateTransferCallToolDTO": { "type": "object", "properties": { "async": { @@ -4092,9 +6363,32 @@ "type": { "type": "string", "enum": [ - "dtmf" - ], - "description": "The type of tool. \"dtmf\" for DTMF tool." + "transferCall" + ] + }, + "destinations": { + "type": "array", + "description": "These are the destinations that the call can be transferred to. If no destinations are provided, server.url will be used to get the transfer destination once the tool is called.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/TransferDestinationAssistant", + "title": "Assistant" + }, + { + "$ref": "#/components/schemas/TransferDestinationStep", + "title": "Step" + }, + { + "$ref": "#/components/schemas/TransferDestinationNumber", + "title": "Number" + }, + { + "$ref": "#/components/schemas/TransferDestinationSip", + "title": "Sip" + } + ] + } }, "function": { "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. 
One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", @@ -4117,744 +6411,1212 @@ "type" ] }, - "CreateEndCallToolDTO": { + "CreateCustomKnowledgeBaseDTO": { "type": "object", "properties": { - "async": { - "type": "boolean", - "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", - "example": false + "provider": { + "type": "string", + "description": "This knowledge base is bring your own knowledge base implementation.", + "enum": [ + "custom-knowledge-base" + ] + }, + "server": { + "description": "This is where the knowledge base request will be sent.\n\nRequest Example:\n\nPOST https://{server.url}\nContent-Type: application/json\n\n{\n \"messsage\": {\n \"type\": \"knowledge-base-request\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"Why is ocean blue?\"\n }\n ],\n ...other metadata about the call...\n }\n}\n\nResponse Expected:\n```\n{\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"The ocean is blue because water absorbs everything but blue.\",\n }, // YOU CAN RETURN THE EXACT RESPONSE TO SPEAK\n \"documents\": [\n {\n \"content\": \"The ocean is blue primarily because water absorbs colors in the red part of the light spectrum and scatters the blue light, making it more visible to our eyes.\",\n \"similarity\": 1\n },\n {\n \"content\": \"Blue light is scattered more by the water molecules than other colors, enhancing the blue appearance of the ocean.\",\n \"similarity\": .5\n }\n ] // OR, YOU CAN RETURN AN ARRAY OF DOCUMENTS THAT WILL BE SENT TO THE MODEL\n}\n```", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + } + }, + "required": [ + "provider", + "server" + ] + }, + "OpenAIMessage": { + "type": "object", + "properties": { + "content": { + "type": "string", + "nullable": true, + "maxLength": 100000000 }, + "role": { + "type": "string", + "enum": [ + "assistant", + "function", + "user", + "system", + "tool" + ] + } + }, + "required": [ + "content", + "role" + ] + }, + "AnyscaleModel": { + "type": "object", + "properties": { "messages": { + "description": "This is the starting state for the conversation.", "type": "array", - "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "$ref": "#/components/schemas/OpenAIMessage" + } + }, + "tools": { + "type": "array", + "description": "These are the tools that the assistant can use during the call. 
To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" }, { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" }, { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" }, { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferTool" } ] } }, - "type": { - "type": "string", - "enum": [ - "endCall" - ], - "description": "The type of tool. \"endCall\" for End Call tool." + "toolIds": { + "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", + "type": "array", + "items": { + "type": "string" + } }, - "function": { - "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", - "allOf": [ + "knowledgeBase": { + "description": "These are the options for the knowledge base.", + "oneOf": [ { - "$ref": "#/components/schemas/OpenAIFunction" + "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", + "title": "Custom" } ] }, - "server": { - "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", - "allOf": [ - { - "$ref": "#/components/schemas/Server" - } + "knowledgeBaseId": { + "type": "string", + "description": "This is the ID of the knowledge base the model will use." + }, + "provider": { + "type": "string", + "enum": [ + "anyscale" ] + }, + "model": { + "type": "string", + "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b" + }, + "temperature": { + "type": "number", + "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", + "minimum": 0, + "maximum": 2 + }, + "maxTokens": { + "type": "number", + "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. 
Default is 250.", + "minimum": 50, + "maximum": 10000 + }, + "emotionRecognitionEnabled": { + "type": "boolean", + "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" + }, + "numFastTurns": { + "type": "number", + "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", + "minimum": 0 } }, "required": [ - "type" + "provider", + "model" ] }, - "CreateVoicemailToolDTO": { + "AnthropicModel": { "type": "object", "properties": { - "async": { - "type": "boolean", - "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", - "example": false - }, "messages": { + "description": "This is the starting state for the conversation.", "type": "array", - "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "$ref": "#/components/schemas/OpenAIMessage" + } + }, + "tools": { + "type": "array", + "description": "These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" }, { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" }, { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" }, { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferTool" } ] } }, - "type": { + "toolIds": { + "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", + "type": "array", + "items": { + "type": "string" + } + }, + "knowledgeBase": { + "description": "These are the options for the knowledge base.", + "oneOf": [ + { + "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", + "title": "Custom" + } + ] + }, + "knowledgeBaseId": { + "type": "string", + "description": "This is the ID of the knowledge base the model will use." 
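> The `AnyscaleModel` schema defined above can be exercised with a payload like the following minimal sketch; the model name is the example quoted in the spec's own `model` description, and the system prompt is a placeholder:

```json
{
  "provider": "anyscale",
  "model": "cognitivecomputations/dolphin-mixtral-8x7b",
  "temperature": 0,
  "maxTokens": 250,
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful voice assistant."
    }
  ]
}
```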
+      },
+      "model": {
+        "type": "string",
+        "description": "This is the Anthropic/Claude model that will be used.",
+        "enum": [
+          "claude-3-opus-20240229",
+          "claude-3-sonnet-20240229",
+          "claude-3-haiku-20240307",
+          "claude-3-5-sonnet-20240620",
+          "claude-3-5-sonnet-20241022",
+          "claude-3-5-haiku-20241022"
+        ]
+      },
+      "provider": {
        "type": "string",
        "enum": [
-          "voicemail"
-        ],
-        "description": "The type of tool. \"voicemail\". This uses the model itself to determine if a voicemil was reached. Can be used alternatively/alongside with TwilioVoicemailDetection"
-      },
-      "function": {
-        "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.",
-        "allOf": [
-          {
-            "$ref": "#/components/schemas/OpenAIFunction"
-          }
+          "anthropic"
        ]
      },
-      "server": {
-        "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.",
-        "allOf": [
-          {
-            "$ref": "#/components/schemas/Server"
-          }
-        ]
+      "temperature": {
+        "type": "number",
+        "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.",
+        "minimum": 0,
+        "maximum": 2
+      },
+      "maxTokens": {
+        "type": "number",
+        "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.",
+        "minimum": 50,
+        "maximum": 10000
+      },
+      "emotionRecognitionEnabled": {
+        "type": "boolean",
+        "description": "This determines whether we detect the user's emotion while they speak and send it as additional info to the model.\n\nDefault `false` because the model is usually good at understanding the user's emotion from text.\n\n@default false"
+      },
+      "numFastTurns": {
+        "type": "number",
+        "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0",
+        "minimum": 0
+      }
      },
      "required": [
-        "type"
+        "model",
+        "provider"
      ]
    },
-    "CreateFunctionToolDTO": {
+    "CustomLLMModel": {
      "type": "object",
      "properties": {
-        "async": {
-          "type": "boolean",
-          "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. 
This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", - "example": false - }, "messages": { + "description": "This is the starting state for the conversation.", "type": "array", - "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "$ref": "#/components/schemas/OpenAIMessage" + } + }, + "tools": { + "type": "array", + "description": "These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" }, { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" }, { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" }, { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferTool" } ] } }, - "type": { - "type": "string", - "enum": [ - "function" - ], - "description": "The type of tool. \"function\" for Function tool." + "toolIds": { + "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", + "type": "array", + "items": { + "type": "string" + } }, - "function": { - "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", - "allOf": [ + "knowledgeBase": { + "description": "These are the options for the knowledge base.", + "oneOf": [ { - "$ref": "#/components/schemas/OpenAIFunction" + "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", + "title": "Custom" } ] }, - "server": { - "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. 
Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.",
-        "allOf": [
-          {
-            "$ref": "#/components/schemas/Server"
-          }
+      "knowledgeBaseId": {
+        "type": "string",
+        "description": "This is the ID of the knowledge base the model will use."
+      },
+      "provider": {
+        "type": "string",
+        "description": "This is the provider that will be used for the model. Any service, including your own server, that is compatible with the OpenAI API can be used.",
+        "enum": [
+          "custom-llm"
+        ]
+      },
+      "metadataSendMode": {
+        "type": "string",
+        "description": "This determines whether metadata is sent in requests to the custom provider.\n\n- `off` will not send any metadata. payload will look like `{ messages }`\n- `variable` will send `assistant.metadata` as a variable on the payload. payload will look like `{ messages, metadata }`\n- `destructured` will send `assistant.metadata` fields directly on the payload. payload will look like `{ messages, ...metadata }`\n\nFurther, `variable` and `destructured` will send `call`, `phoneNumber`, and `customer` objects in the payload.\n\nDefault is `variable`.",
+        "enum": [
+          "off",
+          "variable",
+          "destructured"
        ]
+      },
+      "url": {
+        "type": "string",
+        "description": "This is the URL we'll use for the OpenAI client's `baseURL`. Ex. https://openrouter.ai/api/v1"
+      },
+      "model": {
+        "type": "string",
+        "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b"
+      },
+      "temperature": {
+        "type": "number",
+        "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.",
+        "minimum": 0,
+        "maximum": 2
+      },
+      "maxTokens": {
+        "type": "number",
+        "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.",
+        "minimum": 50,
+        "maximum": 10000
+      },
+      "emotionRecognitionEnabled": {
+        "type": "boolean",
+        "description": "This determines whether we detect the user's emotion while they speak and send it as additional info to the model.\n\nDefault `false` because the model is usually good at understanding the user's emotion from text.\n\n@default false"
+      },
+      "numFastTurns": {
+        "type": "number",
+        "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0",
+        "minimum": 0
+      }
      },
      "required": [
-        "type"
+        "provider",
+        "url",
+        "model"
      ]
    },
-    "GhlToolMetadata": {
-      "type": "object",
-      "properties": {
-        "workflowId": {
-          "type": "string"
-        },
-        "locationId": {
-          "type": "string"
-        }
-      }
-    },
-    "CreateGhlToolDTO": {
+    "DeepInfraModel": {
      "type": "object",
      "properties": {
-        "async": {
-          "type": "boolean",
-          "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. 
This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", - "example": false - }, "messages": { + "description": "This is the starting state for the conversation.", "type": "array", - "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "$ref": "#/components/schemas/OpenAIMessage" + } + }, + "tools": { + "type": "array", + "description": "These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" }, { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" }, { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" }, { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferTool" } ] } }, - "type": { + "toolIds": { + "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", + "type": "array", + "items": { + "type": "string" + } + }, + "knowledgeBase": { + "description": "These are the options for the knowledge base.", + "oneOf": [ + { + "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", + "title": "Custom" + } + ] + }, + "knowledgeBaseId": { + "type": "string", + "description": "This is the ID of the knowledge base the model will use." + }, + "provider": { "type": "string", "enum": [ - "ghl" - ], - "description": "The type of tool. \"ghl\" for GHL tool." + "deepinfra" + ] + }, + "model": { + "type": "string", + "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b" + }, + "temperature": { + "type": "number", + "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", + "minimum": 0, + "maximum": 2 }, - "metadata": { - "$ref": "#/components/schemas/GhlToolMetadata" + "maxTokens": { + "type": "number", + "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.", + "minimum": 50, + "maximum": 10000 }, - "function": { - "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. 
You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", - "allOf": [ - { - "$ref": "#/components/schemas/OpenAIFunction" - } - ] + "emotionRecognitionEnabled": { + "type": "boolean", + "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" }, - "server": { - "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", - "allOf": [ - { - "$ref": "#/components/schemas/Server" - } + "numFastTurns": { + "type": "number", + "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", + "minimum": 0 + } + }, + "required": [ + "provider", + "model" + ] + }, + "GeminiMultimodalLivePrebuiltVoiceConfig": { + "type": "object", + "properties": { + "voiceName": { + "type": "string", + "enum": [ + "Puck", + "Charon", + "Kore", + "Fenrir", + "Aoede" ] } }, "required": [ - "type", - "metadata" + "voiceName" ] }, - "MakeToolMetadata": { + "GeminiMultimodalLiveVoiceConfig": { "type": "object", "properties": { - "scenarioId": { - "type": "number" + "prebuiltVoiceConfig": { + "$ref": "#/components/schemas/GeminiMultimodalLivePrebuiltVoiceConfig" + } + }, + "required": [ + "prebuiltVoiceConfig" + ] + }, + "GeminiMultimodalLiveSpeechConfig": { + "type": "object", + "properties": { + "voiceConfig": { + "$ref": "#/components/schemas/GeminiMultimodalLiveVoiceConfig" + } + }, + "required": [ + "voiceConfig" + ] + }, + "GoogleRealtimeConfig": { + "type": "object", + "properties": { + "topP": { + "type": "number", + "description": "This is the nucleus sampling parameter that controls the cumulative probability of tokens considered during text generation.\nOnly applicable with the Gemini Flash 2.0 Multimodal Live API." }, - "triggerHookId": { - "type": "number" + "topK": { + "type": "number", + "description": "This is the top-k sampling parameter that limits the number of highest probability tokens considered during text generation.\nOnly applicable with the Gemini Flash 2.0 Multimodal Live API." + }, + "presencePenalty": { + "type": "number", + "description": "This is the presence penalty parameter that influences the model's likelihood to repeat information by penalizing tokens based on their presence in the text.\nOnly applicable with the Gemini Flash 2.0 Multimodal Live API." + }, + "frequencyPenalty": { + "type": "number", + "description": "This is the frequency penalty parameter that influences the model's likelihood to repeat tokens by penalizing them based on their frequency in the text.\nOnly applicable with the Gemini Flash 2.0 Multimodal Live API." 
+ }, + "speechConfig": { + "description": "This is the speech configuration object that defines the voice settings to be used for the model's speech output.\nOnly applicable with the Gemini Flash 2.0 Multimodal Live API.", + "allOf": [ + { + "$ref": "#/components/schemas/GeminiMultimodalLiveSpeechConfig" + } + ] } } }, - "CreateMakeToolDTO": { + "GoogleModel": { "type": "object", "properties": { - "async": { - "type": "boolean", - "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", - "example": false - }, "messages": { + "description": "This is the starting state for the conversation.", "type": "array", - "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "$ref": "#/components/schemas/OpenAIMessage" + } + }, + "tools": { + "type": "array", + "description": "These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" }, { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" }, { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" }, { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferTool" } ] } }, - "type": { - "type": "string", - "enum": [ - "make" - ], - "description": "The type of tool. \"make\" for Make tool." - }, - "metadata": { - "$ref": "#/components/schemas/MakeToolMetadata" + "toolIds": { + "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", + "type": "array", + "items": { + "type": "string" + } }, - "function": { - "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. 
One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", - "allOf": [ + "knowledgeBase": { + "description": "These are the options for the knowledge base.", + "oneOf": [ { - "$ref": "#/components/schemas/OpenAIFunction" + "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", + "title": "Custom" } ] }, - "server": { - "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "knowledgeBaseId": { + "type": "string", + "description": "This is the ID of the knowledge base the model will use." + }, + "model": { + "type": "string", + "description": "This is the Google model that will be used.", + "enum": [ + "gemini-2.0-flash-thinking-exp", + "gemini-2.0-pro-exp-02-05", + "gemini-2.0-flash", + "gemini-2.0-flash-lite-preview-02-05", + "gemini-2.0-flash-exp", + "gemini-2.0-flash-realtime-exp", + "gemini-1.5-flash", + "gemini-1.5-flash-002", + "gemini-1.5-pro", + "gemini-1.5-pro-002", + "gemini-1.0-pro" + ] + }, + "provider": { + "type": "string", + "enum": [ + "google" + ] + }, + "realtimeConfig": { + "description": "This is the session configuration for the Gemini Flash 2.0 Multimodal Live API.\nOnly applicable if the model `gemini-2.0-flash-realtime-exp` is selected.", "allOf": [ { - "$ref": "#/components/schemas/Server" + "$ref": "#/components/schemas/GoogleRealtimeConfig" } ] + }, + "temperature": { + "type": "number", + "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", + "minimum": 0, + "maximum": 2 + }, + "maxTokens": { + "type": "number", + "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.", + "minimum": 50, + "maximum": 10000 + }, + "emotionRecognitionEnabled": { + "type": "boolean", + "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" + }, + "numFastTurns": { + "type": "number", + "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", + "minimum": 0 } }, "required": [ - "type", - "metadata" + "model", + "provider" ] }, - "CustomMessage": { + "GroqModel": { "type": "object", "properties": { - "contents": { + "messages": { + "description": "This is the starting state for the conversation.", "type": "array", - "description": "This is an alternative to the `content` property. 
It allows to specify variants of the same content, one per language.\n\nUsage:\n- If your assistants are multilingual, you can provide content for each language.\n- If you don't provide content for a language, the first item in the array will be automatically translated to the active language at that moment.\n\nThis will override the `content` property.", + "items": { + "$ref": "#/components/schemas/OpenAIMessage" + } + }, + "tools": { + "type": "array", + "description": "These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/TextContent", - "title": "Text" + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" + }, + { + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" + }, + { + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" + }, + { + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferTool" } - ] - } - }, - "type": { - "type": "string", - "description": "This is a custom message.", - "enum": [ - "custom-message" - ] - }, - "content": { - "type": "string", - "description": "This is the content that the assistant will say when this message is triggered.", - "maxLength": 1000 - } - }, - "required": [ - "type" - ] - }, - "TransferDestinationAssistant": { - "type": "object", - "properties": { - "message": { - "description": "This is spoken to the customer before connecting them to the destination.\n\nUsage:\n- If this is not provided and transfer tool messages is not provided, default is \"Transferring the call now\".\n- If set to \"\", nothing is spoken. This is useful when you want to silently transfer. This is especially useful when transferring between assistants in a squad. In this scenario, you likely also want to set `assistant.firstMessageMode=assistant-speaks-first-with-model-generated-message` for the destination assistant.\n\nThis accepts a string or a ToolMessageStart class. Latter is useful if you want to specify multiple messages for different languages through the `contents` field.", + ] + } + }, + "toolIds": { + "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", + "type": "array", + "items": { + "type": "string" + } + }, + "knowledgeBase": { + "description": "These are the options for the knowledge base.", "oneOf": [ { - "type": "string" - }, - { - "$ref": "#/components/schemas/CustomMessage" + "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", + "title": "Custom" } ] }, - "type": { + "knowledgeBaseId": { "type": "string", - "enum": [ - "assistant" - ] + "description": "This is the ID of the knowledge base the model will use." }, - "transferMode": { + "model": { "type": "string", - "description": "This is the mode to use for the transfer. Defaults to `rolling-history`.\n\n- `rolling-history`: This is the default mode. 
It keeps the entire conversation history and appends the new assistant's system message on transfer.\n\n Example:\n\n Pre-transfer:\n system: assistant1 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n\n Post-transfer:\n system: assistant1 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n system: assistant2 system message\n assistant: assistant2 first message (or model generated if firstMessageMode is set to `assistant-speaks-first-with-model-generated-message`)\n\n- `swap-system-message-in-history`: This replaces the original system message with the new assistant's system message on transfer.\n\n Example:\n\n Pre-transfer:\n system: assistant1 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n\n Post-transfer:\n system: assistant2 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n assistant: assistant2 first message (or model generated if firstMessageMode is set to `assistant-speaks-first-with-model-generated-message`)\n\n- `delete-history`: This deletes the entire conversation history on transfer.\n\n Example:\n\n Pre-transfer:\n system: assistant1 system message\n assistant: assistant1 first message\n user: hey, good morning\n assistant: how can i help?\n user: i need help with my account\n assistant: (destination.message)\n\n Post-transfer:\n system: assistant2 system message\n assistant: assistant2 first message\n user: Yes, please\n assistant: how can i help?\n user: i need help with my account\n\n@default 'rolling-history'", + "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b", "enum": [ - "rolling-history", - "swap-system-message-in-history", - "delete-history" - ] - }, - "assistantName": { - "type": "string", - "description": "This is the assistant to transfer the call to." - }, - "description": { - "type": "string", - "description": "This is the description of the destination, used by the AI to choose when and how to transfer the call." - } - }, - "required": [ - "type", - "assistantName" - ] - }, - "TransferDestinationStep": { - "type": "object", - "properties": { - "message": { - "description": "This is spoken to the customer before connecting them to the destination.\n\nUsage:\n- If this is not provided and transfer tool messages is not provided, default is \"Transferring the call now\".\n- If set to \"\", nothing is spoken. This is useful when you want to silently transfer. This is especially useful when transferring between assistants in a squad. In this scenario, you likely also want to set `assistant.firstMessageMode=assistant-speaks-first-with-model-generated-message` for the destination assistant.\n\nThis accepts a string or a ToolMessageStart class. 
Latter is useful if you want to specify multiple messages for different languages through the `contents` field.", - "oneOf": [ - { - "type": "string" - }, - { - "$ref": "#/components/schemas/CustomMessage" - } + "deepseek-r1-distill-llama-70b", + "llama-3.3-70b-versatile", + "llama-3.1-405b-reasoning", + "llama-3.1-70b-versatile", + "llama-3.1-8b-instant", + "mixtral-8x7b-32768", + "llama3-8b-8192", + "llama3-70b-8192", + "gemma2-9b-it" ] }, - "type": { + "provider": { "type": "string", "enum": [ - "step" + "groq" ] }, - "stepName": { - "type": "string", - "description": "This is the step to transfer to." + "temperature": { + "type": "number", + "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", + "minimum": 0, + "maximum": 2 }, - "description": { - "type": "string", - "description": "This is the description of the destination, used by the AI to choose when and how to transfer the call." + "maxTokens": { + "type": "number", + "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.", + "minimum": 50, + "maximum": 10000 + }, + "emotionRecognitionEnabled": { + "type": "boolean", + "description": "This determines whether we detect the user's emotion while they speak and send it as additional info to the model.\n\nDefault `false` because the model is usually good at understanding the user's emotion from text.\n\n@default false" + }, + "numFastTurns": { + "type": "number", + "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", + "minimum": 0 } }, "required": [ - "type", - "stepName" + "model", + "provider" ] }, - "SummaryPlan": { + "InflectionAIModel": { "type": "object", "properties": { "messages": { - "description": "These are the messages used to generate the summary.\n\n@default: ```\n[\n {\n \"role\": \"system\",\n \"content\": \"You are an expert note-taker. You will be given a transcript of a call. Summarize the call in 2-3 sentences. DO NOT return anything except the summary.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Here is the transcript:\\n\\n{{transcript}}\\n\\n\"\n }\n]```\n\nYou can customize by providing any messages you want.\n\nHere are the template variables available:\n- {{transcript}}: The transcript of the call from `call.artifact.transcript`- {{systemPrompt}}: The system prompt of the call from `assistant.model.messages[type=system].content`", + "description": "This is the starting state for the conversation.", "type": "array", "items": { - "type": "object" + "$ref": "#/components/schemas/OpenAIMessage" } }, - "enabled": { - "type": "boolean", - "description": "This determines whether a summary is generated and stored in `call.analysis.summary`. Defaults to true.\n\nUsage:\n- If you want to disable the summary, set this to false.\n\n@default true" + "tools": { + "type": "array", + "description": "These are the tools that the assistant can use during the call.
To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" + }, + { + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" + }, + { + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" + }, + { + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferTool" + } + ] + } }, - "timeoutSeconds": { - "type": "number", - "description": "This is how long the request is tried before giving up. When request times out, `call.analysis.summary` will be empty.\n\nUsage:\n- To guarantee the summary is generated, set this value high. Note, this will delay the end of call report in cases where model is slow to respond.\n\n@default 5 seconds", - "minimum": 1, - "maximum": 60 - } - } - }, - "TransferPlan": { - "type": "object", - "properties": { - "mode": { - "type": "string", - "description": "This configures how transfer is executed and the experience of the destination party receiving the call.\n\nUsage:\n- `blind-transfer`: The assistant forwards the call to the destination without any message or summary.\n- `blind-transfer-add-summary-to-sip-header`: The assistant forwards the call to the destination and adds a SIP header X-Transfer-Summary to the call to include the summary.\n- `warm-transfer-say-message`: The assistant dials the destination, delivers the `message` to the destination party, connects the customer, and leaves the call.\n- `warm-transfer-say-summary`: The assistant dials the destination, provides a summary of the call to the destination party, connects the customer, and leaves the call.\n- `warm-transfer-wait-for-operator-to-speak-first-and-then-say-message`: The assistant dials the destination, waits for the operator to speak, delivers the `message` to the destination party, and then connects the customer.\n- `warm-transfer-wait-for-operator-to-speak-first-and-then-say-summary`: The assistant dials the destination, waits for the operator to speak, provides a summary of the call to the destination party, and then connects the customer.\n\n@default 'blind-transfer'", - "enum": [ - "blind-transfer", - "blind-transfer-add-summary-to-sip-header", - "warm-transfer-say-message", - "warm-transfer-say-summary", - "warm-transfer-wait-for-operator-to-speak-first-and-then-say-message", - "warm-transfer-wait-for-operator-to-speak-first-and-then-say-summary" - ] + "toolIds": { + "description": "These are the tools that the assistant can use during the call. 
To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", + "type": "array", + "items": { + "type": "string" + } }, - "message": { - "description": "This is the message the assistant will deliver to the destination party before connecting the customer.\n\nUsage:\n- Used only when `mode` is `warm-transfer-say-message` or `warm-transfer-wait-for-operator-to-speak-first-and-then-say-message`.", + "knowledgeBase": { + "description": "These are the options for the knowledge base.", "oneOf": [ { - "type": "string" - }, - { - "$ref": "#/components/schemas/CustomMessage" + "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", + "title": "Custom" } ] }, - "summaryPlan": { - "description": "This is the plan for generating a summary of the call to present to the destination party.\n\nUsage:\n- Used only when `mode` is `warm-transfer-say-summary` or `warm-transfer-wait-for-operator-to-speak-first-and-then-say-summary`.", - "allOf": [ - { - "$ref": "#/components/schemas/SummaryPlan" - } - ] - } - }, - "required": [ - "mode" - ] - }, - "TransferDestinationNumber": { - "type": "object", - "properties": { - "message": { - "description": "This is spoken to the customer before connecting them to the destination.\n\nUsage:\n- If this is not provided and transfer tool messages is not provided, default is \"Transferring the call now\".\n- If set to \"\", nothing is spoken. This is useful when you want to silently transfer. This is especially useful when transferring between assistants in a squad. In this scenario, you likely also want to set `assistant.firstMessageMode=assistant-speaks-first-with-model-generated-message` for the destination assistant.\n\nThis accepts a string or a ToolMessageStart class. Latter is useful if you want to specify multiple messages for different languages through the `contents` field.", - "oneOf": [ - { - "type": "string" - }, - { - "$ref": "#/components/schemas/CustomMessage" - } - ] + "knowledgeBaseId": { + "type": "string", + "description": "This is the ID of the knowledge base the model will use." }, - "type": { + "model": { "type": "string", + "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b", "enum": [ - "number" + "inflection_3_pi" ] }, - "numberE164CheckEnabled": { - "type": "boolean", - "description": "This is the flag to toggle the E164 check for the `number` field. This is an advanced property which should be used if you know your use case requires it.\n\nUse cases:\n- `false`: To allow non-E164 numbers like `+001234567890`, `1234`, or `abc`. This is useful for dialing out to non-E164 numbers on your SIP trunks.\n- `true` (default): To allow only E164 numbers like `+14155551234`. This is standard for PSTN calls.\n\nIf `false`, the `number` is still required to only contain alphanumeric characters (regex: `/^\\+?[a-zA-Z0-9]+$/`).\n\n@default true (E164 check is enabled)", - "default": true - }, - "number": { + "provider": { "type": "string", - "description": "This is the phone number to transfer the call to.", - "minLength": 3, - "maxLength": 40 + "enum": [ + "inflection-ai" + ] }, - "extension": { - "type": "string", - "description": "This is the extension to dial after transferring the call to the `number`.", - "minLength": 1, - "maxLength": 10 + "temperature": { + "type": "number", + "description": "This is the temperature that will be used for calls. 
Default is 0 to leverage caching for lower latency.", + "minimum": 0, + "maximum": 2 }, - "callerId": { - "type": "string", - "description": "This is the caller ID to use when transferring the call to the `number`.\n\nUsage:\n- If not provided, the caller ID will be the number the call is coming from. Example, +14151111111 calls in to and the assistant transfers out to +16470000000. +16470000000 will see +14151111111 as the caller.\n- To change this behavior, provide a `callerId`.\n- Set to '{{customer.number}}' to always use the customer's number as the caller ID.\n- Set to '{{phoneNumber.number}}' to always use the phone number of the assistant as the caller ID.\n- Set to any E164 number to always use that number as the caller ID. This needs to be a number that is owned or verified by your Transport provider like Twilio.\n\nFor Twilio, you can read up more here: https://www.twilio.com/docs/voice/twiml/dial#callerid", - "maxLength": 40 + "maxTokens": { + "type": "number", + "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.", + "minimum": 50, + "maximum": 10000 }, - "transferPlan": { - "description": "This configures how transfer is executed and the experience of the destination party receiving the call. Defaults to `blind-transfer`.\n\n@default `transferPlan.mode='blind-transfer'`", - "allOf": [ - { - "$ref": "#/components/schemas/TransferPlan" - } - ] + "emotionRecognitionEnabled": { + "type": "boolean", + "description": "This determines whether we detect the user's emotion while they speak and send it as additional info to the model.\n\nDefault `false` because the model is usually good at understanding the user's emotion from text.\n\n@default false" }, - "description": { - "type": "string", - "description": "This is the description of the destination, used by the AI to choose when and how to transfer the call." + "numFastTurns": { + "type": "number", + "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", + "minimum": 0 } }, "required": [ - "type", - "number" + "model", + "provider" ] }, - "TransferDestinationSip": { + "DeepSeekModel": { "type": "object", "properties": { - "message": { - "description": "This is spoken to the customer before connecting them to the destination.\n\nUsage:\n- If this is not provided and transfer tool messages is not provided, default is \"Transferring the call now\".\n- If set to \"\", nothing is spoken. This is useful when you want to silently transfer. This is especially useful when transferring between assistants in a squad. In this scenario, you likely also want to set `assistant.firstMessageMode=assistant-speaks-first-with-model-generated-message` for the destination assistant.\n\nThis accepts a string or a ToolMessageStart class. Latter is useful if you want to specify multiple messages for different languages through the `contents` field.", + "messages": { + "description": "This is the starting state for the conversation.", + "type": "array", + "items": { + "$ref": "#/components/schemas/OpenAIMessage" + } + }, + "tools": { + "type": "array", + "description": "These are the tools that the assistant can use during the call.
To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" + }, + { + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" + }, + { + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" + }, + { + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferTool" + } + ] + } + }, + "toolIds": { + "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", + "type": "array", + "items": { + "type": "string" + } + }, + "knowledgeBase": { + "description": "These are the options for the knowledge base.", "oneOf": [ { - "type": "string" - }, - { - "$ref": "#/components/schemas/CustomMessage" + "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", + "title": "Custom" } ] }, - "type": { + "knowledgeBaseId": { + "type": "string", + "description": "This is the ID of the knowledge base the model will use." + }, + "model": { "type": "string", + "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b", "enum": [ - "sip" + "deepseek-chat", + "deepseek-reasoner" ] }, - "sipUri": { + "provider": { "type": "string", - "description": "This is the SIP URI to transfer the call to." - }, - "transferPlan": { - "description": "This configures how transfer is executed and the experience of the destination party receiving the call. Defaults to `blind-transfer`.\n\n@default `transferPlan.mode='blind-transfer'`", - "allOf": [ - { - "$ref": "#/components/schemas/TransferPlan" - } + "enum": [ + "deep-seek" ] }, - "sipHeaders": { - "type": "object", - "description": "These are custom headers to be added to SIP refer during transfer call." + "temperature": { + "type": "number", + "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", + "minimum": 0, + "maximum": 2 }, - "description": { - "type": "string", - "description": "This is the description of the destination, used by the AI to choose when and how to transfer the call." + "maxTokens": { + "type": "number", + "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.", + "minimum": 50, + "maximum": 10000 + }, + "emotionRecognitionEnabled": { + "type": "boolean", + "description": "This determines whether we detect the user's emotion while they speak and send it as additional info to the model.\n\nDefault `false` because the model is usually good at understanding the user's emotion from text.\n\n@default false" + }, + "numFastTurns": { + "type": "number", + "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model.
Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", + "minimum": 0 } }, "required": [ - "type", - "sipUri" + "model", + "provider" ] }, - "CreateTransferCallToolDTO": { + "OpenAIModel": { "type": "object", "properties": { - "async": { - "type": "boolean", - "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", - "example": false - }, "messages": { + "description": "This is the starting state for the conversation.", "type": "array", - "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "$ref": "#/components/schemas/OpenAIMessage" + } + }, + "tools": { + "type": "array", + "description": "These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" }, { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" }, { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" }, { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferTool" } ] } }, - "type": { + "toolIds": { + "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", + "type": "array", + "items": { + "type": "string" + } + }, + "knowledgeBase": { + "description": "These are the options for the knowledge base.", + "oneOf": [ + { + "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", + "title": "Custom" + } + ] + }, + "knowledgeBaseId": { + "type": "string", + "description": "This is the ID of the knowledge base the model will use." 
+ }, + "provider": { "type": "string", + "description": "This is the provider that will be used for the model.", "enum": [ - "transferCall" + "openai" ] }, - "destinations": { + "model": { + "type": "string", + "description": "This is the OpenAI model that will be used.", + "enum": [ + "chatgpt-4o-latest", + "o3-mini", + "o1-preview", + "o1-preview-2024-09-12", + "o1-mini", + "o1-mini-2024-09-12", + "gpt-4o-realtime-preview-2024-10-01", + "gpt-4o-realtime-preview-2024-12-17", + "gpt-4o-mini-realtime-preview-2024-12-17", + "gpt-4o-mini", + "gpt-4o-mini-2024-07-18", + "gpt-4o", + "gpt-4o-2024-05-13", + "gpt-4o-2024-08-06", + "gpt-4o-2024-11-20", + "gpt-4-turbo", + "gpt-4-turbo-2024-04-09", + "gpt-4-turbo-preview", + "gpt-4-0125-preview", + "gpt-4-1106-preview", + "gpt-4", + "gpt-4-0613", + "gpt-3.5-turbo", + "gpt-3.5-turbo-0125", + "gpt-3.5-turbo-1106", + "gpt-3.5-turbo-16k", + "gpt-3.5-turbo-0613" + ] + }, + "fallbackModels": { "type": "array", - "description": "These are the destinations that the call can be transferred to. If no destinations are provided, server.url will be used to get the transfer destination once the tool is called.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/TransferDestinationAssistant", - "title": "Assistant" - }, - { - "$ref": "#/components/schemas/TransferDestinationStep", - "title": "Step" - }, - { - "$ref": "#/components/schemas/TransferDestinationNumber", - "title": "Number" - }, - { - "$ref": "#/components/schemas/TransferDestinationSip", - "title": "Sip" - } + "description": "These are the fallback models that will be used if the primary model fails. This shouldn't be specified unless you have a specific reason to do so. Vapi will automatically find the fastest fallbacks that make sense.", + "enum": [ + "chatgpt-4o-latest", + "o3-mini", + "o1-preview", + "o1-preview-2024-09-12", + "o1-mini", + "o1-mini-2024-09-12", + "gpt-4o-realtime-preview-2024-10-01", + "gpt-4o-realtime-preview-2024-12-17", + "gpt-4o-mini-realtime-preview-2024-12-17", + "gpt-4o-mini", + "gpt-4o-mini-2024-07-18", + "gpt-4o", + "gpt-4o-2024-05-13", + "gpt-4o-2024-08-06", + "gpt-4o-2024-11-20", + "gpt-4-turbo", + "gpt-4-turbo-2024-04-09", + "gpt-4-turbo-preview", + "gpt-4-0125-preview", + "gpt-4-1106-preview", + "gpt-4", + "gpt-4-0613", + "gpt-3.5-turbo", + "gpt-3.5-turbo-0125", + "gpt-3.5-turbo-1106", + "gpt-3.5-turbo-16k", + "gpt-3.5-turbo-0613" + ], + "example": [ + "gpt-4-0125-preview", + "gpt-4-0613" + ], + "items": { + "type": "string", + "enum": [ + "chatgpt-4o-latest", + "o3-mini", + "o1-preview", + "o1-preview-2024-09-12", + "o1-mini", + "o1-mini-2024-09-12", + "gpt-4o-realtime-preview-2024-10-01", + "gpt-4o-realtime-preview-2024-12-17", + "gpt-4o-mini-realtime-preview-2024-12-17", + "gpt-4o-mini", + "gpt-4o-mini-2024-07-18", + "gpt-4o", + "gpt-4o-2024-05-13", + "gpt-4o-2024-08-06", + "gpt-4o-2024-11-20", + "gpt-4-turbo", + "gpt-4-turbo-2024-04-09", + "gpt-4-turbo-preview", + "gpt-4-0125-preview", + "gpt-4-1106-preview", + "gpt-4", + "gpt-4-0613", + "gpt-3.5-turbo", + "gpt-3.5-turbo-0125", + "gpt-3.5-turbo-1106", + "gpt-3.5-turbo-16k", + "gpt-3.5-turbo-0613" ] } }, - "function": { - "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. 
But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", - "allOf": [ - { - "$ref": "#/components/schemas/OpenAIFunction" - } - ] + "temperature": { + "type": "number", + "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", + "minimum": 0, + "maximum": 2 }, - "server": { - "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", - "allOf": [ - { - "$ref": "#/components/schemas/Server" - } - ] - } - }, - "required": [ - "type" - ] - }, - "CreateCustomKnowledgeBaseDTO": { - "type": "object", - "properties": { - "provider": { - "type": "string", - "description": "This knowledge base is bring your own knowledge base implementation.", - "enum": [ - "custom-knowledge-base" - ] + "maxTokens": { + "type": "number", + "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.", + "minimum": 50, + "maximum": 10000 }, - "server": { - "description": "/**\nThis is where the knowledge base request will be sent.\n\nRequest Example:\n\nPOST https://{server.url}\nContent-Type: application/json\n\n{\n \"messsage\": {\n \"type\": \"knowledge-base-request\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"Why is ocean blue?\"\n }\n ],\n ...other metadata about the call...\n }\n}\n\nResponse Expected:\n```\n{\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"The ocean is blue because water absorbs everything but blue.\",\n }, // YOU CAN RETURN THE EXACT RESPONSE TO SPEAK\n \"documents\": [\n {\n \"content\": \"The ocean is blue primarily because water absorbs colors in the red part of the light spectrum and scatters the blue light, making it more visible to our eyes.\",\n \"similarity\": 1\n },\n {\n \"content\": \"Blue light is scattered more by the water molecules than other colors, enhancing the blue appearance of the ocean.\",\n \"similarity\": .5\n }\n ] // OR, YOU CAN RETURN AN ARRAY OF DOCUMENTS THAT WILL BE SENT TO THE MODEL\n}\n```", - "allOf": [ - { - "$ref": "#/components/schemas/Server" - } - ] - } - }, - "required": [ - "provider", - "server" - ] - }, - "OpenAIMessage": { - "type": "object", - "properties": { - "content": { - "type": "string", - "nullable": true, - "maxLength": 100000000 + "emotionRecognitionEnabled": { + "type": "boolean", + "description": "This determines whether we detect the user's emotion while they speak and send it as additional info to the model.\n\nDefault `false` because the model is usually good at understanding the user's emotion from text.\n\n@default false" }, - "role": { - "type": "string", - "enum": [ - "assistant", - "function", - "user", - "system", - "tool" - ] + "numFastTurns": { + "type": "number", + "description": "This sets how many turns
at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", + "minimum": 0 } }, "required": [ - "content", - "role" + "provider", + "model" ] }, - "AnyscaleModel": { + "OpenRouterModel": { "type": "object", "properties": { "messages": { @@ -4923,7 +7685,7 @@ "provider": { "type": "string", "enum": [ - "anyscale" + "openrouter" ] }, "model": { @@ -4957,7 +7719,7 @@ "model" ] }, - "AnthropicModel": { + "PerplexityAIModel": { "type": "object", "properties": { "messages": { @@ -5023,23 +7785,15 @@ "type": "string", "description": "This is the ID of the knowledge base the model will use." }, - "model": { + "provider": { "type": "string", - "description": "This is the Anthropic/Claude models that will be used.", "enum": [ - "claude-3-opus-20240229", - "claude-3-sonnet-20240229", - "claude-3-haiku-20240307", - "claude-3-5-sonnet-20240620", - "claude-3-5-sonnet-20241022", - "claude-3-5-haiku-20241022" + "perplexity-ai" ] }, - "provider": { + "model": { "type": "string", - "enum": [ - "anthropic" - ] + "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b" }, "temperature": { "type": "number", @@ -5064,11 +7818,11 @@ } }, "required": [ - "model", - "provider" + "provider", + "model" ] }, - "CustomLLMModel": { + "TogetherAIModel": { "type": "object", "properties": { "messages": { @@ -5136,24 +7890,10 @@ }, "provider": { "type": "string", - "description": "This is the provider that will be used for the model. Any service, including your own server, that is compatible with the OpenAI API can be used.", - "enum": [ - "custom-llm" - ] - }, - "metadataSendMode": { - "type": "string", - "description": "This determines whether metadata is sent in requests to the custom provider.\n\n- `off` will not send any metadata. payload will look like `{ messages }`\n- `variable` will send `assistant.metadata` as a variable on the payload. payload will look like `{ messages, metadata }`\n- `destructured` will send `assistant.metadata` fields directly on the payload. payload will look like `{ messages, ...metadata }`\n\nFurther, `variable` and `destructured` will send `call`, `phoneNumber`, and `customer` objects in the payload.\n\nDefault is `variable`.", "enum": [ - "off", - "variable", - "destructured" + "together-ai" ] }, - "url": { - "type": "string", - "description": "These is the URL we'll use for the OpenAI client's `baseURL`. Ex. https://openrouter.ai/api/v1" - }, "model": { "type": "string", "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b" @@ -5174,19 +7914,175 @@ "type": "boolean", "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" }, - "numFastTurns": { - "type": "number", - "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", - "minimum": 0 + "numFastTurns": { + "type": "number", + "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. 
Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", + "minimum": 0 + } + }, + "required": [ + "provider", + "model" + ] + }, + "AIEdgeCondition": { + "type": "object", + "properties": { + "type": { + "type": "string", + "enum": [ + "ai" + ] + }, + "matches": { + "type": "array", + "maxLength": 100, + "items": { + "type": "string" + } + } + }, + "required": [ + "type", + "matches" + ] + }, + "LogicEdgeCondition": { + "type": "object", + "properties": { + "type": { + "type": "string", + "enum": [ + "logic" + ] + }, + "liquid": { + "type": "string", + "maxLength": 100 + } + }, + "required": [ + "type", + "liquid" + ] + }, + "FailedEdgeCondition": { + "type": "object", + "properties": { + "type": { + "type": "string", + "enum": [ + "failed" + ] + } + }, + "required": [ + "type" + ] + }, + "Edge": { + "type": "object", + "properties": { + "condition": { + "oneOf": [ + { + "$ref": "#/components/schemas/AIEdgeCondition", + "title": "AIEdgeCondition" + }, + { + "$ref": "#/components/schemas/LogicEdgeCondition", + "title": "LogicEdgeCondition" + }, + { + "$ref": "#/components/schemas/FailedEdgeCondition", + "title": "FailedEdgeCondition" + } + ] + }, + "from": { + "type": "string", + "maxLength": 80 + }, + "to": { + "type": "string", + "maxLength": 80 + }, + "metadata": { + "type": "object", + "description": "This is for metadata you want to store on the edge." + } + }, + "required": [ + "from", + "to" + ] + }, + "Workflow": { + "type": "object", + "properties": { + "nodes": { + "type": "array", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/Say", + "title": "Say" + }, + { + "$ref": "#/components/schemas/Gather", + "title": "Gather" + }, + { + "$ref": "#/components/schemas/ApiRequest", + "title": "ApiRequest" + }, + { + "$ref": "#/components/schemas/Hangup", + "title": "Hangup" + }, + { + "$ref": "#/components/schemas/Transfer", + "title": "Transfer" + } + ] + } + }, + "id": { + "type": "string" + }, + "orgId": { + "type": "string" + }, + "createdAt": { + "format": "date-time", + "type": "string" + }, + "updatedAt": { + "format": "date-time", + "type": "string" + }, + "name": { + "type": "string", + "maxLength": 80 + }, + "edges": { + "type": "array", + "items": { + "$ref": "#/components/schemas/Edge" + } } }, "required": [ - "provider", - "url", - "model" + "nodes", + "id", + "orgId", + "createdAt", + "updatedAt", + "name", + "edges" ] }, - "DeepInfraModel": { + "VapiModel": { "type": "object", "properties": { "messages": { @@ -5252,10 +8148,37 @@ "type": "string", "description": "This is the ID of the knowledge base the model will use." }, + "steps": { + "type": "array", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/HandoffStep", + "title": "HandoffStep" + }, + { + "$ref": "#/components/schemas/CallbackStep", + "title": "CallbackStep" + } + ] + } + }, "provider": { "type": "string", "enum": [ - "deepinfra" + "vapi" + ] + }, + "workflowId": { + "type": "string", + "description": "This is the workflow that will be used for the call. To use a transient workflow, use `workflow` instead." + }, + "workflow": { + "description": "This is the workflow that will be used for the call. 
To use an existing workflow, use `workflowId` instead.", + "allOf": [ + { + "$ref": "#/components/schemas/Workflow" + } ] }, "model": { @@ -5289,7 +8212,7 @@ "model" ] }, - "GoogleModel": { + "XaiModel": { "type": "object", "properties": { "messages": { @@ -5357,19 +8280,17 @@ }, "model": { "type": "string", - "description": "This is the Google model that will be used.", + "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b", "enum": [ - "gemini-1.5-flash", - "gemini-1.5-flash-002", - "gemini-1.5-pro", - "gemini-1.5-pro-002", - "gemini-1.0-pro" + "grok-beta", + "grok-2", + "grok-3" ] }, "provider": { "type": "string", "enum": [ - "google" + "xai" ] }, "temperature": { @@ -5388,1216 +8309,1734 @@ "type": "boolean", "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" }, - "numFastTurns": { - "type": "number", - "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", - "minimum": 0 + "numFastTurns": { + "type": "number", + "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", + "minimum": 0 + } + }, + "required": [ + "model", + "provider" + ] + }, + "ExactReplacement": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the exact replacement type. You can use this to replace a specific word or phrase with a different word or phrase.\n\nUsage:\n- Replace \"hello\" with \"hi\": { type: 'exact', key: 'hello', value: 'hi' }\n- Replace \"good morning\" with \"good day\": { type: 'exact', key: 'good morning', value: 'good day' }\n- Replace a specific name: { type: 'exact', key: 'John Doe', value: 'Jane Smith' }\n- Replace an acronym: { type: 'exact', key: 'AI', value: 'Artificial Intelligence' }\n- Replace a company name with its phonetic pronunciation: { type: 'exact', key: 'Vapi', value: 'Vappy' }", + "enum": [ + "exact" + ] + }, + "key": { + "type": "string", + "description": "This is the key to replace." + }, + "value": { + "type": "string", + "description": "This is the value that will replace the match.", + "maxLength": 1000 + } + }, + "required": [ + "type", + "key", + "value" + ] + }, + "RegexOption": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the type of the regex option. Options are:\n- `ignore-case`: Ignores the case of the text being matched.\n- `whole-word`: Matches whole words only.\n- `multi-line`: Matches across multiple lines.", + "enum": [ + "ignore-case", + "whole-word", + "multi-line" + ] + }, + "enabled": { + "type": "boolean", + "description": "This is whether to enable the option.\n\n@default false" + } + }, + "required": [ + "type", + "enabled" + ] + }, + "RegexReplacement": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the regex replacement type.
You can use this to replace a word or phrase that matches a pattern.\n\nUsage:\n- Replace all numbers with \"some number\": { type: 'regex', regex: '\\\\d+', value: 'some number' }\n- Replace email addresses with \"[EMAIL]\": { type: 'regex', regex: '\\\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\\\.[A-Z|a-z]{2,}\\\\b', value: '[EMAIL]' }\n- Replace phone numbers with a formatted version: { type: 'regex', regex: '(\\\\d{3})(\\\\d{3})(\\\\d{4})', value: '($1) $2-$3' }\n- Replace all instances of \"color\" or \"colour\" with \"hue\": { type: 'regex', regex: 'colou?r', value: 'hue' }\n- Capitalize the first letter of every sentence: { type: 'regex', regex: '(?<=\\\\. |^)[a-z]', value: (match) => match.toUpperCase() }", + "enum": [ + "regex" + ] + }, + "regex": { + "type": "string", + "description": "This is the regex pattern to replace.\n\nNote:\n- This works by using the `string.replace` method in Node.JS. Eg. `\"hello there\".replace(/hello/g, \"hi\")` will return `\"hi there\"`.\n\nHot tip:\n- In JavaScript, escape `\\` when sending the regex pattern. Eg. `\"hello\\sthere\"` will be sent over the wire as `\"hellosthere\"`. Send `\"hello\\\\sthere\"` instead." + }, + "options": { + "description": "These are the options for the regex replacement. Defaults to all disabled.\n\n@default []", + "type": "array", + "items": { + "$ref": "#/components/schemas/RegexOption" + } + }, + "value": { + "type": "string", + "description": "This is the value that will replace the match.", + "maxLength": 1000 + } + }, + "required": [ + "type", + "regex", + "value" + ] + }, + "FormatPlan": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "description": "This determines whether the chunk is formatted before being sent to the voice provider. This helps with enunciation. This includes phone numbers, emails and addresses. Default `true`.\n\nUsage:\n- To rely on the voice provider's formatting logic, set this to `false`.\n\nIf `voice.chunkPlan.enabled` is `false`, this is automatically `false` since there's no chunk to format.\n\n@default true", + "example": true + }, + "numberToDigitsCutoff": { + "type": "number", + "description": "This is the cutoff after which a number is converted to individual digits instead of being spoken as words.\n\nExample:\n- If cutoff 2025, \"12345\" is converted to \"1 2 3 4 5\" while \"1200\" is converted to \"twelve hundred\".\n\nUsage:\n- If your use case doesn't involve IDs like zip codes, set this to a high value.\n- If your use case involves IDs that are shorter than 5 digits, set this to a lower value.\n\n@default 2025", + "minimum": 0, + "example": 2025 + }, + "replacements": { + "type": "array", + "description": "These are the custom replacements you can make to the chunk before it is sent to the voice provider.\n\nUsage:\n- To replace a specific word or phrase with a different word or phrase, use the `ExactReplacement` type. Eg. `{ type: 'exact', key: 'hello', value: 'hi' }`\n- To replace a word or phrase that matches a pattern, use the `RegexReplacement` type. Eg. `{ type: 'regex', regex: '\\\\b[a-zA-Z]{5}\\\\b', value: 'hi' }`\n\n@default []", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ExactReplacement", + "title": "ExactReplacement" + }, + { + "$ref": "#/components/schemas/RegexReplacement", + "title": "RegexReplacement" + } + ] + } + }, + "formattersEnabled": { + "type": "array", + "description": "List of formatters to apply. 
If not provided, all default formatters will be applied.\nIf provided, only the specified formatters will be applied.\nNote: Some essential formatters like angle bracket removal will always be applied.\n@default undefined", + "enum": [ + "markdown", + "asterisk", + "quote", + "dash", + "newline", + "colon", + "acronym", + "dollarAmount", + "email", + "date", + "time", + "distance", + "unit", + "percentage", + "phoneNumber", + "number" + ], + "items": { + "type": "string", + "enum": [ + "markdown", + "asterisk", + "quote", + "dash", + "newline", + "colon", + "acronym", + "dollarAmount", + "email", + "date", + "time", + "distance", + "unit", + "percentage", + "phoneNumber", + "number" + ] + } + } + } + }, + "ChunkPlan": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "description": "This determines whether the model output is chunked before being sent to the voice provider. Default `true`.\n\nUsage:\n- To rely on the voice provider's audio generation logic, set this to `false`.\n- If seeing issues with quality, set this to `true`.\n\nIf disabled, Vapi-provided audio control tokens like will not work.\n\n@default true", + "example": true + }, + "minCharacters": { + "type": "number", + "description": "This is the minimum number of characters in a chunk.\n\nUsage:\n- To increase quality, set this to a higher value.\n- To decrease latency, set this to a lower value.\n\n@default 30", + "minimum": 1, + "maximum": 80, + "example": 30 + }, + "punctuationBoundaries": { + "type": "array", + "description": "These are the punctuations that are considered valid boundaries for a chunk to be created.\n\nUsage:\n- To increase quality, constrain to fewer boundaries.\n- To decrease latency, enable all.\n\nDefault is automatically set to balance the trade-off between quality and latency based on the provider.", + "enum": [ + "。", + ",", + ".", + "!", + "?", + ";", + ")", + "،", + "۔", + "।", + "॥", + "|", + "||", + ",", + ":" + ], + "example": [ + "。", + ",", + ".", + "!", + "?", + ";", + "،", + "۔", + "।", + "॥", + "|", + "||", + ",", + ":" + ], + "items": { + "type": "string", + "enum": [ + "。", + ",", + ".", + "!", + "?", + ";", + ")", + "،", + "۔", + "।", + "॥", + "|", + "||", + ",", + ":" + ] + } + }, + "formatPlan": { + "description": "This is the plan for formatting the chunk before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/FormatPlan" + } + ] } - }, - "required": [ - "model", - "provider" - ] + } }, - "GroqModel": { + "FallbackPlan": { "type": "object", "properties": { - "messages": { - "description": "This is the starting state for the conversation.", - "type": "array", - "items": { - "$ref": "#/components/schemas/OpenAIMessage" - } - }, - "tools": { + "voices": { "type": "array", - "description": "These are the tools that the assistant can use during the call. 
To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", + "description": "This is the list of voices to fallback to in the event that the primary voice provider fails.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" + "$ref": "#/components/schemas/FallbackAzureVoice", + "title": "Azure" }, { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" + "$ref": "#/components/schemas/FallbackCartesiaVoice", + "title": "Cartesia" }, { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" + "$ref": "#/components/schemas/FallbackCustomVoice", + "title": "CustomVoice" }, { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" + "$ref": "#/components/schemas/FallbackDeepgramVoice", + "title": "Deepgram" }, { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" + "$ref": "#/components/schemas/FallbackElevenLabsVoice", + "title": "ElevenLabs" }, { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" + "$ref": "#/components/schemas/FallbackLMNTVoice", + "title": "LMNT" }, { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferTool" + "$ref": "#/components/schemas/FallbackNeetsVoice", + "title": "Neets" + }, + { + "$ref": "#/components/schemas/FallbackOpenAIVoice", + "title": "OpenAI" + }, + { + "$ref": "#/components/schemas/FallbackPlayHTVoice", + "title": "PlayHT" + }, + { + "$ref": "#/components/schemas/FallbackRimeAIVoice", + "title": "RimeAI" + }, + { + "$ref": "#/components/schemas/FallbackSmallestAIVoice", + "title": "Smallest AI" + }, + { + "$ref": "#/components/schemas/FallbackTavusVoice", + "title": "TavusVoice" } ] } + } + }, + "required": [ + "voices" + ] + }, + "AzureVoice": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This is the voice provider that will be used.", + "enum": [ + "azure" + ] }, - "toolIds": { - "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", - "type": "array", - "items": { - "type": "string" - } - }, - "knowledgeBase": { - "description": "These are the options for the knowledge base.", + "voiceId": { + "description": "This is the provider-specific ID that will be used.", "oneOf": [ { - "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", - "title": "Custom" + "type": "string", + "enum": [ + "andrew", + "brian", + "emma" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "Azure Voice ID" } ] }, - "knowledgeBaseId": { + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] + }, + "speed": { + "type": "number", + "description": "This is the speed multiplier that will be used.", + "minimum": 0.5, + "maximum": 2 + }, + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] + } + }, + "required": [ + "provider", + "voiceId" + ] + }, + "CartesiaExperimentalControls": { + "type": "object", + "properties": { + "speed": { "type": "string", - "description": "This is the ID of the knowledge base the model will use." 
+ "enum": [ + "slowest", + "slow", + "normal", + "fast", + "fastest" + ], + "example": "normal" + }, + "emotion": { + "type": "string", + "enum": [ + "anger:lowest", + "anger:low", + "anger:high", + "anger:highest", + "positivity:lowest", + "positivity:low", + "positivity:high", + "positivity:highest", + "surprise:lowest", + "surprise:low", + "surprise:high", + "surprise:highest", + "sadness:lowest", + "sadness:low", + "sadness:high", + "sadness:highest", + "curiosity:lowest", + "curiosity:low", + "curiosity:high", + "curiosity:highest" + ], + "example": [ + "happiness:high" + ] + } + } + }, + "CartesiaVoice": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This is the voice provider that will be used.", + "enum": [ + "cartesia" + ] + }, + "voiceId": { + "type": "string", + "description": "The ID of the particular voice you want to use." }, "model": { "type": "string", - "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b", + "description": "This is the model that will be used. This is optional and will default to the correct model for the voiceId.", "enum": [ - "llama-3.1-405b-reasoning", - "llama-3.1-70b-versatile", - "llama-3.1-8b-instant", - "mixtral-8x7b-32768", - "llama3-8b-8192", - "llama3-70b-8192", - "llama3-groq-8b-8192-tool-use-preview", - "llama3-groq-70b-8192-tool-use-preview", - "gemma-7b-it", - "gemma2-9b-it" + "sonic-english", + "sonic-multilingual", + "sonic-preview", + "sonic" + ], + "example": "sonic-english" + }, + "language": { + "type": "string", + "description": "This is the language that will be used. This is optional and will default to the correct language for the voiceId.", + "enum": [ + "en", + "de", + "es", + "fr", + "ja", + "pt", + "zh", + "hi", + "it", + "ko", + "nl", + "pl", + "ru", + "sv", + "tr" + ], + "example": "en" + }, + "experimentalControls": { + "description": "Experimental controls for Cartesia voice generation", + "allOf": [ + { + "$ref": "#/components/schemas/CartesiaExperimentalControls" + } + ] + }, + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } ] }, + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] + } + }, + "required": [ + "provider", + "voiceId" + ] + }, + "CustomVoice": { + "type": "object", + "properties": { "provider": { "type": "string", + "description": "This is the voice provider that will be used. Use `custom-voice` for providers that are not natively supported.", "enum": [ - "groq" + "custom-voice" ] }, - "temperature": { - "type": "number", - "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", - "minimum": 0, - "maximum": 2 - }, - "maxTokens": { - "type": "number", - "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. 
Default is 250.", - "minimum": 50, - "maximum": 10000 + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] }, - "emotionRecognitionEnabled": { - "type": "boolean", - "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" + "server": { + "description": "This is where the voice request will be sent.\n\nRequest Example:\n\nPOST https://{server.url}\nContent-Type: application/json\n\n{\n \"message\": {\n \"type\": \"voice-request\",\n \"text\": \"Hello, world!\",\n \"sampleRate\": 24000,\n ...other metadata about the call...\n }\n}\n\nResponse Expected: 1-channel 16-bit raw PCM audio at the sample rate specified in the request. Here is how the response will be piped to the transport:\n```\nresponse.on('data', (chunk: Buffer) => {\n outputStream.write(chunk);\n});\n```", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, - "numFastTurns": { - "type": "number", - "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", - "minimum": 0 + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] } }, "required": [ - "model", - "provider" + "provider", + "server" ] }, - "InflectionAIModel": { + "DeepgramVoice": { "type": "object", "properties": { - "messages": { - "description": "This is the starting state for the conversation.", - "type": "array", - "items": { - "$ref": "#/components/schemas/OpenAIMessage" - } - }, - "tools": { - "type": "array", - "description": "These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" - }, - { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" - }, - { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" - }, - { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" - }, - { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" - }, - { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" - }, - { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferTool" - } - ] - } - }, - "toolIds": { - "description": "These are the tools that the assistant can use during the call. 
To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", - "type": "array", - "items": { - "type": "string" - } + "provider": { + "type": "string", + "description": "This is the voice provider that will be used.", + "enum": [ + "deepgram" + ] }, - "knowledgeBase": { - "description": "These are the options for the knowledge base.", + "voiceId": { + "description": "This is the provider-specific ID that will be used.", "oneOf": [ { - "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", - "title": "Custom" + "type": "string", + "enum": [ + "asteria", + "luna", + "stella", + "athena", + "hera", + "orion", + "arcas", + "perseus", + "angus", + "orpheus", + "helios", + "zeus" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "Deepgram Voice ID" } ] }, - "knowledgeBaseId": { - "type": "string", - "description": "This is the ID of the knowledge base the model will use." + "mipOptOut": { + "type": "boolean", + "description": "If set to true, this will add mip_opt_out=true as a query parameter of all API requests. See https://developers.deepgram.com/docs/the-deepgram-model-improvement-partnership-program#want-to-opt-out\n\nThis will only be used if you are using your own Deepgram API key.\n\n@default false", + "example": false, + "default": false }, - "model": { - "type": "string", - "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b", - "enum": [ - "inflection_3_pi" + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } ] }, + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] + } + }, + "required": [ + "provider", + "voiceId" + ] + }, + "ElevenLabsVoice": { + "type": "object", + "properties": { "provider": { "type": "string", + "description": "This is the voice provider that will be used.", "enum": [ - "inflection-ai" + "11labs" ] }, - "temperature": { - "type": "number", - "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", - "minimum": 0, - "maximum": 2 + "voiceId": { + "description": "This is the provider-specific ID that will be used. Ensure the Voice is present in your 11Labs Voice Library.", + "oneOf": [ + { + "type": "string", + "enum": [ + "burt", + "marissa", + "andrea", + "sarah", + "phillip", + "steve", + "joseph", + "myra", + "paula", + "ryan", + "drew", + "paul", + "mrb", + "matilda", + "mark" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "11Labs Voice ID" + } + ] }, - "maxTokens": { + "stability": { "type": "number", - "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. 
Default is 250.", - "minimum": 50, - "maximum": 10000 - }, - "emotionRecognitionEnabled": { - "type": "boolean", - "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" + "description": "Defines the stability for voice settings.", + "minimum": 0, + "maximum": 1, + "example": 0.5 }, - "numFastTurns": { + "similarityBoost": { "type": "number", - "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", - "minimum": 0 - } - }, - "required": [ - "model", - "provider" - ] - }, - "OpenAIModel": { - "type": "object", - "properties": { - "messages": { - "description": "This is the starting state for the conversation.", - "type": "array", - "items": { - "$ref": "#/components/schemas/OpenAIMessage" - } + "description": "Defines the similarity boost for voice settings.", + "minimum": 0, + "maximum": 1, + "example": 0.75 }, - "tools": { - "type": "array", - "description": "These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" - }, - { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" - }, - { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" - }, - { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" - }, - { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" - }, - { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" - }, - { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferTool" - } - ] - } + "style": { + "type": "number", + "description": "Defines the style for voice settings.", + "minimum": 0, + "maximum": 1, + "example": 0 }, - "toolIds": { - "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", - "type": "array", - "items": { - "type": "string" - } + "useSpeakerBoost": { + "type": "boolean", + "description": "Defines the use speaker boost for voice settings.", + "example": false }, - "knowledgeBase": { - "description": "These are the options for the knowledge base.", - "oneOf": [ + "optimizeStreamingLatency": { + "type": "number", + "description": "Defines the optimize streaming latency for voice settings. Defaults to 3.", + "minimum": 0, + "maximum": 4, + "example": 3 + }, + "enableSsmlParsing": { + "type": "boolean", + "description": "This enables the use of https://elevenlabs.io/docs/speech-synthesis/prompting#pronunciation. Defaults to false to save latency.\n\n@default false", + "example": false + }, + "model": { + "type": "string", + "description": "This is the model that will be used. 
Defaults to 'eleven_turbo_v2' if not specified.", + "enum": [ + "eleven_multilingual_v2", + "eleven_turbo_v2", + "eleven_turbo_v2_5", + "eleven_flash_v2", + "eleven_flash_v2_5", + "eleven_monolingual_v1" + ], + "example": "eleven_turbo_v2_5" + }, + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ { - "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", - "title": "Custom" + "$ref": "#/components/schemas/ChunkPlan" } ] }, - "knowledgeBaseId": { + "language": { "type": "string", - "description": "This is the ID of the knowledge base the model will use." + "description": "This is the language (ISO 639-1) that is enforced for the model. Currently only Turbo v2.5 supports language enforcement. For other models, an error will be returned if language code is provided." }, + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] + } + }, + "required": [ + "provider", + "voiceId" + ] + }, + "LMNTVoice": { + "type": "object", + "properties": { "provider": { "type": "string", - "description": "This is the provider that will be used for the model.", + "description": "This is the voice provider that will be used.", "enum": [ - "openai" + "lmnt" ] }, - "model": { - "type": "string", - "description": "This is the OpenAI model that will be used.", - "enum": [ - "gpt-4o-realtime-preview-2024-10-01", - "gpt-4o-mini", - "gpt-4o-mini-2024-07-18", - "gpt-4o", - "gpt-4o-2024-05-13", - "gpt-4o-2024-08-06", - "gpt-4o-2024-11-20", - "gpt-4-turbo", - "gpt-4-turbo-2024-04-09", - "gpt-4-turbo-preview", - "gpt-4-0125-preview", - "gpt-4-1106-preview", - "gpt-4", - "gpt-4-0613", - "gpt-3.5-turbo", - "gpt-3.5-turbo-0125", - "gpt-3.5-turbo-1106", - "gpt-3.5-turbo-16k", - "gpt-3.5-turbo-0613" + "voiceId": { + "description": "This is the provider-specific ID that will be used.", + "oneOf": [ + { + "type": "string", + "enum": [ + "lily", + "daniel" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "LMNT Voice ID" + } ] }, - "fallbackModels": { - "type": "array", - "description": "These are the fallback models that will be used if the primary model fails. This shouldn't be specified unless you have a specific reason to do so. 
Vapi will automatically find the fastest fallbacks that make sense.", - "enum": [ - "gpt-4o-realtime-preview-2024-10-01", - "gpt-4o-mini", - "gpt-4o-mini-2024-07-18", - "gpt-4o", - "gpt-4o-2024-05-13", - "gpt-4o-2024-08-06", - "gpt-4o-2024-11-20", - "gpt-4-turbo", - "gpt-4-turbo-2024-04-09", - "gpt-4-turbo-preview", - "gpt-4-0125-preview", - "gpt-4-1106-preview", - "gpt-4", - "gpt-4-0613", - "gpt-3.5-turbo", - "gpt-3.5-turbo-0125", - "gpt-3.5-turbo-1106", - "gpt-3.5-turbo-16k", - "gpt-3.5-turbo-0613" - ], - "example": [ - "gpt-4-0125-preview", - "gpt-4-0613" - ], - "items": { - "type": "string", - "enum": [ - "gpt-4o-realtime-preview-2024-10-01", - "gpt-4o-mini", - "gpt-4o-mini-2024-07-18", - "gpt-4o", - "gpt-4o-2024-05-13", - "gpt-4o-2024-08-06", - "gpt-4o-2024-11-20", - "gpt-4-turbo", - "gpt-4-turbo-2024-04-09", - "gpt-4-turbo-preview", - "gpt-4-0125-preview", - "gpt-4-1106-preview", - "gpt-4", - "gpt-4-0613", - "gpt-3.5-turbo", - "gpt-3.5-turbo-0125", - "gpt-3.5-turbo-1106", - "gpt-3.5-turbo-16k", - "gpt-3.5-turbo-0613" - ] - } - }, - "semanticCachingEnabled": { - "type": "boolean", - "example": true - }, - "temperature": { - "type": "number", - "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", - "minimum": 0, - "maximum": 2 - }, - "maxTokens": { + "speed": { "type": "number", - "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.", - "minimum": 50, - "maximum": 10000 + "description": "This is the speed multiplier that will be used.", + "minimum": 0.25, + "maximum": 2, + "example": null }, - "emotionRecognitionEnabled": { - "type": "boolean", - "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] }, - "numFastTurns": { - "type": "number", - "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", - "minimum": 0 + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] } }, "required": [ "provider", - "model" + "voiceId" ] }, - "OpenRouterModel": { + "NeetsVoice": { "type": "object", "properties": { - "messages": { - "description": "This is the starting state for the conversation.", - "type": "array", - "items": { - "$ref": "#/components/schemas/OpenAIMessage" - } - }, - "tools": { - "type": "array", - "description": "These are the tools that the assistant can use during the call. 
To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" - }, - { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" - }, - { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" - }, - { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" - }, - { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" - }, - { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" - }, - { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferTool" - } - ] - } - }, - "toolIds": { - "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", - "type": "array", - "items": { - "type": "string" - } + "provider": { + "type": "string", + "description": "This is the voice provider that will be used.", + "enum": [ + "neets" + ] }, - "knowledgeBase": { - "description": "These are the options for the knowledge base.", + "voiceId": { + "description": "This is the provider-specific ID that will be used.", "oneOf": [ { - "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", - "title": "Custom" + "type": "string", + "enum": [ + "vits", + "vits" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "Neets Voice ID" } ] }, - "knowledgeBaseId": { - "type": "string", - "description": "This is the ID of the knowledge base the model will use." + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] }, + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] + } + }, + "required": [ + "provider", + "voiceId" + ] + }, + "OpenAIVoice": { + "type": "object", + "properties": { "provider": { "type": "string", + "description": "This is the voice provider that will be used.", "enum": [ - "openrouter" + "openai" ] }, - "model": { - "type": "string", - "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b" - }, - "temperature": { - "type": "number", - "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", - "minimum": 0, - "maximum": 2 + "voiceId": { + "description": "This is the provider-specific ID that will be used.\nPlease note that ash, ballad, coral, sage, and verse may only be used with realtime models.", + "enum": [ + "alloy", + "echo", + "fable", + "onyx", + "nova", + "shimmer", + "ash", + "ballad", + "coral", + "sage", + "verse" + ], + "oneOf": [ + { + "type": "string", + "enum": [ + "alloy", + "echo", + "fable", + "onyx", + "nova", + "shimmer" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "OpenAI Voice ID" + } + ] }, - "maxTokens": { + "speed": { "type": "number", - "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. 
Default is 250.", - "minimum": 50, - "maximum": 10000 + "description": "This is the speed multiplier that will be used.", + "minimum": 0.25, + "maximum": 4, + "example": null }, - "emotionRecognitionEnabled": { - "type": "boolean", - "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] }, - "numFastTurns": { - "type": "number", - "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", - "minimum": 0 + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] } }, "required": [ "provider", - "model" + "voiceId" ] }, - "PerplexityAIModel": { + "PlayHTVoice": { "type": "object", "properties": { - "messages": { - "description": "This is the starting state for the conversation.", - "type": "array", - "items": { - "$ref": "#/components/schemas/OpenAIMessage" - } - }, - "tools": { - "type": "array", - "description": "These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" - }, - { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" - }, - { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" - }, - { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" - }, - { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" - }, - { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" - }, - { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferTool" - } - ] - } - }, - "toolIds": { - "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", - "type": "array", - "items": { - "type": "string" - } + "provider": { + "type": "string", + "description": "This is the voice provider that will be used.", + "enum": [ + "playht" + ] }, - "knowledgeBase": { - "description": "These are the options for the knowledge base.", + "voiceId": { + "description": "This is the provider-specific ID that will be used.", "oneOf": [ { - "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", - "title": "Custom" + "type": "string", + "enum": [ + "jennifer", + "melissa", + "will", + "chris", + "matt", + "jack", + "ruby", + "davis", + "donna", + "michael" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "PlayHT Voice ID" } ] }, - "knowledgeBaseId": { + "speed": { + "type": "number", + "description": "This is the speed multiplier that will be used.", + "minimum": 0.1, + "maximum": 5, + "example": null + }, + "temperature": { + "type": "number", + "description": "A floating point number between 0, exclusive, and 2, inclusive. 
If equal to null or not provided, the model's default temperature will be used. The temperature parameter controls variance. Lower temperatures result in more predictable results, higher temperatures allow each run to vary more, so the voice may sound less like the baseline voice.", + "minimum": 0.1, + "maximum": 2, + "example": null + }, + "emotion": { "type": "string", - "description": "This is the ID of the knowledge base the model will use." + "description": "An emotion to be applied to the speech.", + "enum": [ + "female_happy", + "female_sad", + "female_angry", + "female_fearful", + "female_disgust", + "female_surprised", + "male_happy", + "male_sad", + "male_angry", + "male_fearful", + "male_disgust", + "male_surprised" + ], + "example": null + }, + "voiceGuidance": { + "type": "number", + "description": "A number between 1 and 6. Use lower numbers to reduce how unique your chosen voice will be compared to other voices.", + "minimum": 1, + "maximum": 6, + "example": null + }, + "styleGuidance": { + "type": "number", + "description": "A number between 1 and 30. Use lower numbers to reduce how strong your chosen emotion will be. Higher numbers will create a very emotional performance.", + "minimum": 1, + "maximum": 30, + "example": null + }, + "textGuidance": { + "type": "number", + "description": "A number between 1 and 2. This number influences how closely the generated speech adheres to the input text. Use lower values to create more fluid speech, but with a higher chance of deviating from the input text. Higher numbers will make the generated speech more accurate to the input text, ensuring that the words spoken align closely with the provided text.", + "minimum": 1, + "maximum": 2, + "example": null }, - "provider": { + "model": { "type": "string", + "description": "PlayHT voice model/engine to use.", "enum": [ - "perplexity-ai" + "PlayHT2.0", + "PlayHT2.0-turbo", + "Play3.0-mini", + "PlayDialog" ] }, - "model": { + "language": { "type": "string", - "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b" - }, - "temperature": { - "type": "number", - "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", - "minimum": 0, - "maximum": 2 - }, - "maxTokens": { - "type": "number", - "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. 
Default is 250.", - "minimum": 50, - "maximum": 10000 + "description": "The language to use for the speech.", + "enum": [ + "afrikaans", + "albanian", + "amharic", + "arabic", + "bengali", + "bulgarian", + "catalan", + "croatian", + "czech", + "danish", + "dutch", + "english", + "french", + "galician", + "german", + "greek", + "hebrew", + "hindi", + "hungarian", + "indonesian", + "italian", + "japanese", + "korean", + "malay", + "mandarin", + "polish", + "portuguese", + "russian", + "serbian", + "spanish", + "swedish", + "tagalog", + "thai", + "turkish", + "ukrainian", + "urdu", + "xhosa" + ] }, - "emotionRecognitionEnabled": { - "type": "boolean", - "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] }, - "numFastTurns": { - "type": "number", - "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", - "minimum": 0 + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] } }, "required": [ "provider", - "model" + "voiceId" ] }, - "TogetherAIModel": { + "RimeAIVoice": { "type": "object", "properties": { - "messages": { - "description": "This is the starting state for the conversation.", - "type": "array", - "items": { - "$ref": "#/components/schemas/OpenAIMessage" - } + "provider": { + "type": "string", + "description": "This is the voice provider that will be used.", + "enum": [ + "rime-ai" + ] }, - "tools": { - "type": "array", - "description": "These are the tools that the assistant can use during the call. 
To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" - }, - { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" - }, - { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" - }, - { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" - }, - { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" - }, - { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" - }, - { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferTool" - } - ] - } + "voiceId": { + "description": "This is the provider-specific ID that will be used.", + "oneOf": [ + { + "type": "string", + "enum": [ + "abbie", + "allison", + "ally", + "alona", + "amber", + "ana", + "antoine", + "armon", + "brenda", + "brittany", + "carol", + "colin", + "courtney", + "elena", + "elliot", + "eva", + "geoff", + "gerald", + "hank", + "helen", + "hera", + "jen", + "joe", + "joy", + "juan", + "kendra", + "kendrick", + "kenneth", + "kevin", + "kris", + "linda", + "madison", + "marge", + "marina", + "marissa", + "marta", + "maya", + "nicholas", + "nyles", + "phil", + "reba", + "rex", + "rick", + "ritu", + "rob", + "rodney", + "rohan", + "rosco", + "samantha", + "sandy", + "selena", + "seth", + "sharon", + "stan", + "tamra", + "tanya", + "tibur", + "tj", + "tyler", + "viv", + "yadira", + "marsh", + "bayou", + "creek", + "brook", + "flower", + "spore", + "glacier", + "gulch", + "alpine", + "cove", + "lagoon", + "tundra", + "steppe", + "mesa", + "grove", + "rainforest", + "moraine", + "wildflower", + "peak", + "boulder", + "gypsum", + "zest" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "RimeAI Voice ID" + } + ] }, - "toolIds": { - "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", - "type": "array", - "items": { - "type": "string" - } + "model": { + "type": "string", + "description": "This is the model that will be used. Defaults to 'v1' when not specified.", + "enum": [ + "v1", + "mist", + "mistv2" + ], + "example": "mistv2" }, - "knowledgeBase": { - "description": "These are the options for the knowledge base.", - "oneOf": [ + "speed": { + "type": "number", + "description": "This is the speed multiplier that will be used.", + "minimum": 0.1, + "example": null + }, + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ { - "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", - "title": "Custom" + "$ref": "#/components/schemas/ChunkPlan" } ] }, - "knowledgeBaseId": { - "type": "string", - "description": "This is the ID of the knowledge base the model will use." 
- }, + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] + } + }, + "required": [ + "provider", + "voiceId" + ] + }, + "SmallestAIVoice": { + "type": "object", + "properties": { "provider": { "type": "string", + "description": "This is the voice provider that will be used.", "enum": [ - "together-ai" + "smallest-ai" + ] + }, + "voiceId": { + "description": "This is the provider-specific ID that will be used.", + "oneOf": [ + { + "type": "string", + "enum": [ + "emily", + "jasmine", + "arman", + "james", + "mithali", + "aravind", + "raj", + "diya", + "raman", + "ananya", + "isha", + "william", + "aarav", + "monika", + "niharika", + "deepika", + "raghav", + "kajal", + "radhika", + "mansi", + "nisha", + "saurabh", + "pooja", + "saina", + "sanya" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "Smallest AI Voice ID" + } ] }, "model": { "type": "string", - "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b" - }, - "temperature": { - "type": "number", - "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", - "minimum": 0, - "maximum": 2 + "description": "Smallest AI voice model to use. Defaults to 'lightning' when not specified.", + "enum": [ + "lightning" + ] }, - "maxTokens": { + "speed": { "type": "number", - "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.", - "minimum": 50, - "maximum": 10000 + "description": "This is the speed multiplier that will be used.", + "example": null }, - "emotionRecognitionEnabled": { - "type": "boolean", - "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] }, - "numFastTurns": { - "type": "number", - "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", - "minimum": 0 + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] } }, "required": [ "provider", - "model" + "voiceId" ] }, - "VapiModel": { + "TavusConversationProperties": { "type": "object", "properties": { - "messages": { - "description": "This is the starting state for the conversation.", - "type": "array", - "items": { - "$ref": "#/components/schemas/OpenAIMessage" - } + "maxCallDuration": { + "type": "number", + "description": "The maximum duration of the call in seconds. The default `maxCallDuration` is 3600 seconds (1 hour).\nOnce the time limit specified by this parameter has been reached, the conversation will automatically shut down." }, - "tools": { - "type": "array", - "description": "These are the tools that the assistant can use during the call. 
To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" - }, - { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" - }, - { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" - }, - { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" - }, - { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" - }, - { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" - }, - { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferTool" - } - ] - } + "participantLeftTimeout": { + "type": "number", + "description": "The duration in seconds after which the call will be automatically shut down once the last participant leaves." }, - "toolIds": { - "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", - "type": "array", - "items": { - "type": "string" - } + "participantAbsentTimeout": { + "type": "number", + "description": "Starting from conversation creation, the duration in seconds after which the call will be automatically shut down if no participant joins the call.\nDefault is 300 seconds (5 minutes)." }, - "knowledgeBase": { - "description": "These are the options for the knowledge base.", - "oneOf": [ - { - "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", - "title": "Custom" - } - ] + "enableRecording": { + "type": "boolean", + "description": "If true, the user will be able to record the conversation." }, - "knowledgeBaseId": { + "enableTranscription": { + "type": "boolean", + "description": "If true, the user will be able to transcribe the conversation.\nYou can find more instructions on displaying transcriptions if you are using your custom DailyJS components here.\nYou need to have an event listener on Daily that listens for `app-messages`." + }, + "applyGreenscreen": { + "type": "boolean", + "description": "If true, the background will be replaced with a greenscreen (RGB values: `[0, 255, 155]`).\nYou can use WebGL on the frontend to make the greenscreen transparent or change its color." + }, + "language": { "type": "string", - "description": "This is the ID of the knowledge base the model will use." + "description": "The language of the conversation. Please provide the **full language name**, not the two-letter code.\nIf you are using your own TTS voice, please ensure it supports the language you provide.\nIf you are using a stock replica or default persona, please note that only ElevenLabs and Cartesia supported languages are available.\nYou can find a full list of supported languages for Cartesia here, for ElevenLabs here, and for PlayHT here." }, - "steps": { - "type": "array", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/HandoffStep", - "title": "HandoffStep" - }, - { - "$ref": "#/components/schemas/CallbackStep", - "title": "CallbackStep" - } - ] - } + "recordingS3BucketName": { + "type": "string", + "description": "The name of the S3 bucket where the recording will be stored." + }, + "recordingS3BucketRegion": { + "type": "string", + "description": "The region of the S3 bucket where the recording will be stored." }, + "awsAssumeRoleArn": { + "type": "string", + "description": "The ARN of the role that will be assumed to access the S3 bucket." 
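For orientation, these conversation properties are passed inside a `tavus` voice object. A minimal sketch based on the `TavusConversationProperties` schema above (every value here is illustrative, not a default):

```json
{
  "provider": "tavus",
  "voiceId": "r52da2535a",
  "properties": {
    "maxCallDuration": 1800,
    "participantLeftTimeout": 30,
    "participantAbsentTimeout": 120,
    "enableRecording": false,
    "applyGreenscreen": true,
    "language": "english"
  }
}
```

Note that `language` takes the full language name (e.g. "english"), not a two-letter code.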
+ } + } + }, + "TavusVoice": { + "type": "object", + "properties": { "provider": { "type": "string", + "description": "This is the voice provider that will be used.", "enum": [ - "vapi" + "tavus" ] }, - "model": { + "voiceId": { + "description": "This is the provider-specific ID that will be used.", + "oneOf": [ + { + "type": "string", + "enum": [ + "r52da2535a" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "Tavus Voice ID" + } + ] + }, + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] + }, + "personaId": { "type": "string", - "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b" + "description": "This is the unique identifier for the persona that the replica will use in the conversation." }, - "temperature": { - "type": "number", - "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", - "minimum": 0, - "maximum": 2 + "callbackUrl": { + "type": "string", + "description": "This is the URL that will receive webhooks with updates regarding the conversation state." }, - "maxTokens": { - "type": "number", - "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.", - "minimum": 50, - "maximum": 10000 + "conversationName": { + "type": "string", + "description": "This is the name for the conversation." }, - "emotionRecognitionEnabled": { - "type": "boolean", - "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" + "conversationalContext": { + "type": "string", + "description": "This is the context that will be appended to any context provided in the persona, if one is provided." }, - "numFastTurns": { - "type": "number", - "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", - "minimum": 0 + "customGreeting": { + "type": "string", + "description": "This is the custom greeting that the replica will give once a participant joins the conversation." + }, + "properties": { + "description": "These are optional properties used to customize the conversation.", + "allOf": [ + { + "$ref": "#/components/schemas/TavusConversationProperties" + } + ] + }, + "fallbackPlan": { + "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "allOf": [ + { + "$ref": "#/components/schemas/FallbackPlan" + } + ] } }, "required": [ "provider", - "model" + "voiceId" ] }, - "XaiModel": { + "FallbackAzureVoice": { "type": "object", "properties": { - "messages": { - "description": "This is the starting state for the conversation.", - "type": "array", - "items": { - "$ref": "#/components/schemas/OpenAIMessage" - } + "provider": { + "type": "string", + "description": "This is the voice provider that will be used.", + "enum": [ + "azure" + ] }, - "tools": { - "type": "array", - "description": "These are the tools that the assistant can use during the call. 
To use existing tools, use `toolIds`.\n\nBoth `tools` and `toolIds` can be used together.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" - }, - { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" - }, - { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" - }, - { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" - }, - { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" - }, - { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" - }, - { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferTool" - } - ] - } + "voiceId": { + "description": "This is the provider-specific ID that will be used.", + "oneOf": [ + { + "type": "string", + "enum": [ + "andrew", + "brian", + "emma" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "Azure Voice ID" + } + ] }, - "toolIds": { - "description": "These are the tools that the assistant can use during the call. To use transient tools, use `tools`.\n\nBoth `tools` and `toolIds` can be used together.", - "type": "array", - "items": { - "type": "string" - } + "speed": { + "type": "number", + "description": "This is the speed multiplier that will be used.", + "minimum": 0.5, + "maximum": 2 }, - "knowledgeBase": { - "description": "These are the options for the knowledge base.", - "oneOf": [ + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ { - "$ref": "#/components/schemas/CreateCustomKnowledgeBaseDTO", - "title": "Custom" + "$ref": "#/components/schemas/ChunkPlan" } ] + } + }, + "required": [ + "provider", + "voiceId" + ] + }, + "FallbackCartesiaVoice": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This is the voice provider that will be used.", + "enum": [ + "cartesia" + ] }, - "knowledgeBaseId": { + "voiceId": { "type": "string", - "description": "This is the ID of the knowledge base the model will use." + "description": "The ID of the particular voice you want to use." }, "model": { "type": "string", - "description": "This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b", + "description": "This is the model that will be used. This is optional and will default to the correct model for the voiceId.", "enum": [ - "grok-beta" - ] + "sonic-english", + "sonic-multilingual", + "sonic-preview", + "sonic" + ], + "example": "sonic-english" }, - "provider": { + "language": { "type": "string", + "description": "This is the language that will be used. This is optional and will default to the correct language for the voiceId.", "enum": [ - "xai" - ] - }, - "temperature": { - "type": "number", - "description": "This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.", - "minimum": 0, - "maximum": 2 - }, - "maxTokens": { - "type": "number", - "description": "This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. 
Default is 250.", - "minimum": 50, - "maximum": 10000 + "en", + "de", + "es", + "fr", + "ja", + "pt", + "zh", + "hi", + "it", + "ko", + "nl", + "pl", + "ru", + "sv", + "tr" + ], + "example": "en" }, - "emotionRecognitionEnabled": { - "type": "boolean", - "description": "This determines whether we detect user's emotion while they speak and send it as an additional info to model.\n\nDefault `false` because the model is usually are good at understanding the user's emotion from text.\n\n@default false" + "experimentalControls": { + "description": "Experimental controls for Cartesia voice generation", + "allOf": [ + { + "$ref": "#/components/schemas/CartesiaExperimentalControls" + } + ] }, - "numFastTurns": { - "type": "number", - "description": "This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. Example, gpt-3.5-turbo if provider is openai.\n\nDefault is 0.\n\n@default 0", - "minimum": 0 + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] } }, "required": [ - "model", - "provider" + "provider", + "voiceId" ] }, - "ExactReplacement": { + "FallbackCustomVoice": { "type": "object", "properties": { - "type": { + "provider": { "type": "string", - "description": "This is the exact replacement type. You can use this to replace a specific word or phrase with a different word or phrase.\n\nUsage:\n- Replace \"hello\" with \"hi\": { type: 'exact', key: 'hello', value: 'hi' }\n- Replace \"good morning\" with \"good day\": { type: 'exact', key: 'good morning', value: 'good day' }\n- Replace a specific name: { type: 'exact', key: 'John Doe', value: 'Jane Smith' }\n- Replace an acronym: { type: 'exact', key: 'AI', value: 'Artificial Intelligence' }\n- Replace a company name with its phonetic pronunciation: { type: 'exact', key: 'Vapi', value: 'Vappy' }", + "description": "This is the voice provider that will be used. Use `custom-voice` for providers that are not natively supported.", "enum": [ - "exact" + "custom-voice" ] }, - "key": { - "type": "string", - "description": "This is the key to replace." + "server": { + "description": "This is where the voice request will be sent.\n\nRequest Example:\n\nPOST https://{server.url}\nContent-Type: application/json\n\n{\n \"message\": {\n \"type\": \"voice-request\",\n \"text\": \"Hello, world!\",\n \"sampleRate\": 24000,\n ...other metadata about the call...\n }\n}\n\nResponse Expected: 1-channel 16-bit raw PCM audio at the sample rate specified in the request. Here is how the response will be piped to the transport:\n```\nresponse.on('data', (chunk: Buffer) => {\n outputStream.write(chunk);\n});\n```", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, - "value": { - "type": "string", - "description": "This is the value that will replace the match.", - "maxLength": 1000 + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] } }, "required": [ - "type", - "key", - "value" + "provider", + "server" ] }, - "RegexOption": { + "FallbackDeepgramVoice": { "type": "object", "properties": { - "type": { + "provider": { "type": "string", - "description": "This is the type of the regex option. Options are:\n- `ignore-case`: Ignores the case of the text being matched. 
Add\n- `whole-word`: Matches whole words only.\n- `multi-line`: Matches across multiple lines.", + "description": "This is the voice provider that will be used.", "enum": [ - "ignore-case", - "whole-word", - "multi-line" + "deepgram" ] }, - "enabled": { + "voiceId": { + "description": "This is the provider-specific ID that will be used.", + "oneOf": [ + { + "type": "string", + "enum": [ + "asteria", + "luna", + "stella", + "athena", + "hera", + "orion", + "arcas", + "perseus", + "angus", + "orpheus", + "helios", + "zeus" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "Deepgram Voice ID" + } + ] + }, + "mipOptOut": { "type": "boolean", - "description": "This is whether to enable the option.\n\n@default false" + "description": "If set to true, this will add mip_opt_out=true as a query parameter of all API requests. See https://developers.deepgram.com/docs/the-deepgram-model-improvement-partnership-program#want-to-opt-out\n\nThis will only be used if you are using your own Deepgram API key.\n\n@default false", + "example": false, + "default": false + }, + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] } }, "required": [ - "type", - "enabled" + "provider", + "voiceId" ] }, - "RegexReplacement": { + "FallbackElevenLabsVoice": { "type": "object", "properties": { - "type": { + "provider": { "type": "string", - "description": "This is the regex replacement type. You can use this to replace a word or phrase that matches a pattern.\n\nUsage:\n- Replace all numbers with \"some number\": { type: 'regex', regex: '\\\\d+', value: 'some number' }\n- Replace email addresses with \"[EMAIL]\": { type: 'regex', regex: '\\\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\\\.[A-Z|a-z]{2,}\\\\b', value: '[EMAIL]' }\n- Replace phone numbers with a formatted version: { type: 'regex', regex: '(\\\\d{3})(\\\\d{3})(\\\\d{4})', value: '($1) $2-$3' }\n- Replace all instances of \"color\" or \"colour\" with \"hue\": { type: 'regex', regex: 'colou?r', value: 'hue' }\n- Capitalize the first letter of every sentence: { type: 'regex', regex: '(?<=\\\\. |^)[a-z]', value: (match) => match.toUpperCase() }", + "description": "This is the voice provider that will be used.", "enum": [ - "regex" + "11labs" ] }, - "regex": { - "type": "string", - "description": "This is the regex pattern to replace.\n\nNote:\n- This works by using the `string.replace` method in Node.JS. Eg. `\"hello there\".replace(/hello/g, \"hi\")` will return `\"hi there\"`.\n\nHot tip:\n- In JavaScript, escape `\\` when sending the regex pattern. Eg. `\"hello\\sthere\"` will be sent over the wire as `\"hellosthere\"`. Send `\"hello\\\\sthere\"` instead." + "voiceId": { + "description": "This is the provider-specific ID that will be used. Ensure the Voice is present in your 11Labs Voice Library.", + "oneOf": [ + { + "type": "string", + "enum": [ + "burt", + "marissa", + "andrea", + "sarah", + "phillip", + "steve", + "joseph", + "myra", + "paula", + "ryan", + "drew", + "paul", + "mrb", + "matilda", + "mark" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "11Labs Voice ID" + } + ] }, - "options": { - "description": "These are the options for the regex replacement. 
Defaults to all disabled.\n\n@default []", - "type": "array", - "items": { - "$ref": "#/components/schemas/RegexOption" - } + "stability": { + "type": "number", + "description": "Defines the stability for voice settings.", + "minimum": 0, + "maximum": 1, + "example": 0.5 }, - "value": { - "type": "string", - "description": "This is the value that will replace the match.", - "maxLength": 1000 - } - }, - "required": [ - "type", - "regex", - "value" - ] - }, - "FormatPlan": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "description": "This determines whether the chunk is formatted before being sent to the voice provider. This helps with enunciation. This includes phone numbers, emails and addresses. Default `true`.\n\nUsage:\n- To rely on the voice provider's formatting logic, set this to `false`.\n\nIf `voice.chunkPlan.enabled` is `false`, this is automatically `false` since there's no chunk to format.\n\n@default true", - "example": true + "similarityBoost": { + "type": "number", + "description": "Defines the similarity boost for voice settings.", + "minimum": 0, + "maximum": 1, + "example": 0.75 }, - "numberToDigitsCutoff": { + "style": { "type": "number", - "description": "This is the cutoff after which a number is converted to individual digits instead of being spoken as words.\n\nExample:\n- If cutoff 2025, \"12345\" is converted to \"1 2 3 4 5\" while \"1200\" is converted to \"twelve hundred\".\n\nUsage:\n- If your use case doesn't involve IDs like zip codes, set this to a high value.\n- If your use case involves IDs that are shorter than 5 digits, set this to a lower value.\n\n@default 2025", + "description": "Defines the style for voice settings.", "minimum": 0, - "example": 2025 + "maximum": 1, + "example": 0 }, - "replacements": { - "type": "array", - "description": "These are the custom replacements you can make to the chunk before it is sent to the voice provider.\n\nUsage:\n- To replace a specific word or phrase with a different word or phrase, use the `ExactReplacement` type. Eg. `{ type: 'exact', key: 'hello', value: 'hi' }`\n- To replace a word or phrase that matches a pattern, use the `RegexReplacement` type. Eg. `{ type: 'regex', regex: '\\\\b[a-zA-Z]{5}\\\\b', value: 'hi' }`\n\n@default []", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/ExactReplacement", - "title": "ExactReplacement" - }, - { - "$ref": "#/components/schemas/RegexReplacement", - "title": "RegexReplacement" - } - ] - } - } - } - }, - "ChunkPlan": { - "type": "object", - "properties": { - "enabled": { + "useSpeakerBoost": { "type": "boolean", - "description": "This determines whether the model output is chunked before being sent to the voice provider. Default `true`.\n\nUsage:\n- To rely on the voice provider's audio generation logic, set this to `false`.\n- If seeing issues with quality, set this to `true`.\n\nIf disabled, Vapi-provided audio control tokens like will not work.\n\n@default true", - "example": true + "description": "Defines the use speaker boost for voice settings.", + "example": false }, - "minCharacters": { + "optimizeStreamingLatency": { "type": "number", - "description": "This is the minimum number of characters in a chunk.\n\nUsage:\n- To increase quality, set this to a higher value.\n- To decrease latency, set this to a lower value.\n\n@default 30", - "minimum": 1, - "maximum": 80, - "example": 30 + "description": "Defines the optimize streaming latency for voice settings. 
Defaults to 3.", + "minimum": 0, + "maximum": 4, + "example": 3 }, - "punctuationBoundaries": { - "type": "array", - "description": "These are the punctuations that are considered valid boundaries for a chunk to be created.\n\nUsage:\n- To increase quality, constrain to fewer boundaries.\n- To decrease latency, enable all.\n\nDefault is automatically set to balance the trade-off between quality and latency based on the provider.", - "enum": [ - "。", - ",", - ".", - "!", - "?", - ";", - ")", - "،", - "۔", - "।", - "॥", - "|", - "||", - ",", - ":" - ], - "example": [ - "。", - ",", - ".", - "!", - "?", - ";", - "،", - "۔", - "।", - "॥", - "|", - "||", - ",", - ":" - ], - "items": { - "type": "string", - "enum": [ - "。", - ",", - ".", - "!", - "?", - ";", - ")", - "،", - "۔", - "।", - "॥", - "|", - "||", - ",", - ":" - ] - } + "enableSsmlParsing": { + "type": "boolean", + "description": "This enables the use of https://elevenlabs.io/docs/speech-synthesis/prompting#pronunciation. Defaults to false to save latency.\n\n@default false", + "example": false }, - "formatPlan": { - "description": "This is the plan for formatting the chunk before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/FormatPlan" - } - ] - } - } - }, - "FallbackPlan": { - "type": "object", - "properties": { - "voices": { - "type": "array", - "description": "This is the list of voices to fallback to in the event that the primary voice provider fails.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/FallbackAzureVoice", - "title": "Azure" - }, - { - "$ref": "#/components/schemas/FallbackCartesiaVoice", - "title": "Cartesia" - }, - { - "$ref": "#/components/schemas/FallbackCustomVoice", - "title": "CustomVoice" - }, - { - "$ref": "#/components/schemas/FallbackDeepgramVoice", - "title": "Deepgram" - }, - { - "$ref": "#/components/schemas/FallbackElevenLabsVoice", - "title": "ElevenLabs" - }, - { - "$ref": "#/components/schemas/FallbackLMNTVoice", - "title": "LMNT" - }, - { - "$ref": "#/components/schemas/FallbackNeetsVoice", - "title": "Neets" - }, - { - "$ref": "#/components/schemas/FallbackOpenAIVoice", - "title": "OpenAI" - }, - { - "$ref": "#/components/schemas/FallbackPlayHTVoice", - "title": "PlayHT" - }, - { - "$ref": "#/components/schemas/FallbackRimeAIVoice", - "title": "RimeAI" - }, - { - "$ref": "#/components/schemas/FallbackTavusVoice", - "title": "TavusVoice" - } - ] - } + "model": { + "type": "string", + "description": "This is the model that will be used. Defaults to 'eleven_turbo_v2' if not specified.", + "enum": [ + "eleven_multilingual_v2", + "eleven_turbo_v2", + "eleven_turbo_v2_5", + "eleven_flash_v2", + "eleven_flash_v2_5", + "eleven_monolingual_v1" + ], + "example": "eleven_turbo_v2_5" + }, + "language": { + "type": "string", + "description": "This is the language (ISO 639-1) that is enforced for the model. Currently only Turbo v2.5 supports language enforcement. For other models, an error will be returned if language code is provided." 
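The `Fallback*Voice` schemas mirror their primary counterparts but omit `fallbackPlan`, since a fallback voice cannot declare fallbacks of its own. A minimal sketch of a primary 11labs voice backed by an OpenAI fallback, using preset voice names from the enums above (all values are illustrative):

```json
{
  "provider": "11labs",
  "voiceId": "sarah",
  "model": "eleven_turbo_v2_5",
  "stability": 0.5,
  "similarityBoost": 0.75,
  "fallbackPlan": {
    "voices": [
      { "provider": "openai", "voiceId": "alloy" }
    ]
  }
}
```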
+ }, + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "allOf": [ + { + "$ref": "#/components/schemas/ChunkPlan" + } + ] } }, "required": [ - "voices" + "provider", + "voiceId" ] }, - "AzureVoice": { + "FallbackLMNTVoice": { "type": "object", "properties": { "provider": { "type": "string", "description": "This is the voice provider that will be used.", "enum": [ - "azure" + "lmnt" ] }, "voiceId": { @@ -6606,37 +10045,29 @@ { "type": "string", "enum": [ - "andrew", - "brian", - "emma" + "lily", + "daniel" ], "title": "Preset Voice Options" }, { "type": "string", - "title": "Azure Voice ID" - } - ] - }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" + "title": "LMNT Voice ID" } ] }, "speed": { "type": "number", "description": "This is the speed multiplier that will be used.", - "minimum": 0.5, - "maximum": 2 + "minimum": 0.25, + "maximum": 2, + "example": null }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", "allOf": [ { - "$ref": "#/components/schemas/FallbackPlan" + "$ref": "#/components/schemas/ChunkPlan" } ] } @@ -6646,64 +10077,38 @@ "voiceId" ] }, - "CartesiaVoice": { + "FallbackNeetsVoice": { "type": "object", "properties": { "provider": { "type": "string", "description": "This is the voice provider that will be used.", "enum": [ - "cartesia" + "neets" ] }, - "model": { - "type": "string", - "description": "This is the model that will be used. This is optional and will default to the correct model for the voiceId.", - "enum": [ - "sonic-english", - "sonic-multilingual" - ], - "example": "sonic-english" - }, - "language": { - "type": "string", - "description": "This is the language that will be used. This is optional and will default to the correct language for the voiceId.", - "enum": [ - "en", - "de", - "es", - "fr", - "ja", - "pt", - "zh", - "hi", - "it", - "ko", - "nl", - "pl", - "ru", - "sv", - "tr" - ], - "example": "en" - }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ + "voiceId": { + "description": "This is the provider-specific ID that will be used.", + "oneOf": [ { - "$ref": "#/components/schemas/ChunkPlan" + "type": "string", + "enum": [ + "vits", + "vits" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "Neets Voice ID" } ] }, - "voiceId": { - "type": "string", - "description": "This is the provider-specific ID that will be used." - }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", "allOf": [ { - "$ref": "#/components/schemas/FallbackPlan" + "$ref": "#/components/schemas/ChunkPlan" } ] } @@ -6713,81 +10118,211 @@ "voiceId" ] }, - "CustomVoice": { + "FallbackOpenAIVoice": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used. 
Use `custom-voice` for providers that are not natively supported.", + "description": "This is the voice provider that will be used.", "enum": [ - "custom-voice" + "openai" ] }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ + "voiceId": { + "description": "This is the provider-specific ID that will be used.\nPlease note that ash, ballad, coral, sage, and verse may only be used with realtime models.", + "enum": [ + "alloy", + "echo", + "fable", + "onyx", + "nova", + "shimmer", + "ash", + "ballad", + "coral", + "sage", + "verse" + ], + "oneOf": [ { - "$ref": "#/components/schemas/ChunkPlan" - } - ] - }, - "server": { - "description": "This is where the voice request will be sent.\n\nRequest Example:\n\nPOST https://{server.url}\nContent-Type: application/json\n\n{\n \"message\": {\n \"type\": \"voice-request\",\n \"text\": \"Hello, world!\",\n \"sampleRate\": 24000,\n ...other metadata about the call...\n }\n}\n\nResponse Expected: 1-channel 16-bit raw PCM audio at the sample rate specified in the request. Here is how the response will be piped to the transport:\n```\nresponse.on('data', (chunk: Buffer) => {\n outputStream.write(chunk);\n});\n```", - "allOf": [ + "type": "string", + "enum": [ + "alloy", + "echo", + "fable", + "onyx", + "nova", + "shimmer" + ], + "title": "Preset Voice Options" + }, { - "$ref": "#/components/schemas/Server" + "type": "string", + "title": "OpenAI Voice ID" } ] }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "speed": { + "type": "number", + "description": "This is the speed multiplier that will be used.", + "minimum": 0.25, + "maximum": 4, + "example": null + }, + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", "allOf": [ { - "$ref": "#/components/schemas/FallbackPlan" + "$ref": "#/components/schemas/ChunkPlan" } ] } }, "required": [ "provider", - "server" + "voiceId" ] }, - "DeepgramVoice": { + "FallbackPlayHTVoice": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", + "description": "This is the voice provider that will be used.", + "enum": [ + "playht" + ] + }, + "voiceId": { + "description": "This is the provider-specific ID that will be used.", + "oneOf": [ + { + "type": "string", + "enum": [ + "jennifer", + "melissa", + "will", + "chris", + "matt", + "jack", + "ruby", + "davis", + "donna", + "michael" + ], + "title": "Preset Voice Options" + }, + { + "type": "string", + "title": "PlayHT Voice ID" + } + ] + }, + "speed": { + "type": "number", + "description": "This is the speed multiplier that will be used.", + "minimum": 0.1, + "maximum": 5, + "example": null + }, + "temperature": { + "type": "number", + "description": "A floating point number between 0, exclusive, and 2, inclusive. If equal to null or not provided, the model's default temperature will be used. The temperature parameter controls variance. 
Lower temperatures result in more predictable results, higher temperatures allow each run to vary more, so the voice may sound less like the baseline voice.", + "minimum": 0.1, + "maximum": 2, + "example": null + }, + "emotion": { + "type": "string", + "description": "An emotion to be applied to the speech.", + "enum": [ + "female_happy", + "female_sad", + "female_angry", + "female_fearful", + "female_disgust", + "female_surprised", + "male_happy", + "male_sad", + "male_angry", + "male_fearful", + "male_disgust", + "male_surprised" + ], + "example": null + }, + "voiceGuidance": { + "type": "number", + "description": "A number between 1 and 6. Use lower numbers to reduce how unique your chosen voice will be compared to other voices.", + "minimum": 1, + "maximum": 6, + "example": null + }, + "styleGuidance": { + "type": "number", + "description": "A number between 1 and 30. Use lower numbers to reduce how strong your chosen emotion will be. Higher numbers will create a very emotional performance.", + "minimum": 1, + "maximum": 30, + "example": null + }, + "textGuidance": { + "type": "number", + "description": "A number between 1 and 2. This number influences how closely the generated speech adheres to the input text. Use lower values to create more fluid speech, but with a higher chance of deviating from the input text. Higher numbers will make the generated speech more accurate to the input text, ensuring that the words spoken align closely with the provided text.", + "minimum": 1, + "maximum": 2, + "example": null + }, + "model": { + "type": "string", + "description": "PlayHT voice model/engine to use.", "enum": [ - "deepgram" + "PlayHT2.0", + "PlayHT2.0-turbo", + "Play3.0-mini", + "PlayDialog" ] }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "asteria", - "luna", - "stella", - "athena", - "hera", - "orion", - "arcas", - "perseus", - "angus", - "orpheus", - "helios", - "zeus" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "Deepgram Voice ID" - } + "language": { + "type": "string", + "description": "The language to use for the speech.", + "enum": [ + "afrikaans", + "albanian", + "amharic", + "arabic", + "bengali", + "bulgarian", + "catalan", + "croatian", + "czech", + "danish", + "dutch", + "english", + "french", + "galician", + "german", + "greek", + "hebrew", + "hindi", + "hungarian", + "indonesian", + "italian", + "japanese", + "korean", + "malay", + "mandarin", + "polish", + "portuguese", + "russian", + "serbian", + "spanish", + "swedish", + "tagalog", + "thai", + "turkish", + "ukrainian", + "urdu", + "xhosa" ] }, "chunkPlan": { @@ -6797,14 +10332,6 @@ "$ref": "#/components/schemas/ChunkPlan" } ] - }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", - "allOf": [ - { - "$ref": "#/components/schemas/FallbackPlan" - } - ] } }, "required": [ @@ -6812,94 +10339,129 @@ "voiceId" ] }, - "ElevenLabsVoice": { + "FallbackRimeAIVoice": { "type": "object", "properties": { "provider": { "type": "string", "description": "This is the voice provider that will be used.", "enum": [ - "11labs" + "rime-ai" ] }, "voiceId": { - "description": "This is the provider-specific ID that will be used. 
Ensure the Voice is present in your 11Labs Voice Library.", + "description": "This is the provider-specific ID that will be used.", "oneOf": [ { "type": "string", "enum": [ - "burt", + "abbie", + "allison", + "ally", + "alona", + "amber", + "ana", + "antoine", + "armon", + "brenda", + "brittany", + "carol", + "colin", + "courtney", + "elena", + "elliot", + "eva", + "geoff", + "gerald", + "hank", + "helen", + "hera", + "jen", + "joe", + "joy", + "juan", + "kendra", + "kendrick", + "kenneth", + "kevin", + "kris", + "linda", + "madison", + "marge", + "marina", "marissa", - "andrea", - "sarah", - "phillip", - "steve", - "joseph", - "myra", - "paula", - "ryan", - "drew", - "paul", - "mrb", - "matilda", - "mark" + "marta", + "maya", + "nicholas", + "nyles", + "phil", + "reba", + "rex", + "rick", + "ritu", + "rob", + "rodney", + "rohan", + "rosco", + "samantha", + "sandy", + "selena", + "seth", + "sharon", + "stan", + "tamra", + "tanya", + "tibur", + "tj", + "tyler", + "viv", + "yadira", + "marsh", + "bayou", + "creek", + "brook", + "flower", + "spore", + "glacier", + "gulch", + "alpine", + "cove", + "lagoon", + "tundra", + "steppe", + "mesa", + "grove", + "rainforest", + "moraine", + "wildflower", + "peak", + "boulder", + "gypsum", + "zest" ], "title": "Preset Voice Options" }, { "type": "string", - "title": "11Labs Voice ID" + "title": "RimeAI Voice ID" } ] }, - "stability": { - "type": "number", - "description": "Defines the stability for voice settings.", - "minimum": 0, - "maximum": 1, - "example": 0.5 - }, - "similarityBoost": { - "type": "number", - "description": "Defines the similarity boost for voice settings.", - "minimum": 0, - "maximum": 1, - "example": 0.75 - }, - "style": { - "type": "number", - "description": "Defines the style for voice settings.", - "minimum": 0, - "maximum": 1, - "example": 0 - }, - "useSpeakerBoost": { - "type": "boolean", - "description": "Defines the use speaker boost for voice settings.", - "example": false - }, - "optimizeStreamingLatency": { - "type": "number", - "description": "Defines the optimize streaming latency for voice settings. Defaults to 3.", - "minimum": 0, - "maximum": 4, - "example": 3 - }, - "enableSsmlParsing": { - "type": "boolean", - "description": "This enables the use of https://elevenlabs.io/docs/speech-synthesis/prompting#pronunciation. Defaults to false to save latency.\n\n@default false", - "example": false - }, "model": { "type": "string", - "description": "This is the model that will be used. Defaults to 'eleven_turbo_v2' if not specified.", + "description": "This is the model that will be used. Defaults to 'v1' when not specified.", "enum": [ - "eleven_multilingual_v2", - "eleven_turbo_v2", - "eleven_turbo_v2_5", - "eleven_monolingual_v1" + "v1", + "mist", + "mistv2" ], - "example": "eleven_turbo_v2_5" + "example": "mistv2" + }, + "speed": { + "type": "number", + "description": "This is the speed multiplier that will be used.", + "minimum": 0.1, + "example": null }, "chunkPlan": { "description": "This is the plan for chunking the model output before it is sent to the voice provider.", @@ -6908,18 +10470,6 @@ "$ref": "#/components/schemas/ChunkPlan" } ] - }, - "language": { - "type": "string", - "description": "This is the language (ISO 639-1) that is enforced for the model. Currently only Turbo v2.5 supports language enforcement. For other models, an error will be returned if language code is provided." 
- }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", - "allOf": [ - { - "$ref": "#/components/schemas/FallbackPlan" - } - ] } }, "required": [ @@ -6927,14 +10477,14 @@ "voiceId" ] }, - "LMNTVoice": { + "FallbackSmallestAIVoice": { "type": "object", "properties": { "provider": { "type": "string", "description": "This is the voice provider that will be used.", "enum": [ - "lmnt" + "smallest-ai" ] }, "voiceId": { @@ -6943,22 +10493,50 @@ { "type": "string", "enum": [ - "lily", - "daniel" + "emily", + "jasmine", + "arman", + "james", + "mithali", + "aravind", + "raj", + "diya", + "raman", + "ananya", + "isha", + "william", + "aarav", + "monika", + "niharika", + "deepika", + "raghav", + "kajal", + "radhika", + "mansi", + "nisha", + "saurabh", + "pooja", + "saina", + "sanya" ], "title": "Preset Voice Options" }, { "type": "string", - "title": "LMNT Voice ID" + "title": "Smallest AI Voice ID" } ] }, + "model": { + "type": "string", + "description": "Smallest AI voice model to use. Defaults to 'lightning' when not specified.", + "enum": [ + "lightning" + ] + }, "speed": { "type": "number", "description": "This is the speed multiplier that will be used.", - "minimum": 0.25, - "maximum": 2, "example": null }, "chunkPlan": { @@ -6968,14 +10546,6 @@ "$ref": "#/components/schemas/ChunkPlan" } ] - }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", - "allOf": [ - { - "$ref": "#/components/schemas/FallbackPlan" - } - ] } }, "required": [ @@ -6983,14 +10553,14 @@ "voiceId" ] }, - "NeetsVoice": { + "FallbackTavusVoice": { "type": "object", "properties": { "provider": { "type": "string", "description": "This is the voice provider that will be used.", "enum": [ - "neets" + "tavus" ] }, "voiceId": { @@ -6999,30 +10569,49 @@ { "type": "string", "enum": [ - "vits", - "vits" + "r52da2535a" ], "title": "Preset Voice Options" }, { "type": "string", - "title": "Neets Voice ID" + "title": "Tavus Voice ID" } ] }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "personaId": { + "type": "string", + "description": "This is the unique identifier for the persona that the replica will use in the conversation." + }, + "callbackUrl": { + "type": "string", + "description": "This is the url that will receive webhooks with updates regarding the conversation state." + }, + "conversationName": { + "type": "string", + "description": "This is the name for the conversation." + }, + "conversationalContext": { + "type": "string", + "description": "This is the context that will be appended to any context provided in the persona, if one is provided." + }, + "customGreeting": { + "type": "string", + "description": "This is the custom greeting that the replica will give once a participant joins the conversation." 
+ }, + "properties": { + "description": "These are optional properties used to customize the conversation.", "allOf": [ { - "$ref": "#/components/schemas/ChunkPlan" + "$ref": "#/components/schemas/TavusConversationProperties" } ] }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "chunkPlan": { + "description": "This is the plan for chunking the model output before it is sent to the voice provider.", "allOf": [ { - "$ref": "#/components/schemas/FallbackPlan" + "$ref": "#/components/schemas/ChunkPlan" } ] } @@ -7032,1348 +10621,1416 @@ "voiceId" ] }, - "OpenAIVoice": { + "TransportConfigurationTwilio": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "openai" + "twilio" ] }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.\nPlease note that ash, ballad, coral, sage, and verse may only be used with the `gpt-4o-realtime-preview-2024-10-01` model.", + "timeout": { + "type": "number", + "description": "The integer number of seconds that we should allow the phone to ring before assuming there is no answer.\nThe default is `60` seconds and the maximum is `600` seconds.\nFor some call flows, we will add a 5-second buffer to the timeout value you provide.\nFor this reason, a timeout value of 10 seconds could result in an actual timeout closer to 15 seconds.\nYou can set this to a short time, such as `15` seconds, to hang up before reaching an answering machine or voicemail.\n\n@default 60", + "minimum": 1, + "maximum": 600, + "example": 60 + }, + "record": { + "type": "boolean", + "description": "Whether to record the call.\nCan be `true` to record the phone call, or `false` to not.\nThe default is `false`.\n\n@default false", + "example": false + }, + "recordingChannels": { + "type": "string", + "description": "The number of channels in the final recording.\nCan be: `mono` or `dual`.\nThe default is `mono`.\n`mono` records both legs of the call in a single channel of the recording file.\n`dual` records each leg to a separate channel of the recording file.\nThe first channel of a dual-channel recording contains the parent call and the second channel contains the child call.\n\n@default 'mono'", "enum": [ - "alloy", - "echo", - "fable", - "onyx", - "nova", - "shimmer", - "ash", - "ballad", - "coral", - "sage", - "verse" + "mono", + "dual" ], - "oneOf": [ - { - "type": "string", - "enum": [ - "alloy", - "echo", - "fable", - "onyx", - "nova", - "shimmer" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "OpenAI Voice ID" - } + "example": "mono" + } + }, + "required": [ + "provider" + ] + }, + "CreateAnthropicCredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "anthropic" ] }, - "speed": { - "type": "number", - "description": "This is the speed multiplier that will be used.", - "minimum": 0.25, - "maximum": 4, - "example": null + "apiKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } + "name": { + "type": "string", + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreateAnyscaleCredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "anyscale" ] }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "apiKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreateAssemblyAICredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "assembly-ai" + ] + }, + "apiKey": { + "type": "string", + "description": "This is not returned in the API." + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "AzureBlobStorageBucketPlan": { + "type": "object", + "properties": { + "connectionString": { + "type": "string", + "description": "This is the blob storage connection string for the Azure resource." + }, + "containerName": { + "type": "string", + "description": "This is the container name for the Azure blob storage." + }, + "path": { + "type": "string", + "description": "This is the path where call artifacts will be stored.\n\nUsage:\n- To store call artifacts in a specific folder, set this to the full path. Eg. \"/folder-name1/folder-name2\".\n- To store call artifacts in the root of the bucket, leave this blank.\n\n@default \"/\"" + } + }, + "required": [ + "connectionString", + "containerName" + ] + }, + "CreateAzureCredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "azure" + ] + }, + "service": { + "type": "string", + "description": "This is the service being used in Azure.", + "enum": [ + "speech", + "blob_storage" + ], + "default": "speech" + }, + "region": { + "type": "string", + "description": "This is the region of the Azure resource.", + "enum": [ + "australia", + "canadaeast", + "canadacentral", + "eastus2", + "eastus", + "france", + "india", + "japaneast", + "japanwest", + "uaenorth", + "northcentralus", + "norway", + "southcentralus", + "swedencentral", + "switzerland", + "uk", + "westus", + "westus3" + ] + }, + "apiKey": { + "type": "string", + "description": "This is not returned in the API.", + "maxLength": 10000 + }, + "bucketPlan": { + "description": "This is the bucket plan that can be provided to store call artifacts in Azure Blob Storage.", "allOf": [ { - "$ref": "#/components/schemas/FallbackPlan" + "$ref": "#/components/schemas/AzureBlobStorageBucketPlan" } ] + }, + "name": { + "type": "string", + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "service" ] }, - "PlayHTVoice": { + "CreateAzureOpenAICredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "playht" + "azure-openai" ] }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "jennifer", - "melissa", - "will", - "chris", - "matt", - "jack", - "ruby", - "davis", - "donna", - "michael" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "PlayHT Voice ID" - } + "region": { + "type": "string", + "enum": [ + "australia", + "canadaeast", + "canadacentral", + "eastus2", + "eastus", + "france", + "india", + "japaneast", + "japanwest", + "uaenorth", + "northcentralus", + "norway", + "southcentralus", + "swedencentral", + "switzerland", + "uk", + "westus", + "westus3" ] }, - "speed": { - "type": "number", - "description": "This is the speed multiplier that will be used.", - "minimum": 0.1, - "maximum": 5, - "example": null + "models": { + "type": "array", + "enum": [ + "gpt-4o-2024-08-06-ptu", + "gpt-4o-2024-08-06", + "gpt-4o-mini-2024-07-18", + "gpt-4o-2024-05-13", + "gpt-4-turbo-2024-04-09", + "gpt-4-0125-preview", + "gpt-4-1106-preview", + "gpt-4-0613", + "gpt-35-turbo-0125", + "gpt-35-turbo-1106" + ], + "example": [ + "gpt-4-0125-preview", + "gpt-4-0613" + ], + "items": { + "type": "string", + "enum": [ + "gpt-4o-2024-08-06-ptu", + "gpt-4o-2024-08-06", + "gpt-4o-mini-2024-07-18", + "gpt-4o-2024-05-13", + "gpt-4-turbo-2024-04-09", + "gpt-4-0125-preview", + "gpt-4-1106-preview", + "gpt-4-0613", + "gpt-35-turbo-0125", + "gpt-35-turbo-1106" + ] + } }, - "temperature": { - "type": "number", - "description": "A floating point number between 0, exclusive, and 2, inclusive. If equal to null or not provided, the model's default temperature will be used. The temperature parameter controls variance. Lower temperatures result in more predictable results, higher temperatures allow each run to vary more, so the voice may sound less like the baseline voice.", - "minimum": 0.1, - "maximum": 2, - "example": null + "openAIKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." }, - "emotion": { + "ocpApimSubscriptionKey": { "type": "string", - "description": "An emotion to be applied to the speech.", - "enum": [ - "female_happy", - "female_sad", - "female_angry", - "female_fearful", - "female_disgust", - "female_surprised", - "male_happy", - "male_sad", - "male_angry", - "male_fearful", - "male_disgust", - "male_surprised" - ], - "example": null + "description": "This is not returned in the API." }, - "voiceGuidance": { - "type": "number", - "description": "A number between 1 and 6. Use lower numbers to reduce how unique your chosen voice will be compared to other voices.", - "minimum": 1, - "maximum": 6, - "example": null + "openAIEndpoint": { + "type": "string", + "maxLength": 10000 }, - "styleGuidance": { + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "region", + "models", + "openAIKey", + "openAIEndpoint" + ] + }, + "SipTrunkGateway": { + "type": "object", + "properties": { + "ip": { + "type": "string", + "description": "This is the address of the gateway. 
It can be an IPv4 address like 1.1.1.1 or a fully qualified domain name like my-sip-trunk.pstn.twilio.com." + }, + "port": { "type": "number", - "description": "A number between 1 and 30. Use lower numbers to to reduce how strong your chosen emotion will be. Higher numbers will create a very emotional performance.", + "description": "This is the port number of the gateway. Default is 5060.\n\n@default 5060", "minimum": 1, - "maximum": 30, - "example": null + "maximum": 65535 }, - "textGuidance": { + "netmask": { "type": "number", - "description": "A number between 1 and 2. This number influences how closely the generated speech adheres to the input text. Use lower values to create more fluid speech, but with a higher chance of deviating from the input text. Higher numbers will make the generated speech more accurate to the input text, ensuring that the words spoken align closely with the provided text.", - "minimum": 1, - "maximum": 2, - "example": null + "description": "This is the netmask of the gateway. Defaults to 32.\n\n@default 32", + "minimum": 24, + "maximum": 32 }, - "model": { + "inboundEnabled": { + "type": "boolean", + "description": "This is whether inbound calls are allowed from this gateway. Default is true.\n\n@default true" + }, + "outboundEnabled": { + "type": "boolean", + "description": "This is whether outbound calls should be sent to this gateway. Default is true.\n\nNote, if netmask is less than 32, it doesn't affect the outbound IPs that are tried. 1 attempt is made to `ip:port`.\n\n@default true" + }, + "outboundProtocol": { "type": "string", - "description": "Playht voice model/engine to use.", + "description": "This is the protocol to use for SIP signaling outbound calls. Default is udp.\n\n@default udp", "enum": [ - "PlayHT2.0", - "PlayHT2.0-turbo", - "Play3.0-mini" + "tls/srtp", + "tcp", + "tls", + "udp" ] }, - "language": { + "optionsPingEnabled": { + "type": "boolean", + "description": "This is whether to send options ping to the gateway. This can be used to check if the gateway is reachable. Default is false.\n\nThis is useful for high availability setups where you want to check if the gateway is reachable before routing calls to it. Note, if no gateway for a trunk is reachable, outbound calls will be rejected.\n\n@default false" + } + }, + "required": [ + "ip" + ] + }, + "SipTrunkOutboundSipRegisterPlan": { + "type": "object", + "properties": { + "domain": { + "type": "string" + }, + "username": { + "type": "string" + }, + "realm": { + "type": "string" + } + } + }, + "SipTrunkOutboundAuthenticationPlan": { + "type": "object", + "properties": { + "authPassword": { "type": "string", - "description": "The language to use for the speech.", - "enum": [ - "afrikaans", - "albanian", - "amharic", - "arabic", - "bengali", - "bulgarian", - "catalan", - "croatian", - "czech", - "danish", - "dutch", - "english", - "french", - "galician", - "german", - "greek", - "hebrew", - "hindi", - "hungarian", - "indonesian", - "italian", - "japanese", - "korean", - "malay", - "mandarin", - "polish", - "portuguese", - "russian", - "serbian", - "spanish", - "swedish", - "tagalog", - "thai", - "turkish", - "ukrainian", - "urdu", - "xhosa" + "description": "This is not returned in the API." + }, + "authUsername": { + "type": "string" + }, + "sipRegisterPlan": { + "description": "This can be used to configure if SIP register is required by the SIP trunk. 
If not provided, no SIP registration will be attempted.", + "allOf": [ + { + "$ref": "#/components/schemas/SipTrunkOutboundSipRegisterPlan" + } + ] + } + } + }, + "SbcConfiguration": { + "type": "object", + "properties": {} + }, + "CreateByoSipTrunkCredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This can be used to bring your own SIP trunks or to connect to a Carrier.", + "enum": [ + "byo-sip-trunk" ] }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "gateways": { + "description": "This is the list of SIP trunk's gateways.", + "type": "array", + "items": { + "$ref": "#/components/schemas/SipTrunkGateway" + } + }, + "outboundAuthenticationPlan": { + "description": "This can be used to configure the outbound authentication if required by the SIP trunk.", "allOf": [ { - "$ref": "#/components/schemas/ChunkPlan" + "$ref": "#/components/schemas/SipTrunkOutboundAuthenticationPlan" } ] }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "outboundLeadingPlusEnabled": { + "type": "boolean", + "description": "This ensures the outbound origination attempts have a leading plus. Defaults to false to match conventional telecom behavior.\n\nUsage:\n- Vonage/Twilio requires leading plus for all outbound calls. Set this to true.\n\n@default false" + }, + "techPrefix": { + "type": "string", + "description": "This can be used to configure the tech prefix on outbound calls. This is an advanced property.", + "maxLength": 10000 + }, + "sipDiversionHeader": { + "type": "string", + "description": "This can be used to enable the SIP diversion header for authenticating the calling number if the SIP trunk supports it. This is an advanced property.", + "maxLength": 10000 + }, + "sbcConfiguration": { + "description": "This is an advanced configuration for enterprise deployments. This uses the onprem SBC to trunk into the SIP trunk's `gateways`, rather than the managed SBC provided by Vapi.", "allOf": [ { - "$ref": "#/components/schemas/FallbackPlan" + "$ref": "#/components/schemas/SbcConfiguration" } ] + }, + "name": { + "type": "string", + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ - "provider", - "voiceId" + "gateways" ] }, - "RimeAIVoice": { + "CreateCartesiaCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "rime-ai" + "cartesia" ] }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "marsh", - "bayou", - "creek", - "brook", - "flower", - "spore", - "glacier", - "gulch", - "alpine", - "cove", - "lagoon", - "tundra", - "steppe", - "mesa", - "grove", - "rainforest", - "moraine", - "wildflower", - "peak", - "boulder", - "abbie", - "allison", - "ally", - "alona", - "amber", - "ana", - "antoine", - "armon", - "brenda", - "brittany", - "carol", - "colin", - "courtney", - "elena", - "elliot", - "eva", - "geoff", - "gerald", - "hank", - "helen", - "hera", - "jen", - "joe", - "joy", - "juan", - "kendra", - "kendrick", - "kenneth", - "kevin", - "kris", - "linda", - "madison", - "marge", - "marina", - "marissa", - "marta", - "maya", - "nicholas", - "nyles", - "phil", - "reba", - "rex", - "rick", - "ritu", - "rob", - "rodney", - "rohan", - "rosco", - "samantha", - "sandy", - "selena", - "seth", - "sharon", - "stan", - "tamra", - "tanya", - "tibur", - "tj", - "tyler", - "viv", - "yadira" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "RimeAI Voice ID" - } - ] + "apiKey": { + "type": "string", + "description": "This is not returned in the API." }, - "model": { + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CloudflareR2BucketPlan": { + "type": "object", + "properties": { + "accessKeyId": { + "type": "string", + "description": "Cloudflare R2 Access key ID." + }, + "secretAccessKey": { + "type": "string", + "description": "Cloudflare R2 access key secret. This is not returned in the API." + }, + "url": { + "type": "string", + "description": "Cloudflare R2 base url." + }, + "name": { + "type": "string", + "description": "This is the name of the bucket." + }, + "path": { + "type": "string", + "description": "This is the path where call artifacts will be stored.\n\nUsage:\n- To store call artifacts in a specific folder, set this to the full path. Eg. \"/folder-name1/folder-name2\".\n- To store call artifacts in the root of the bucket, leave this blank.\n\n@default \"/\"" + } + }, + "required": [ + "name" + ] + }, + "CreateCloudflareCredentialDTO": { + "type": "object", + "properties": { + "provider": { "type": "string", - "description": "This is the model that will be used. Defaults to 'v1' when not specified.", "enum": [ - "v1", - "mist" + "cloudflare" ], - "example": "v1" + "description": "Credential provider. Only allowed value is cloudflare" }, - "speed": { - "type": "number", - "description": "This is the speed multiplier that will be used.", - "minimum": 0.1, - "example": null + "accountId": { + "type": "string", + "description": "Cloudflare Account Id." }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "apiKey": { + "type": "string", + "description": "Cloudflare API Key / Token." + }, + "accountEmail": { + "type": "string", + "description": "Cloudflare Account Email." 
+ }, + "bucketPlan": { + "description": "This is the bucket plan that can be provided to store call artifacts in R2", "allOf": [ { - "$ref": "#/components/schemas/ChunkPlan" + "$ref": "#/components/schemas/CloudflareR2BucketPlan" } ] }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider" + ] + }, + "OAuth2AuthenticationPlan": { + "type": "object", + "properties": { + "type": { + "type": "string", + "enum": [ + "oauth2" + ] + }, + "url": { + "type": "string", + "description": "This is the OAuth2 URL." + }, + "clientId": { + "type": "string", + "description": "This is the OAuth2 client ID." + }, + "clientSecret": { + "type": "string", + "description": "This is the OAuth2 client secret." + } + }, + "required": [ + "type", + "url", + "clientId", + "clientSecret" + ] + }, + "CreateCustomLLMCredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "custom-llm" + ] + }, + "apiKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." + }, + "authenticationPlan": { + "description": "This is the authentication plan. Currently supports OAuth2 RFC 6749. To use Bearer authentication, use apiKey", "allOf": [ { - "$ref": "#/components/schemas/FallbackPlan" + "$ref": "#/components/schemas/OAuth2AuthenticationPlan" } ] + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "apiKey" ] }, - "TavusConversationProperties": { + "CreateDeepgramCredentialDTO": { "type": "object", "properties": { - "maxCallDuration": { - "type": "number", - "description": "The maximum duration of the call in seconds. The default `maxCallDuration` is 3600 seconds (1 hour).\nOnce the time limit specified by this parameter has been reached, the conversation will automatically shut down." - }, - "participantLeftTimeout": { - "type": "number", - "description": "The duration in seconds after which the call will be automatically shut down once the last participant leaves." - }, - "participantAbsentTimeout": { - "type": "number", - "description": "Starting from conversation creation, the duration in seconds after which the call will be automatically shut down if no participant joins the call.\nDefault is 300 seconds (5 minutes)." - }, - "enableRecording": { - "type": "boolean", - "description": "If true, the user will be able to record the conversation." - }, - "enableTranscription": { - "type": "boolean", - "description": "If true, the user will be able to transcribe the conversation.\nYou can find more instructions on displaying transcriptions if you are using your custom DailyJS components here.\nYou need to have an event listener on Daily that listens for `app-messages`." - }, - "applyGreenscreen": { - "type": "boolean", - "description": "If true, the background will be replaced with a greenscreen (RGB values: `[0, 255, 155]`).\nYou can use WebGL on the frontend to make the greenscreen transparent or change its color." - }, - "language": { + "provider": { "type": "string", - "description": "The language of the conversation. 
Please provide the **full language name**, not the two-letter code.\nIf you are using your own TTS voice, please ensure it supports the language you provide.\nIf you are using a stock replica or default persona, please note that only ElevenLabs and Cartesia supported languages are available.\nYou can find a full list of supported languages for Cartesia here, for ElevenLabs here, and for PlayHT here." + "enum": [ + "deepgram" + ] }, - "recordingS3BucketName": { + "apiKey": { "type": "string", - "description": "The name of the S3 bucket where the recording will be stored." + "description": "This is not returned in the API." }, - "recordingS3BucketRegion": { + "apiUrl": { "type": "string", - "description": "The region of the S3 bucket where the recording will be stored." + "description": "This can be used to point to an onprem Deepgram instance. Defaults to api.deepgram.com." }, - "awsAssumeRoleArn": { + "name": { "type": "string", - "description": "The ARN of the role that will be assumed to access the S3 bucket." + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 } - } + }, + "required": [ + "provider", + "apiKey" + ] }, - "TavusVoice": { + "CreateDeepInfraCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "tavus" - ] - }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "r52da2535a" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "Tavus Voice ID" - } - ] - }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } + "deepinfra" ] }, - "personaId": { + "apiKey": { "type": "string", - "description": "This is the unique identifier for the persona that the replica will use in the conversation." + "description": "This is not returned in the API." }, - "callbackUrl": { + "name": { "type": "string", - "description": "This is the url that will receive webhooks with updates regarding the conversation state." - }, - "conversationName": { + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreateDeepSeekCredentialDTO": { + "type": "object", + "properties": { + "provider": { "type": "string", - "description": "This is the name for the conversation." + "enum": [ + "deep-seek" + ] }, - "conversationalContext": { + "apiKey": { "type": "string", - "description": "This is the context that will be appended to any context provided in the persona, if one is provided." + "description": "This is not returned in the API." }, - "customGreeting": { + "name": { "type": "string", - "description": "This is the custom greeting that the replica will give once a participant joines the conversation." - }, - "properties": { - "description": "These are optional properties used to customize the conversation.", - "allOf": [ - { - "$ref": "#/components/schemas/TavusConversationProperties" - } - ] - }, - "fallbackPlan": { - "description": "This is the plan for voice provider fallbacks in the event that the primary voice provider fails.", - "allOf": [ - { - "$ref": "#/components/schemas/FallbackPlan" - } - ] + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "apiKey" ] }, - "FallbackAzureVoice": { + "CreateElevenLabsCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "azure" - ] - }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "andrew", - "brian", - "emma" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "Azure Voice ID" - } + "11labs" ] }, - "speed": { - "type": "number", - "description": "This is the speed multiplier that will be used.", - "minimum": 0.5, - "maximum": 2 + "apiKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } - ] + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "apiKey" + ] + }, + "GcpKey": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the type of the key. Most likely, this is \"service_account\"." + }, + "projectId": { + "type": "string", + "description": "This is the ID of the Google Cloud project associated with this key." + }, + "privateKeyId": { + "type": "string", + "description": "This is the unique identifier for the private key." + }, + "privateKey": { + "type": "string", + "description": "This is the private key in PEM format.\n\nNote: This is not returned in the API." + }, + "clientEmail": { + "type": "string", + "description": "This is the email address associated with the service account." + }, + "clientId": { + "type": "string", + "description": "This is the unique identifier for the client." + }, + "authUri": { + "type": "string", + "description": "This is the URI for the auth provider's authorization endpoint." + }, + "tokenUri": { + "type": "string", + "description": "This is the URI for the auth provider's token endpoint." + }, + "authProviderX509CertUrl": { + "type": "string", + "description": "This is the URL of the public x509 certificate for the auth provider." + }, + "clientX509CertUrl": { + "type": "string", + "description": "This is the URL of the public x509 certificate for the client." + }, + "universeDomain": { + "type": "string", + "description": "This is the domain associated with the universe this service account belongs to." + } + }, + "required": [ + "type", + "projectId", + "privateKeyId", + "privateKey", + "clientEmail", + "clientId", + "authUri", + "tokenUri", + "authProviderX509CertUrl", + "clientX509CertUrl", + "universeDomain" ] }, - "FallbackCartesiaVoice": { + "BucketPlan": { "type": "object", "properties": { - "provider": { + "name": { "type": "string", - "description": "This is the voice provider that will be used.", - "enum": [ - "cartesia" - ] + "description": "This is the name of the bucket." }, - "model": { + "region": { "type": "string", - "description": "This is the model that will be used. 
This is optional and will default to the correct model for the voiceId.", - "enum": [ - "sonic-english", - "sonic-multilingual" - ], - "example": "sonic-english" + "description": "This is the region of the bucket.\n\nUsage:\n- If `credential.type` is `aws`, then this is required.\n- If `credential.type` is `gcp`, then this is optional since GCP allows buckets to be accessed without a region but region is required for data residency requirements. Read here: https://cloud.google.com/storage/docs/request-endpoints" }, - "language": { + "path": { "type": "string", - "description": "This is the language that will be used. This is optional and will default to the correct language for the voiceId.", - "enum": [ - "en", - "de", - "es", - "fr", - "ja", - "pt", - "zh", - "hi", - "it", - "ko", - "nl", - "pl", - "ru", - "sv", - "tr" - ], - "example": "en" + "description": "This is the path where call artifacts will be stored.\n\nUsage:\n- To store call artifacts in a specific folder, set this to the full path. Eg. \"/folder-name1/folder-name2\".\n- To store call artifacts in the root of the bucket, leave this blank.\n\n@default \"/\"" }, - "voiceId": { + "hmacAccessKey": { "type": "string", - "description": "This is the provider-specific ID that will be used." + "description": "This is the HMAC access key offered by GCP for interoperability with S3 clients. Here is the guide on how to create: https://cloud.google.com/storage/docs/authentication/managing-hmackeys#console\n\nUsage:\n- If `credential.type` is `gcp`, then this is required.\n- If `credential.type` is `aws`, then this is not required since credential.awsAccessKeyId is used instead." }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } - ] + "hmacSecret": { + "type": "string", + "description": "This is the secret for the HMAC access key. Here is the guide on how to create: https://cloud.google.com/storage/docs/authentication/managing-hmackeys#console\n\nUsage:\n- If `credential.type` is `gcp`, then this is required.\n- If `credential.type` is `aws`, then this is not required since credential.awsSecretAccessKey is used instead.\n\nNote: This is not returned in the API." } }, "required": [ - "provider", - "voiceId" + "name" ] }, - "FallbackCustomVoice": { + "CreateGcpCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used. Use `custom-voice` for providers that are not natively supported.", "enum": [ - "custom-voice" + "gcp" ] }, - "server": { - "description": "This is where the voice request will be sent.\n\nRequest Example:\n\nPOST https://{server.url}\nContent-Type: application/json\n\n{\n \"message\": {\n \"type\": \"voice-request\",\n \"text\": \"Hello, world!\",\n \"sampleRate\": 24000,\n ...other metadata about the call...\n }\n}\n\nResponse Expected: 1-channel 16-bit raw PCM audio at the sample rate specified in the request. Here is how the response will be piped to the transport:\n```\nresponse.on('data', (chunk: Buffer) => {\n outputStream.write(chunk);\n});\n```", + "gcpKey": { + "description": "This is the GCP key. 
This is the JSON that can be generated in the Google Cloud Console at https://console.cloud.google.com/iam-admin/serviceaccounts/details//keys.\n\nThe schema is identical to the JSON that GCP outputs.", "allOf": [ { - "$ref": "#/components/schemas/Server" + "$ref": "#/components/schemas/GcpKey" } ] }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "bucketPlan": { + "description": "This is the bucket plan that can be provided to store call artifacts in GCP.", "allOf": [ { - "$ref": "#/components/schemas/ChunkPlan" + "$ref": "#/components/schemas/BucketPlan" } ] + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "server" + "gcpKey" ] }, - "FallbackDeepgramVoice": { + "CreateGladiaCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "deepgram" + "gladia" ] }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "asteria", - "luna", - "stella", - "athena", - "hera", - "orion", - "arcas", - "perseus", - "angus", - "orpheus", - "helios", - "zeus" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "Deepgram Voice ID" - } - ] + "apiKey": { + "type": "string", + "description": "This is not returned in the API." }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } - ] + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "apiKey" ] }, - "FallbackElevenLabsVoice": { + "CreateGoHighLevelCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "11labs" - ] - }, - "voiceId": { - "description": "This is the provider-specific ID that will be used. Ensure the Voice is present in your 11Labs Voice Library.", - "oneOf": [ - { - "type": "string", - "enum": [ - "burt", - "marissa", - "andrea", - "sarah", - "phillip", - "steve", - "joseph", - "myra", - "paula", - "ryan", - "drew", - "paul", - "mrb", - "matilda", - "mark" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "11Labs Voice ID" - } + "gohighlevel" ] }, - "stability": { - "type": "number", - "description": "Defines the stability for voice settings.", - "minimum": 0, - "maximum": 1, - "example": 0.5 - }, - "similarityBoost": { - "type": "number", - "description": "Defines the similarity boost for voice settings.", - "minimum": 0, - "maximum": 1, - "example": 0.75 - }, - "style": { - "type": "number", - "description": "Defines the style for voice settings.", - "minimum": 0, - "maximum": 1, - "example": 0 - }, - "useSpeakerBoost": { - "type": "boolean", - "description": "Defines the use speaker boost for voice settings.", - "example": false - }, - "optimizeStreamingLatency": { - "type": "number", - "description": "Defines the optimize streaming latency for voice settings. 
Defaults to 3.", - "minimum": 0, - "maximum": 4, - "example": 3 - }, - "enableSsmlParsing": { - "type": "boolean", - "description": "This enables the use of https://elevenlabs.io/docs/speech-synthesis/prompting#pronunciation. Defaults to false to save latency.\n\n@default false", - "example": false + "apiKey": { + "type": "string", + "description": "This is not returned in the API." }, - "model": { + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreateGroqCredentialDTO": { + "type": "object", + "properties": { + "provider": { "type": "string", - "description": "This is the model that will be used. Defaults to 'eleven_turbo_v2' if not specified.", "enum": [ - "eleven_multilingual_v2", - "eleven_turbo_v2", - "eleven_turbo_v2_5", - "eleven_monolingual_v1" - ], - "example": "eleven_turbo_v2_5" + "groq" + ] }, - "language": { + "apiKey": { "type": "string", - "description": "This is the language (ISO 639-1) that is enforced for the model. Currently only Turbo v2.5 supports language enforcement. For other models, an error will be returned if language code is provided." + "description": "This is not returned in the API." }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } - ] + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "apiKey" ] }, - "FallbackLMNTVoice": { + "CreateLangfuseCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "lmnt" + "langfuse" ] }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "lily", - "daniel" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "LMNT Voice ID" - } - ] + "publicKey": { + "type": "string", + "description": "The public key for Langfuse project. Eg: pk-lf-..." }, - "speed": { - "type": "number", - "description": "This is the speed multiplier that will be used.", - "minimum": 0.25, - "maximum": 2, - "example": null + "apiKey": { + "type": "string", + "description": "The secret key for Langfuse project. Eg: sk-lf-... .This is not returned in the API." }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } - ] + "apiUrl": { + "type": "string", + "description": "The host URL for Langfuse project. Eg: https://cloud.langfuse.com" + }, + "name": { + "type": "string", + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "publicKey", + "apiKey", + "apiUrl" ] }, - "FallbackNeetsVoice": { + "CreateLmntCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "neets" + "lmnt" ] }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "vits", - "vits" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "Neets Voice ID" - } - ] + "apiKey": { + "type": "string", + "description": "This is not returned in the API." }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } - ] + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "apiKey" ] }, - "FallbackOpenAIVoice": { + "CreateMakeCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "openai" + "make" ] }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.\nPlease note that ash, ballad, coral, sage, and verse may only be used with the `gpt-4o-realtime-preview-2024-10-01` model.", - "enum": [ - "alloy", - "echo", - "fable", - "onyx", - "nova", - "shimmer", - "ash", - "ballad", - "coral", - "sage", - "verse" - ], - "oneOf": [ - { - "type": "string", - "enum": [ - "alloy", - "echo", - "fable", - "onyx", - "nova", - "shimmer" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "OpenAI Voice ID" - } - ] + "teamId": { + "type": "string", + "description": "Team ID" }, - "speed": { - "type": "number", - "description": "This is the speed multiplier that will be used.", - "minimum": 0.25, - "maximum": 4, - "example": null + "region": { + "type": "string", + "description": "Region of your application. For example: eu1, eu2, us1, us2" }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } - ] + "apiKey": { + "type": "string", + "description": "This is not returned in the API." + }, + "name": { + "type": "string", + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "teamId", + "region", + "apiKey" ] }, - "FallbackPlayHTVoice": { + "CreateOpenAICredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ - "playht" - ] - }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "jennifer", - "melissa", - "will", - "chris", - "matt", - "jack", - "ruby", - "davis", - "donna", - "michael" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "PlayHT Voice ID" - } + "openai" ] }, - "speed": { - "type": "number", - "description": "This is the speed multiplier that will be used.", - "minimum": 0.1, - "maximum": 5, - "example": null - }, - "temperature": { - "type": "number", - "description": "A floating point number between 0, exclusive, and 2, inclusive. If equal to null or not provided, the model's default temperature will be used. The temperature parameter controls variance. Lower temperatures result in more predictable results, higher temperatures allow each run to vary more, so the voice may sound less like the baseline voice.", - "minimum": 0.1, - "maximum": 2, - "example": null - }, - "emotion": { + "apiKey": { "type": "string", - "description": "An emotion to be applied to the speech.", - "enum": [ - "female_happy", - "female_sad", - "female_angry", - "female_fearful", - "female_disgust", - "female_surprised", - "male_happy", - "male_sad", - "male_angry", - "male_fearful", - "male_disgust", - "male_surprised" - ], - "example": null - }, - "voiceGuidance": { - "type": "number", - "description": "A number between 1 and 6. Use lower numbers to reduce how unique your chosen voice will be compared to other voices.", - "minimum": 1, - "maximum": 6, - "example": null + "description": "This is not returned in the API." }, - "styleGuidance": { - "type": "number", - "description": "A number between 1 and 30. Use lower numbers to to reduce how strong your chosen emotion will be. Higher numbers will create a very emotional performance.", - "minimum": 1, - "maximum": 30, - "example": null + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreateOpenRouterCredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "openrouter" + ] }, - "textGuidance": { - "type": "number", - "description": "A number between 1 and 2. This number influences how closely the generated speech adheres to the input text. Use lower values to create more fluid speech, but with a higher chance of deviating from the input text. Higher numbers will make the generated speech more accurate to the input text, ensuring that the words spoken align closely with the provided text.", - "minimum": 1, - "maximum": 2, - "example": null + "apiKey": { + "type": "string", + "description": "This is not returned in the API." }, - "model": { + "name": { + "type": "string", + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreatePerplexityAICredentialDTO": { + "type": "object", + "properties": { + "provider": { "type": "string", - "description": "Playht voice model/engine to use.", "enum": [ - "PlayHT2.0", - "PlayHT2.0-turbo", - "Play3.0-mini" + "perplexity-ai" ] }, - "language": { + "apiKey": { + "type": "string", + "description": "This is not returned in the API." + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreatePlayHTCredentialDTO": { + "type": "object", + "properties": { + "provider": { "type": "string", - "description": "The language to use for the speech.", "enum": [ - "afrikaans", - "albanian", - "amharic", - "arabic", - "bengali", - "bulgarian", - "catalan", - "croatian", - "czech", - "danish", - "dutch", - "english", - "french", - "galician", - "german", - "greek", - "hebrew", - "hindi", - "hungarian", - "indonesian", - "italian", - "japanese", - "korean", - "malay", - "mandarin", - "polish", - "portuguese", - "russian", - "serbian", - "spanish", - "swedish", - "tagalog", - "thai", - "turkish", - "ukrainian", - "urdu", - "xhosa" + "playht" ] }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } - ] + "apiKey": { + "type": "string", + "description": "This is not returned in the API." + }, + "userId": { + "type": "string" + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "apiKey", + "userId" ] }, - "FallbackRimeAIVoice": { + "CreateRimeAICredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ "rime-ai" ] }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "marsh", - "bayou", - "creek", - "brook", - "flower", - "spore", - "glacier", - "gulch", - "alpine", - "cove", - "lagoon", - "tundra", - "steppe", - "mesa", - "grove", - "rainforest", - "moraine", - "wildflower", - "peak", - "boulder", - "abbie", - "allison", - "ally", - "alona", - "amber", - "ana", - "antoine", - "armon", - "brenda", - "brittany", - "carol", - "colin", - "courtney", - "elena", - "elliot", - "eva", - "geoff", - "gerald", - "hank", - "helen", - "hera", - "jen", - "joe", - "joy", - "juan", - "kendra", - "kendrick", - "kenneth", - "kevin", - "kris", - "linda", - "madison", - "marge", - "marina", - "marissa", - "marta", - "maya", - "nicholas", - "nyles", - "phil", - "reba", - "rex", - "rick", - "ritu", - "rob", - "rodney", - "rohan", - "rosco", - "samantha", - "sandy", - "selena", - "seth", - "sharon", - "stan", - "tamra", - "tanya", - "tibur", - "tj", - "tyler", - "viv", - "yadira" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "RimeAI Voice ID" - } + "apiKey": { + "type": "string", + "description": "This is not returned in the API." + }, + "name": { + "type": "string", + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreateRunpodCredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "runpod" ] }, - "model": { + "apiKey": { + "type": "string", + "description": "This is not returned in the API." + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreateS3CredentialDTO": { + "type": "object", + "properties": { + "provider": { "type": "string", - "description": "This is the model that will be used. Defaults to 'v1' when not specified.", "enum": [ - "v1", - "mist" + "s3" ], - "example": "v1" + "description": "Credential provider. Only allowed value is s3" }, - "speed": { - "type": "number", - "description": "This is the speed multiplier that will be used.", - "minimum": 0.1, - "example": null + "awsAccessKeyId": { + "type": "string", + "description": "AWS access key ID." }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", - "allOf": [ - { - "$ref": "#/components/schemas/ChunkPlan" - } + "awsSecretAccessKey": { + "type": "string", + "description": "AWS access key secret. This is not returned in the API." + }, + "region": { + "type": "string", + "description": "AWS region in which the S3 bucket is located." + }, + "s3BucketName": { + "type": "string", + "description": "AWS S3 bucket name." + }, + "s3PathPrefix": { + "type": "string", + "description": "The path prefix for the uploaded recording. Ex. \"recordings/\"" + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "awsAccessKeyId", + "awsSecretAccessKey", + "region", + "s3BucketName", + "s3PathPrefix" + ] + }, + "CreateSmallestAICredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "smallest-ai" ] + }, + "apiKey": { + "type": "string", + "description": "This is not returned in the API." + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "apiKey" ] }, - "FallbackTavusVoice": { + "CreateTavusCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the voice provider that will be used.", "enum": [ "tavus" ] }, - "voiceId": { - "description": "This is the provider-specific ID that will be used.", - "oneOf": [ - { - "type": "string", - "enum": [ - "r52da2535a" - ], - "title": "Preset Voice Options" - }, - { - "type": "string", - "title": "Tavus Voice ID" - } + "apiKey": { + "type": "string", + "description": "This is not returned in the API." + }, + "name": { + "type": "string", + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreateTogetherAICredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "together-ai" ] }, - "personaId": { + "apiKey": { "type": "string", - "description": "This is the unique identifier for the persona that the replica will use in the conversation." + "description": "This is not returned in the API." }, - "callbackUrl": { + "name": { "type": "string", - "description": "This is the url that will receive webhooks with updates regarding the conversation state." + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "CreateTwilioCredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "twilio" + ] }, - "conversationName": { + "authToken": { "type": "string", - "description": "This is the name for the conversation." + "description": "This is not returned in the API." }, - "conversationalContext": { + "accountSid": { + "type": "string" + }, + "name": { "type": "string", - "description": "This is the context that will be appended to any context provided in the persona, if one is provided." + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "authToken", + "accountSid" + ] + }, + "CreateVonageCredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "vonage" + ] }, - "customGreeting": { + "apiSecret": { "type": "string", - "description": "This is the custom greeting that the replica will give once a participant joines the conversation." + "description": "This is not returned in the API." }, - "properties": { - "description": "These are optional properties used to customize the conversation.", - "allOf": [ - { - "$ref": "#/components/schemas/TavusConversationProperties" - } + "apiKey": { + "type": "string" + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiSecret", + "apiKey" + ] + }, + "CreateWebhookCredentialDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "enum": [ + "webhook" ] }, - "chunkPlan": { - "description": "This is the plan for chunking the model output before it is sent to the voice provider.", + "authenticationPlan": { + "description": "This is the authentication plan. Currently supports OAuth2 RFC 6749.", "allOf": [ { - "$ref": "#/components/schemas/ChunkPlan" + "$ref": "#/components/schemas/OAuth2AuthenticationPlan" } ] + }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 } }, "required": [ "provider", - "voiceId" + "authenticationPlan" ] }, - "TransportConfigurationTwilio": { + "CreateXAiCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", + "description": "This is the api key for Grok in XAi's console. 
Get it from here: https://console.x.ai", "enum": [ - "twilio" + "xai" ] }, - "timeout": { - "type": "number", - "description": "The integer number of seconds that we should allow the phone to ring before assuming there is no answer.\nThe default is `60` seconds and the maximum is `600` seconds.\nFor some call flows, we will add a 5-second buffer to the timeout value you provide.\nFor this reason, a timeout value of 10 seconds could result in an actual timeout closer to 15 seconds.\nYou can set this to a short time, such as `15` seconds, to hang up before reaching an answering machine or voicemail.\n\n@default 60", - "minimum": 1, - "maximum": 600, - "example": 60 - }, - "record": { - "type": "boolean", - "description": "Whether to record the call.\nCan be `true` to record the phone call, or `false` to not.\nThe default is `false`.\n\n@default false", - "example": false + "apiKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." }, - "recordingChannels": { + "name": { "type": "string", - "description": "The number of channels in the final recording.\nCan be: `mono` or `dual`.\nThe default is `mono`.\n`mono` records both legs of the call in a single channel of the recording file.\n`dual` records each leg to a separate channel of the recording file.\nThe first channel of a dual-channel recording contains the parent call and the second channel contains the child call.\n\n@default 'mono'", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + }, + "required": [ + "provider", + "apiKey" + ] + }, + "TransferAssistantHookAction": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the type of action - must be \"transfer\"", "enum": [ - "mono", - "dual" - ], - "example": "mono" + "transfer" + ] + }, + "destination": { + "description": "This is the destination details for the transfer - can be a phone number or SIP URI", + "oneOf": [ + { + "$ref": "#/components/schemas/TransferDestinationNumber", + "title": "NumberTransferDestination" + }, + { + "$ref": "#/components/schemas/TransferDestinationSip", + "title": "SipTransferDestination" + } + ] } }, "required": [ - "provider" + "type" ] }, "TwilioVoicemailDetection": { @@ -8448,6 +12105,25 @@ "provider" ] }, + "CompliancePlan": { + "type": "object", + "properties": { + "hipaaEnabled": { + "type": "boolean", + "description": "When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.", + "example": { + "hipaaEnabled": false + } + }, + "pciEnabled": { + "type": "boolean", + "description": "When this is enabled, the user will be restricted to use PCI-compliant providers, and no logs or transcripts are stored. At the end of the call, you will receive an end-of-call-report message to store on your server. Defaults to false.", + "example": { + "pciEnabled": false + } + } + } + }, "StructuredDataPlan": { "type": "object", "properties": { @@ -8566,7 +12242,7 @@ "properties": { "recordingEnabled": { "type": "boolean", - "description": "This determines whether assistant's calls are recorded. 
Defaults to true.\n\nUsage:\n- If you don't want to record the calls, set this to false.\n- If you want to record the calls when `assistant.hipaaEnabled`, explicity set this to true and make sure to provide S3 or GCP credentials on the Provider Credentials page in the Dashboard.\n\nYou can find the recording at `call.artifact.recordingUrl` and `call.artifact.stereoRecordingUrl` after the call is ended.\n\n@default true", + "description": "This determines whether assistant's calls are recorded. Defaults to true.\n\nUsage:\n- If you don't want to record the calls, set this to false.\n- If you want to record the calls when `assistant.hipaaEnabled` (deprecated) or `assistant.compliancePlan.hipaaEnabled`, explicitly set this to true and make sure to provide S3 or GCP credentials on the Provider Credentials page in the Dashboard.\n\nYou can find the recording at `call.artifact.recordingUrl` and `call.artifact.stereoRecordingUrl` after the call is ended.\n\n@default true", "example": true }, "videoRecordingEnabled": { @@ -8574,6 +12250,16 @@ "description": "This determines whether the video is recorded during the call. Defaults to false. Only relevant for `webCall` type.\n\nYou can find the video recording at `call.artifact.videoRecordingUrl` after the call is ended.\n\n@default false", "example": false }, + "pcapEnabled": { + "type": "boolean", + "description": "This determines whether the SIP packet capture is enabled. Defaults to true. Only relevant for `phone` type calls where phone number's provider is `vapi` or `byo-phone-number`.\n\nYou can find the packet capture at `call.artifact.pcapUrl` after the call is ended.\n\n@default true", + "example": true + }, + "pcapS3PathPrefix": { + "type": "string", + "description": "This is the path where the SIP packet capture will be uploaded. This is only used if you have provided S3 or GCP credentials on the Provider Credentials page in the Dashboard.\n\nIf credential.s3PathPrefix or credential.bucketPlan.path is set, this will append to it.\n\nUsage:\n- If you want to upload the packet capture to a specific path, set this to the path. Example: `/my-assistant-captures`.\n- If you want to upload the packet capture to the root of the bucket, set this to `/`.\n\n@default '/'", + "example": "/pcaps" + }, "transcriptPlan": { "description": "This is the plan for `call.artifact.transcript`. To disable, set `transcriptPlan.enabled` to false.", "allOf": [ @@ -8609,7 +12295,12 @@ "type": "number", "description": "This is the timeout in seconds before a message from `idleMessages` is spoken. 
The clock starts when the assistant finishes speaking and remains active until the user speaks.\n\n@default 10", "minimum": 5, - "maximum": 30 + "maximum": 60 + }, + "silenceTimeoutMessage": { + "type": "string", + "description": "This is the message that the assistant will say if the call ends due to silence.\n\nIf unspecified, it will hang up without saying anything.", + "maxLength": 1000 } } }, @@ -8820,6 +12511,114 @@ "minimum": 0, "maximum": 10, "example": 1 + }, + "acknowledgementPhrases": { + "description": "These are the phrases that will never interrupt the assistant, even if numWords threshold is met.\nThese are typically acknowledgement or backchanneling phrases.", + "example": [ + "i understand", + "i see", + "i got it", + "i hear you", + "im listening", + "im with you", + "right", + "okay", + "ok", + "sure", + "alright", + "got it", + "understood", + "yeah", + "yes", + "uh-huh", + "mm-hmm", + "gotcha", + "mhmm", + "ah", + "yeah okay", + "yeah sure" + ], + "default": [ + "i understand", + "i see", + "i got it", + "i hear you", + "im listening", + "im with you", + "right", + "okay", + "ok", + "sure", + "alright", + "got it", + "understood", + "yeah", + "yes", + "uh-huh", + "mm-hmm", + "gotcha", + "mhmm", + "ah", + "yeah okay", + "yeah sure" + ], + "type": "array", + "items": { + "type": "string", + "maxLength": 240 + } + }, + "interruptionPhrases": { + "description": "These are the phrases that will always interrupt the assistant immediately, regardless of numWords.\nThese are typically phrases indicating disagreement or desire to stop.", + "example": [ + "stop", + "shut", + "up", + "enough", + "quiet", + "silence", + "but", + "dont", + "not", + "no", + "hold", + "wait", + "cut", + "pause", + "nope", + "nah", + "nevermind", + "never", + "bad", + "actually" + ], + "default": [ + "stop", + "shut", + "up", + "enough", + "quiet", + "silence", + "but", + "dont", + "not", + "no", + "hold", + "wait", + "cut", + "pause", + "nope", + "nah", + "nevermind", + "never", + "bad", + "actually" + ], + "type": "array", + "items": { + "type": "string", + "maxLength": 240 + } } } }, @@ -8838,6 +12637,72 @@ } } }, + "AssistantHookFilter": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the type of filter - currently only \"oneOf\" is supported", + "enum": [ + "oneOf" + ], + "maxLength": 1000 + }, + "key": { + "type": "string", + "description": "This is the key to filter on (e.g. 
\"call.endedReason\")", + "maxLength": 1000 + }, + "oneOf": { + "description": "This is the array of possible values to match against", + "type": "array", + "items": { + "type": "string", + "maxLength": 1000 + } + } + }, + "required": [ + "type", + "key", + "oneOf" + ] + }, + "AssistantHookActionBase": { + "type": "object", + "properties": {} + }, + "AssistantHooks": { + "type": "object", + "properties": { + "on": { + "type": "string", + "description": "This is the event that triggers this hook", + "enum": [ + "call.ending" + ], + "maxLength": 1000 + }, + "filters": { + "description": "This is the set of filters that must match for the hook to trigger", + "type": "array", + "items": { + "$ref": "#/components/schemas/AssistantHookFilter" + } + }, + "do": { + "description": "This is the set of actions to perform when the hook triggers", + "type": "array", + "items": { + "$ref": "#/components/schemas/AssistantHookActionBase" + } + } + }, + "required": [ + "on", + "do" + ] + }, "CreateAssistantDTO": { "type": "object", "properties": { @@ -8848,6 +12713,10 @@ "$ref": "#/components/schemas/AssemblyAITranscriber", "title": "AssemblyAI" }, + { + "$ref": "#/components/schemas/AzureSpeechTranscriber", + "title": "Azure" + }, { "$ref": "#/components/schemas/CustomTranscriber", "title": "CustomTranscriber" @@ -8897,6 +12766,10 @@ "$ref": "#/components/schemas/InflectionAIModel", "title": "InflectionAI" }, + { + "$ref": "#/components/schemas/DeepSeekModel", + "title": "DeepSeek" + }, { "$ref": "#/components/schemas/OpenAIModel", "title": "OpenAI" @@ -8966,6 +12839,10 @@ "$ref": "#/components/schemas/RimeAIVoice", "title": "RimeAI" }, + { + "$ref": "#/components/schemas/SmallestAIVoice", + "title": "SmallestAI" + }, { "$ref": "#/components/schemas/TavusVoice", "title": "TavusVoice" @@ -8991,11 +12868,6 @@ ], "example": "assistant-speaks-first" }, - "hipaaEnabled": { - "type": "boolean", - "description": "When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.", - "example": false - }, "clientMessages": { "type": "array", "enum": [ @@ -9064,6 +12936,7 @@ "speech-update", "status-update", "transcript", + "transcript[transcriptType=\"final\"]", "tool-calls", "transfer-destination-request", "transfer-update", @@ -9096,6 +12969,7 @@ "speech-update", "status-update", "transcript", + "transcript[transcriptType=\"final\"]", "tool-calls", "transfer-destination-request", "transfer-update", @@ -9149,6 +13023,150 @@ ] } }, + "credentials": { + "type": "array", + "description": "These are dynamic credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can supplement an additional credentials using this. 
Dynamic credentials override existing credentials.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/CreateAnthropicCredentialDTO", + "title": "AnthropicCredential" + }, + { + "$ref": "#/components/schemas/CreateAnyscaleCredentialDTO", + "title": "AnyscaleCredential" + }, + { + "$ref": "#/components/schemas/CreateAssemblyAICredentialDTO", + "title": "AssemblyAICredential" + }, + { + "$ref": "#/components/schemas/CreateAzureOpenAICredentialDTO", + "title": "AzureOpenAICredential" + }, + { + "$ref": "#/components/schemas/CreateAzureCredentialDTO", + "title": "AzureCredential" + }, + { + "$ref": "#/components/schemas/CreateByoSipTrunkCredentialDTO", + "title": "ByoSipTrunkCredential" + }, + { + "$ref": "#/components/schemas/CreateCartesiaCredentialDTO", + "title": "CartesiaCredential" + }, + { + "$ref": "#/components/schemas/CreateCloudflareCredentialDTO", + "title": "CloudflareCredential" + }, + { + "$ref": "#/components/schemas/CreateCustomLLMCredentialDTO", + "title": "CustomLLMCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepgramCredentialDTO", + "title": "DeepgramCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepInfraCredentialDTO", + "title": "DeepInfraCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepSeekCredentialDTO", + "title": "DeepSeekCredential" + }, + { + "$ref": "#/components/schemas/CreateElevenLabsCredentialDTO", + "title": "ElevenLabsCredential" + }, + { + "$ref": "#/components/schemas/CreateGcpCredentialDTO", + "title": "GcpCredential" + }, + { + "$ref": "#/components/schemas/CreateGladiaCredentialDTO", + "title": "GladiaCredential" + }, + { + "$ref": "#/components/schemas/CreateGoHighLevelCredentialDTO", + "title": "GhlCredential" + }, + { + "$ref": "#/components/schemas/CreateGroqCredentialDTO", + "title": "GroqCredential" + }, + { + "$ref": "#/components/schemas/CreateLangfuseCredentialDTO", + "title": "LangfuseCredential" + }, + { + "$ref": "#/components/schemas/CreateLmntCredentialDTO", + "title": "LmntCredential" + }, + { + "$ref": "#/components/schemas/CreateMakeCredentialDTO", + "title": "MakeCredential" + }, + { + "$ref": "#/components/schemas/CreateOpenAICredentialDTO", + "title": "OpenAICredential" + }, + { + "$ref": "#/components/schemas/CreateOpenRouterCredentialDTO", + "title": "OpenRouterCredential" + }, + { + "$ref": "#/components/schemas/CreatePerplexityAICredentialDTO", + "title": "PerplexityAICredential" + }, + { + "$ref": "#/components/schemas/CreatePlayHTCredentialDTO", + "title": "PlayHTCredential" + }, + { + "$ref": "#/components/schemas/CreateRimeAICredentialDTO", + "title": "RimeAICredential" + }, + { + "$ref": "#/components/schemas/CreateRunpodCredentialDTO", + "title": "RunpodCredential" + }, + { + "$ref": "#/components/schemas/CreateS3CredentialDTO", + "title": "S3Credential" + }, + { + "$ref": "#/components/schemas/CreateSmallestAICredentialDTO", + "title": "SmallestAICredential" + }, + { + "$ref": "#/components/schemas/CreateTavusCredentialDTO", + "title": "TavusCredential" + }, + { + "$ref": "#/components/schemas/CreateTogetherAICredentialDTO", + "title": "TogetherAICredential" + }, + { + "$ref": "#/components/schemas/CreateTwilioCredentialDTO", + "title": "TwilioCredential" + }, + { + "$ref": "#/components/schemas/CreateVonageCredentialDTO", + "title": "VonageCredential" + }, + { + "$ref": "#/components/schemas/CreateWebhookCredentialDTO", + "title": "WebhookCredential" + }, + { + "$ref": "#/components/schemas/CreateXAiCredentialDTO", + "title": "XAiCredential" + } + ] + } + }, "name": { 
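Together with the `compliancePlan` and `hooks` properties added further down in this schema, the new `credentials` array lets a single create-assistant request carry all three inline. A minimal sketch, assuming the standard create-assistant endpoint; the API key, phone number, and endedReason filter value are placeholders, and the `destination` shape assumes `TransferDestinationNumber` takes `type` and `number` fields, which this hunk does not show:

```json
curl --location 'https://api.vapi.ai/assistant' \
  --header 'Authorization: Bearer your-vapi-private-api-key' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "Support Assistant",
    "compliancePlan": { "hipaaEnabled": false, "pciEnabled": false },
    "credentials": [
      { "provider": "openai", "apiKey": "your-openai-api-key", "name": "Call-scoped OpenAI key" }
    ],
    "hooks": [
      {
        "on": "call.ending",
        "filters": [
          { "type": "oneOf", "key": "call.endedReason", "oneOf": ["pipeline-error-openai-llm-failed"] }
        ],
        "do": [
          { "type": "transfer", "destination": { "type": "number", "number": "+14155551234" } }
        ]
      }
    ]
  }'
```

Per the `credentials` description above, a credential supplied this way overrides the stored credential for the same provider for the duration of the call.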
"type": "string", "description": "This is the name of the assistant.\n\nThis is required when you want to transfer between assistants in a call.", @@ -9181,6 +13199,9 @@ "minLength": 2 } }, + "compliancePlan": { + "$ref": "#/components/schemas/CompliancePlan" + }, "metadata": { "type": "object", "description": "This is for metadata you want to store on the assistant." @@ -9247,6 +13268,13 @@ "$ref": "#/components/schemas/Server" } ] + }, + "hooks": { + "description": "This is a set of actions that will be performed on certain events.", + "type": "array", + "items": { + "$ref": "#/components/schemas/AssistantHooks" + } } } }, @@ -9260,6 +13288,10 @@ "$ref": "#/components/schemas/AssemblyAITranscriber", "title": "AssemblyAI" }, + { + "$ref": "#/components/schemas/AzureSpeechTranscriber", + "title": "Azure" + }, { "$ref": "#/components/schemas/CustomTranscriber", "title": "CustomTranscriber" @@ -9309,6 +13341,10 @@ "$ref": "#/components/schemas/InflectionAIModel", "title": "InflectionAI" }, + { + "$ref": "#/components/schemas/DeepSeekModel", + "title": "DeepSeek" + }, { "$ref": "#/components/schemas/OpenAIModel", "title": "OpenAI" @@ -9378,6 +13414,10 @@ "$ref": "#/components/schemas/RimeAIVoice", "title": "RimeAI" }, + { + "$ref": "#/components/schemas/SmallestAIVoice", + "title": "SmallestAI" + }, { "$ref": "#/components/schemas/TavusVoice", "title": "TavusVoice" @@ -9403,11 +13443,6 @@ ], "example": "assistant-speaks-first" }, - "hipaaEnabled": { - "type": "boolean", - "description": "When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.", - "example": false - }, "clientMessages": { "type": "array", "enum": [ @@ -9476,6 +13511,7 @@ "speech-update", "status-update", "transcript", + "transcript[transcriptType=\"final\"]", "tool-calls", "transfer-destination-request", "transfer-update", @@ -9508,6 +13544,7 @@ "speech-update", "status-update", "transcript", + "transcript[transcriptType=\"final\"]", "tool-calls", "transfer-destination-request", "transfer-update", @@ -9555,8 +13592,152 @@ "items": { "oneOf": [ { - "$ref": "#/components/schemas/TransportConfigurationTwilio", - "title": "Twilio" + "$ref": "#/components/schemas/TransportConfigurationTwilio", + "title": "Twilio" + } + ] + } + }, + "credentials": { + "type": "array", + "description": "These are dynamic credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can supplement an additional credentials using this. 
Dynamic credentials override existing credentials.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/CreateAnthropicCredentialDTO", + "title": "AnthropicCredential" + }, + { + "$ref": "#/components/schemas/CreateAnyscaleCredentialDTO", + "title": "AnyscaleCredential" + }, + { + "$ref": "#/components/schemas/CreateAssemblyAICredentialDTO", + "title": "AssemblyAICredential" + }, + { + "$ref": "#/components/schemas/CreateAzureOpenAICredentialDTO", + "title": "AzureOpenAICredential" + }, + { + "$ref": "#/components/schemas/CreateAzureCredentialDTO", + "title": "AzureCredential" + }, + { + "$ref": "#/components/schemas/CreateByoSipTrunkCredentialDTO", + "title": "ByoSipTrunkCredential" + }, + { + "$ref": "#/components/schemas/CreateCartesiaCredentialDTO", + "title": "CartesiaCredential" + }, + { + "$ref": "#/components/schemas/CreateCloudflareCredentialDTO", + "title": "CloudflareCredential" + }, + { + "$ref": "#/components/schemas/CreateCustomLLMCredentialDTO", + "title": "CustomLLMCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepgramCredentialDTO", + "title": "DeepgramCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepInfraCredentialDTO", + "title": "DeepInfraCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepSeekCredentialDTO", + "title": "DeepSeekCredential" + }, + { + "$ref": "#/components/schemas/CreateElevenLabsCredentialDTO", + "title": "ElevenLabsCredential" + }, + { + "$ref": "#/components/schemas/CreateGcpCredentialDTO", + "title": "GcpCredential" + }, + { + "$ref": "#/components/schemas/CreateGladiaCredentialDTO", + "title": "GladiaCredential" + }, + { + "$ref": "#/components/schemas/CreateGoHighLevelCredentialDTO", + "title": "GhlCredential" + }, + { + "$ref": "#/components/schemas/CreateGroqCredentialDTO", + "title": "GroqCredential" + }, + { + "$ref": "#/components/schemas/CreateLangfuseCredentialDTO", + "title": "LangfuseCredential" + }, + { + "$ref": "#/components/schemas/CreateLmntCredentialDTO", + "title": "LmntCredential" + }, + { + "$ref": "#/components/schemas/CreateMakeCredentialDTO", + "title": "MakeCredential" + }, + { + "$ref": "#/components/schemas/CreateOpenAICredentialDTO", + "title": "OpenAICredential" + }, + { + "$ref": "#/components/schemas/CreateOpenRouterCredentialDTO", + "title": "OpenRouterCredential" + }, + { + "$ref": "#/components/schemas/CreatePerplexityAICredentialDTO", + "title": "PerplexityAICredential" + }, + { + "$ref": "#/components/schemas/CreatePlayHTCredentialDTO", + "title": "PlayHTCredential" + }, + { + "$ref": "#/components/schemas/CreateRimeAICredentialDTO", + "title": "RimeAICredential" + }, + { + "$ref": "#/components/schemas/CreateRunpodCredentialDTO", + "title": "RunpodCredential" + }, + { + "$ref": "#/components/schemas/CreateS3CredentialDTO", + "title": "S3Credential" + }, + { + "$ref": "#/components/schemas/CreateSmallestAICredentialDTO", + "title": "SmallestAICredential" + }, + { + "$ref": "#/components/schemas/CreateTavusCredentialDTO", + "title": "TavusCredential" + }, + { + "$ref": "#/components/schemas/CreateTogetherAICredentialDTO", + "title": "TogetherAICredential" + }, + { + "$ref": "#/components/schemas/CreateTwilioCredentialDTO", + "title": "TwilioCredential" + }, + { + "$ref": "#/components/schemas/CreateVonageCredentialDTO", + "title": "VonageCredential" + }, + { + "$ref": "#/components/schemas/CreateWebhookCredentialDTO", + "title": "WebhookCredential" + }, + { + "$ref": "#/components/schemas/CreateXAiCredentialDTO", + "title": "XAiCredential" } ] } @@ -9597,6 +13778,9 @@ 
"minLength": 2 } }, + "compliancePlan": { + "$ref": "#/components/schemas/CompliancePlan" + }, "metadata": { "type": "object", "description": "This is for metadata you want to store on the assistant." @@ -9663,6 +13847,13 @@ "$ref": "#/components/schemas/Server" } ] + }, + "hooks": { + "description": "This is a set of actions that will be performed on certain events.", + "type": "array", + "items": { + "$ref": "#/components/schemas/AssistantHooks" + } } } }, @@ -9768,13 +13959,13 @@ "type": "string", "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ @@ -10057,6 +14248,10 @@ "transcript": { "type": "string", "description": "This is the transcript of the call. This is derived from `artifact.messages` but provided for convenience." + }, + "pcapUrl": { + "type": "string", + "description": "This is the packet capture url for the call. This is only available for `phone` type calls where phone number's provider is `vapi` or `byo-phone-number`." 
} } }, @@ -10182,6 +14377,31 @@ "type": "string", "description": "This is the explanation for how the call ended.", "enum": [ + "assistant-not-valid", + "assistant-not-provided", + "call-start-error-neither-assistant-nor-server-set", + "assistant-request-failed", + "assistant-request-returned-error", + "assistant-request-returned-unspeakable-error", + "assistant-request-returned-invalid-assistant", + "assistant-request-returned-no-assistant", + "assistant-request-returned-forwarding-phone-number", + "assistant-ended-call", + "assistant-said-end-call-phrase", + "assistant-ended-call-with-hangup-task", + "assistant-forwarded-call", + "assistant-join-timed-out", + "customer-busy", + "customer-ended-call", + "customer-did-not-answer", + "customer-did-not-give-microphone-permission", + "assistant-said-message-with-end-call-enabled", + "exceeded-max-duration", + "manually-canceled", + "phone-call-provider-closed-websocket", + "db-error", + "assistant-not-found", + "license-check-failed", "pipeline-error-openai-voice-failed", "pipeline-error-cartesia-voice-failed", "pipeline-error-deepgram-voice-failed", @@ -10191,9 +14411,14 @@ "pipeline-error-azure-voice-failed", "pipeline-error-rime-ai-voice-failed", "pipeline-error-neets-voice-failed", - "db-error", - "assistant-not-found", - "license-check-failed", + "pipeline-error-smallest-ai-voice-failed", + "pipeline-error-neuphonic-voice-failed", + "pipeline-error-deepgram-transcriber-failed", + "pipeline-error-gladia-transcriber-failed", + "pipeline-error-speechmatics-transcriber-failed", + "pipeline-error-assembly-ai-transcriber-failed", + "pipeline-error-talkscriber-transcriber-failed", + "pipeline-error-azure-speech-transcriber-failed", "pipeline-error-vapi-llm-failed", "pipeline-error-vapi-400-bad-request-validation-failed", "pipeline-error-vapi-401-unauthorized", @@ -10213,36 +14438,15 @@ "vapifault-web-call-worker-setup-failed", "vapifault-transport-connected-but-call-not-active", "vapifault-call-started-but-connection-to-transport-missing", - "pipeline-error-deepgram-transcriber-failed", - "pipeline-error-gladia-transcriber-failed", - "pipeline-error-assembly-ai-transcriber-failed", "pipeline-error-openai-llm-failed", "pipeline-error-azure-openai-llm-failed", "pipeline-error-groq-llm-failed", "pipeline-error-google-llm-failed", "pipeline-error-xai-llm-failed", + "pipeline-error-mistral-llm-failed", "pipeline-error-inflection-ai-llm-failed", - "assistant-not-invalid", - "assistant-not-provided", - "call-start-error-neither-assistant-nor-server-set", - "assistant-request-failed", - "assistant-request-returned-error", - "assistant-request-returned-unspeakable-error", - "assistant-request-returned-invalid-assistant", - "assistant-request-returned-no-assistant", - "assistant-request-returned-forwarding-phone-number", - "assistant-ended-call", - "assistant-said-end-call-phrase", - "assistant-forwarded-call", - "assistant-join-timed-out", - "customer-busy", - "customer-ended-call", - "customer-did-not-answer", - "customer-did-not-give-microphone-permission", - "assistant-said-message-with-end-call-enabled", - "exceeded-max-duration", - "manually-canceled", - "phone-call-provider-closed-websocket", + "pipeline-error-cerebras-llm-failed", + "pipeline-error-deep-seek-llm-failed", "pipeline-error-openai-400-bad-request-validation-failed", "pipeline-error-openai-401-unauthorized", "pipeline-error-openai-403-model-access-denied", @@ -10258,11 +14462,21 @@ "pipeline-error-xai-403-model-access-denied", "pipeline-error-xai-429-exceeded-quota", 
"pipeline-error-xai-500-server-error", + "pipeline-error-mistral-400-bad-request-validation-failed", + "pipeline-error-mistral-401-unauthorized", + "pipeline-error-mistral-403-model-access-denied", + "pipeline-error-mistral-429-exceeded-quota", + "pipeline-error-mistral-500-server-error", "pipeline-error-inflection-ai-400-bad-request-validation-failed", "pipeline-error-inflection-ai-401-unauthorized", "pipeline-error-inflection-ai-403-model-access-denied", "pipeline-error-inflection-ai-429-exceeded-quota", "pipeline-error-inflection-ai-500-server-error", + "pipeline-error-deep-seek-400-bad-request-validation-failed", + "pipeline-error-deep-seek-401-unauthorized", + "pipeline-error-deep-seek-403-model-access-denied", + "pipeline-error-deep-seek-429-exceeded-quota", + "pipeline-error-deep-seek-500-server-error", "pipeline-error-azure-openai-400-bad-request-validation-failed", "pipeline-error-azure-openai-401-unauthorized", "pipeline-error-azure-openai-403-model-access-denied", @@ -10273,6 +14487,11 @@ "pipeline-error-groq-403-model-access-denied", "pipeline-error-groq-429-exceeded-quota", "pipeline-error-groq-500-server-error", + "pipeline-error-cerebras-400-bad-request-validation-failed", + "pipeline-error-cerebras-401-unauthorized", + "pipeline-error-cerebras-403-model-access-denied", + "pipeline-error-cerebras-429-exceeded-quota", + "pipeline-error-cerebras-500-server-error", "pipeline-error-anthropic-400-bad-request-validation-failed", "pipeline-error-anthropic-401-unauthorized", "pipeline-error-anthropic-403-model-access-denied", @@ -10360,6 +14579,8 @@ "pipeline-error-playht-429-exceeded-quota", "pipeline-error-playht-502-gateway-error", "pipeline-error-playht-504-gateway-error", + "pipeline-error-tavus-video-failed", + "pipeline-error-custom-transcriber-failed", "pipeline-error-deepgram-returning-403-model-access-denied", "pipeline-error-deepgram-returning-401-invalid-credentials", "pipeline-error-deepgram-returning-404-not-found", @@ -10367,7 +14588,6 @@ "pipeline-error-deepgram-returning-500-invalid-json", "pipeline-error-deepgram-returning-502-network-error", "pipeline-error-deepgram-returning-502-bad-gateway-ehostunreach", - "pipeline-error-custom-transcriber-failed", "silence-timed-out", "sip-gateway-failed-to-connect-call", "twilio-failed-to-connect-call", @@ -10702,6 +14922,10 @@ "$ref": "#/components/schemas/AssemblyAITranscriber", "title": "AssemblyAI" }, + { + "$ref": "#/components/schemas/AzureSpeechTranscriber", + "title": "Azure" + }, { "$ref": "#/components/schemas/CustomTranscriber", "title": "CustomTranscriber" @@ -10751,6 +14975,10 @@ "$ref": "#/components/schemas/InflectionAIModel", "title": "InflectionAI" }, + { + "$ref": "#/components/schemas/DeepSeekModel", + "title": "DeepSeek" + }, { "$ref": "#/components/schemas/OpenAIModel", "title": "OpenAI" @@ -10820,6 +15048,10 @@ "$ref": "#/components/schemas/RimeAIVoice", "title": "RimeAI" }, + { + "$ref": "#/components/schemas/SmallestAIVoice", + "title": "SmallestAI" + }, { "$ref": "#/components/schemas/TavusVoice", "title": "TavusVoice" @@ -10845,11 +15077,6 @@ ], "example": "assistant-speaks-first" }, - "hipaaEnabled": { - "type": "boolean", - "description": "When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. 
Defaults to false.", - "example": false - }, "clientMessages": { "type": "array", "enum": [ @@ -10918,6 +15145,7 @@ "speech-update", "status-update", "transcript", + "transcript[transcriptType=\"final\"]", "tool-calls", "transfer-destination-request", "transfer-update", @@ -10950,6 +15178,7 @@ "speech-update", "status-update", "transcript", + "transcript[transcriptType=\"final\"]", "tool-calls", "transfer-destination-request", "transfer-update", @@ -11003,6 +15232,150 @@ ] } }, + "credentials": { + "type": "array", + "description": "These are dynamic credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can supplement an additional credentials using this. Dynamic credentials override existing credentials.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/CreateAnthropicCredentialDTO", + "title": "AnthropicCredential" + }, + { + "$ref": "#/components/schemas/CreateAnyscaleCredentialDTO", + "title": "AnyscaleCredential" + }, + { + "$ref": "#/components/schemas/CreateAssemblyAICredentialDTO", + "title": "AssemblyAICredential" + }, + { + "$ref": "#/components/schemas/CreateAzureOpenAICredentialDTO", + "title": "AzureOpenAICredential" + }, + { + "$ref": "#/components/schemas/CreateAzureCredentialDTO", + "title": "AzureCredential" + }, + { + "$ref": "#/components/schemas/CreateByoSipTrunkCredentialDTO", + "title": "ByoSipTrunkCredential" + }, + { + "$ref": "#/components/schemas/CreateCartesiaCredentialDTO", + "title": "CartesiaCredential" + }, + { + "$ref": "#/components/schemas/CreateCloudflareCredentialDTO", + "title": "CloudflareCredential" + }, + { + "$ref": "#/components/schemas/CreateCustomLLMCredentialDTO", + "title": "CustomLLMCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepgramCredentialDTO", + "title": "DeepgramCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepInfraCredentialDTO", + "title": "DeepInfraCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepSeekCredentialDTO", + "title": "DeepSeekCredential" + }, + { + "$ref": "#/components/schemas/CreateElevenLabsCredentialDTO", + "title": "ElevenLabsCredential" + }, + { + "$ref": "#/components/schemas/CreateGcpCredentialDTO", + "title": "GcpCredential" + }, + { + "$ref": "#/components/schemas/CreateGladiaCredentialDTO", + "title": "GladiaCredential" + }, + { + "$ref": "#/components/schemas/CreateGoHighLevelCredentialDTO", + "title": "GhlCredential" + }, + { + "$ref": "#/components/schemas/CreateGroqCredentialDTO", + "title": "GroqCredential" + }, + { + "$ref": "#/components/schemas/CreateLangfuseCredentialDTO", + "title": "LangfuseCredential" + }, + { + "$ref": "#/components/schemas/CreateLmntCredentialDTO", + "title": "LmntCredential" + }, + { + "$ref": "#/components/schemas/CreateMakeCredentialDTO", + "title": "MakeCredential" + }, + { + "$ref": "#/components/schemas/CreateOpenAICredentialDTO", + "title": "OpenAICredential" + }, + { + "$ref": "#/components/schemas/CreateOpenRouterCredentialDTO", + "title": "OpenRouterCredential" + }, + { + "$ref": "#/components/schemas/CreatePerplexityAICredentialDTO", + "title": "PerplexityAICredential" + }, + { + "$ref": "#/components/schemas/CreatePlayHTCredentialDTO", + "title": "PlayHTCredential" + }, + { + "$ref": "#/components/schemas/CreateRimeAICredentialDTO", + "title": "RimeAICredential" + }, + { + "$ref": "#/components/schemas/CreateRunpodCredentialDTO", + "title": "RunpodCredential" + }, + { + "$ref": "#/components/schemas/CreateS3CredentialDTO", + "title": 
"S3Credential" + }, + { + "$ref": "#/components/schemas/CreateSmallestAICredentialDTO", + "title": "SmallestAICredential" + }, + { + "$ref": "#/components/schemas/CreateTavusCredentialDTO", + "title": "TavusCredential" + }, + { + "$ref": "#/components/schemas/CreateTogetherAICredentialDTO", + "title": "TogetherAICredential" + }, + { + "$ref": "#/components/schemas/CreateTwilioCredentialDTO", + "title": "TwilioCredential" + }, + { + "$ref": "#/components/schemas/CreateVonageCredentialDTO", + "title": "VonageCredential" + }, + { + "$ref": "#/components/schemas/CreateWebhookCredentialDTO", + "title": "WebhookCredential" + }, + { + "$ref": "#/components/schemas/CreateXAiCredentialDTO", + "title": "XAiCredential" + } + ] + } + }, "name": { "type": "string", "description": "This is the name of the assistant.\n\nThis is required when you want to transfer between assistants in a call.", @@ -11035,6 +15408,9 @@ "minLength": 2 } }, + "compliancePlan": { + "$ref": "#/components/schemas/CompliancePlan" + }, "metadata": { "type": "object", "description": "This is for metadata you want to store on the assistant." @@ -11102,6 +15478,13 @@ } ] }, + "hooks": { + "description": "This is a set of actions that will be performed on certain events.", + "type": "array", + "items": { + "$ref": "#/components/schemas/AssistantHooks" + } + }, "id": { "type": "string", "description": "This is the unique identifier for the assistant." @@ -11138,6 +15521,10 @@ "$ref": "#/components/schemas/AssemblyAITranscriber", "title": "AssemblyAI" }, + { + "$ref": "#/components/schemas/AzureSpeechTranscriber", + "title": "Azure" + }, { "$ref": "#/components/schemas/CustomTranscriber", "title": "CustomTranscriber" @@ -11187,6 +15574,10 @@ "$ref": "#/components/schemas/InflectionAIModel", "title": "InflectionAI" }, + { + "$ref": "#/components/schemas/DeepSeekModel", + "title": "DeepSeek" + }, { "$ref": "#/components/schemas/OpenAIModel", "title": "OpenAI" @@ -11256,6 +15647,10 @@ "$ref": "#/components/schemas/RimeAIVoice", "title": "RimeAI" }, + { + "$ref": "#/components/schemas/SmallestAIVoice", + "title": "SmallestAI" + }, { "$ref": "#/components/schemas/TavusVoice", "title": "TavusVoice" @@ -11281,11 +15676,6 @@ ], "example": "assistant-speaks-first" }, - "hipaaEnabled": { - "type": "boolean", - "description": "When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.", - "example": false - }, "clientMessages": { "type": "array", "enum": [ @@ -11354,6 +15744,7 @@ "speech-update", "status-update", "transcript", + "transcript[transcriptType=\"final\"]", "tool-calls", "transfer-destination-request", "transfer-update", @@ -11386,6 +15777,7 @@ "speech-update", "status-update", "transcript", + "transcript[transcriptType=\"final\"]", "tool-calls", "transfer-destination-request", "transfer-update", @@ -11433,8 +15825,152 @@ "items": { "oneOf": [ { - "$ref": "#/components/schemas/TransportConfigurationTwilio", - "title": "Twilio" + "$ref": "#/components/schemas/TransportConfigurationTwilio", + "title": "Twilio" + } + ] + } + }, + "credentials": { + "type": "array", + "description": "These are dynamic credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can supplement an additional credentials using this. 
Dynamic credentials override existing credentials.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/CreateAnthropicCredentialDTO", + "title": "AnthropicCredential" + }, + { + "$ref": "#/components/schemas/CreateAnyscaleCredentialDTO", + "title": "AnyscaleCredential" + }, + { + "$ref": "#/components/schemas/CreateAssemblyAICredentialDTO", + "title": "AssemblyAICredential" + }, + { + "$ref": "#/components/schemas/CreateAzureOpenAICredentialDTO", + "title": "AzureOpenAICredential" + }, + { + "$ref": "#/components/schemas/CreateAzureCredentialDTO", + "title": "AzureCredential" + }, + { + "$ref": "#/components/schemas/CreateByoSipTrunkCredentialDTO", + "title": "ByoSipTrunkCredential" + }, + { + "$ref": "#/components/schemas/CreateCartesiaCredentialDTO", + "title": "CartesiaCredential" + }, + { + "$ref": "#/components/schemas/CreateCloudflareCredentialDTO", + "title": "CloudflareCredential" + }, + { + "$ref": "#/components/schemas/CreateCustomLLMCredentialDTO", + "title": "CustomLLMCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepgramCredentialDTO", + "title": "DeepgramCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepInfraCredentialDTO", + "title": "DeepInfraCredential" + }, + { + "$ref": "#/components/schemas/CreateDeepSeekCredentialDTO", + "title": "DeepSeekCredential" + }, + { + "$ref": "#/components/schemas/CreateElevenLabsCredentialDTO", + "title": "ElevenLabsCredential" + }, + { + "$ref": "#/components/schemas/CreateGcpCredentialDTO", + "title": "GcpCredential" + }, + { + "$ref": "#/components/schemas/CreateGladiaCredentialDTO", + "title": "GladiaCredential" + }, + { + "$ref": "#/components/schemas/CreateGoHighLevelCredentialDTO", + "title": "GhlCredential" + }, + { + "$ref": "#/components/schemas/CreateGroqCredentialDTO", + "title": "GroqCredential" + }, + { + "$ref": "#/components/schemas/CreateLangfuseCredentialDTO", + "title": "LangfuseCredential" + }, + { + "$ref": "#/components/schemas/CreateLmntCredentialDTO", + "title": "LmntCredential" + }, + { + "$ref": "#/components/schemas/CreateMakeCredentialDTO", + "title": "MakeCredential" + }, + { + "$ref": "#/components/schemas/CreateOpenAICredentialDTO", + "title": "OpenAICredential" + }, + { + "$ref": "#/components/schemas/CreateOpenRouterCredentialDTO", + "title": "OpenRouterCredential" + }, + { + "$ref": "#/components/schemas/CreatePerplexityAICredentialDTO", + "title": "PerplexityAICredential" + }, + { + "$ref": "#/components/schemas/CreatePlayHTCredentialDTO", + "title": "PlayHTCredential" + }, + { + "$ref": "#/components/schemas/CreateRimeAICredentialDTO", + "title": "RimeAICredential" + }, + { + "$ref": "#/components/schemas/CreateRunpodCredentialDTO", + "title": "RunpodCredential" + }, + { + "$ref": "#/components/schemas/CreateS3CredentialDTO", + "title": "S3Credential" + }, + { + "$ref": "#/components/schemas/CreateSmallestAICredentialDTO", + "title": "SmallestAICredential" + }, + { + "$ref": "#/components/schemas/CreateTavusCredentialDTO", + "title": "TavusCredential" + }, + { + "$ref": "#/components/schemas/CreateTogetherAICredentialDTO", + "title": "TogetherAICredential" + }, + { + "$ref": "#/components/schemas/CreateTwilioCredentialDTO", + "title": "TwilioCredential" + }, + { + "$ref": "#/components/schemas/CreateVonageCredentialDTO", + "title": "VonageCredential" + }, + { + "$ref": "#/components/schemas/CreateWebhookCredentialDTO", + "title": "WebhookCredential" + }, + { + "$ref": "#/components/schemas/CreateXAiCredentialDTO", + "title": "XAiCredential" } ] } @@ -11471,6 +16007,9 
@@ "minLength": 2 } }, + "compliancePlan": { + "$ref": "#/components/schemas/CompliancePlan" + }, "metadata": { "type": "object", "description": "This is for metadata you want to store on the assistant." @@ -11537,6 +16076,13 @@ "$ref": "#/components/schemas/Server" } ] + }, + "hooks": { + "description": "This is a set of actions that will be performed on certain events.", + "type": "array", + "items": { + "$ref": "#/components/schemas/AssistantHooks" + } } } }, @@ -11586,6 +16132,15 @@ "type": "string", "description": "This is the ISO 8601 date-time string of when the phone number was last updated." }, + "status": { + "type": "string", + "description": "This is the status of the phone number.", + "enum": [ + "active", + "activating", + "blocked" + ] + }, "name": { "type": "string", "description": "This is the name of the phone number. This is just for your own reference.", @@ -11599,13 +16154,13 @@ "type": "string", "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, "number": { "type": "string", @@ -11668,6 +16223,15 @@ "type": "string", "description": "This is the ISO 8601 date-time string of when the phone number was last updated." }, + "status": { + "type": "string", + "description": "This is the status of the phone number.", + "enum": [ + "active", + "activating", + "blocked" + ] + }, "name": { "type": "string", "description": "This is the name of the phone number. This is just for your own reference.", @@ -11681,13 +16245,13 @@ "type": "string", "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. 
It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, "number": { "type": "string", @@ -11754,6 +16318,15 @@ "type": "string", "description": "This is the ISO 8601 date-time string of when the phone number was last updated." }, + "status": { + "type": "string", + "description": "This is the status of the phone number.", + "enum": [ + "active", + "activating", + "blocked" + ] + }, "name": { "type": "string", "description": "This is the name of the phone number. This is just for your own reference.", @@ -11767,13 +16340,13 @@ "type": "string", "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, "number": { "type": "string", @@ -11860,86 +16433,18 @@ "type": "string", "description": "This is the ISO 8601 date-time string of when the phone number was last updated." }, - "name": { - "type": "string", - "description": "This is the name of the phone number. This is just for your own reference.", - "maxLength": 40 - }, - "assistantId": { - "type": "string", - "description": "This is the assistant that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." - }, - "squadId": { - "type": "string", - "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." - }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." 
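Every phone-number schema in this hunk makes the same swap: the flat `serverUrl`/`serverUrlSecret` pair is replaced by a single `server` object with the precedence assistant.server > phoneNumber.server > org.server. A sketch of the replacement shape, assuming the referenced `Server` schema carries the webhook URL and secret as `url` and `secret` fields, which this hunk does not show:

```json
"server": {
  "url": "https://your-server-url/vapi/webhooks",
  "secret": "your-x-vapi-secret"
}
```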
- }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." - }, - "sipUri": { - "type": "string", - "description": "This is the SIP URI of the phone number. You can SIP INVITE this. The assistant attached to this number will answer.\n\nThis is case-insensitive." - }, - "authentication": { - "description": "This enables authentication for incoming SIP INVITE requests to the `sipUri`.\n\nIf not set, any username/password to the 401 challenge of the SIP INVITE will be accepted.", - "allOf": [ - { - "$ref": "#/components/schemas/SipAuthentication" - } - ] - } - }, - "required": [ - "provider", - "id", - "orgId", - "createdAt", - "updatedAt", - "sipUri" - ] - }, - "CreateByoPhoneNumberDTO": { - "type": "object", - "properties": { - "fallbackDestination": { - "description": "This is the fallback destination an inbound call will be transferred to if:\n1. `assistantId` is not set\n2. `squadId` is not set\n3. and, `assistant-request` message to the `serverUrl` fails\n\nIf this is not set and above conditions are met, the inbound call is hung up with an error message.", - "oneOf": [ - { - "$ref": "#/components/schemas/TransferDestinationNumber", - "title": "NumberTransferDestination" - }, - { - "$ref": "#/components/schemas/TransferDestinationSip", - "title": "SipTransferDestination" - } - ] - }, - "provider": { + "status": { "type": "string", - "description": "This is to bring your own phone numbers from your own SIP trunks or Carriers.", + "description": "This is the status of the phone number.", "enum": [ - "byo-phone-number" + "active", + "activating", + "blocked" ] }, - "numberE164CheckEnabled": { - "type": "boolean", - "description": "This is the flag to toggle the E164 check for the `number` field. This is an advanced property which should be used if you know your use case requires it.\n\nUse cases:\n- `false`: To allow non-E164 numbers like `+001234567890`, `1234`, or `abc`. This is useful for dialing out to non-E164 numbers on your SIP trunks.\n- `true` (default): To allow only E164 numbers like `+14155551234`. This is standard for PSTN calls.\n\nIf `false`, the `number` is still required to only contain alphanumeric characters (regex: `/^\\+?[a-zA-Z0-9]+$/`).\n\n@default true (E164 check is enabled)", - "default": true - }, "number": { "type": "string", - "description": "This is the number of the customer.", - "minLength": 3, - "maxLength": 40 - }, - "credentialId": { - "type": "string", - "description": "This is the credential of your own SIP trunk or Carrier (type `byo-sip-trunk`) which can be used to make calls to this phone number.\n\nYou can add the SIP trunk or Carrier credential in the Provider Credentials page on the Dashboard to get the credentialId." + "description": "These are the digits of the phone number you purchased from Vapi." }, "name": { "type": "string", @@ -11954,144 +16459,42 @@ "type": "string", "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. 
This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." - } - }, - "required": [ - "provider", - "credentialId" - ] - }, - "CreateTwilioPhoneNumberDTO": { - "type": "object", - "properties": { - "fallbackDestination": { - "description": "This is the fallback destination an inbound call will be transferred to if:\n1. `assistantId` is not set\n2. `squadId` is not set\n3. and, `assistant-request` message to the `serverUrl` fails\n\nIf this is not set and above conditions are met, the inbound call is hung up with an error message.", - "oneOf": [ - { - "$ref": "#/components/schemas/TransferDestinationNumber", - "title": "NumberTransferDestination" - }, + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ { - "$ref": "#/components/schemas/TransferDestinationSip", - "title": "SipTransferDestination" + "$ref": "#/components/schemas/Server" } ] }, - "provider": { - "type": "string", - "description": "This is to use numbers bought on Twilio.", - "enum": [ - "twilio" - ] - }, - "number": { - "type": "string", - "description": "These are the digits of the phone number you own on your Twilio." - }, - "twilioAccountSid": { - "type": "string", - "description": "This is the Twilio Account SID for the phone number." - }, - "twilioAuthToken": { + "numberDesiredAreaCode": { "type": "string", - "description": "This is the Twilio Auth Token for the phone number." - }, - "name": { - "type": "string", - "description": "This is the name of the phone number. This is just for your own reference.", - "maxLength": 40 - }, - "assistantId": { - "type": "string", - "description": "This is the assistant that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." - }, - "squadId": { - "type": "string", - "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." + "description": "This is the area code of the phone number to purchase.", + "minLength": 3, + "maxLength": 3 }, - "serverUrl": { + "sipUri": { "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." + "description": "This is the SIP URI of the phone number. You can SIP INVITE this. The assistant attached to this number will answer.\n\nThis is case-insensitive." 
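Vapi-provisioned numbers also gain a lifecycle `status` (presumably `activating` until provisioning completes, then `active`) plus purchase-oriented fields. A sketch of a `VapiPhoneNumber` fragment using only fields defined in this hunk; the id, digits, and SIP username are placeholders:

```json
{
  "provider": "vapi",
  "id": "your-phone-number-id",
  "status": "activating",
  "number": "14155551234",
  "numberDesiredAreaCode": "415",
  "sipUri": "sip:username@sip.vapi.ai",
  "name": "US support line"
}
```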
}, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." - } - }, - "required": [ - "provider", - "number", - "twilioAccountSid", - "twilioAuthToken" - ] - }, - "CreateVonagePhoneNumberDTO": { - "type": "object", - "properties": { - "fallbackDestination": { - "description": "This is the fallback destination an inbound call will be transferred to if:\n1. `assistantId` is not set\n2. `squadId` is not set\n3. and, `assistant-request` message to the `serverUrl` fails\n\nIf this is not set and above conditions are met, the inbound call is hung up with an error message.", - "oneOf": [ - { - "$ref": "#/components/schemas/TransferDestinationNumber", - "title": "NumberTransferDestination" - }, + "authentication": { + "description": "This enables authentication for incoming SIP INVITE requests to the `sipUri`.\n\nIf not set, any username/password to the 401 challenge of the SIP INVITE will be accepted.", + "allOf": [ { - "$ref": "#/components/schemas/TransferDestinationSip", - "title": "SipTransferDestination" + "$ref": "#/components/schemas/SipAuthentication" } ] - }, - "provider": { - "type": "string", - "description": "This is to use numbers bought on Vonage.", - "enum": [ - "vonage" - ] - }, - "number": { - "type": "string", - "description": "These are the digits of the phone number you own on your Vonage." - }, - "credentialId": { - "type": "string", - "description": "This is the credential that is used to make outgoing calls, and do operations like call transfer and hang up." - }, - "name": { - "type": "string", - "description": "This is the name of the phone number. This is just for your own reference.", - "maxLength": 40 - }, - "assistantId": { - "type": "string", - "description": "This is the assistant that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." - }, - "squadId": { - "type": "string", - "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." - }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." 
} }, "required": [ - "provider", - "number", - "credentialId" + "provider", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateVapiPhoneNumberDTO": { + "CreateByoPhoneNumberDTO": { "type": "object", "properties": { "fallbackDestination": { @@ -12109,22 +16512,25 @@ }, "provider": { "type": "string", - "description": "This is to create free SIP phone numbers on Vapi.", + "description": "This is to bring your own phone numbers from your own SIP trunks or Carriers.", "enum": [ - "vapi" + "byo-phone-number" ] }, - "sipUri": { + "numberE164CheckEnabled": { + "type": "boolean", + "description": "This is the flag to toggle the E164 check for the `number` field. This is an advanced property which should be used if you know your use case requires it.\n\nUse cases:\n- `false`: To allow non-E164 numbers like `+001234567890`, `1234`, or `abc`. This is useful for dialing out to non-E164 numbers on your SIP trunks.\n- `true` (default): To allow only E164 numbers like `+14155551234`. This is standard for PSTN calls.\n\nIf `false`, the `number` is still required to only contain alphanumeric characters (regex: `/^\\+?[a-zA-Z0-9]+$/`).\n\n@default true (E164 check is enabled)", + "default": true + }, + "number": { "type": "string", - "description": "This is the SIP URI of the phone number. You can SIP INVITE this. The assistant attached to this number will answer.\n\nThis is case-insensitive." + "description": "This is the number of the customer.", + "minLength": 3, + "maxLength": 40 }, - "authentication": { - "description": "This enables authentication for incoming SIP INVITE requests to the `sipUri`.\n\nIf not set, any username/password to the 401 challenge of the SIP INVITE will be accepted.", - "allOf": [ - { - "$ref": "#/components/schemas/SipAuthentication" - } - ] + "credentialId": { + "type": "string", + "description": "This is the credential of your own SIP trunk or Carrier (type `byo-sip-trunk`) which can be used to make calls to this phone number.\n\nYou can add the SIP trunk or Carrier credential in the Provider Credentials page on the Dashboard to get the credentialId." }, "name": { "type": "string", @@ -12139,21 +16545,21 @@ "type": "string", "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. 
org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ "provider", - "sipUri" + "credentialId" ] }, - "BuyPhoneNumberDTO": { + "CreateTwilioPhoneNumberDTO": { "type": "object", "properties": { "fallbackDestination": { @@ -12169,11 +16575,24 @@ } ] }, - "areaCode": { + "provider": { "type": "string", - "description": "This is the area code of the phone number to purchase.", - "minLength": 3, - "maxLength": 3 + "description": "This is to use numbers bought on Twilio.", + "enum": [ + "twilio" + ] + }, + "number": { + "type": "string", + "description": "These are the digits of the phone number you own on your Twilio." + }, + "twilioAccountSid": { + "type": "string", + "description": "This is the Twilio Account SID for the phone number." + }, + "twilioAuthToken": { + "type": "string", + "description": "This is the Twilio Auth Token for the phone number." }, "name": { "type": "string", @@ -12188,20 +16607,23 @@ "type": "string", "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ - "areaCode" + "provider", + "number", + "twilioAccountSid", + "twilioAuthToken" ] }, - "ImportVonagePhoneNumberDTO": { + "CreateVonagePhoneNumberDTO": { "type": "object", "properties": { "fallbackDestination": { @@ -12217,14 +16639,20 @@ } ] }, - "vonagePhoneNumber": { + "provider": { "type": "string", - "description": "These are the digits of the phone number you own on your Vonage.", - "deprecated": true + "description": "This is to use numbers bought on Vonage.", + "enum": [ + "vonage" + ] + }, + "number": { + "type": "string", + "description": "These are the digits of the phone number you own on your Vonage." }, "credentialId": { "type": "string", - "description": "This is the credential that is used to make outgoing calls, and do operations like call transfer and hang up.\n\nYou can add the Vonage Credential in the Provider Credentials page on the dashboard to get the credentialId." + "description": "This is the credential that is used to make outgoing calls, and do operations like call transfer and hang up." }, "name": { "type": "string", @@ -12239,58 +16667,22 @@ "type": "string", "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. 
Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." - } - }, - "required": [ - "vonagePhoneNumber", - "credentialId" - ] - }, - "PhoneNumberPaginatedResponse": { - "type": "object", - "properties": { - "results": { - "type": "array", - "description": "A list of phone numbers, which can be of any provider type.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/ByoPhoneNumber" - }, - { - "$ref": "#/components/schemas/TwilioPhoneNumber" - }, - { - "$ref": "#/components/schemas/VonagePhoneNumber" - }, - { - "$ref": "#/components/schemas/VapiPhoneNumber" - } - ] - } - }, - "metadata": { - "description": "Metadata about the pagination.", + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", "allOf": [ { - "$ref": "#/components/schemas/PaginationMeta" + "$ref": "#/components/schemas/Server" } ] } }, "required": [ - "results", - "metadata" + "provider", + "number", + "credentialId" ] }, - "UpdatePhoneNumberDTO": { + "CreateVapiPhoneNumberDTO": { "type": "object", "properties": { "fallbackDestination": { @@ -12306,6 +16698,31 @@ } ] }, + "provider": { + "type": "string", + "description": "This is to create free SIP phone numbers on Vapi.", + "enum": [ + "vapi" + ] + }, + "numberDesiredAreaCode": { + "type": "string", + "description": "This is the area code of the phone number to purchase.", + "minLength": 3, + "maxLength": 3 + }, + "sipUri": { + "type": "string", + "description": "This is the SIP URI of the phone number. You can SIP INVITE this. The assistant attached to this number will answer.\n\nThis is case-insensitive." + }, + "authentication": { + "description": "This enables authentication for incoming SIP INVITE requests to the `sipUri`.\n\nIf not set, any username/password to the 401 challenge of the SIP INVITE will be accepted.", + "allOf": [ + { + "$ref": "#/components/schemas/SipAuthentication" + } + ] + }, "name": { "type": "string", "description": "This is the name of the phone number. This is just for your own reference.", @@ -12319,403 +16736,309 @@ "type": "string", "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "serverUrl": { - "type": "string", - "description": "This is the server URL where messages will be sent for calls on this number. This includes the `assistant-request` message.\n\nYou can see the shape of the messages sent in `ServerMessage`.\n\nThis overrides the `org.serverUrl`. 
Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret Vapi will send with every message to your server. It's sent as a header called x-vapi-secret.\n\nSame precedence logic as serverUrl." - } - } - }, - "AutoReloadPlan": { - "type": "object", - "properties": { - "credits": { - "type": "number", - "description": "This the amount of credits to reload." - }, - "threshold": { - "type": "number", - "description": "This is the limit at which the reload is triggered." + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ - "credits", - "threshold" + "provider" ] }, - "Subscription": { - "type": "object", - "properties": { - "id": { - "type": "string", - "description": "This is the unique identifier for the subscription." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the timestamp when the subscription was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the timestamp when the subscription was last updated." - }, - "type": { - "type": "string", - "description": "This is the type / tier of the subscription.", - "enum": [ - "trial", - "pay-as-you-go", - "enterprise" - ] - }, - "status": { - "type": "string", - "description": "This is the status of the subscription. Past due subscriptions are subscriptions\nwith past due payments.", - "enum": [ - "active", - "frozen" - ] - }, - "credits": { - "type": "string", - "description": "This is the number of credits the subscription currently has.\n\nNote: This is a string to avoid floating point precision issues." - }, - "concurrencyLimit": { - "type": "number", - "description": "This is the total concurrency limit for the subscription.", - "minimum": 10 - }, - "concurrencyLimitIncluded": { - "type": "number", - "description": "This is the default concurrency limit for the subscription." - }, - "concurrencyLimitPurchased": { - "type": "number", - "description": "This is the purchased add-on concurrency limit for the subscription." - }, - "monthlyChargeScheduleId": { - "type": "number", - "description": "This is the ID of the monthly job that charges for subscription add ons and phone numbers." - }, - "monthlyCreditCheckScheduleId": { - "type": "number", - "description": "This is the ID of the monthly job that checks whether the credit balance of the subscription\nis sufficient for the monthly charge." - }, - "stripeCustomerId": { - "type": "string", - "description": "This is the Stripe customer ID." - }, - "stripePaymentMethodId": { - "type": "string", - "description": "This is the Stripe payment ID." - }, - "slackSupportEnabled": { - "type": "boolean", - "description": "If this flag is true, then the user has purchased slack support." - }, - "slackChannelId": { - "type": "string", - "description": "If this subscription has a slack support subscription, the slack channel's ID will be stored here." + "UpdateByoPhoneNumberDTO": { + "type": "object", + "properties": { + "fallbackDestination": { + "description": "This is the fallback destination an inbound call will be transferred to if:\n1. `assistantId` is not set\n2. `squadId` is not set\n3. 
and, `assistant-request` message to the `serverUrl` fails\n\nIf this is not set and above conditions are met, the inbound call is hung up with an error message.", + "oneOf": [ + { + "$ref": "#/components/schemas/TransferDestinationNumber", + "title": "NumberTransferDestination" + }, + { + "$ref": "#/components/schemas/TransferDestinationSip", + "title": "SipTransferDestination" + } + ] }, - "hipaaEnabled": { + "numberE164CheckEnabled": { "type": "boolean", - "description": "This is the HIPAA enabled flag for the subscription. It determines whether orgs under this\nsubscription have the option to enable HIPAA compliance." - }, - "hipaaCommonPaperAgreementId": { - "type": "string", - "description": "This is the ID for the Common Paper agreement outlining the HIPAA contract." + "description": "This is the flag to toggle the E164 check for the `number` field. This is an advanced property which should be used if you know your use case requires it.\n\nUse cases:\n- `false`: To allow non-E164 numbers like `+001234567890`, `1234`, or `abc`. This is useful for dialing out to non-E164 numbers on your SIP trunks.\n- `true` (default): To allow only E164 numbers like `+14155551234`. This is standard for PSTN calls.\n\nIf `false`, the `number` is still required to only contain alphanumeric characters (regex: `/^\\+?[a-zA-Z0-9]+$/`).\n\n@default true (E164 check is enabled)", + "default": true }, - "stripePaymentMethodFingerprint": { + "name": { "type": "string", - "description": "This is the Stripe fingerprint of the payment method (card). It allows us\nto detect users who try to abuse our system through multiple sign-ups." + "description": "This is the name of the phone number. This is just for your own reference.", + "maxLength": 40 }, - "stripeCustomerEmail": { + "assistantId": { "type": "string", - "description": "This is the stripe customer's email." + "description": "This is the assistant that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "referredByEmail": { + "squadId": { "type": "string", - "description": "This is the email of the referrer for the subscription." + "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "autoReloadPlan": { - "description": "This is the auto reload plan configured for the subscription.", + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", "allOf": [ { - "$ref": "#/components/schemas/AutoReloadPlan" + "$ref": "#/components/schemas/Server" } ] }, - "minutesIncluded": { - "type": "number", - "description": "The number of minutes included in the subscription. Enterprise only." - }, - "minutesUsed": { - "type": "number", - "description": "The number of minutes used in the subscription. Enterprise only." - }, - "minutesOverageCost": { - "type": "number", - "description": "The per minute charge on minutes that exceed the included minutes. Enterprise only." 
- }, - "providersIncluded": { - "description": "The list of providers included in the subscription. Enterprise only.", - "type": "array", - "items": { - "type": "string" - } - }, - "outboundCallsDailyLimit": { - "type": "number", - "description": "The maximum number of outbound calls this subscription may make in a day. Resets every night.", - "minimum": 10 - }, - "outboundCallsCounter": { - "type": "number", - "description": "The current number of outbound calls the subscription has made in the current day.", - "minimum": 0 - }, - "outboundCallsCounterNextResetAt": { - "format": "date-time", + "number": { "type": "string", - "description": "This is the timestamp at which the outbound calls counter is scheduled to reset at." - }, - "couponIds": { - "description": "This is the IDs of the coupons applicable to this subscription.", - "type": "array", - "items": { - "type": "string" - } + "description": "This is the number of the customer.", + "minLength": 3, + "maxLength": 40 }, - "couponUsageLeft": { + "credentialId": { "type": "string", - "description": "This is the number of credits left obtained from a coupon." + "description": "This is the credential of your own SIP trunk or Carrier (type `byo-sip-trunk`) which can be used to make calls to this phone number.\n\nYou can add the SIP trunk or Carrier credential in the Provider Credentials page on the Dashboard to get the credentialId." } - }, - "required": [ - "id", - "createdAt", - "updatedAt", - "type", - "status", - "credits", - "concurrencyLimit", - "concurrencyLimitIncluded", - "concurrencyLimitPurchased" - ] + } }, - "Payment": { + "UpdateTwilioPhoneNumberDTO": { "type": "object", "properties": { - "id": { - "type": "string", - "description": "Unique identifier for the payment" + "fallbackDestination": { + "description": "This is the fallback destination an inbound call will be transferred to if:\n1. `assistantId` is not set\n2. `squadId` is not set\n3. and, `assistant-request` message to the `serverUrl` fails\n\nIf this is not set and above conditions are met, the inbound call is hung up with an error message.", + "oneOf": [ + { + "$ref": "#/components/schemas/TransferDestinationNumber", + "title": "NumberTransferDestination" + }, + { + "$ref": "#/components/schemas/TransferDestinationSip", + "title": "SipTransferDestination" + } + ] }, - "orgId": { + "name": { "type": "string", - "description": "Unique identifier for the organization" + "description": "This is the name of the phone number. This is just for your own reference.", + "maxLength": 40 }, - "cost": { + "assistantId": { "type": "string", - "description": "This is the total cost of the payment, which is the sum of all the costs in the costs object.\n\nNote: this is a string to avoid floating point precision issues." - }, - "costs": { - "description": "The different costs for the payment.", - "type": "array", - "items": { - "type": "object" - } + "description": "This is the assistant that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." 
}, - "status": { + "squadId": { "type": "string", - "description": "Status of the payment", - "enum": [ - "past-due", - "pending", - "finalized", - "refunded" + "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." + }, + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } ] }, - "createdAt": { - "format": "date-time", + "number": { "type": "string", - "description": "Timestamp when the payment was created" + "description": "These are the digits of the phone number you own on your Twilio." }, - "updatedAt": { - "format": "date-time", + "twilioAccountSid": { "type": "string", - "description": "Timestamp when the payment was last updated" + "description": "This is the Twilio Account SID for the phone number." }, - "isAutoReload": { - "type": "boolean", - "description": "Is the payment an auto reload payment" + "twilioAuthToken": { + "type": "string", + "description": "This is the Twilio Auth Token for the phone number." + } + } + }, + "UpdateVonagePhoneNumberDTO": { + "type": "object", + "properties": { + "fallbackDestination": { + "description": "This is the fallback destination an inbound call will be transferred to if:\n1. `assistantId` is not set\n2. `squadId` is not set\n3. and, `assistant-request` message to the `serverUrl` fails\n\nIf this is not set and above conditions are met, the inbound call is hung up with an error message.", + "oneOf": [ + { + "$ref": "#/components/schemas/TransferDestinationNumber", + "title": "NumberTransferDestination" + }, + { + "$ref": "#/components/schemas/TransferDestinationSip", + "title": "SipTransferDestination" + } + ] }, - "subscriptionId": { + "name": { "type": "string", - "description": "Unique identifier of the associated subscription" + "description": "This is the name of the phone number. This is just for your own reference.", + "maxLength": 40 }, - "callId": { + "assistantId": { "type": "string", - "description": "Unique identifier for the call" + "description": "This is the assistant that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "phoneNumberId": { + "squadId": { "type": "string", - "description": "Unique identifier of the associated phone number" + "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." + }, + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. 
org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, - "stripePaymentIntentId": { + "number": { "type": "string", - "description": "Unique identifier of the associated stripe payment intent" + "description": "These are the digits of the phone number you own on your Vonage." }, - "stripeInvoiceId": { + "credentialId": { "type": "string", - "description": "Unique identifier of the associated stripe invoice" + "description": "This is the credential that is used to make outgoing calls, and do operations like call transfer and hang up." } - }, - "required": [ - "id", - "cost", - "costs", - "status", - "createdAt", - "updatedAt", - "isAutoReload", - "subscriptionId" - ] + } }, - "PaymentsPaginatedResponse": { + "UpdateVapiPhoneNumberDTO": { "type": "object", "properties": { - "results": { - "type": "array", - "items": { - "$ref": "#/components/schemas/Payment" - } + "fallbackDestination": { + "description": "This is the fallback destination an inbound call will be transferred to if:\n1. `assistantId` is not set\n2. `squadId` is not set\n3. and, `assistant-request` message to the `serverUrl` fails\n\nIf this is not set and above conditions are met, the inbound call is hung up with an error message.", + "oneOf": [ + { + "$ref": "#/components/schemas/TransferDestinationNumber", + "title": "NumberTransferDestination" + }, + { + "$ref": "#/components/schemas/TransferDestinationSip", + "title": "SipTransferDestination" + } + ] }, - "metadata": { - "$ref": "#/components/schemas/PaginationMeta" - } - }, - "required": [ - "results", - "metadata" - ] - }, - "SubscriptionMonthlyCharge": { - "type": "object", - "properties": { - "monthlyCharge": { - "type": "number", - "description": "This is the monthly charge for the subscription." + "name": { + "type": "string", + "description": "This is the name of the phone number. This is just for your own reference.", + "maxLength": 40 }, - "costs": { - "description": "These are the different costs that make up the monthly charge.", - "type": "array", - "items": { - "type": "object" - } - } - }, - "required": [ - "monthlyCharge", - "costs" - ] - }, - "CreditsBuyDTO": { - "type": "object", - "properties": { - "credits": { - "type": "number", - "description": "This is the number of credits to add to the subscription." - } - }, - "required": [ - "credits" - ] - }, - "AutoReloadPlanDTO": { - "type": "object", - "properties": { - "autoReloadPlan": { - "description": "This is the auto reload plan to be configured for the subscription.\nIt can be null if no auto reload plan is set.", + "assistantId": { + "type": "string", + "description": "This is the assistant that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." + }, + "squadId": { + "type": "string", + "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." + }, + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. 
org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + }, + "sipUri": { + "type": "string", + "description": "This is the SIP URI of the phone number. You can SIP INVITE this. The assistant attached to this number will answer.\n\nThis is case-insensitive." + }, + "authentication": { + "description": "This enables authentication for incoming SIP INVITE requests to the `sipUri`.\n\nIf not set, any username/password to the 401 challenge of the SIP INVITE will be accepted.", "allOf": [ { - "$ref": "#/components/schemas/AutoReloadPlan" + "$ref": "#/components/schemas/SipAuthentication" } ] } } }, - "PaymentRetryDTO": { + "ImportVonagePhoneNumberDTO": { "type": "object", "properties": { - "paymentId": { + "fallbackDestination": { + "description": "This is the fallback destination an inbound call will be transferred to if:\n1. `assistantId` is not set\n2. `squadId` is not set\n3. and, `assistant-request` message to the `serverUrl` fails\n\nIf this is not set and above conditions are met, the inbound call is hung up with an error message.", + "oneOf": [ + { + "$ref": "#/components/schemas/TransferDestinationNumber", + "title": "NumberTransferDestination" + }, + { + "$ref": "#/components/schemas/TransferDestinationSip", + "title": "SipTransferDestination" + } + ] + }, + "vonagePhoneNumber": { "type": "string", - "description": "This is the payment ID to retry." - } - }, - "required": [ - "paymentId" - ] - }, - "SubscriptionConcurrencyLineBuyDTO": { - "type": "object", - "properties": { - "quantity": { - "type": "number", - "description": "This is the number of concurrency lines to purchase." - } - }, - "required": [ - "quantity" - ] - }, - "SubscriptionConcurrencyLineRemoveDTO": { - "type": "object", - "properties": { - "quantity": { - "type": "number", - "description": "This is the number of concurrency lines to remove." - } - }, - "required": [ - "quantity" - ] - }, - "HipaaBuyDTO": { - "type": "object", - "properties": { - "recipientName": { + "description": "These are the digits of the phone number you own on your Vonage.", + "deprecated": true + }, + "credentialId": { + "type": "string", + "description": "This is the credential that is used to make outgoing calls, and do operations like call transfer and hang up.\n\nYou can add the Vonage Credential in the Provider Credentials page on the dashboard to get the credentialId." + }, + "name": { + "type": "string", + "description": "This is the name of the phone number. This is just for your own reference.", + "maxLength": 40 + }, + "assistantId": { "type": "string", - "description": "This is the name of the recipient." + "description": "This is the assistant that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." }, - "recipientOrganization": { + "squadId": { "type": "string", - "description": "This is the name of the recipient organization." + "description": "This is the squad that will be used for incoming calls to this phone number.\n\nIf neither `assistantId` nor `squadId` is set, `assistant-request` will be sent to your Server URL. Check `ServerMessage` and `ServerMessageResponse` for the shape of the message and response that is expected." + }, + "server": { + "description": "This is where Vapi will send webhooks. 
You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ - "recipientName", - "recipientOrganization" + "vonagePhoneNumber", + "credentialId" ] }, - "SubscriptionCouponAddDTO": { + "PhoneNumberPaginatedResponse": { "type": "object", "properties": { - "orgId": { - "type": "string", - "description": "This is the ID of the org within the subscription which the coupon will take effect on." + "results": { + "type": "array", + "description": "A list of phone numbers, which can be of any provider type.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ByoPhoneNumber" + }, + { + "$ref": "#/components/schemas/TwilioPhoneNumber" + }, + { + "$ref": "#/components/schemas/VonagePhoneNumber" + }, + { + "$ref": "#/components/schemas/VapiPhoneNumber" + } + ] + } }, - "couponCode": { - "type": "string", - "description": "This is the code of the coupon to apply to the subscription." + "metadata": { + "description": "Metadata about the pagination.", + "allOf": [ + { + "$ref": "#/components/schemas/PaginationMeta" + } + ] } }, "required": [ - "orgId", - "couponCode" + "results", + "metadata" ] }, "Squad": { @@ -12794,9 +17117,13 @@ "members" ] }, - "TrieveKnowledgeBaseVectorStoreSearchPlan": { + "TrieveKnowledgeBaseSearchPlan": { "type": "object", "properties": { + "topK": { + "type": "number", + "description": "Specifies the number of top chunks to return. This corresponds to the `page_size` parameter in Trieve." + }, "removeStopWords": { "type": "boolean", "description": "If true, stop words (specified in server/src/stop-words.txt in the git repo) will be removed. This will preserve queries that are entirely stop words." @@ -12820,36 +17147,6 @@ "searchType" ] }, - "TrieveKnowledgeBaseVectorStoreCreatePlan": { - "type": "object", - "properties": { - "fileIds": { - "description": "These are the file ids that will be used to create the vector store. To upload files, use the `POST /files` endpoint.", - "type": "array", - "items": { - "type": "string" - } - }, - "targetSplitsPerChunk": { - "type": "number", - "description": "This is an optional field which allows you to specify the number of splits you want per chunk. If not specified, the default 20 is used. However, you may want to use a different number." - }, - "splitDelimiters": { - "description": "This is an optional field which allows you to specify the delimiters to use when splitting the file before chunking the text. If not specified, the default [.!?\\n] are used to split into sentences. However, you may want to use spaces or other delimiters.", - "type": "array", - "items": { - "type": "string" - } - }, - "rebalanceChunks": { - "type": "boolean", - "description": "This is an optional field which allows you to specify whether or not to rebalance the chunks created from the file. If not specified, the default true is used. If true, Trieve will evenly distribute remainder splits across chunks such that 66 splits with a target_splits_per_chunk of 20 will result in 3 chunks with 22 splits each." - } - }, - "required": [ - "fileIds" - ] - }, "TrieveKnowledgeBase": { "type": "object", "properties": { @@ -12864,26 +17161,27 @@ "type": "string", "description": "This is the name of the knowledge base." 
}, - "vectorStoreSearchPlan": { - "description": "This is the plan on how to search the vector store while a call is going on.", + "searchPlan": { + "description": "This is the searching plan used when searching for relevant chunks from the vector store.\n\nYou should configure this if you're running into these issues:\n- Too much unnecessary context is being fed as knowledge base context.\n- Not enough relevant context is being fed as knowledge base context.", "allOf": [ { - "$ref": "#/components/schemas/TrieveKnowledgeBaseVectorStoreSearchPlan" + "$ref": "#/components/schemas/TrieveKnowledgeBaseSearchPlan" } ] }, - "vectorStoreCreatePlan": { - "description": "This is the plan if you want us to create a new vector store on your behalf. To use an existing vector store from your account, use `vectoreStoreProviderId`", - "allOf": [ + "createPlan": { + "description": "This is the plan if you want us to create/import a new vector store using Trieve.", + "oneOf": [ { - "$ref": "#/components/schemas/TrieveKnowledgeBaseVectorStoreCreatePlan" + "$ref": "#/components/schemas/TrieveKnowledgeBaseCreate", + "title": "Create" + }, + { + "$ref": "#/components/schemas/TrieveKnowledgeBaseImport", + "title": "Import" } ] }, - "vectorStoreProviderId": { - "type": "string", - "description": "This is an vector store that you already have on your account with the provider. To create a new vector store, use vectorStoreCreatePlan.\n\nUsage:\n- To bring your own vector store from Trieve, go to https://trieve.ai\n- Create a dataset, and use the datasetId here." - }, "id": { "type": "string", "description": "This is the id of the knowledge base." @@ -12895,7 +17193,6 @@ }, "required": [ "provider", - "vectorStoreSearchPlan", "id", "orgId" ] @@ -12911,7 +17208,7 @@ ] }, "server": { - "description": "/**\nThis is where the knowledge base request will be sent.\n\nRequest Example:\n\nPOST https://{server.url}\nContent-Type: application/json\n\n{\n \"messsage\": {\n \"type\": \"knowledge-base-request\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"Why is ocean blue?\"\n }\n ],\n ...other metadata about the call...\n }\n}\n\nResponse Expected:\n```\n{\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"The ocean is blue because water absorbs everything but blue.\",\n }, // YOU CAN RETURN THE EXACT RESPONSE TO SPEAK\n \"documents\": [\n {\n \"content\": \"The ocean is blue primarily because water absorbs colors in the red part of the light spectrum and scatters the blue light, making it more visible to our eyes.\",\n \"similarity\": 1\n },\n {\n \"content\": \"Blue light is scattered more by the water molecules than other colors, enhancing the blue appearance of the ocean.\",\n \"similarity\": .5\n }\n ] // OR, YOU CAN RETURN AN ARRAY OF DOCUMENTS THAT WILL BE SENT TO THE MODEL\n}\n```", + "description": "This is where the knowledge base request will be sent.\n\nRequest Example:\n\nPOST https://{server.url}\nContent-Type: application/json\n\n{\n \"message\": {\n \"type\": \"knowledge-base-request\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"Why is ocean blue?\"\n }\n ],\n ...other metadata about the call...\n }\n}\n\nResponse Expected:\n```\n{\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"The ocean is blue because water absorbs everything but blue.\",\n }, // YOU CAN RETURN THE EXACT RESPONSE TO SPEAK\n \"documents\": [\n {\n \"content\": \"The ocean is blue primarily because water absorbs colors in the red part of the light spectrum and scatters the blue light, 
making it more visible to our eyes.\",\n \"similarity\": 1\n },\n {\n \"content\": \"Blue light is scattered more by the water molecules than other colors, enhancing the blue appearance of the ocean.\",\n \"similarity\": .5\n }\n ] // OR, YOU CAN RETURN AN ARRAY OF DOCUMENTS THAT WILL BE SENT TO THE MODEL\n}\n```", "allOf": [ { "$ref": "#/components/schemas/Server" @@ -12922,56 +17219,176 @@ "type": "string", "description": "This is the id of the knowledge base." }, - "orgId": { - "type": "string", - "description": "This is the org id of the knowledge base." + "orgId": { + "type": "string", + "description": "This is the org id of the knowledge base." + } + }, + "required": [ + "provider", + "server", + "id", + "orgId" + ] + }, + "CreateTrieveKnowledgeBaseDTO": { + "type": "object", + "properties": { + "provider": { + "type": "string", + "description": "This knowledge base is provided by Trieve.\n\nTo learn more about Trieve, visit https://trieve.ai.", + "enum": [ + "trieve" + ] + }, + "name": { + "type": "string", + "description": "This is the name of the knowledge base." + }, + "searchPlan": { + "description": "This is the searching plan used when searching for relevant chunks from the vector store.\n\nYou should configure this if you're running into these issues:\n- Too much unnecessary context is being fed as knowledge base context.\n- Not enough relevant context is being fed as knowledge base context.", + "allOf": [ + { + "$ref": "#/components/schemas/TrieveKnowledgeBaseSearchPlan" + } + ] + }, + "createPlan": { + "description": "This is the plan if you want us to create/import a new vector store using Trieve.", + "oneOf": [ + { + "$ref": "#/components/schemas/TrieveKnowledgeBaseCreate", + "title": "Create" + }, + { + "$ref": "#/components/schemas/TrieveKnowledgeBaseImport", + "title": "Import" + } + ] + } + }, + "required": [ + "provider" + ] + }, + "UpdateTrieveKnowledgeBaseDTO": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "This is the name of the knowledge base." 
+ }, + "searchPlan": { + "description": "This is the searching plan used when searching for relevant chunks from the vector store.\n\nYou should configure this if you're running into these issues:\n- Too much unnecessary context is being fed as knowledge base context.\n- Not enough relevant context is being fed as knowledge base context.", + "allOf": [ + { + "$ref": "#/components/schemas/TrieveKnowledgeBaseSearchPlan" + } + ] + }, + "createPlan": { + "description": "This is the plan if you want us to create/import a new vector store using Trieve.", + "oneOf": [ + { + "$ref": "#/components/schemas/TrieveKnowledgeBaseCreate", + "title": "Create" + }, + { + "$ref": "#/components/schemas/TrieveKnowledgeBaseImport", + "title": "Import" + } + ] + } + } + }, + "UpdateCustomKnowledgeBaseDTO": { + "type": "object", + "properties": { + "server": { + "description": "This is where the knowledge base request will be sent.\n\nRequest Example:\n\nPOST https://{server.url}\nContent-Type: application/json\n\n{\n \"message\": {\n \"type\": \"knowledge-base-request\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"Why is ocean blue?\"\n }\n ],\n ...other metadata about the call...\n }\n}\n\nResponse Expected:\n```\n{\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"The ocean is blue because water absorbs everything but blue.\",\n }, // YOU CAN RETURN THE EXACT RESPONSE TO SPEAK\n \"documents\": [\n {\n \"content\": \"The ocean is blue primarily because water absorbs colors in the red part of the light spectrum and scatters the blue light, making it more visible to our eyes.\",\n \"similarity\": 1\n },\n {\n \"content\": \"Blue light is scattered more by the water molecules than other colors, enhancing the blue appearance of the ocean.\",\n \"similarity\": .5\n }\n ] // OR, YOU CAN RETURN AN ARRAY OF DOCUMENTS THAT WILL BE SENT TO THE MODEL\n}\n```", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + } + } + }, + "TrieveKnowledgeBaseChunkPlan": { + "type": "object", + "properties": { + "fileIds": { + "description": "These are the file ids that will be used to create the vector store. To upload files, use the `POST /files` endpoint.", + "type": "array", + "items": { + "type": "string" + } + }, + "websites": { + "description": "These are the websites that will be used to create the vector store.", + "type": "array", + "items": { + "type": "string" + } + }, + "targetSplitsPerChunk": { + "type": "number", + "description": "This is an optional field which allows you to specify the number of splits you want per chunk. If not specified, the default 20 is used. However, you may want to use a different number." + }, + "splitDelimiters": { + "description": "This is an optional field which allows you to specify the delimiters to use when splitting the file before chunking the text. If not specified, the default [.!?\\n] are used to split into sentences. However, you may want to use spaces or other delimiters.", + "type": "array", + "items": { + "type": "string" + } + }, + "rebalanceChunks": { + "type": "boolean", + "description": "This is an optional field which allows you to specify whether or not to rebalance the chunks created from the file. If not specified, the default true is used. If true, Trieve will evenly distribute remainder splits across chunks such that 66 splits with a target_splits_per_chunk of 20 will result in 3 chunks with 22 splits each." 
+ } + } + }, + "TrieveKnowledgeBaseCreate": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is to create a new dataset on Trieve.", + "enum": [ + "create" + ] + }, + "chunkPlans": { + "description": "These are the chunk plans used to create the dataset.", + "type": "array", + "items": { + "$ref": "#/components/schemas/TrieveKnowledgeBaseChunkPlan" + } } }, "required": [ - "provider", - "server", - "id", - "orgId" + "type", + "chunkPlans" ] }, - "CreateTrieveKnowledgeBaseDTO": { + "TrieveKnowledgeBaseImport": { "type": "object", "properties": { - "provider": { + "type": { "type": "string", - "description": "This knowledge base is provided by Trieve.\n\nTo learn more about Trieve, visit https://trieve.ai.", + "description": "This is to import an existing dataset from Trieve.", "enum": [ - "trieve" + "import" ] }, - "name": { - "type": "string", - "description": "This is the name of the knowledge base." - }, - "vectorStoreSearchPlan": { - "description": "This is the plan on how to search the vector store while a call is going on.", - "allOf": [ - { - "$ref": "#/components/schemas/TrieveKnowledgeBaseVectorStoreSearchPlan" - } - ] - }, - "vectorStoreCreatePlan": { - "description": "This is the plan if you want us to create a new vector store on your behalf. To use an existing vector store from your account, use `vectoreStoreProviderId`", - "allOf": [ - { - "$ref": "#/components/schemas/TrieveKnowledgeBaseVectorStoreCreatePlan" - } - ] - }, - "vectorStoreProviderId": { + "providerId": { "type": "string", - "description": "This is an vector store that you already have on your account with the provider. To create a new vector store, use vectorStoreCreatePlan.\n\nUsage:\n- To bring your own vector store from Trieve, go to https://trieve.ai\n- Create a dataset, and use the datasetId here." + "description": "This is the `datasetId` of the dataset on your Trieve account." } }, "required": [ - "provider", - "vectorStoreSearchPlan" + "type", + "providerId" ] }, "ConversationBlock": { @@ -13621,7 +18038,7 @@ "type" ] }, - "UpdateBlockDTO": { + "UpdateConversationBlockDTO": { "type": "object", "properties": { "messages": { @@ -13656,403 +18073,153 @@ } ] }, - "tool": { - "description": "This is the tool that the block will call. To use an existing tool, use `toolId`.", - "oneOf": [ - { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" - }, - { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" - }, - { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" - }, - { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" - }, - { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" - }, - { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" - }, - { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferCallTool" - } - ] - }, - "steps": { - "type": "array", - "description": "These are the steps in the workflow.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/HandoffStep", - "title": "HandoffStep" - }, - { - "$ref": "#/components/schemas/CallbackStep", - "title": "CallbackStep" - } - ] - } - }, "name": { "type": "string", "description": "This is the name of the block. This is just for your reference." 
}, - "instruction": { - "type": "string", - "description": "This is the instruction to the model.\n\nYou can reference any variable in the context of the current block execution (step):\n- \"{{input.your-property-name}}\" for the current step's input\n- \"{{your-step-name.output.your-property-name}}\" for another step's output (in the same workflow; read caveat #1)\n- \"{{your-step-name.input.your-property-name}}\" for another step's input (in the same workflow; read caveat #1)\n- \"{{your-block-name.output.your-property-name}}\" for another block's output (in the same workflow; read caveat #2)\n- \"{{your-block-name.input.your-property-name}}\" for another block's input (in the same workflow; read caveat #2)\n- \"{{workflow.input.your-property-name}}\" for the current workflow's input\n- \"{{global.your-property-name}}\" for the global context\n\nThis can be as simple or as complex as you want it to be.\n- \"say hello and ask the user about their day!\"\n- \"collect the user's first and last name\"\n- \"user is {{input.firstName}} {{input.lastName}}. their age is {{input.age}}. ask them about their salary and if they might be interested in buying a house. we offer {{input.offer}}\"\n\nCaveats:\n1. a workflow can execute a step multiple times. example, if a loop is used in the graph. {{stepName.output/input.propertyName}} will reference the latest usage of the step.\n2. a workflow can execute a block multiple times. example, if a step is called multiple times or if a block is used in multiple steps. {{blockName.output/input.propertyName}} will reference the latest usage of the block. this liquid variable is just provided for convenience when creating blocks outside of a workflow with steps.", - "minLength": 1 - }, - "toolId": { - "type": "string", - "description": "This is the id of the tool that the block will call. To use a transient tool, use `tool`." - } - } - }, - "DtmfTool": { - "type": "object", - "properties": { - "async": { - "type": "boolean", - "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", - "example": false - }, - "messages": { - "type": "array", - "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" - }, - { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" - }, - { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" - }, - { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" - } - ] - } - }, - "type": { - "type": "string", - "enum": [ - "dtmf" - ], - "description": "The type of tool. \"dtmf\" for DTMF tool." - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the tool." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the organization that this tool belongs to." 
- }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the tool was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the tool was last updated." - }, - "function": { - "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", - "allOf": [ - { - "$ref": "#/components/schemas/OpenAIFunction" - } - ] - }, - "server": { - "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", - "allOf": [ - { - "$ref": "#/components/schemas/Server" - } - ] - } - }, - "required": [ - "type", - "id", - "orgId", - "createdAt", - "updatedAt" - ] - }, - "EndCallTool": { - "type": "object", - "properties": { - "async": { - "type": "boolean", - "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", - "example": false - }, - "messages": { - "type": "array", - "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" - }, - { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" - }, - { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" - }, - { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" - } - ] - } - }, - "type": { - "type": "string", - "enum": [ - "endCall" - ], - "description": "The type of tool. \"endCall\" for End Call tool." - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the tool." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the organization that this tool belongs to." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the tool was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the tool was last updated." 
- }, - "function": { - "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", - "allOf": [ - { - "$ref": "#/components/schemas/OpenAIFunction" - } - ] - }, - "server": { - "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", - "allOf": [ - { - "$ref": "#/components/schemas/Server" - } - ] - } - }, - "required": [ - "type", - "id", - "orgId", - "createdAt", - "updatedAt" - ] + "instruction": { + "type": "string", + "description": "This is the instruction to the model.\n\nYou can reference any variable in the context of the current block execution (step):\n- \"{{input.your-property-name}}\" for the current step's input\n- \"{{your-step-name.output.your-property-name}}\" for another step's output (in the same workflow; read caveat #1)\n- \"{{your-step-name.input.your-property-name}}\" for another step's input (in the same workflow; read caveat #1)\n- \"{{your-block-name.output.your-property-name}}\" for another block's output (in the same workflow; read caveat #2)\n- \"{{your-block-name.input.your-property-name}}\" for another block's input (in the same workflow; read caveat #2)\n- \"{{workflow.input.your-property-name}}\" for the current workflow's input\n- \"{{global.your-property-name}}\" for the global context\n\nThis can be as simple or as complex as you want it to be.\n- \"say hello and ask the user about their day!\"\n- \"collect the user's first and last name\"\n- \"user is {{input.firstName}} {{input.lastName}}. their age is {{input.age}}. ask them about their salary and if they might be interested in buying a house. we offer {{input.offer}}\"\n\nCaveats:\n1. a workflow can execute a step multiple times. example, if a loop is used in the graph. {{stepName.output/input.propertyName}} will reference the latest usage of the step.\n2. a workflow can execute a block multiple times. example, if a step is called multiple times or if a block is used in multiple steps. {{blockName.output/input.propertyName}} will reference the latest usage of the block. this liquid variable is just provided for convenience when creating blocks outside of a workflow with steps.", + "minLength": 1 + } + } }, - "FunctionTool": { + "UpdateToolCallBlockDTO": { "type": "object", "properties": { - "async": { - "type": "boolean", - "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. 
This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", - "example": false - }, "messages": { "type": "array", - "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "description": "These are the pre-configured messages that will be spoken to the user while the block is running.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" - }, - { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" - }, - { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" + "$ref": "#/components/schemas/BlockStartMessage", + "title": "BlockStartMessage" }, { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" + "$ref": "#/components/schemas/BlockCompleteMessage", + "title": "BlockCompleteMessage" } ] } }, - "type": { - "type": "string", - "enum": [ - "function" - ], - "description": "The type of tool. \"function\" for Function tool." - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the tool." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the organization that this tool belongs to." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the tool was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the tool was last updated." - }, - "function": { - "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "inputSchema": { + "description": "This is the input schema for the block. This is the input the block needs to run. It's given to the block as `steps[0].input`\n\nThese are accessible as variables:\n- ({{input.propertyName}}) in context of the block execution (step)\n- ({{stepName.input.propertyName}}) in context of the workflow", "allOf": [ { - "$ref": "#/components/schemas/OpenAIFunction" + "$ref": "#/components/schemas/JsonSchema" } ] }, - "server": { - "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "outputSchema": { + "description": "This is the output schema for the block. 
This is the output the block will return to the workflow (`{{stepName.output}}`).\n\nThese are accessible as variables:\n- ({{output.propertyName}}) in context of the block execution (step)\n- ({{stepName.output.propertyName}}) in context of the workflow (read caveat #1)\n- ({{blockName.output.propertyName}}) in context of the workflow (read caveat #2)\n\nCaveats:\n1. a workflow can execute a step multiple times. example, if a loop is used in the graph. {{stepName.output.propertyName}} will reference the latest usage of the step.\n2. a workflow can execute a block multiple times. example, if a step is called multiple times or if a block is used in multiple steps. {{blockName.output.propertyName}} will reference the latest usage of the block. this liquid variable is just provided for convenience when creating blocks outside of a workflow with steps.", "allOf": [ { - "$ref": "#/components/schemas/Server" + "$ref": "#/components/schemas/JsonSchema" + } + ] + }, + "tool": { + "description": "This is the tool that the block will call. To use an existing tool, use `toolId`.", + "oneOf": [ + { + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" + }, + { + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" + }, + { + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" + }, + { + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferCallTool" } ] + }, + "name": { + "type": "string", + "description": "This is the name of the block. This is just for your reference." + }, + "toolId": { + "type": "string", + "description": "This is the id of the tool that the block will call. To use a transient tool, use `tool`." } - }, - "required": [ - "type", - "id", - "orgId", - "createdAt", - "updatedAt" - ] + } }, - "GhlTool": { + "UpdateWorkflowBlockDTO": { "type": "object", "properties": { - "async": { - "type": "boolean", - "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", - "example": false - }, "messages": { "type": "array", - "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. 
For others like the function tool, these can be custom configured.", + "description": "These are the pre-configured messages that will be spoken to the user while the block is running.", "items": { "oneOf": [ { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" - }, - { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" - }, - { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" + "$ref": "#/components/schemas/BlockStartMessage", + "title": "BlockStartMessage" }, { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" + "$ref": "#/components/schemas/BlockCompleteMessage", + "title": "BlockCompleteMessage" } ] } }, - "type": { - "type": "string", - "enum": [ - "ghl" - ], - "description": "The type of tool. \"ghl\" for GHL tool." - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the tool." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the organization that this tool belongs to." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the tool was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the tool was last updated." - }, - "function": { - "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "inputSchema": { + "description": "This is the input schema for the block. This is the input the block needs to run. It's given to the block as `steps[0].input`\n\nThese are accessible as variables:\n- ({{input.propertyName}}) in context of the block execution (step)\n- ({{stepName.input.propertyName}}) in context of the workflow", "allOf": [ { - "$ref": "#/components/schemas/OpenAIFunction" + "$ref": "#/components/schemas/JsonSchema" } ] }, - "server": { - "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "outputSchema": { + "description": "This is the output schema for the block. This is the output the block will return to the workflow (`{{stepName.output}}`).\n\nThese are accessible as variables:\n- ({{output.propertyName}}) in context of the block execution (step)\n- ({{stepName.output.propertyName}}) in context of the workflow (read caveat #1)\n- ({{blockName.output.propertyName}}) in context of the workflow (read caveat #2)\n\nCaveats:\n1. a workflow can execute a step multiple times. example, if a loop is used in the graph. 
{{stepName.output.propertyName}} will reference the latest usage of the step.\n2. a workflow can execute a block multiple times. example, if a step is called multiple times or if a block is used in multiple steps. {{blockName.output.propertyName}} will reference the latest usage of the block. this liquid variable is just provided for convenience when creating blocks outside of a workflow with steps.", "allOf": [ { - "$ref": "#/components/schemas/Server" + "$ref": "#/components/schemas/JsonSchema" } ] }, - "metadata": { - "$ref": "#/components/schemas/GhlToolMetadata" + "steps": { + "type": "array", + "description": "These are the steps in the workflow.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/HandoffStep", + "title": "HandoffStep" + }, + { + "$ref": "#/components/schemas/CallbackStep", + "title": "CallbackStep" + } + ] + } + }, + "name": { + "type": "string", + "description": "This is the name of the block. This is just for your reference." } - }, - "required": [ - "type", - "id", - "orgId", - "createdAt", - "updatedAt", - "metadata" - ] + } }, - "MakeTool": { + "DtmfTool": { "type": "object", "properties": { "async": { @@ -14087,9 +18254,9 @@ "type": { "type": "string", "enum": [ - "make" + "dtmf" ], - "description": "The type of tool. \"make\" for Make tool." + "description": "The type of tool. \"dtmf\" for DTMF tool." }, "id": { "type": "string", @@ -14124,9 +18291,6 @@ "$ref": "#/components/schemas/Server" } ] - }, - "metadata": { - "$ref": "#/components/schemas/MakeToolMetadata" } }, "required": [ @@ -14134,11 +18298,10 @@ "id", "orgId", "createdAt", - "updatedAt", - "metadata" + "updatedAt" ] }, - "TransferCallTool": { + "EndCallTool": { "type": "object", "properties": { "async": { @@ -14151,55 +18314,32 @@ "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", "items": { "oneOf": [ - { - "$ref": "#/components/schemas/ToolMessageStart", - "title": "ToolMessageStart" - }, - { - "$ref": "#/components/schemas/ToolMessageComplete", - "title": "ToolMessageComplete" - }, - { - "$ref": "#/components/schemas/ToolMessageFailed", - "title": "ToolMessageFailed" - }, - { - "$ref": "#/components/schemas/ToolMessageDelayed", - "title": "ToolMessageDelayed" - } - ] - } - }, - "type": { - "type": "string", - "enum": [ - "transferCall" - ] - }, - "destinations": { - "type": "array", - "description": "These are the destinations that the call can be transferred to. If no destinations are provided, server.url will be used to get the transfer destination once the tool is called.", - "items": { - "oneOf": [ - { - "$ref": "#/components/schemas/TransferDestinationAssistant", - "title": "Assistant" + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" }, { - "$ref": "#/components/schemas/TransferDestinationStep", - "title": "Step" + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" }, { - "$ref": "#/components/schemas/TransferDestinationNumber", - "title": "Number" + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" }, { - "$ref": "#/components/schemas/TransferDestinationSip", - "title": "Sip" + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" } ] } }, + "type": { + "type": "string", + "enum": [ + "endCall" + ], + "description": "The type of tool. 
\"endCall\" for End Call tool." + }, "id": { "type": "string", "description": "This is the unique identifier for the tool." @@ -14243,7 +18383,7 @@ "updatedAt" ] }, - "OutputTool": { + "FunctionTool": { "type": "object", "properties": { "async": { @@ -14278,9 +18418,9 @@ "type": { "type": "string", "enum": [ - "output" + "function" ], - "description": "The type of tool. \"output\" for Output tool." + "description": "The type of tool. \"function\" for Function tool." }, "id": { "type": "string", @@ -14325,7 +18465,7 @@ "updatedAt" ] }, - "CreateOutputToolDTO": { + "GhlTool": { "type": "object", "properties": { "async": { @@ -14360,9 +18500,27 @@ "type": { "type": "string", "enum": [ - "output" + "ghl" ], - "description": "The type of tool. \"output\" for Output tool." + "description": "The type of tool. \"ghl\" for GHL tool." + }, + "id": { + "type": "string", + "description": "This is the unique identifier for the tool." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the organization that this tool belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the tool was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the tool was last updated." }, "function": { "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", @@ -14379,13 +18537,21 @@ "$ref": "#/components/schemas/Server" } ] + }, + "metadata": { + "$ref": "#/components/schemas/GhlToolMetadata" } }, "required": [ - "type" + "type", + "id", + "orgId", + "createdAt", + "updatedAt", + "metadata" ] }, - "UpdateToolDTO": { + "MakeTool": { "type": "object", "properties": { "async": { @@ -14417,2047 +18583,2428 @@ ] } }, - "function": { - "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", - "allOf": [ - { - "$ref": "#/components/schemas/OpenAIFunction" - } - ] - }, - "server": { - "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. 
Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", - "allOf": [ - { - "$ref": "#/components/schemas/Server" - } - ] - } - } - }, - "CreateFileDTO": { - "type": "object", - "properties": { - "file": { - "type": "string", - "description": "This is the File you want to upload for use with the Knowledge Base.", - "format": "binary" - } - }, - "required": [ - "file" - ] - }, - "File": { - "type": "object", - "properties": { - "object": { + "type": { "type": "string", "enum": [ - "file" - ] - }, - "status": { - "enum": [ - "indexed", - "not_indexed" + "make" ], - "type": "string" - }, - "name": { - "type": "string", - "description": "This is the name of the file. This is just for your own reference.", - "maxLength": 40 - }, - "originalName": { - "type": "string" - }, - "bytes": { - "type": "number" - }, - "purpose": { - "type": "string" - }, - "mimetype": { - "type": "string" - }, - "key": { - "type": "string" - }, - "path": { - "type": "string" - }, - "bucket": { - "type": "string" - }, - "url": { - "type": "string" - }, - "metadata": { - "type": "object" + "description": "The type of tool. \"make\" for Make tool." }, "id": { "type": "string", - "description": "This is the unique identifier for the file." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this file belongs to." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the file was created." + "description": "This is the unique identifier for the tool." }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the file was last updated." - } - }, - "required": [ - "id", - "orgId", - "createdAt", - "updatedAt" - ] - }, - "UpdateFileDTO": { - "type": "object", - "properties": { - "name": { - "type": "string", - "description": "This is the name of the file. 
This is just for your own reference.", - "minLength": 1, - "maxLength": 40 - } - } - }, - "Metrics": { - "type": "object", - "properties": { "orgId": { - "type": "string" - }, - "rangeStart": { - "type": "string" - }, - "rangeEnd": { - "type": "string" - }, - "bill": { - "type": "number" - }, - "billWithinBillingLimit": { - "type": "boolean" - }, - "billDailyBreakdown": { - "type": "object" - }, - "callActive": { - "type": "number" - }, - "callActiveWithinConcurrencyLimit": { - "type": "boolean" - }, - "callMinutes": { - "type": "number" - }, - "callMinutesDailyBreakdown": { - "type": "object" - }, - "callMinutesAverage": { - "type": "number" - }, - "callMinutesAverageDailyBreakdown": { - "type": "object" - }, - "callCount": { - "type": "number" - }, - "callCountDailyBreakdown": { - "type": "object" - } - }, - "required": [ - "orgId", - "rangeStart", - "rangeEnd", - "bill", - "billWithinBillingLimit", - "billDailyBreakdown", - "callActive", - "callActiveWithinConcurrencyLimit", - "callMinutes", - "callMinutesDailyBreakdown", - "callMinutesAverage", - "callMinutesAverageDailyBreakdown", - "callCount", - "callCountDailyBreakdown" - ] - }, - "TimeRange": { - "type": "object", - "properties": { - "step": { - "type": "string", - "description": "This is the time step for aggregations.\n\nIf not provided, defaults to returning for the entire time range.", - "enum": [ - "minute", - "hour", - "day", - "week", - "month", - "quarter", - "year", - "decade", - "century", - "millennium" - ] + "type": "string", + "description": "This is the unique identifier for the organization that this tool belongs to." }, - "start": { + "createdAt": { "format": "date-time", "type": "string", - "description": "This is the start date for the time range.\n\nIf not provided, defaults to the 7 days ago." + "description": "This is the ISO 8601 date-time string of when the tool was created." }, - "end": { + "updatedAt": { "format": "date-time", "type": "string", - "description": "This is the end date for the time range.\n\nIf not provided, defaults to now." + "description": "This is the ISO 8601 date-time string of when the tool was last updated." }, - "timezone": { - "type": "string", - "description": "This is the timezone you want to set for the query.\n\nIf not provided, defaults to UTC." - } - } - }, - "AnalyticsOperation": { - "type": "object", - "properties": { - "operation": { - "type": "string", - "description": "This is the aggregation operation you want to perform.", - "enum": [ - "sum", - "avg", - "count", - "min", - "max" + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. 
One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } ] }, - "column": { - "type": "string", - "description": "This is the columns you want to perform the aggregation operation on.", - "enum": [ - "id", - "cost", - "costBreakdown.llm", - "costBreakdown.stt", - "costBreakdown.tts", - "costBreakdown.vapi", - "costBreakdown.ttsCharacters", - "costBreakdown.llmPromptTokens", - "costBreakdown.llmCompletionTokens", - "duration" + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } ] }, - "alias": { - "type": "string", - "description": "This is the alias for column name returned. Defaults to `${operation}${column}`.", - "maxLength": 40 + "metadata": { + "$ref": "#/components/schemas/MakeToolMetadata" } }, "required": [ - "operation", - "column" + "type", + "id", + "orgId", + "createdAt", + "updatedAt", + "metadata" ] }, - "AnalyticsQuery": { + "TransferCallTool": { "type": "object", "properties": { - "table": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { "type": "string", - "description": "This is the table you want to query.", "enum": [ - "call" + "transferCall" ] }, - "groupBy": { + "destinations": { "type": "array", - "description": "This is the list of columns you want to group by.", - "enum": [ - "type", - "assistantId", - "endedReason", - "analysis.successEvaluation", - "status" - ], + "description": "These are the destinations that the call can be transferred to. 
If no destinations are provided, server.url will be used to get the transfer destination once the tool is called.", "items": { - "type": "string", - "enum": [ - "type", - "assistantId", - "endedReason", - "analysis.successEvaluation", - "status" + "oneOf": [ + { + "$ref": "#/components/schemas/TransferDestinationAssistant", + "title": "Assistant" + }, + { + "$ref": "#/components/schemas/TransferDestinationStep", + "title": "Step" + }, + { + "$ref": "#/components/schemas/TransferDestinationNumber", + "title": "Number" + }, + { + "$ref": "#/components/schemas/TransferDestinationSip", + "title": "Sip" + } ] } }, - "name": { + "id": { "type": "string", - "description": "This is the name of the query. This will be used to identify the query in the response.", - "maxLength": 40 + "description": "This is the unique identifier for the tool." }, - "timeRange": { - "description": "This is the time range for the query.", + "orgId": { + "type": "string", + "description": "This is the unique identifier for the organization that this tool belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the tool was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the tool was last updated." + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", "allOf": [ { - "$ref": "#/components/schemas/TimeRange" + "$ref": "#/components/schemas/OpenAIFunction" } ] }, - "operations": { - "description": "This is the list of operations you want to perform.", - "type": "array", - "items": { - "$ref": "#/components/schemas/AnalyticsOperation" - } - } - }, - "required": [ - "table", - "name", - "operations" - ] - }, - "AnalyticsQueryDTO": { - "type": "object", - "properties": { - "queries": { - "description": "This is the list of metric queries you want to perform.", - "type": "array", - "items": { - "$ref": "#/components/schemas/AnalyticsQuery" - } - } - }, - "required": [ - "queries" - ] - }, - "AnalyticsQueryResult": { - "type": "object", - "properties": { - "name": { - "type": "string", - "description": "This is the unique key for the query." - }, - "timeRange": { - "description": "This is the time range for the query.", + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. 
Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", "allOf": [ { - "$ref": "#/components/schemas/TimeRange" + "$ref": "#/components/schemas/Server" } ] - }, - "result": { - "description": "This is the result of the query, a list of unique groups with result of their aggregations.\n\nExample:\n\"result\": [\n { \"date\": \"2023-01-01\", \"assistantId\": \"123\", \"endedReason\": \"customer-ended-call\", \"sumDuration\": 120, \"avgCost\": 10.5 },\n { \"date\": \"2023-01-02\", \"assistantId\": \"123\", \"endedReason\": \"customer-did-not-give-microphone-permission\", \"sumDuration\": 0, \"avgCost\": 0 },\n // Additional results\n]", - "type": "array", - "items": { - "type": "object" - } } }, "required": [ - "name", - "timeRange", - "result" + "type", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CallLogPrivileged": { + "OutputTool": { "type": "object", "properties": { - "callId": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { + "type": "string", + "enum": [ + "output" + ], + "description": "The type of tool. \"output\" for Output tool." + }, + "id": { "type": "string", - "description": "This is the unique identifier for the call." + "description": "This is the unique identifier for the tool." }, "orgId": { "type": "string", - "description": "This is the unique identifier for the org that this call log belongs to." + "description": "This is the unique identifier for the organization that this tool belongs to." }, - "log": { + "createdAt": { + "format": "date-time", "type": "string", - "description": "This is the log message associated with the call." + "description": "This is the ISO 8601 date-time string of when the tool was created." }, - "level": { + "updatedAt": { + "format": "date-time", "type": "string", - "description": "This is the level of the log message.", - "enum": [ - "INFO", - "LOG", - "WARN", - "ERROR", - "CHECKPOINT" + "description": "This is the ISO 8601 date-time string of when the tool was last updated." + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. 
But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } ] }, - "time": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the log was created." + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ - "callId", + "type", + "id", "orgId", - "log", - "level", - "time" + "createdAt", + "updatedAt" ] }, - "CallLogsPaginatedResponse": { + "BashTool": { "type": "object", "properties": { - "results": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", "items": { - "$ref": "#/components/schemas/CallLogPrivileged" + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] } }, - "metadata": { - "$ref": "#/components/schemas/PaginationMeta" - } - }, - "required": [ - "results", - "metadata" - ] - }, - "Error": { - "type": "object", - "properties": { - "message": { - "type": "string" - } - }, - "required": [ - "message" - ] - }, - "Log": { - "type": "object", - "properties": { - "time": { - "type": "string", - "description": "This is the timestamp at which the log was written." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this log belongs to." - }, "type": { "type": "string", - "description": "This is the type of the log.", - "enum": [ - "API", - "Webhook", - "Call", - "Provider" - ] - }, - "webhookType": { - "type": "string", - "description": "This is the type of the webhook, given the log is from a webhook." 
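The hunk above adds the Anthropic-style `bash` tool: `type` "bash", `subType` "bash_20241022", and `name` fixed to "bash". For orientation, a stored BashTool object per the added schema would look roughly like the following; every value is a placeholder, and only the field names and enum values come from the schema:

```json
{
  "type": "bash",
  "subType": "bash_20241022",
  "name": "bash",
  "async": false,
  "id": "your-tool-id",
  "orgId": "your-org-id",
  "createdAt": "2024-01-01T00:00:00.000Z",
  "updatedAt": "2024-01-01T00:00:00.000Z"
}
```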
- }, - "resource": { - "type": "string", - "description": "This is the specific resource, relevant only to API logs.", "enum": [ - "org", - "assistant", - "analytics", - "credential", - "phone-number", - "block", - "voice-library", - "provider", - "tool", - "token", - "template", - "squad", - "call", - "file", - "metric", - "log" - ] - }, - "requestDurationSeconds": { - "type": "number", - "description": "'This is how long the request took.", - "minimum": 0 - }, - "requestStartedAt": { - "type": "string", - "description": "This is the timestamp at which the request began." - }, - "requestFinishedAt": { - "type": "string", - "description": "This is the timestamp at which the request finished." - }, - "requestBody": { - "type": "object", - "description": "This is the body of the request." + "bash" + ], + "description": "The type of tool. \"bash\" for Bash tool." }, - "requestHttpMethod": { + "subType": { "type": "string", - "description": "This is the request method.", "enum": [ - "POST", - "GET", - "PUT", - "PATCH", - "DELETE" - ] - }, - "requestUrl": { - "type": "string", - "description": "This is the request URL." + "bash_20241022" + ], + "description": "The sub type of tool." }, - "requestPath": { + "id": { "type": "string", - "description": "This is the request path." + "description": "This is the unique identifier for the tool." }, - "requestQuery": { + "orgId": { "type": "string", - "description": "This is the request query." - }, - "responseHttpCode": { - "type": "number", - "description": "This the HTTP status code of the response." + "description": "This is the unique identifier for the organization that this tool belongs to." }, - "requestIpAddress": { + "createdAt": { + "format": "date-time", "type": "string", - "description": "This is the request IP address." + "description": "This is the ISO 8601 date-time string of when the tool was created." }, - "requestOrigin": { + "updatedAt": { + "format": "date-time", "type": "string", - "description": "This is the origin of the request" - }, - "responseBody": { - "type": "object", - "description": "This is the body of the response." - }, - "requestHeaders": { - "type": "object", - "description": "These are the headers of the request." + "description": "This is the ISO 8601 date-time string of when the tool was last updated." }, - "error": { - "description": "This is the error, if one occurred.", + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", "allOf": [ { - "$ref": "#/components/schemas/Error" + "$ref": "#/components/schemas/OpenAIFunction" } ] }, - "assistantId": { - "type": "string", - "description": "This is the ID of the assistant." - }, - "phoneNumberId": { - "type": "string", - "description": "This is the ID of the phone number." - }, - "customerId": { - "type": "string", - "description": "This is the ID of the customer." - }, - "squadId": { - "type": "string", - "description": "This is the ID of the squad." 
+ "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, - "callId": { + "name": { "type": "string", - "description": "This is the ID of the call." + "description": "The name of the tool, fixed to 'bash'", + "default": "bash", + "enum": [ + "bash" + ] } }, "required": [ - "time", - "orgId", "type", - "requestDurationSeconds", - "requestStartedAt", - "requestFinishedAt", - "requestBody", - "requestHttpMethod", - "requestUrl", - "requestPath", - "responseHttpCode" + "subType", + "id", + "orgId", + "createdAt", + "updatedAt", + "name" ] }, - "LogsPaginatedResponse": { + "ComputerTool": { "type": "object", "properties": { - "results": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", "items": { - "$ref": "#/components/schemas/Log" + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] } }, - "metadata": { - "$ref": "#/components/schemas/PaginationMeta" - } - }, - "required": [ - "results", - "metadata" - ] - }, - "AnthropicCredential": { - "type": "object", - "properties": { - "provider": { + "type": { "type": "string", "enum": [ - "anthropic" - ] + "computer" + ], + "description": "The type of tool. \"computer\" for Computer tool." }, - "apiKey": { + "subType": { "type": "string", - "maxLength": 10000, - "description": "This is not returned in the API." + "enum": [ + "computer_20241022" + ], + "description": "The sub type of tool." }, "id": { "type": "string", - "description": "This is the unique identifier for the credential." + "description": "This is the unique identifier for the tool." }, "orgId": { "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "description": "This is the unique identifier for the organization that this tool belongs to." }, "createdAt": { "format": "date-time", "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "description": "This is the ISO 8601 date-time string of when the tool was created." 
}, "updatedAt": { "format": "date-time", "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "description": "This is the ISO 8601 date-time string of when the tool was last updated." + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, "name": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "description": "The name of the tool, fixed to 'computer'", + "default": "computer", + "enum": [ + "computer" + ] + }, + "displayWidthPx": { + "type": "number", + "description": "The display width in pixels" + }, + "displayHeightPx": { + "type": "number", + "description": "The display height in pixels" + }, + "displayNumber": { + "type": "number", + "description": "Optional display number" } }, "required": [ - "provider", - "apiKey", + "type", + "subType", "id", "orgId", "createdAt", - "updatedAt" + "updatedAt", + "name", + "displayWidthPx", + "displayHeightPx" ] }, - "AnyscaleCredential": { + "TextEditorTool": { "type": "object", "properties": { - "provider": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. 
For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { "type": "string", "enum": [ - "anyscale" - ] + "textEditor" + ], + "description": "The type of tool. \"textEditor\" for Text Editor tool." }, - "apiKey": { + "subType": { "type": "string", - "maxLength": 10000, - "description": "This is not returned in the API." + "enum": [ + "text_editor_20241022" + ], + "description": "The sub type of tool." }, "id": { "type": "string", - "description": "This is the unique identifier for the credential." + "description": "This is the unique identifier for the tool." }, "orgId": { "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "description": "This is the unique identifier for the organization that this tool belongs to." }, "createdAt": { "format": "date-time", "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "description": "This is the ISO 8601 date-time string of when the tool was created." }, "updatedAt": { "format": "date-time", "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "description": "This is the ISO 8601 date-time string of when the tool was last updated." + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, "name": { "type": "string", - "description": "This is the name of credential. 
This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "description": "The name of the tool, fixed to 'str_replace_editor'", + "default": "str_replace_editor", + "enum": [ + "str_replace_editor" + ] } }, "required": [ - "provider", - "apiKey", + "type", + "subType", "id", "orgId", "createdAt", - "updatedAt" + "updatedAt", + "name" ] }, - "AssemblyAICredential": { + "CreateOutputToolDTO": { "type": "object", "properties": { - "provider": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { "type": "string", "enum": [ - "assembly-ai" - ] + "output" + ], + "description": "The type of tool. \"output\" for Output tool." }, - "apiKey": { - "type": "string", - "description": "This is not returned in the API." + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + } + }, + "required": [ + "type" + ] + }, + "CreateBashToolDTO": { + "type": "object", + "properties": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. 
This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } }, - "createdAt": { - "format": "date-time", + "type": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "enum": [ + "bash" + ], + "description": "The type of tool. \"bash\" for Bash tool." }, - "updatedAt": { - "format": "date-time", + "subType": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "enum": [ + "bash_20241022" + ], + "description": "The sub type of tool." }, "name": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "description": "The name of the tool, fixed to 'bash'", + "default": "bash", + "enum": [ + "bash" + ] + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ - "provider", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" + "type", + "subType", + "name" ] }, - "AzureCredential": { + "CreateComputerToolDTO": { "type": "object", "properties": { - "provider": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. 
This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { "type": "string", "enum": [ - "azure" - ] + "computer" + ], + "description": "The type of tool. \"computer\" for Computer tool." }, - "service": { + "subType": { "type": "string", - "description": "This is the service being used in Azure.", "enum": [ - "speech" + "computer_20241022" ], - "default": "speech" + "description": "The sub type of tool." }, - "region": { + "name": { "type": "string", - "description": "This is the region of the Azure resource.", + "description": "The name of the tool, fixed to 'computer'", + "default": "computer", "enum": [ - "australia", - "canada", - "eastus2", - "eastus", - "france", - "india", - "japan", - "uaenorth", - "northcentralus", - "norway", - "southcentralus", - "sweden", - "switzerland", - "uk", - "westus", - "westus3" + "computer" ] }, - "apiKey": { - "type": "string", - "description": "This is not returned in the API.", - "maxLength": 10000 - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." + "displayWidthPx": { + "type": "number", + "description": "The display width in pixels" }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "displayHeightPx": { + "type": "number", + "description": "The display height in pixels" }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "displayNumber": { + "type": "number", + "description": "Optional display number" }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] }, - "name": { - "type": "string", - "description": "This is the name of credential. 
This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ - "provider", - "service", - "id", - "orgId", - "createdAt", - "updatedAt" + "type", + "subType", + "name", + "displayWidthPx", + "displayHeightPx" ] }, - "AzureOpenAICredential": { + "CreateTextEditorToolDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "azure-openai" - ] - }, - "region": { - "type": "string", - "enum": [ - "australia", - "canada", - "eastus2", - "eastus", - "france", - "india", - "japan", - "uaenorth", - "northcentralus", - "norway", - "southcentralus", - "sweden", - "switzerland", - "uk", - "westus", - "westus3" - ] + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "models": { + "messages": { "type": "array", - "enum": [ - "gpt-4o-2024-08-06", - "gpt-4o-mini-2024-07-18", - "gpt-4o-2024-05-13", - "gpt-4-turbo-2024-04-09", - "gpt-4-0125-preview", - "gpt-4-1106-preview", - "gpt-4-0613", - "gpt-35-turbo-0125", - "gpt-35-turbo-1106" - ], - "example": [ - "gpt-4-0125-preview", - "gpt-4-0613" - ], + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", "items": { - "type": "string", - "enum": [ - "gpt-4o-2024-08-06", - "gpt-4o-mini-2024-07-18", - "gpt-4o-2024-05-13", - "gpt-4-turbo-2024-04-09", - "gpt-4-0125-preview", - "gpt-4-1106-preview", - "gpt-4-0613", - "gpt-35-turbo-0125", - "gpt-35-turbo-1106" + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } ] } }, - "openAIKey": { - "type": "string", - "maxLength": 10000, - "description": "This is not returned in the API." - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." - }, - "createdAt": { - "format": "date-time", + "type": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "enum": [ + "textEditor" + ], + "description": "The type of tool. \"textEditor\" for Text Editor tool." 
}, - "updatedAt": { - "format": "date-time", + "subType": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "enum": [ + "text_editor_20241022" + ], + "description": "The sub type of tool." }, "name": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "description": "The name of the tool, fixed to 'str_replace_editor'", + "default": "str_replace_editor", + "enum": [ + "str_replace_editor" + ] }, - "openAIEndpoint": { - "type": "string", - "maxLength": 10000 + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } }, "required": [ - "provider", - "region", - "models", - "openAIKey", - "id", - "orgId", - "createdAt", - "updatedAt", - "openAIEndpoint" + "type", + "subType", + "name" ] }, - "SipTrunkGateway": { + "UpdateDtmfToolDTO": { "type": "object", "properties": { - "ip": { - "type": "string", - "description": "This is the address of the gateway. It can be an IPv4 address like 1.1.1.1 or a fully qualified domain name like my-sip-trunk.pstn.twilio.com." + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "port": { - "type": "number", - "description": "This is the port number of the gateway. Default is 5060.\n\n@default 5060", - "minimum": 1, - "maximum": 65535 + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. 
For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } }, - "netmask": { - "type": "number", - "description": "This is the netmask of the gateway. Defaults to 32.\n\n@default 32", - "minimum": 24, - "maximum": 32 + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] }, - "inboundEnabled": { + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + } + } + }, + "UpdateEndCallToolDTO": { + "type": "object", + "properties": { + "async": { "type": "boolean", - "description": "This is whether inbound calls are allowed from this gateway. Default is true.\n\n@default true" + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "outboundEnabled": { - "type": "boolean", - "description": "This is whether outbound calls should be sent to this gateway. Default is true.\n\nNote, if netmask is less than 32, it doesn't affect the outbound IPs that are tried. 1 attempt is made to `ip:port`.\n\n@default true" + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. 
For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } }, - "outboundProtocol": { - "type": "string", - "description": "This is the protocol to use for SIP signaling outbound calls. Default is udp.\n\n@default udp", - "enum": [ - "tls/srtp", - "tcp", - "tls", - "udp" + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } ] }, - "optionsPingEnabled": { + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + } + } + }, + "UpdateFunctionToolDTO": { + "type": "object", + "properties": { + "async": { "type": "boolean", - "description": "This is whether to send options ping to the gateway. This can be used to check if the gateway is reachable. Default is false.\n\nThis is useful for high availability setups where you want to check if the gateway is reachable before routing calls to it. Note, if no gateway for a trunk is reachable, outbound calls will be rejected.\n\n@default false" + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. 
For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } - }, - "required": [ - "ip" - ] + } }, - "SipTrunkOutboundSipRegisterPlan": { + "UpdateGhlToolDTO": { "type": "object", "properties": { - "domain": { - "type": "string" + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "username": { - "type": "string" + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } }, - "realm": { - "type": "string" + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. 
You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + }, + "metadata": { + "$ref": "#/components/schemas/GhlToolMetadata" } } }, - "SipTrunkOutboundAuthenticationPlan": { + "UpdateMakeToolDTO": { "type": "object", "properties": { - "authPassword": { - "type": "string", - "description": "This is not returned in the API." + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } }, - "authUsername": { - "type": "string" + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] }, - "sipRegisterPlan": { - "description": "This can be used to configure if SIP register is required by the SIP trunk. If not provided, no SIP registration will be attempted.", + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. 
Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", "allOf": [ { - "$ref": "#/components/schemas/SipTrunkOutboundSipRegisterPlan" + "$ref": "#/components/schemas/Server" } ] + }, + "metadata": { + "$ref": "#/components/schemas/MakeToolMetadata" } } }, - "SbcConfiguration": { - "type": "object", - "properties": {} - }, - "ByoSipTrunkCredential": { + "UpdateTransferCallToolDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "description": "This can be used to bring your own SIP trunks or to connect to a Carrier.", - "enum": [ - "byo-sip-trunk" - ] - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } }, - "gateways": { - "description": "This is the list of SIP trunk's gateways.", + "destinations": { "type": "array", + "description": "These are the destinations that the call can be transferred to. 
If no destinations are provided, server.url will be used to get the transfer destination once the tool is called.", "items": { - "$ref": "#/components/schemas/SipTrunkGateway" + "oneOf": [ + { + "$ref": "#/components/schemas/TransferDestinationAssistant", + "title": "Assistant" + }, + { + "$ref": "#/components/schemas/TransferDestinationStep", + "title": "Step" + }, + { + "$ref": "#/components/schemas/TransferDestinationNumber", + "title": "Number" + }, + { + "$ref": "#/components/schemas/TransferDestinationSip", + "title": "Sip" + } + ] } }, - "outboundAuthenticationPlan": { - "description": "This can be used to configure the outbound authentication if required by the SIP trunk.", + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", "allOf": [ { - "$ref": "#/components/schemas/SipTrunkOutboundAuthenticationPlan" + "$ref": "#/components/schemas/OpenAIFunction" } ] }, - "outboundLeadingPlusEnabled": { - "type": "boolean", - "description": "This ensures the outbound origination attempts have a leading plus. Defaults to false to match conventional telecom behavior.\n\nUsage:\n- Vonage/Twilio requires leading plus for all outbound calls. Set this to true.\n\n@default false" - }, - "techPrefix": { - "type": "string", - "description": "This can be used to configure the tech prefix on outbound calls. This is an advanced property.", - "maxLength": 10000 - }, - "sipDiversionHeader": { - "type": "string", - "description": "This can be used to enable the SIP diversion header for authenticating the calling number if the SIP trunk supports it. This is an advanced property.", - "maxLength": 10000 - }, - "sbcConfiguration": { - "description": "This is an advanced configuration for enterprise deployments. This uses the onprem SBC to trunk into the SIP trunk's `gateways`, rather than the managed SBC provided by Vapi.", + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", "allOf": [ { - "$ref": "#/components/schemas/SbcConfiguration" + "$ref": "#/components/schemas/Server" } ] } - }, - "required": [ - "id", - "orgId", - "createdAt", - "updatedAt", - "gateways" - ] - }, - "CartesiaCredential": { - "type": "object", - "properties": { - "provider": { - "type": "string", - "enum": [ - "cartesia" - ] - }, - "apiKey": { - "type": "string", - "description": "This is not returned in the API." - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." 
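The `destinations` array above is what makes `transferCall` self-serve: when it is present, the assistant can transfer without a round-trip to `server.url`. A minimal sketch of updating a transfer tool with a single number destination, assuming a `PATCH /tool/{id}` endpoint and that `TransferDestinationNumber` takes a `type` discriminator plus an E.164 `number` (neither shape is shown in this excerpt):

```json
curl --request PATCH 'https://api.vapi.ai/tool/your-tool-id' \
  --header 'Authorization: Bearer your-vapi-private-api-key' \
  --header 'Content-Type: application/json' \
  --data '{
    "destinations": [
      { "type": "number", "number": "+15551234567" }
    ]
  }'
```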
- }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." - }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 - } - }, - "required": [ - "provider", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" - ] + } }, - "OAuth2AuthenticationPlan": { + "UpdateOutputToolDTO": { "type": "object", "properties": { - "type": { - "type": "string", - "enum": [ - "oauth2" - ] - }, - "url": { - "type": "string", - "description": "This is the OAuth2 URL." - }, - "clientId": { - "type": "string", - "description": "This is the OAuth2 client ID." + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "clientSecret": { - "type": "string", - "description": "This is the OAuth2 client secret." - } - }, - "required": [ - "type", - "url", - "clientId", - "clientSecret" - ] - }, - "Oauth2AuthenticationSession": { - "type": "object", - "properties": { - "accessToken": { - "type": "string", - "description": "This is the OAuth2 access token." + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } }, - "expiresAt": { - "format": "date-time", - "type": "string", - "description": "This is the OAuth2 access token expiration." + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. 
You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] } } }, - "CustomLLMCredential": { + "UpdateBashToolDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "custom-llm" - ] + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "apiKey": { + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "subType": { "type": "string", - "maxLength": 10000, - "description": "This is not returned in the API." + "enum": [ + "bash_20241022" + ], + "description": "The sub type of tool." }, - "authenticationPlan": { - "description": "This is the authentication plan. Currently supports OAuth2 RFC 6749. To use Bearer authentication, use apiKey", + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", "allOf": [ { - "$ref": "#/components/schemas/OAuth2AuthenticationPlan" + "$ref": "#/components/schemas/OpenAIFunction" } ] }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." - }, - "authenticationSession": { - "description": "This is the authentication session for the credential. 
Available for credentials that have an authentication plan.", + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", "allOf": [ { - "$ref": "#/components/schemas/Oauth2AuthenticationSession" + "$ref": "#/components/schemas/Server" } ] }, "name": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "description": "The name of the tool, fixed to 'bash'", + "default": "bash", + "enum": [ + "bash" + ] } - }, - "required": [ - "provider", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" - ] + } }, - "DeepgramCredential": { + "UpdateComputerToolDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "deepgram" - ] - }, - "apiKey": { - "type": "string", - "description": "This is not returned in the API." + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } }, - "orgId": { + "subType": { "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "enum": [ + "computer_20241022" + ], + "description": "The sub type of tool." }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. 
One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, "name": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "description": "The name of the tool, fixed to 'computer'", + "default": "computer", + "enum": [ + "computer" + ] }, - "apiUrl": { - "type": "string", - "description": "This can be used to point to an onprem Deepgram instance. Defaults to api.deepgram.com." + "displayWidthPx": { + "type": "number", + "description": "The display width in pixels" + }, + "displayHeightPx": { + "type": "number", + "description": "The display height in pixels" + }, + "displayNumber": { + "type": "number", + "description": "Optional display number" } - }, - "required": [ - "provider", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" - ] + } }, - "DeepInfraCredential": { + "UpdateTextEditorToolDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "deepinfra" - ] - }, - "apiKey": { - "type": "string", - "description": "This is not returned in the API." - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } }, - "createdAt": { - "format": "date-time", + "subType": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "enum": [ + "text_editor_20241022" + ], + "description": "The sub type of tool." 
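Because every update DTO carries the same `async` flag, switching a tool between fire-and-forget and wait-for-result behavior is a one-field PATCH. A sketch, again assuming a `PATCH /tool/{id}` endpoint with a placeholder ID:

```json
curl --request PATCH 'https://api.vapi.ai/tool/your-tool-id' \
  --header 'Authorization: Bearer your-vapi-private-api-key' \
  --header 'Content-Type: application/json' \
  --data '{ "async": false }'
```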
}, - "updatedAt": { - "format": "date-time", + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, "name": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "description": "The name of the tool, fixed to 'str_replace_editor'", + "default": "str_replace_editor", + "enum": [ + "str_replace_editor" + ] } } }, + "CreateFileDTO": { + "type": "object", + "properties": { + "file": { + "type": "string", + "description": "This is the File you want to upload for use with the Knowledge Base.", + "format": "binary" + } + }, + "required": [ + "file" + ] + }, + "File": { + "type": "object", + "properties": { + "object": { + "type": "string", + "enum": [ + "file" + ] + }, + "status": { + "enum": [ + "indexed", + "not_indexed" + ], + "type": "string" + }, + "name": { + "type": "string", + "description": "This is the name of the file. This is just for your own reference.", + "maxLength": 40 + }, + "originalName": { + "type": "string" + }, + "bytes": { + "type": "number" + }, + "purpose": { + "type": "string" + }, + "mimetype": { + "type": "string" + }, + "key": { + "type": "string" + }, + "path": { + "type": "string" + }, + "bucket": { + "type": "string" + }, + "url": { + "type": "string" + }, + "metadata": { + "type": "object" + }, "id": { "type": "string", - "description": "This is the unique identifier for the credential." + "description": "This is the unique identifier for the file." }, "orgId": { "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "description": "This is the unique identifier for the org that this file belongs to." }, "createdAt": { "format": "date-time", "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "description": "This is the ISO 8601 date-time string of when the file was created."
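`CreateFileDTO` takes a single binary `file` field, so uploads are multipart form-data rather than a JSON body. A sketch, assuming files are uploaded via `POST /file` (the endpoint path and filename are placeholders):

```json
curl --location 'https://api.vapi.ai/file' \
  --header 'Authorization: Bearer your-vapi-private-api-key' \
  --form 'file=@"./knowledge-base.pdf"'
```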
}, "updatedAt": { "format": "date-time", "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." - }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "description": "This is the ISO 8601 date-time string of when the file was last updated." } }, "required": [ - "provider", - "apiKey", "id", "orgId", "createdAt", "updatedAt" ] }, - "GcpKey": { + "UpdateFileDTO": { "type": "object", "properties": { - "type": { + "name": { "type": "string", - "description": "This is the type of the key. Most likely, this is \"service_account\"." + "description": "This is the name of the file. This is just for your own reference.", + "minLength": 1, + "maxLength": 40 + } + } + }, + "Metrics": { + "type": "object", + "properties": { + "orgId": { + "type": "string" }, - "projectId": { - "type": "string", - "description": "This is the ID of the Google Cloud project associated with this key." + "rangeStart": { + "type": "string" }, - "privateKeyId": { - "type": "string", - "description": "This is the unique identifier for the private key." + "rangeEnd": { + "type": "string" }, - "privateKey": { - "type": "string", - "description": "This is the private key in PEM format.\n\nNote: This is not returned in the API." + "bill": { + "type": "number" }, - "clientEmail": { - "type": "string", - "description": "This is the email address associated with the service account." + "billWithinBillingLimit": { + "type": "boolean" }, - "clientId": { - "type": "string", - "description": "This is the unique identifier for the client." + "billDailyBreakdown": { + "type": "object" }, - "authUri": { - "type": "string", - "description": "This is the URI for the auth provider's authorization endpoint." + "callActive": { + "type": "number" }, - "tokenUri": { - "type": "string", - "description": "This is the URI for the auth provider's token endpoint." + "callActiveWithinConcurrencyLimit": { + "type": "boolean" }, - "authProviderX509CertUrl": { - "type": "string", - "description": "This is the URL of the public x509 certificate for the auth provider." + "callMinutes": { + "type": "number" }, - "clientX509CertUrl": { - "type": "string", - "description": "This is the URL of the public x509 certificate for the client." + "callMinutesDailyBreakdown": { + "type": "object" }, - "universeDomain": { - "type": "string", - "description": "This is the domain associated with the universe this service account belongs to." + "callMinutesAverage": { + "type": "number" + }, + "callMinutesAverageDailyBreakdown": { + "type": "object" + }, + "callCount": { + "type": "number" + }, + "callCountDailyBreakdown": { + "type": "object" } }, "required": [ - "type", - "projectId", - "privateKeyId", - "privateKey", - "clientEmail", - "clientId", - "authUri", - "tokenUri", - "authProviderX509CertUrl", - "clientX509CertUrl", - "universeDomain" + "orgId", + "rangeStart", + "rangeEnd", + "bill", + "billWithinBillingLimit", + "billDailyBreakdown", + "callActive", + "callActiveWithinConcurrencyLimit", + "callMinutes", + "callMinutesDailyBreakdown", + "callMinutesAverage", + "callMinutesAverageDailyBreakdown", + "callCount", + "callCountDailyBreakdown" ] }, - "BucketPlan": { + "TimeRange": { "type": "object", "properties": { - "name": { + "step": { "type": "string", - "description": "This is the name of the bucket." 
+ "description": "This is the time step for aggregations.\n\nIf not provided, defaults to returning for the entire time range.", + "enum": [ + "second", + "minute", + "hour", + "day", + "week", + "month", + "quarter", + "year", + "decade", + "century", + "millennium" + ] }, - "region": { + "start": { + "format": "date-time", "type": "string", - "description": "This is the region of the bucket.\n\nUsage:\n- If `credential.type` is `aws`, then this is required.\n- If `credential.type` is `gcp`, then this is optional since GCP allows buckets to be accessed without a region but region is required for data residency requirements. Read here: https://cloud.google.com/storage/docs/request-endpoints" + "description": "This is the start date for the time range.\n\nIf not provided, defaults to the 7 days ago." }, - "path": { + "end": { + "format": "date-time", "type": "string", - "description": "This is the path where call artifacts will be stored.\n\nUsage:\n- To store call artifacts in a specific folder, set this to the full path. Eg. \"/folder-name1/folder-name2\".\n- To store call artifacts in the root of the bucket, leave this blank.\n\n@default \"/\"" + "description": "This is the end date for the time range.\n\nIf not provided, defaults to now." }, - "hmacAccessKey": { + "timezone": { "type": "string", - "description": "This is the HMAC access key offered by GCP for interoperability with S3 clients. Here is the guide on how to create: https://cloud.google.com/storage/docs/authentication/managing-hmackeys#console\n\nUsage:\n- If `credential.type` is `gcp`, then this is required.\n- If `credential.type` is `aws`, then this is not required since credential.awsAccessKeyId is used instead." + "description": "This is the timezone you want to set for the query.\n\nIf not provided, defaults to UTC." + } + } + }, + "AnalyticsOperation": { + "type": "object", + "properties": { + "operation": { + "type": "string", + "description": "This is the aggregation operation you want to perform.", + "enum": [ + "sum", + "avg", + "count", + "min", + "max", + "history" + ] + }, + "column": { + "type": "string", + "description": "This is the columns you want to perform the aggregation operation on.", + "enum": [ + "id", + "cost", + "costBreakdown.llm", + "costBreakdown.stt", + "costBreakdown.tts", + "costBreakdown.vapi", + "costBreakdown.ttsCharacters", + "costBreakdown.llmPromptTokens", + "costBreakdown.llmCompletionTokens", + "duration", + "concurrency" + ] }, - "hmacSecret": { + "alias": { "type": "string", - "description": "This is the secret for the HMAC access key. Here is the guide on how to create: https://cloud.google.com/storage/docs/authentication/managing-hmackeys#console\n\nUsage:\n- If `credential.type` is `gcp`, then this is required.\n- If `credential.type` is `aws`, then this is not required since credential.awsSecretAccessKey is used instead.\n\nNote: This is not returned in the API." + "description": "This is the alias for column name returned. Defaults to `${operation}${column}`.", + "maxLength": 40 } }, "required": [ - "name" + "operation", + "column" ] }, - "GcpCredential": { + "AnalyticsQuery": { "type": "object", "properties": { - "provider": { + "table": { "type": "string", + "description": "This is the table you want to query.", "enum": [ - "gcp" + "call", + "subscription" ] }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." 
- }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "groupBy": { + "type": "array", + "description": "This is the list of columns you want to group by.", + "enum": [ + "type", + "assistantId", + "endedReason", + "analysis.successEvaluation", + "status" + ], + "items": { + "type": "string", + "enum": [ + "type", + "assistantId", + "endedReason", + "analysis.successEvaluation", + "status" + ] + } }, "name": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, + "description": "This is the name of the query. This will be used to identify the query in the response.", "maxLength": 40 }, - "gcpKey": { - "description": "This is the GCP key. This is the JSON that can be generated in the Google Cloud Console at https://console.cloud.google.com/iam-admin/serviceaccounts/details//keys.\n\nThe schema is identical to the JSON that GCP outputs.", + "timeRange": { + "description": "This is the time range for the query.", "allOf": [ { - "$ref": "#/components/schemas/GcpKey" + "$ref": "#/components/schemas/TimeRange" } ] }, - "bucketPlan": { - "description": "This is the bucket plan that can be provided to store call artifacts in GCP.", + "operations": { + "description": "This is the list of operations you want to perform.", + "type": "array", + "items": { + "$ref": "#/components/schemas/AnalyticsOperation" + } + } + }, + "required": [ + "table", + "name", + "operations" + ] + }, + "AnalyticsQueryDTO": { + "type": "object", + "properties": { + "queries": { + "description": "This is the list of metric queries you want to perform.", + "type": "array", + "items": { + "$ref": "#/components/schemas/AnalyticsQuery" + } + } + }, + "required": [ + "queries" + ] + }, + "AnalyticsQueryResult": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "This is the unique key for the query." + }, + "timeRange": { + "description": "This is the time range for the query.", "allOf": [ { - "$ref": "#/components/schemas/BucketPlan" + "$ref": "#/components/schemas/TimeRange" } ] + }, + "result": { + "description": "This is the result of the query, a list of unique groups with result of their aggregations.\n\nExample:\n\"result\": [\n { \"date\": \"2023-01-01\", \"assistantId\": \"123\", \"endedReason\": \"customer-ended-call\", \"sumDuration\": 120, \"avgCost\": 10.5 },\n { \"date\": \"2023-01-02\", \"assistantId\": \"123\", \"endedReason\": \"customer-did-not-give-microphone-permission\", \"sumDuration\": 0, \"avgCost\": 0 },\n // Additional results\n]", + "type": "array", + "items": { + "type": "object" + } } }, "required": [ - "provider", - "id", - "orgId", - "createdAt", - "updatedAt", - "gcpKey" + "name", + "timeRange", + "result" ] }, - "GladiaCredential": { + "CallLogPrivileged": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "gladia" - ] - }, - "apiKey": { - "type": "string", - "description": "This is not returned in the API." - }, - "id": { + "callId": { "type": "string", - "description": "This is the unique identifier for the credential." 
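Putting `AnalyticsQueryDTO`, `AnalyticsQuery`, `TimeRange`, and `AnalyticsOperation` together: each query names a table, optional groupings, a time range, and one or more aggregations, and the response echoes each query's `name` alongside its grouped results. A sketch of a request, assuming queries are posted to an `/analytics` endpoint (the path and key are illustrative only):

```json
curl --location 'https://api.vapi.ai/analytics' \
  --header 'Authorization: Bearer your-vapi-private-api-key' \
  --header 'Content-Type: application/json' \
  --data '{
    "queries": [
      {
        "table": "call",
        "name": "dailyCallDuration",
        "groupBy": ["assistantId"],
        "timeRange": { "step": "day", "timezone": "UTC" },
        "operations": [
          { "operation": "sum", "column": "duration" },
          { "operation": "avg", "column": "cost", "alias": "avgCost" }
        ]
      }
    ]
  }'
```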
+ "description": "This is the unique identifier for the call." }, "orgId": { "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "description": "This is the unique identifier for the org that this call log belongs to." }, - "createdAt": { - "format": "date-time", + "log": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "description": "This is the log message associated with the call." }, - "updatedAt": { - "format": "date-time", + "level": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "description": "This is the level of the log message.", + "enum": [ + "INFO", + "LOG", + "WARN", + "ERROR", + "CHECKPOINT" + ] }, - "name": { + "time": { + "format": "date-time", "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "description": "This is the ISO 8601 date-time string of when the log was created." } }, "required": [ - "provider", - "apiKey", - "id", + "callId", "orgId", - "createdAt", - "updatedAt" + "log", + "level", + "time" ] }, - "GoHighLevelCredential": { + "CallLogsPaginatedResponse": { "type": "object", "properties": { - "provider": { + "results": { + "type": "array", + "items": { + "$ref": "#/components/schemas/CallLogPrivileged" + } + }, + "metadata": { + "$ref": "#/components/schemas/PaginationMeta" + } + }, + "required": [ + "results", + "metadata" + ] + }, + "Error": { + "type": "object", + "properties": { + "message": { + "type": "string" + } + }, + "required": [ + "message" + ] + }, + "Log": { + "type": "object", + "properties": { + "time": { + "type": "string", + "description": "This is the timestamp at which the log was written." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this log belongs to." + }, + "type": { + "type": "string", + "description": "This is the type of the log.", + "enum": [ + "API", + "Webhook", + "Call", + "Provider" + ] + }, + "webhookType": { + "type": "string", + "description": "This is the type of the webhook, given the log is from a webhook." + }, + "resource": { + "type": "string", + "description": "This is the specific resource, relevant only to API logs.", + "enum": [ + "org", + "assistant", + "analytics", + "credential", + "phone-number", + "block", + "voice-library", + "provider", + "tool", + "token", + "template", + "squad", + "call", + "file", + "metric", + "log" + ] + }, + "requestDurationSeconds": { + "type": "number", + "description": "'This is how long the request took.", + "minimum": 0 + }, + "requestStartedAt": { + "type": "string", + "description": "This is the timestamp at which the request began." + }, + "requestFinishedAt": { "type": "string", + "description": "This is the timestamp at which the request finished." + }, + "requestBody": { + "type": "object", + "description": "This is the body of the request." + }, + "requestHttpMethod": { + "type": "string", + "description": "This is the request method.", "enum": [ - "gohighlevel" + "POST", + "GET", + "PUT", + "PATCH", + "DELETE" ] }, - "apiKey": { + "requestUrl": { "type": "string", - "description": "This is not returned in the API." + "description": "This is the request URL." }, - "id": { + "requestPath": { "type": "string", - "description": "This is the unique identifier for the credential." + "description": "This is the request path." 
}, - "orgId": { + "requestQuery": { "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "description": "This is the request query." }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "responseHttpCode": { + "type": "number", + "description": "This is the HTTP status code of the response." }, - "updatedAt": { - "format": "date-time", + "requestIpAddress": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "description": "This is the request IP address." }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 - } - }, - "required": [ - "provider", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" - ] - }, - "GoogleCredential": { - "type": "object", - "properties": { - "provider": { + "requestOrigin": { "type": "string", - "description": "This is the key for Gemini in Google AI Studio. Get it from here: https://aistudio.google.com/app/apikey", - "enum": [ - "google" - ] + "description": "This is the origin of the request." }, - "apiKey": { - "type": "string", - "maxLength": 10000, - "description": "This is not returned in the API." + "responseBody": { + "type": "object", + "description": "This is the body of the response." }, - "id": { + "requestHeaders": { + "type": "object", + "description": "These are the headers of the request." + }, + "error": { + "description": "This is the error, if one occurred.", + "allOf": [ + { + "$ref": "#/components/schemas/Error" + } + ] + }, + "assistantId": { "type": "string", - "description": "This is the unique identifier for the credential." + "description": "This is the ID of the assistant." }, - "orgId": { + "phoneNumberId": { "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "description": "This is the ID of the phone number." }, - "createdAt": { - "format": "date-time", + "customerId": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "description": "This is the ID of the customer." }, - "updatedAt": { - "format": "date-time", + "squadId": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "description": "This is the ID of the squad." }, - "name": { + "callId": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "description": "This is the ID of the call." } }, "required": [ - "provider", - "apiKey", - "id", + "time", "orgId", - "createdAt", - "updatedAt" + "type" ] }, - "GroqCredential": { + "LogsPaginatedResponse": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "groq" - ] - }, - "apiKey": { - "type": "string", - "description": "This is not returned in the API." - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created."
- }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "results": { + "type": "array", + "items": { + "$ref": "#/components/schemas/Log" + } }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "metadata": { + "$ref": "#/components/schemas/PaginationMeta" } }, "required": [ - "provider", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" + "results", + "metadata" ] }, - "InflectionAICredential": { + "ChatDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "description": "This is the api key for Pi in InflectionAI's console. Get it from here: https://developers.inflection.ai/keys, billing will need to be setup", - "enum": [ - "inflection-ai" - ] - }, - "apiKey": { - "type": "string", - "maxLength": 10000, - "description": "This is not returned in the API." - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." - }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "messages": { + "type": "array", + "items": { + "$ref": "#/components/schemas/OpenAIMessage" + } }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "assistantId": { + "type": "string" }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "assistant": { + "$ref": "#/components/schemas/CreateAssistantDTO" }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "assistantOverrides": { + "$ref": "#/components/schemas/AssistantOverrides" } }, "required": [ - "provider", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" + "messages" ] }, - "LangfuseCredential": { + "ChatServiceResponse": { + "type": "object", + "properties": {} + }, + "ChatCompletionMessageMetadata": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "langfuse" - ] - }, - "publicKey": { - "type": "string", - "description": "The public key for Langfuse project. Eg: pk-lf-..." - }, - "apiKey": { - "type": "string", - "description": "The secret key for Langfuse project. Eg: sk-lf-... .This is not returned in the API." + "taskName": { + "type": "string" }, - "apiUrl": { - "type": "string", - "description": "The host URL for Langfuse project. Eg: https://cloud.langfuse.com" + "taskType": { + "type": "string" }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." + "taskOutput": { + "type": "string" }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "taskState": { + "type": "object" }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." 
+ "nodeTrace": { + "type": "array", + "items": { + "type": "string" + } + } + }, + "required": [ + "taskName", + "taskType", + "taskOutput" + ] + }, + "ChatCompletionMessage": { + "type": "object", + "properties": { + "role": { + "type": "object" }, - "updatedAt": { - "format": "date-time", + "content": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "nullable": true }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "metadata": { + "$ref": "#/components/schemas/ChatCompletionMessageMetadata" } }, "required": [ - "provider", - "publicKey", - "apiKey", - "apiUrl", - "id", - "orgId", - "createdAt", - "updatedAt" + "role", + "content" ] }, - "LmntCredential": { + "Say": { "type": "object", "properties": { - "provider": { + "type": { "type": "string", "enum": [ - "lmnt" + "say" ] }, - "apiKey": { + "exact": { "type": "string", - "description": "This is not returned in the API." + "maxLength": 1000 }, - "id": { + "prompt": { "type": "string", - "description": "This is the unique identifier for the credential." + "maxLength": 1000 }, - "orgId": { + "name": { "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "maxLength": 80 }, - "createdAt": { - "format": "date-time", + "metadata": { + "type": "object", + "description": "This is for metadata you want to store on the task." + } + }, + "required": [ + "type", + "name" + ] + }, + "SayHook": { + "type": "object", + "properties": { + "type": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "enum": [ + "say" + ] }, - "updatedAt": { - "format": "date-time", + "metadata": { + "type": "object", + "description": "This is for metadata you want to store on the task." + }, + "exact": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "maxLength": 1000 }, - "name": { + "prompt": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "maxLength": 1000 } }, "required": [ - "provider", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" + "type" ] }, - "MakeCredential": { + "Hook": { "type": "object", "properties": { - "provider": { + "on": { "type": "string", "enum": [ - "make" - ] - }, - "teamId": { - "type": "string", - "description": "Team ID" + "task.start", + "task.output.confirmation", + "task.delayed" + ], + "maxLength": 80 }, - "region": { + "do": { + "type": "array", + "items": { + "$ref": "#/components/schemas/SayHook" + } + } + }, + "required": [ + "on", + "do" + ] + }, + "Gather": { + "type": "object", + "properties": { + "type": { "type": "string", - "description": "Region of your application. For example: eu1, eu2, us1, us2" + "enum": [ + "gather" + ] }, - "apiKey": { - "type": "string", - "description": "This is not returned in the API." + "output": { + "$ref": "#/components/schemas/JsonSchema" }, - "id": { - "type": "string", - "description": "This is the unique identifier for the credential." + "confirmContent": { + "type": "boolean", + "description": "This is whether or not the workflow should read back the gathered data to the user, and ask about its correctness." 
}, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "hooks": { + "description": "This is a list of hooks for a task.\nEach hook is a list of tasks to run on a trigger (such as on start, on failure, etc).\nOnly Say is supported for now.", + "type": "array", + "items": { + "$ref": "#/components/schemas/Hook" + } }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "maxRetries": { + "type": "number", + "description": "This is the number of times we should try to gather the information from the user before we fail over to the fail path. An example of this would be a user refusing to give their phone number for privacy reasons, and then going down a different path on account of this." }, - "updatedAt": { - "format": "date-time", + "literalTemplate": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "description": "This is a liquid templating string. On the first call to Gather, the template will be filled out with variables from the context, and will be spoken verbatim to the user. An example would be \"Based on your zipcode, it looks like you could be in one of these counties: {{ counties | join: \", \" }}. Which one do you live in?\"" }, "name": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "maxLength": 80 + }, + "metadata": { + "type": "object", + "description": "This is for metadata you want to store on the task." } }, "required": [ - "provider", - "teamId", - "region", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" + "type", + "output", + "name" ] }, - "OpenAICredential": { + "ApiRequest": { "type": "object", "properties": { - "provider": { + "type": { "type": "string", "enum": [ - "openai" + "apiRequest" ] }, - "apiKey": { + "method": { "type": "string", - "description": "This is not returned in the API." + "enum": [ + "POST", + "GET" + ] }, - "id": { + "url": { "type": "string", - "description": "This is the unique identifier for the credential." + "description": "API endpoint to send requests to." }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "headers": { + "description": "These are the custom headers to include in the Api Request sent.\n\nEach key-value pair represents a header name and its value.", + "allOf": [ + { + "$ref": "#/components/schemas/JsonSchema" + } + ] }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "body": { + "description": "This defines the JSON body of your Api Request. For example, if `body_schema`\nincluded \"my_field\": \"my_gather_statement.user_age\", then the JSON body sent to the server would have that particular value assigned to it.\nRight now, only data from gather statements is supported.", + "allOf": [ + { + "$ref": "#/components/schemas/JsonSchema" + } + ] }, - "updatedAt": { - "format": "date-time", + "mode": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated."
+ "description": "This is the mode of the Api Request.\nWe only support BLOCKING and BACKGROUND for now.", + "enum": [ + "blocking", + "background" + ] + }, + "hooks": { + "description": "This is a list of hooks for a task.\nEach hook is a list of tasks to run on a trigger (such as on start, on failure, etc).\nOnly Say is supported for now.", + "type": "array", + "items": { + "$ref": "#/components/schemas/Hook" + } + }, + "output": { + "description": "This is the schema for the outputs of the Api Request.", + "allOf": [ + { + "$ref": "#/components/schemas/JsonSchema" + } + ] }, "name": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "maxLength": 80 + }, + "metadata": { + "type": "object", + "description": "This is for metadata you want to store on the task." } }, "required": [ - "provider", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" + "type", + "method", + "url", + "mode", + "name" ] }, - "OpenRouterCredential": { + "Hangup": { "type": "object", "properties": { - "provider": { + "type": { "type": "string", "enum": [ - "openrouter" + "hangup" ] }, - "apiKey": { + "name": { "type": "string", - "description": "This is not returned in the API." + "maxLength": 80 }, - "id": { + "metadata": { + "type": "object", + "description": "This is for metadata you want to store on the task." + } + }, + "required": [ + "type", + "name" + ] + }, + "Transfer": { + "type": "object", + "properties": { + "type": { "type": "string", - "description": "This is the unique identifier for the credential." + "enum": [ + "transfer" + ] }, - "orgId": { - "type": "string", - "description": "This is the unique identifier for the org that this credential belongs to." + "destination": { + "type": "object" }, - "createdAt": { - "format": "date-time", + "name": { "type": "string", - "description": "This is the ISO 8601 date-time string of when the credential was created." + "maxLength": 80 }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + "metadata": { + "type": "object", + "description": "This is for metadata you want to store on the task." + } + }, + "required": [ + "type", + "destination", + "name" + ] + }, + "CreateWorkflowDTO": { + "type": "object", + "properties": { + "nodes": { + "type": "array", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/Say", + "title": "Say" + }, + { + "$ref": "#/components/schemas/Gather", + "title": "Gather" + }, + { + "$ref": "#/components/schemas/ApiRequest", + "title": "ApiRequest" + }, + { + "$ref": "#/components/schemas/Hangup", + "title": "Hangup" + }, + { + "$ref": "#/components/schemas/Transfer", + "title": "Transfer" + } + ] + } }, "name": { "type": "string", - "description": "This is the name of credential. 
This is just for your reference.", - "minLength": 1, - "maxLength": 40 + "maxLength": 80 + }, + "edges": { + "type": "array", + "items": { + "$ref": "#/components/schemas/Edge" + } } }, "required": [ - "provider", - "apiKey", - "id", - "orgId", - "createdAt", - "updatedAt" + "nodes", + "name", + "edges" ] }, - "PerplexityAICredential": { + "ChatCompletionsDTO": { + "type": "object", + "properties": { + "messages": { + "type": "array", + "items": { + "$ref": "#/components/schemas/ChatCompletionMessage" + } + }, + "workflowId": { + "type": "string" + }, + "workflow": { + "$ref": "#/components/schemas/CreateWorkflowDTO" + } + }, + "required": [ + "messages" + ] + }, + "AnthropicCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "perplexity-ai" + "anthropic" ] }, "apiKey": { "type": "string", + "maxLength": 10000, "description": "This is not returned in the API." }, "id": { @@ -16494,17 +21041,18 @@ "updatedAt" ] }, - "PlayHTCredential": { + "AnyscaleCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "playht" + "anyscale" ] }, "apiKey": { "type": "string", + "maxLength": 10000, "description": "This is not returned in the API." }, "id": { @@ -16530,9 +21078,6 @@ "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 - }, - "userId": { - "type": "string" } }, "required": [ @@ -16541,17 +21086,16 @@ "id", "orgId", "createdAt", - "updatedAt", - "userId" + "updatedAt" ] }, - "RimeAICredential": { + "AssemblyAICredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "rime-ai" + "assembly-ai" ] }, "apiKey": { @@ -16592,18 +21136,52 @@ "updatedAt" ] }, - "RunpodCredential": { + "AzureCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "runpod" + "azure" + ] + }, + "service": { + "type": "string", + "description": "This is the service being used in Azure.", + "enum": [ + "speech", + "blob_storage" + ], + "default": "speech" + }, + "region": { + "type": "string", + "description": "This is the region of the Azure resource.", + "enum": [ + "australia", + "canadaeast", + "canadacentral", + "eastus2", + "eastus", + "france", + "india", + "japaneast", + "japanwest", + "uaenorth", + "northcentralus", + "norway", + "southcentralus", + "swedencentral", + "switzerland", + "uk", + "westus", + "westus3" ] }, "apiKey": { "type": "string", - "description": "This is not returned in the API." + "description": "This is not returned in the API.", + "maxLength": 10000 }, "id": { "type": "string", @@ -16628,46 +21206,99 @@ "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "bucketPlan": { + "description": "This is the bucket plan that can be provided to store call artifacts in Azure Blob Storage.", + "allOf": [ + { + "$ref": "#/components/schemas/AzureBlobStorageBucketPlan" + } + ] } }, "required": [ "provider", - "apiKey", + "service", "id", "orgId", "createdAt", "updatedAt" ] }, - "S3Credential": { + "AzureOpenAICredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "s3" - ], - "description": "Credential provider. Only allowed value is s3" - }, - "awsAccessKeyId": { - "type": "string", - "description": "AWS access key ID." - }, - "awsSecretAccessKey": { - "type": "string", - "description": "AWS access key secret. This is not returned in the API." 
+ "azure-openai" + ] }, "region": { "type": "string", - "description": "AWS region in which the S3 bucket is located." + "enum": [ + "australia", + "canadaeast", + "canadacentral", + "eastus2", + "eastus", + "france", + "india", + "japaneast", + "japanwest", + "uaenorth", + "northcentralus", + "norway", + "southcentralus", + "swedencentral", + "switzerland", + "uk", + "westus", + "westus3" + ] }, - "s3BucketName": { + "models": { + "type": "array", + "enum": [ + "gpt-4o-2024-08-06-ptu", + "gpt-4o-2024-08-06", + "gpt-4o-mini-2024-07-18", + "gpt-4o-2024-05-13", + "gpt-4-turbo-2024-04-09", + "gpt-4-0125-preview", + "gpt-4-1106-preview", + "gpt-4-0613", + "gpt-35-turbo-0125", + "gpt-35-turbo-1106" + ], + "example": [ + "gpt-4-0125-preview", + "gpt-4-0613" + ], + "items": { + "type": "string", + "enum": [ + "gpt-4o-2024-08-06-ptu", + "gpt-4o-2024-08-06", + "gpt-4o-mini-2024-07-18", + "gpt-4o-2024-05-13", + "gpt-4-turbo-2024-04-09", + "gpt-4-0125-preview", + "gpt-4-1106-preview", + "gpt-4-0613", + "gpt-35-turbo-0125", + "gpt-35-turbo-1106" + ] + } + }, + "openAIKey": { "type": "string", - "description": "AWS S3 bucket name." + "maxLength": 10000, + "description": "This is not returned in the API." }, - "s3PathPrefix": { + "ocpApimSubscriptionKey": { "type": "string", - "description": "The path prefix for the uploaded recording. Ex. \"recordings/\"" + "description": "This is not returned in the API." }, "id": { "type": "string", @@ -16692,34 +21323,34 @@ "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "openAIEndpoint": { + "type": "string", + "maxLength": 10000 } }, "required": [ "provider", - "awsAccessKeyId", - "awsSecretAccessKey", "region", - "s3BucketName", - "s3PathPrefix", + "models", + "openAIKey", "id", "orgId", "createdAt", - "updatedAt" + "updatedAt", + "openAIEndpoint" ] }, - "TavusCredential": { + "ByoSipTrunkCredential": { "type": "object", "properties": { "provider": { "type": "string", + "description": "This can be used to bring your own SIP trunks or to connect to a Carrier.", "enum": [ - "tavus" + "byo-sip-trunk" ] }, - "apiKey": { - "type": "string", - "description": "This is not returned in the API." - }, "id": { "type": "string", "description": "This is the unique identifier for the credential." @@ -16743,24 +21374,60 @@ "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "gateways": { + "description": "This is the list of SIP trunk's gateways.", + "type": "array", + "items": { + "$ref": "#/components/schemas/SipTrunkGateway" + } + }, + "outboundAuthenticationPlan": { + "description": "This can be used to configure the outbound authentication if required by the SIP trunk.", + "allOf": [ + { + "$ref": "#/components/schemas/SipTrunkOutboundAuthenticationPlan" + } + ] + }, + "outboundLeadingPlusEnabled": { + "type": "boolean", + "description": "This ensures the outbound origination attempts have a leading plus. Defaults to false to match conventional telecom behavior.\n\nUsage:\n- Vonage/Twilio requires leading plus for all outbound calls. Set this to true.\n\n@default false" + }, + "techPrefix": { + "type": "string", + "description": "This can be used to configure the tech prefix on outbound calls. This is an advanced property.", + "maxLength": 10000 + }, + "sipDiversionHeader": { + "type": "string", + "description": "This can be used to enable the SIP diversion header for authenticating the calling number if the SIP trunk supports it. 
This is an advanced property.", + "maxLength": 10000 + }, + "sbcConfiguration": { + "description": "This is an advanced configuration for enterprise deployments. This uses the onprem SBC to trunk into the SIP trunk's `gateways`, rather than the managed SBC provided by Vapi.", + "allOf": [ + { + "$ref": "#/components/schemas/SbcConfiguration" + } + ] } }, "required": [ - "provider", - "apiKey", "id", "orgId", "createdAt", - "updatedAt" + "updatedAt", + "gateways" ] }, - "TogetherAICredential": { + "CartesiaCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "together-ai" + "cartesia" ] }, "apiKey": { @@ -16801,17 +21468,18 @@ "updatedAt" ] }, - "TwilioCredential": { + "CerebrasCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "twilio" + "cerebras" ] }, - "authToken": { + "apiKey": { "type": "string", + "maxLength": 10000, "description": "This is not returned in the API." }, "id": { @@ -16837,38 +21505,38 @@ "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 - }, - "accountSid": { - "type": "string" } }, "required": [ "provider", - "authToken", + "apiKey", "id", "orgId", "createdAt", - "updatedAt", - "accountSid" - ] - }, - "VonageCredential": { - "type": "object", - "properties": { - "vonageApplicationPrivateKey": { - "type": "string", - "description": "This is not returned in the API.", - "maxLength": 10000 - }, + "updatedAt" + ] + }, + "CloudflareCredential": { + "type": "object", + "properties": { "provider": { "type": "string", "enum": [ - "vonage" - ] + "cloudflare" + ], + "description": "Credential provider. Only allowed value is cloudflare" }, - "apiSecret": { + "accountId": { "type": "string", - "description": "This is not returned in the API." + "description": "Cloudflare Account Id." + }, + "apiKey": { + "type": "string", + "description": "Cloudflare API Key / Token." + }, + "accountEmail": { + "type": "string", + "description": "Cloudflare Account Email." }, "id": { "type": "string", @@ -16888,44 +21556,59 @@ "type": "string", "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, - "vonageApplicationId": { - "type": "string", - "description": "This is the Vonage Application ID for the credential.\n\nOnly relevant for Vonage credentials.", - "maxLength": 10000 - }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 }, - "apiKey": { - "type": "string" + "bucketPlan": { + "description": "This is the bucket plan that can be provided to store call artifacts in R2", + "allOf": [ + { + "$ref": "#/components/schemas/CloudflareR2BucketPlan" + } + ] } }, "required": [ - "vonageApplicationPrivateKey", "provider", - "apiSecret", "id", "orgId", "createdAt", - "updatedAt", - "vonageApplicationId", - "apiKey" + "updatedAt" ] }, - "WebhookCredential": { + "Oauth2AuthenticationSession": { + "type": "object", + "properties": { + "accessToken": { + "type": "string", + "description": "This is the OAuth2 access token." + }, + "expiresAt": { + "format": "date-time", + "type": "string", + "description": "This is the OAuth2 access token expiration." + } + } + }, + "CustomLLMCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "webhook" + "custom-llm" ] }, + "apiKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." 
+ }, "authenticationPlan": { - "description": "This is the authentication plan. Currently supports OAuth2 RFC 6749.", + "description": "This is the authentication plan. Currently supports OAuth2 RFC 6749. To use Bearer authentication, use apiKey", "allOf": [ { "$ref": "#/components/schemas/OAuth2AuthenticationPlan" @@ -16967,27 +21650,24 @@ }, "required": [ "provider", - "authenticationPlan", + "apiKey", "id", "orgId", "createdAt", - "updatedAt", - "authenticationSession" + "updatedAt" ] }, - "XAiCredential": { + "DeepgramCredential": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the api key for Grok in XAi's console. Get it from here: https://console.x.ai", "enum": [ - "xai" + "deepgram" ] }, "apiKey": { "type": "string", - "maxLength": 10000, "description": "This is not returned in the API." }, "id": { @@ -17013,6 +21693,10 @@ "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "apiUrl": { + "type": "string", + "description": "This can be used to point to an onprem Deepgram instance. Defaults to api.deepgram.com." } }, "required": [ @@ -17024,70 +21708,36 @@ "updatedAt" ] }, - "CreateAnthropicCredentialDTO": { + "DeepInfraCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "anthropic" + "deepinfra" ] }, "apiKey": { "type": "string", - "maxLength": 10000, "description": "This is not returned in the API." }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 - } - }, - "required": [ - "provider", - "apiKey" - ] - }, - "CreateAnyscaleCredentialDTO": { - "type": "object", - "properties": { - "provider": { + "id": { "type": "string", - "enum": [ - "anyscale" - ] + "description": "This is the unique identifier for the credential." }, - "apiKey": { + "orgId": { "type": "string", - "maxLength": 10000, - "description": "This is not returned in the API." + "description": "This is the unique identifier for the org that this credential belongs to." }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 - } - }, - "required": [ - "provider", - "apiKey" - ] - }, - "CreateAssemblyAICredentialDTO": { - "type": "object", - "properties": { - "provider": { + "createdAt": { + "format": "date-time", "type": "string", - "enum": [ - "assembly-ai" - ] + "description": "This is the ISO 8601 date-time string of when the credential was created." }, - "apiKey": { + "updatedAt": { + "format": "date-time", "type": "string", - "description": "This is not returned in the API." + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." 
}, "name": { "type": "string", @@ -17098,135 +21748,43 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateAzureCredentialDTO": { + "DeepSeekCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "azure" - ] - }, - "service": { - "type": "string", - "description": "This is the service being used in Azure.", - "enum": [ - "speech" - ], - "default": "speech" - }, - "region": { - "type": "string", - "description": "This is the region of the Azure resource.", - "enum": [ - "australia", - "canada", - "eastus2", - "eastus", - "france", - "india", - "japan", - "uaenorth", - "northcentralus", - "norway", - "southcentralus", - "sweden", - "switzerland", - "uk", - "westus", - "westus3" + "deep-seek" ] }, "apiKey": { "type": "string", - "description": "This is not returned in the API.", - "maxLength": 10000 + "description": "This is not returned in the API." }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 - } - }, - "required": [ - "provider", - "service" - ] - }, - "CreateAzureOpenAICredentialDTO": { - "type": "object", - "properties": { - "provider": { + "id": { "type": "string", - "enum": [ - "azure-openai" - ] + "description": "This is the unique identifier for the credential." }, - "region": { + "orgId": { "type": "string", - "enum": [ - "australia", - "canada", - "eastus2", - "eastus", - "france", - "india", - "japan", - "uaenorth", - "northcentralus", - "norway", - "southcentralus", - "sweden", - "switzerland", - "uk", - "westus", - "westus3" - ] - }, - "models": { - "type": "array", - "enum": [ - "gpt-4o-2024-08-06", - "gpt-4o-mini-2024-07-18", - "gpt-4o-2024-05-13", - "gpt-4-turbo-2024-04-09", - "gpt-4-0125-preview", - "gpt-4-1106-preview", - "gpt-4-0613", - "gpt-35-turbo-0125", - "gpt-35-turbo-1106" - ], - "example": [ - "gpt-4-0125-preview", - "gpt-4-0613" - ], - "items": { - "type": "string", - "enum": [ - "gpt-4o-2024-08-06", - "gpt-4o-mini-2024-07-18", - "gpt-4o-2024-05-13", - "gpt-4-turbo-2024-04-09", - "gpt-4-0125-preview", - "gpt-4-1106-preview", - "gpt-4-0613", - "gpt-35-turbo-0125", - "gpt-35-turbo-1106" - ] - } + "description": "This is the unique identifier for the org that this credential belongs to." }, - "openAIKey": { + "createdAt": { + "format": "date-time", "type": "string", - "maxLength": 10000, - "description": "This is not returned in the API." + "description": "This is the ISO 8601 date-time string of when the credential was created." }, - "openAIEndpoint": { + "updatedAt": { + "format": "date-time", "type": "string", - "maxLength": 10000 + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." 
}, "name": { "type": "string", @@ -17237,58 +21795,44 @@ }, "required": [ "provider", - "region", - "models", - "openAIKey", - "openAIEndpoint" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateByoSipTrunkCredentialDTO": { + "ElevenLabsCredential": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This can be used to bring your own SIP trunks or to connect to a Carrier.", "enum": [ - "byo-sip-trunk" + "11labs" ] }, - "gateways": { - "description": "This is the list of SIP trunk's gateways.", - "type": "array", - "items": { - "$ref": "#/components/schemas/SipTrunkGateway" - } - }, - "outboundAuthenticationPlan": { - "description": "This can be used to configure the outbound authentication if required by the SIP trunk.", - "allOf": [ - { - "$ref": "#/components/schemas/SipTrunkOutboundAuthenticationPlan" - } - ] + "apiKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." }, - "outboundLeadingPlusEnabled": { - "type": "boolean", - "description": "This ensures the outbound origination attempts have a leading plus. Defaults to false to match conventional telecom behavior.\n\nUsage:\n- Vonage/Twilio requires leading plus for all outbound calls. Set this to true.\n\n@default false" + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." }, - "techPrefix": { + "orgId": { "type": "string", - "description": "This can be used to configure the tech prefix on outbound calls. This is an advanced property.", - "maxLength": 10000 + "description": "This is the unique identifier for the org that this credential belongs to." }, - "sipDiversionHeader": { + "createdAt": { + "format": "date-time", "type": "string", - "description": "This can be used to enable the SIP diversion header for authenticating the calling number if the SIP trunk supports it. This is an advanced property.", - "maxLength": 10000 + "description": "This is the ISO 8601 date-time string of when the credential was created." }, - "sbcConfiguration": { - "description": "This is an advanced configuration for enterprise deployments. This uses the onprem SBC to trunk into the SIP trunk's `gateways`, rather than the managed SBC provided by Vapi.", - "allOf": [ - { - "$ref": "#/components/schemas/SbcConfiguration" - } - ] + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, "name": { "type": "string", @@ -17298,55 +21842,103 @@ } }, "required": [ - "gateways" + "provider", + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateCartesiaCredentialDTO": { + "GcpCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "cartesia" + "gcp" ] }, - "apiKey": { + "id": { "type": "string", - "description": "This is not returned in the API." + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, "name": { "type": "string", "description": "This is the name of credential. 
This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "gcpKey": { + "description": "This is the GCP key. This is the JSON that can be generated in the Google Cloud Console at https://console.cloud.google.com/iam-admin/serviceaccounts/details//keys.\n\nThe schema is identical to the JSON that GCP outputs.", + "allOf": [ + { + "$ref": "#/components/schemas/GcpKey" + } + ] + }, + "bucketPlan": { + "description": "This is the bucket plan that can be provided to store call artifacts in GCP.", + "allOf": [ + { + "$ref": "#/components/schemas/BucketPlan" + } + ] } }, "required": [ "provider", - "apiKey" + "id", + "orgId", + "createdAt", + "updatedAt", + "gcpKey" ] }, - "CreateCustomLLMCredentialDTO": { + "GladiaCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "custom-llm" + "gladia" ] }, "apiKey": { "type": "string", - "maxLength": 10000, "description": "This is not returned in the API." }, - "authenticationPlan": { - "description": "This is the authentication plan. Currently supports OAuth2 RFC 6749. To use Bearer authentication, use apiKey", - "allOf": [ - { - "$ref": "#/components/schemas/OAuth2AuthenticationPlan" - } - ] + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, "name": { "type": "string", @@ -17357,25 +21949,43 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateDeepgramCredentialDTO": { + "GoHighLevelCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "deepgram" + "gohighlevel" ] }, "apiKey": { "type": "string", "description": "This is not returned in the API." }, - "apiUrl": { + "id": { "type": "string", - "description": "This can be used to point to an onprem Deepgram instance. Defaults to api.deepgram.com." + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, "name": { "type": "string", @@ -17386,22 +21996,46 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateDeepInfraCredentialDTO": { + "GoogleCredential": { "type": "object", "properties": { "provider": { "type": "string", + "description": "This is the key for Gemini in Google AI Studio. Get it from here: https://aistudio.google.com/app/apikey", "enum": [ - "deepinfra" + "google" ] }, "apiKey": { "type": "string", + "maxLength": 10000, "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." 
+ }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", @@ -17411,23 +22045,44 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateElevenLabsCredentialDTO": { + "GroqCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "11labs" + "groq" ] }, "apiKey": { "type": "string", - "maxLength": 10000, "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", @@ -17437,33 +22092,45 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateGcpCredentialDTO": { + "InflectionAICredential": { "type": "object", "properties": { "provider": { "type": "string", + "description": "This is the api key for Pi in InflectionAI's console. Get it from here: https://developers.inflection.ai/keys, billing will need to be setup", "enum": [ - "gcp" + "inflection-ai" ] }, - "gcpKey": { - "description": "This is the GCP key. This is the JSON that can be generated in the Google Cloud Console at https://console.cloud.google.com/iam-admin/serviceaccounts/details//keys.\n\nThe schema is identical to the JSON that GCP outputs.", - "allOf": [ - { - "$ref": "#/components/schemas/GcpKey" - } - ] + "apiKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." }, - "bucketPlan": { - "description": "This is the bucket plan that can be provided to store call artifacts in GCP.", - "allOf": [ - { - "$ref": "#/components/schemas/BucketPlan" - } - ] + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." 
}, "name": { "type": "string", @@ -17474,21 +22141,51 @@ }, "required": [ "provider", - "gcpKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateGladiaCredentialDTO": { + "LangfuseCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "gladia" + "langfuse" ] }, + "publicKey": { + "type": "string", + "description": "The public key for Langfuse project. Eg: pk-lf-..." + }, "apiKey": { "type": "string", - "description": "This is not returned in the API." + "description": "The secret key for Langfuse project. Eg: sk-lf-... .This is not returned in the API." + }, + "apiUrl": { + "type": "string", + "description": "The host URL for Langfuse project. Eg: https://cloud.langfuse.com" + }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, "name": { "type": "string", @@ -17499,22 +22196,46 @@ }, "required": [ "provider", - "apiKey" + "publicKey", + "apiKey", + "apiUrl", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateGoHighLevelCredentialDTO": { + "LmntCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "gohighlevel" + "lmnt" ] }, "apiKey": { "type": "string", "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", @@ -17524,24 +22245,52 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateGoogleCredentialDTO": { + "MakeCredential": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the key for Gemini in Google AI Studio. Get it from here: https://aistudio.google.com/app/apikey", "enum": [ - "google" + "make" ] }, + "teamId": { + "type": "string", + "description": "Team ID" + }, + "region": { + "type": "string", + "description": "Region of your application. For example: eu1, eu2, us1, us2" + }, "apiKey": { "type": "string", - "maxLength": 10000, "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." 
+ }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", @@ -17551,22 +22300,46 @@ }, "required": [ "provider", - "apiKey" + "teamId", + "region", + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateGroqCredentialDTO": { + "OpenAICredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "groq" + "openai" ] }, "apiKey": { "type": "string", "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", @@ -17576,24 +22349,44 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateInflectionAICredentialDTO": { + "OpenRouterCredential": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the api key for Pi in InflectionAI's console. Get it from here: https://developers.inflection.ai/keys, billing will need to be setup", "enum": [ - "inflection-ai" + "openrouter" ] }, "apiKey": { "type": "string", - "maxLength": 10000, "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", @@ -17603,29 +22396,43 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateLangfuseCredentialDTO": { + "PerplexityAICredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "langfuse" + "perplexity-ai" ] }, - "publicKey": { + "apiKey": { "type": "string", - "description": "The public key for Langfuse project. Eg: pk-lf-..." + "description": "This is not returned in the API." }, - "apiKey": { + "id": { "type": "string", - "description": "The secret key for Langfuse project. Eg: sk-lf-... .This is not returned in the API." + "description": "This is the unique identifier for the credential." }, - "apiUrl": { + "orgId": { "type": "string", - "description": "The host URL for Langfuse project. Eg: https://cloud.langfuse.com" + "description": "This is the unique identifier for the org that this credential belongs to." 
+ }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, "name": { "type": "string", @@ -17636,56 +22443,94 @@ }, "required": [ "provider", - "publicKey", "apiKey", - "apiUrl" + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateLmntCredentialDTO": { + "PlayHTCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "lmnt" + "playht" ] }, "apiKey": { "type": "string", "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "userId": { + "type": "string" } }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt", + "userId" ] }, - "CreateMakeCredentialDTO": { + "RimeAICredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "make" + "rime-ai" ] }, - "teamId": { + "apiKey": { + "type": "string", + "description": "This is not returned in the API." + }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { "type": "string", - "description": "Team ID" + "description": "This is the unique identifier for the org that this credential belongs to." }, - "region": { + "createdAt": { + "format": "date-time", "type": "string", - "description": "Region of your application. For example: eu1, eu2, us1, us2" + "description": "This is the ISO 8601 date-time string of when the credential was created." }, - "apiKey": { + "updatedAt": { + "format": "date-time", "type": "string", - "description": "This is not returned in the API." + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, "name": { "type": "string", @@ -17696,24 +22541,44 @@ }, "required": [ "provider", - "teamId", - "region", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateOpenAICredentialDTO": { + "RunpodCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "openai" + "runpod" ] }, "apiKey": { "type": "string", "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." 
+ }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", @@ -17723,21 +22588,60 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateOpenRouterCredentialDTO": { + "S3Credential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "openrouter" - ] + "s3" + ], + "description": "Credential provider. Only allowed value is s3" }, - "apiKey": { + "awsAccessKeyId": { "type": "string", - "description": "This is not returned in the API." + "description": "AWS access key ID." + }, + "awsSecretAccessKey": { + "type": "string", + "description": "AWS access key secret. This is not returned in the API." + }, + "region": { + "type": "string", + "description": "AWS region in which the S3 bucket is located." + }, + "s3BucketName": { + "type": "string", + "description": "AWS S3 bucket name." + }, + "s3PathPrefix": { + "type": "string", + "description": "The path prefix for the uploaded recording. Ex. \"recordings/\"" + }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, "name": { "type": "string", @@ -17748,22 +22652,48 @@ }, "required": [ "provider", - "apiKey" + "awsAccessKeyId", + "awsSecretAccessKey", + "region", + "s3BucketName", + "s3PathPrefix", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreatePerplexityAICredentialDTO": { + "SmallestAICredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "perplexity-ai" + "smallest-ai" ] }, "apiKey": { "type": "string", "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", @@ -17773,24 +22703,43 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreatePlayHTCredentialDTO": { + "TavusCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "playht" + "tavus" ] }, "apiKey": { "type": "string", "description": "This is not returned in the API." }, - "userId": { - "type": "string" + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." 
+ }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, "name": { "type": "string", @@ -17802,22 +22751,43 @@ "required": [ "provider", "apiKey", - "userId" + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateRimeAICredentialDTO": { + "TogetherAICredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "rime-ai" + "together-ai" ] }, "apiKey": { "type": "string", "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", @@ -17827,92 +22797,169 @@ }, "required": [ "provider", - "apiKey" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateRunpodCredentialDTO": { + "TwilioCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "runpod" + "twilio" ] }, - "apiKey": { + "authToken": { "type": "string", "description": "This is not returned in the API." }, + "id": { + "type": "string", + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "accountSid": { + "type": "string" } }, "required": [ "provider", - "apiKey" + "authToken", + "id", + "orgId", + "createdAt", + "updatedAt", + "accountSid" ] }, - "CreateS3CredentialDTO": { + "VonageCredential": { "type": "object", "properties": { + "vonageApplicationPrivateKey": { + "type": "string", + "description": "This is not returned in the API.", + "maxLength": 10000 + }, "provider": { "type": "string", "enum": [ - "s3" - ], - "description": "Credential provider. Only allowed value is s3" + "vonage" + ] }, - "awsAccessKeyId": { + "apiSecret": { "type": "string", - "description": "AWS access key ID." + "description": "This is not returned in the API." }, - "awsSecretAccessKey": { + "id": { "type": "string", - "description": "AWS access key secret. This is not returned in the API." + "description": "This is the unique identifier for the credential." 
}, - "region": { + "orgId": { "type": "string", - "description": "AWS region in which the S3 bucket is located." + "description": "This is the unique identifier for the org that this credential belongs to." }, - "s3BucketName": { + "createdAt": { + "format": "date-time", "type": "string", - "description": "AWS S3 bucket name." + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, - "s3PathPrefix": { + "vonageApplicationId": { "type": "string", - "description": "The path prefix for the uploaded recording. Ex. \"recordings/\"" + "description": "This is the Vonage Application ID for the credential.\n\nOnly relevant for Vonage credentials.", + "maxLength": 10000 }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "apiKey": { + "type": "string" } }, "required": [ + "vonageApplicationPrivateKey", "provider", - "awsAccessKeyId", - "awsSecretAccessKey", - "region", - "s3BucketName", - "s3PathPrefix" + "apiSecret", + "id", + "orgId", + "createdAt", + "updatedAt", + "vonageApplicationId", + "apiKey" ] }, - "CreateTavusCredentialDTO": { + "WebhookCredential": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "tavus" + "webhook" ] }, - "apiKey": { + "authenticationPlan": { + "description": "This is the authentication plan. Currently supports OAuth2 RFC 6749.", + "allOf": [ + { + "$ref": "#/components/schemas/OAuth2AuthenticationPlan" + } + ] + }, + "id": { "type": "string", - "description": "This is not returned in the API." + "description": "This is the unique identifier for the credential." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this credential belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the credential was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." + }, + "authenticationSession": { + "description": "This is the authentication session for the credential. Available for credentials that have an authentication plan.", + "allOf": [ + { + "$ref": "#/components/schemas/Oauth2AuthenticationSession" + } + ] }, "name": { "type": "string", @@ -17923,49 +22970,46 @@ }, "required": [ "provider", - "apiKey" + "authenticationPlan", + "id", + "orgId", + "createdAt", + "updatedAt", + "authenticationSession" ] }, - "CreateTogetherAICredentialDTO": { + "XAiCredential": { "type": "object", "properties": { "provider": { "type": "string", + "description": "This is the api key for Grok in XAi's console. Get it from here: https://console.x.ai", "enum": [ - "together-ai" + "xai" ] }, "apiKey": { "type": "string", + "maxLength": 10000, "description": "This is not returned in the API." }, - "name": { + "id": { "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 - } - }, - "required": [ - "provider", - "apiKey" - ] - }, - "CreateTwilioCredentialDTO": { - "type": "object", - "properties": { - "provider": { + "description": "This is the unique identifier for the credential." 
+ }, + "orgId": { "type": "string", - "enum": [ - "twilio" - ] + "description": "This is the unique identifier for the org that this credential belongs to." }, - "authToken": { + "createdAt": { + "format": "date-time", "type": "string", - "description": "This is not returned in the API." + "description": "This is the ISO 8601 date-time string of when the credential was created." }, - "accountSid": { - "type": "string" + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the assistant was last updated." }, "name": { "type": "string", @@ -17976,26 +23020,27 @@ }, "required": [ "provider", - "authToken", - "accountSid" + "apiKey", + "id", + "orgId", + "createdAt", + "updatedAt" ] }, - "CreateVonageCredentialDTO": { + "CreateCerebrasCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", "enum": [ - "vonage" + "cerebras" ] }, - "apiSecret": { + "apiKey": { "type": "string", + "maxLength": 10000, "description": "This is not returned in the API." }, - "apiKey": { - "type": "string" - }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", @@ -18005,26 +23050,23 @@ }, "required": [ "provider", - "apiSecret", "apiKey" ] }, - "CreateWebhookCredentialDTO": { + "CreateGoogleCredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", + "description": "This is the key for Gemini in Google AI Studio. Get it from here: https://aistudio.google.com/app/apikey", "enum": [ - "webhook" + "google" ] }, - "authenticationPlan": { - "description": "This is the authentication plan. Currently supports OAuth2 RFC 6749.", - "allOf": [ - { - "$ref": "#/components/schemas/OAuth2AuthenticationPlan" - } - ] + "apiKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." }, "name": { "type": "string", @@ -18035,17 +23077,17 @@ }, "required": [ "provider", - "authenticationPlan" + "apiKey" ] }, - "CreateXAiCredentialDTO": { + "CreateInflectionAICredentialDTO": { "type": "object", "properties": { "provider": { "type": "string", - "description": "This is the api key for Grok in XAi's console. Get it from here: https://console.x.ai", + "description": "This is the api key for Pi in InflectionAI's console. Get it from here: https://developers.inflection.ai/keys, billing will need to be setup", "enum": [ - "xai" + "inflection-ai" ] }, "apiKey": { @@ -18068,12 +23110,6 @@ "UpdateAnthropicCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "anthropic" - ] - }, "apiKey": { "type": "string", "maxLength": 10000, @@ -18085,21 +23121,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateAnyscaleCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "anyscale" - ] - }, "apiKey": { "type": "string", "maxLength": 10000, @@ -18111,21 +23137,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateAssemblyAICredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "assembly-ai" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." 
@@ -18136,26 +23152,17 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateAzureCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "azure" - ] - }, "service": { "type": "string", "description": "This is the service being used in Azure.", "enum": [ - "speech" + "speech", + "blob_storage" ], "default": "speech" }, @@ -18164,17 +23171,19 @@ "description": "This is the region of the Azure resource.", "enum": [ "australia", - "canada", + "canadaeast", + "canadacentral", "eastus2", "eastus", "france", "india", - "japan", + "japaneast", + "japanwest", "uaenorth", "northcentralus", "norway", "southcentralus", - "sweden", + "swedencentral", "switzerland", "uk", "westus", @@ -18191,37 +23200,37 @@ "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "bucketPlan": { + "description": "This is the bucket plan that can be provided to store call artifacts in Azure Blob Storage.", + "allOf": [ + { + "$ref": "#/components/schemas/AzureBlobStorageBucketPlan" + } + ] } - }, - "required": [ - "provider", - "service" - ] + } }, "UpdateAzureOpenAICredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "azure-openai" - ] - }, "region": { "type": "string", "enum": [ "australia", - "canada", + "canadaeast", + "canadacentral", "eastus2", "eastus", "france", "india", - "japan", + "japaneast", + "japanwest", "uaenorth", "northcentralus", "norway", "southcentralus", - "sweden", + "swedencentral", "switzerland", "uk", "westus", @@ -18231,6 +23240,7 @@ "models": { "type": "array", "enum": [ + "gpt-4o-2024-08-06-ptu", "gpt-4o-2024-08-06", "gpt-4o-mini-2024-07-18", "gpt-4o-2024-05-13", @@ -18248,6 +23258,7 @@ "items": { "type": "string", "enum": [ + "gpt-4o-2024-08-06-ptu", "gpt-4o-2024-08-06", "gpt-4o-mini-2024-07-18", "gpt-4o-2024-05-13", @@ -18265,34 +23276,30 @@ "maxLength": 10000, "description": "This is not returned in the API." }, - "openAIEndpoint": { + "ocpApimSubscriptionKey": { "type": "string", - "maxLength": 10000 + "description": "This is not returned in the API." }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "openAIEndpoint": { + "type": "string", + "maxLength": 10000 } - }, - "required": [ - "provider", - "region", - "models", - "openAIKey", - "openAIEndpoint" - ] + } }, "UpdateByoSipTrunkCredentialDTO": { "type": "object", "properties": { - "provider": { + "name": { "type": "string", - "description": "This can be used to bring your own SIP trunks or to connect to a Carrier.", - "enum": [ - "byo-sip-trunk" - ] + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 }, "gateways": { "description": "This is the list of SIP trunk's gateways.", @@ -18330,6 +23337,31 @@ "$ref": "#/components/schemas/SbcConfiguration" } ] + } + } + }, + "UpdateCartesiaCredentialDTO": { + "type": "object", + "properties": { + "apiKey": { + "type": "string", + "description": "This is not returned in the API." + }, + "name": { + "type": "string", + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + } + }, + "UpdateCerebrasCredentialDTO": { + "type": "object", + "properties": { + "apiKey": { + "type": "string", + "maxLength": 10000, + "description": "This is not returned in the API." }, "name": { "type": "string", @@ -18337,45 +23369,42 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "gateways" - ] + } }, - "UpdateCartesiaCredentialDTO": { + "UpdateCloudflareCredentialDTO": { "type": "object", "properties": { - "provider": { + "accountId": { "type": "string", - "enum": [ - "cartesia" - ] + "description": "Cloudflare Account Id." }, "apiKey": { "type": "string", - "description": "This is not returned in the API." + "description": "Cloudflare API Key / Token." + }, + "accountEmail": { + "type": "string", + "description": "Cloudflare Account Email." }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "bucketPlan": { + "description": "This is the bucket plan that can be provided to store call artifacts in R2", + "allOf": [ + { + "$ref": "#/components/schemas/CloudflareR2BucketPlan" + } + ] } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateCustomLLMCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "custom-llm" - ] - }, "apiKey": { "type": "string", "maxLength": 10000, @@ -18395,28 +23424,33 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateDeepgramCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "deepgram" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." }, + "name": { + "type": "string", + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 + }, "apiUrl": { "type": "string", "description": "This can be used to point to an onprem Deepgram instance. Defaults to api.deepgram.com." + } + } + }, + "UpdateDeepInfraCredentialDTO": { + "type": "object", + "properties": { + "apiKey": { + "type": "string", + "description": "This is not returned in the API." }, "name": { "type": "string", @@ -18424,21 +23458,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, - "UpdateDeepInfraCredentialDTO": { + "UpdateDeepSeekCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "deepinfra" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18449,21 +23473,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateElevenLabsCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "11labs" - ] - }, "apiKey": { "type": "string", "maxLength": 10000, @@ -18475,20 +23489,16 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateGcpCredentialDTO": { "type": "object", "properties": { - "provider": { + "name": { "type": "string", - "enum": [ - "gcp" - ] + "description": "This is the name of credential. This is just for your reference.", + "minLength": 1, + "maxLength": 40 }, "gcpKey": { "description": "This is the GCP key. 
This is the JSON that can be generated in the Google Cloud Console at https://console.cloud.google.com/iam-admin/serviceaccounts/details//keys.\n\nThe schema is identical to the JSON that GCP outputs.", @@ -18505,28 +23515,12 @@ "$ref": "#/components/schemas/BucketPlan" } ] - }, - "name": { - "type": "string", - "description": "This is the name of credential. This is just for your reference.", - "minLength": 1, - "maxLength": 40 } - }, - "required": [ - "provider", - "gcpKey" - ] + } }, "UpdateGladiaCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "gladia" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18537,21 +23531,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateGoHighLevelCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "gohighlevel" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18562,22 +23546,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateGoogleCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "description": "This is the key for Gemini in Google AI Studio. Get it from here: https://aistudio.google.com/app/apikey", - "enum": [ - "google" - ] - }, "apiKey": { "type": "string", "maxLength": 10000, @@ -18589,21 +23562,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateGroqCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "groq" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18614,22 +23577,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateInflectionAICredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "description": "This is the api key for Pi in InflectionAI's console. Get it from here: https://developers.inflection.ai/keys, billing will need to be setup", - "enum": [ - "inflection-ai" - ] - }, "apiKey": { "type": "string", "maxLength": 10000, @@ -18641,21 +23593,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateLangfuseCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "langfuse" - ] - }, "publicKey": { "type": "string", "description": "The public key for Langfuse project. Eg: pk-lf-..." @@ -18674,23 +23616,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "publicKey", - "apiKey", - "apiUrl" - ] + } }, "UpdateLmntCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "lmnt" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." 
@@ -18701,21 +23631,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateMakeCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "make" - ] - }, "teamId": { "type": "string", "description": "Team ID" @@ -18734,23 +23654,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "teamId", - "region", - "apiKey" - ] + } }, "UpdateOpenAICredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "openai" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18761,21 +23669,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateOpenRouterCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "openrouter" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18786,21 +23684,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdatePerplexityAICredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "perplexity-ai" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18811,50 +23699,29 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdatePlayHTCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "playht" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." }, - "userId": { - "type": "string" - }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "userId": { + "type": "string" } - }, - "required": [ - "provider", - "apiKey", - "userId" - ] + } }, "UpdateRimeAICredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "rime-ai" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18865,21 +23732,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateRunpodCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "runpod" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18890,22 +23747,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateS3CredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "s3" - ], - "description": "Credential provider. Only allowed value is s3" - }, "awsAccessKeyId": { "type": "string", "description": "AWS access key ID." @@ -18932,25 +23778,26 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "awsAccessKeyId", - "awsSecretAccessKey", - "region", - "s3BucketName", - "s3PathPrefix" - ] + } }, - "UpdateTavusCredentialDTO": { + "UpdateSmallestAICredentialDTO": { "type": "object", "properties": { - "provider": { + "apiKey": { "type": "string", - "enum": [ - "tavus" - ] + "description": "This is not returned in the API." }, + "name": { + "type": "string", + "description": "This is the name of credential. 
This is just for your reference.", + "minLength": 1, + "maxLength": 40 + } + } + }, + "UpdateTavusCredentialDTO": { + "type": "object", + "properties": { "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18961,21 +23808,11 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateTogetherAICredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "together-ai" - ] - }, "apiKey": { "type": "string", "description": "This is not returned in the API." @@ -18986,80 +23823,47 @@ "minLength": 1, "maxLength": 40 } - }, - "required": [ - "provider", - "apiKey" - ] + } }, "UpdateTwilioCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "twilio" - ] - }, "authToken": { "type": "string", "description": "This is not returned in the API." }, - "accountSid": { - "type": "string" - }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "accountSid": { + "type": "string" } - }, - "required": [ - "provider", - "authToken", - "accountSid" - ] + } }, "UpdateVonageCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "enum": [ - "vonage" - ] - }, "apiSecret": { "type": "string", "description": "This is not returned in the API." }, - "apiKey": { - "type": "string" - }, "name": { "type": "string", "description": "This is the name of credential. This is just for your reference.", "minLength": 1, "maxLength": 40 + }, + "apiKey": { + "type": "string" } - }, - "required": [ - "provider", - "apiSecret", - "apiKey" - ] + } }, "UpdateXAiCredentialDTO": { "type": "object", "properties": { - "provider": { - "type": "string", - "description": "This is the api key for Grok in XAi's console. Get it from here: https://console.x.ai", - "enum": [ - "xai" - ] - }, "apiKey": { "type": "string", "maxLength": 10000, @@ -19071,58 +23875,251 @@ "minLength": 1, "maxLength": 40 } + } + }, + "CreateOrgDTO": { + "type": "object", + "properties": { + "hipaaEnabled": { + "type": "boolean", + "description": "When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.\nWhen HIPAA is enabled, only OpenAI/Custom LLM or Azure Providers will be available for LLM and Voice respectively.\nThis is due to the compliance requirements of HIPAA. Other providers may not meet these requirements.", + "example": false + }, + "subscriptionId": { + "type": "string", + "description": "This is the ID of the subscription the org belongs to." + }, + "name": { + "type": "string", + "description": "This is the name of the org. This is just for your own reference.", + "maxLength": 40 + }, + "channel": { + "type": "string", + "description": "This is the channel of the org. This is the cluster to which the API traffic for the org will be directed.", + "enum": [ + "default", + "weekly" + ] + }, + "billingLimit": { + "type": "number", + "description": "This is the monthly billing limit for the org. To go beyond $1000/mo, please contact us at support@vapi.ai.", + "minimum": 0, + "maximum": 1000 + }, + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. 
org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + }, + "concurrencyLimit": { + "type": "number", + "description": "This is the concurrency limit for the org. This is the maximum number of calls that can be active at any given time. To go beyond 10, please contact us at support@vapi.ai.", + "minimum": 1, + "maximum": 10 + } + } + }, + "AutoReloadPlan": { + "type": "object", + "properties": { + "credits": { + "type": "number", + "description": "This the amount of credits to reload." + }, + "threshold": { + "type": "number", + "description": "This is the limit at which the reload is triggered." + } }, "required": [ - "provider", - "apiKey" + "credits", + "threshold" ] }, - "CreateOrgDTO": { + "Subscription": { "type": "object", "properties": { + "id": { + "type": "string", + "description": "This is the unique identifier for the subscription." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the timestamp when the subscription was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the timestamp when the subscription was last updated." + }, + "type": { + "type": "string", + "description": "This is the type / tier of the subscription.", + "enum": [ + "trial", + "pay-as-you-go", + "enterprise" + ] + }, + "status": { + "type": "string", + "description": "This is the status of the subscription. Past due subscriptions are subscriptions\nwith past due payments.", + "enum": [ + "active", + "frozen" + ] + }, + "credits": { + "type": "string", + "description": "This is the number of credits the subscription currently has.\n\nNote: This is a string to avoid floating point precision issues." + }, + "concurrencyCounter": { + "type": "number", + "description": "This is the total number of active calls (concurrency) across all orgs under this subscription.", + "minimum": 1 + }, + "concurrencyLimitIncluded": { + "type": "number", + "description": "This is the default concurrency limit for the subscription.", + "minimum": 1 + }, + "phoneNumbersCounter": { + "type": "number", + "description": "This is the number of free phone numbers the subscription has", + "minimum": 1 + }, + "phoneNumbersIncluded": { + "type": "number", + "description": "This is the maximum number of free phone numbers the subscription can have", + "minimum": 1 + }, + "concurrencyLimitPurchased": { + "type": "number", + "description": "This is the purchased add-on concurrency limit for the subscription.", + "minimum": 1 + }, + "monthlyChargeScheduleId": { + "type": "number", + "description": "This is the ID of the monthly job that charges for subscription add ons and phone numbers." + }, + "monthlyCreditCheckScheduleId": { + "type": "number", + "description": "This is the ID of the monthly job that checks whether the credit balance of the subscription\nis sufficient for the monthly charge." + }, + "stripeCustomerId": { + "type": "string", + "description": "This is the Stripe customer ID." + }, + "stripePaymentMethodId": { + "type": "string", + "description": "This is the Stripe payment ID." + }, + "slackSupportEnabled": { + "type": "boolean", + "description": "If this flag is true, then the user has purchased slack support." + }, + "slackChannelId": { + "type": "string", + "description": "If this subscription has a slack support subscription, the slack channel's ID will be stored here." 
+ }, "hipaaEnabled": { "type": "boolean", - "description": "When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.\nWhen HIPAA is enabled, only OpenAI/Custom LLM or Azure Providers will be available for LLM and Voice respectively.\nThis is due to the compliance requirements of HIPAA. Other providers may not meet these requirements.", - "example": false + "description": "This is the HIPAA enabled flag for the subscription. It determines whether orgs under this\nsubscription have the option to enable HIPAA compliance." }, - "subscriptionId": { + "hipaaCommonPaperAgreementId": { "type": "string", - "description": "This is the ID of the subscription the org belongs to." + "description": "This is the ID for the Common Paper agreement outlining the HIPAA contract." }, - "name": { + "stripePaymentMethodFingerprint": { "type": "string", - "description": "This is the name of the org. This is just for your own reference.", - "maxLength": 40 + "description": "This is the Stripe fingerprint of the payment method (card). It allows us\nto detect users who try to abuse our system through multiple sign-ups." }, - "channel": { + "stripeCustomerEmail": { "type": "string", - "description": "This is the channel of the org. There is the cluster the API traffic for the org will be directed.", - "enum": [ - "default", - "weekly" + "description": "This is the customer's email on Stripe." + }, + "referredByEmail": { + "type": "string", + "description": "This is the email of the referrer for the subscription." + }, + "autoReloadPlan": { + "description": "This is the auto reload plan configured for the subscription.", + "allOf": [ + { + "$ref": "#/components/schemas/AutoReloadPlan" + } ] }, - "billingLimit": { + "minutesIncluded": { "type": "number", - "description": "This is the monthly billing limit for the org. To go beyond $1000/mo, please contact us at support@vapi.ai.", - "minimum": 0, - "maximum": 1000 + "description": "The number of minutes included in the subscription.", + "minimum": 1 }, - "serverUrl": { - "type": "string", - "description": "This is the URL Vapi will communicate with via HTTP GET and POST Requests. This is used for retrieving context, function calling, and end-of-call reports.\n\nAll requests will be sent with the call object among other things relevant to that message. You can find more details in the Server URL documentation." + "minutesUsed": { + "type": "number", + "description": "The number of minutes used in the subscription.", + "minimum": 1 }, - "serverUrlSecret": { + "minutesUsedNextResetAt": { + "format": "date-time", "type": "string", - "description": "This is the secret you can set that Vapi will send with every request to your server. Will be sent as a header called x-vapi-secret." + "description": "This is the timestamp at which the number of monthly free minutes is scheduled to reset at." }, - "concurrencyLimit": { + "minutesOverageCost": { "type": "number", - "description": "This is the concurrency limit for the org. This is the maximum number of calls that can be active at any given time. To go beyond 10, please contact us at support@vapi.ai.", - "minimum": 1, - "maximum": 10 + "description": "The per minute charge on minutes that exceed the included minutes. Enterprise only." + }, + "providersIncluded": { + "description": "The list of providers included in the subscription. 
Enterprise only.", + "type": "array", + "items": { + "type": "string" + } + }, + "outboundCallsDailyLimit": { + "type": "number", + "description": "The maximum number of outbound calls this subscription may make in a day. Resets every night.", + "minimum": 1 + }, + "outboundCallsCounter": { + "type": "number", + "description": "The current number of outbound calls the subscription has made in the current day.", + "minimum": 1 + }, + "outboundCallsCounterNextResetAt": { + "format": "date-time", + "type": "string", + "description": "This is the timestamp at which the outbound calls counter is scheduled to reset at." + }, + "couponIds": { + "description": "This is the IDs of the coupons applicable to this subscription.", + "type": "array", + "items": { + "type": "string" + } + }, + "couponUsageLeft": { + "type": "string", + "description": "This is the number of credits left obtained from a coupon." } - } + }, + "required": [ + "id", + "createdAt", + "updatedAt", + "type", + "status", + "credits", + "concurrencyCounter", + "concurrencyLimitIncluded", + "concurrencyLimitPurchased" + ] }, "OrgPlan": { "type": "object", @@ -19218,13 +24215,13 @@ "minimum": 0, "maximum": 1000 }, - "serverUrl": { - "type": "string", - "description": "This is the URL Vapi will communicate with via HTTP GET and POST Requests. This is used for retrieving context, function calling, and end-of-call reports.\n\nAll requests will be sent with the call object among other things relevant to that message. You can find more details in the Server URL documentation." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret you can set that Vapi will send with every request to your server. Will be sent as a header called x-vapi-secret." + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, "concurrencyLimit": { "type": "number", @@ -19247,159 +24244,9 @@ "description": "When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.\nWhen HIPAA is enabled, only OpenAI/Custom LLM or Azure Providers will be available for LLM and Voice respectively.\nThis is due to the compliance requirements of HIPAA. Other providers may not meet these requirements.", "example": false }, - "subscriptionId": { - "type": "string", - "description": "This is the ID of the subscription the org belongs to." - }, - "name": { - "type": "string", - "description": "This is the name of the org. This is just for your own reference.", - "maxLength": 40 - }, - "channel": { - "type": "string", - "description": "This is the channel of the org. There is the cluster the API traffic for the org will be directed.", - "enum": [ - "default", - "weekly" - ] - }, - "billingLimit": { - "type": "number", - "description": "This is the monthly billing limit for the org. To go beyond $1000/mo, please contact us at support@vapi.ai.", - "minimum": 0, - "maximum": 1000 - }, - "serverUrl": { - "type": "string", - "description": "This is the URL Vapi will communicate with via HTTP GET and POST Requests. 
This is used for retrieving context, function calling, and end-of-call reports.\n\nAll requests will be sent with the call object among other things relevant to that message. You can find more details in the Server URL documentation." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret you can set that Vapi will send with every request to your server. Will be sent as a header called x-vapi-secret." - }, - "concurrencyLimit": { - "type": "number", - "description": "This is the concurrency limit for the org. This is the maximum number of calls that can be active at any given time. To go beyond 10, please contact us at support@vapi.ai.", - "minimum": 1, - "maximum": 10 - } - } - }, - "User": { - "type": "object", - "properties": { - "id": { - "type": "string", - "description": "This is the unique identifier for the profile or user." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the profile was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the profile was last updated." - }, - "email": { - "type": "string", - "description": "This is the email of the user that is associated with the profile." - }, - "fullName": { - "type": "string", - "description": "This is the full name of the user that is associated with the profile." - } - }, - "required": [ - "id", - "createdAt", - "updatedAt", - "email" - ] - }, - "InviteUserDTO": { - "type": "object", - "properties": { - "emails": { - "maxItems": 100, - "type": "array", - "items": { - "type": "string" - } - }, - "role": { - "enum": [ - "admin", - "editor", - "viewer" - ], - "type": "string" - } - }, - "required": [ - "emails", - "role" - ] - }, - "OrgWithOrgUser": { - "type": "object", - "properties": { - "hipaaEnabled": { - "type": "boolean", - "description": "When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.\nWhen HIPAA is enabled, only OpenAI/Custom LLM or Azure Providers will be available for LLM and Voice respectively.\nThis is due to the compliance requirements of HIPAA. Other providers may not meet these requirements.", - "example": false - }, - "subscription": { - "$ref": "#/components/schemas/Subscription" - }, - "subscriptionId": { - "type": "string", - "description": "This is the ID of the subscription the org belongs to." - }, - "id": { - "type": "string", - "description": "This is the unique identifier for the org." - }, - "createdAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the org was created." - }, - "updatedAt": { - "format": "date-time", - "type": "string", - "description": "This is the ISO 8601 date-time string of when the org was last updated." - }, - "stripeCustomerId": { - "type": "string", - "description": "This is the Stripe customer for the org." - }, - "stripeSubscriptionId": { - "type": "string", - "description": "This is the subscription for the org." - }, - "stripeSubscriptionItemId": { - "type": "string", - "description": "This is the subscription's subscription item." - }, - "stripeSubscriptionCurrentPeriodStart": { - "format": "date-time", - "type": "string", - "description": "This is the subscription's current period start." 
- }, - "stripeSubscriptionStatus": { - "type": "string", - "description": "This is the subscription's status." - }, - "plan": { - "description": "This is the plan for the org.", - "allOf": [ - { - "$ref": "#/components/schemas/OrgPlan" - } - ] + "subscriptionId": { + "type": "string", + "description": "This is the ID of the subscription the org belongs to." }, "name": { "type": "string", @@ -19420,36 +24267,80 @@ "minimum": 0, "maximum": 1000 }, - "serverUrl": { - "type": "string", - "description": "This is the URL Vapi will communicate with via HTTP GET and POST Requests. This is used for retrieving context, function calling, and end-of-call reports.\n\nAll requests will be sent with the call object among other things relevant to that message. You can find more details in the Server URL documentation." - }, - "serverUrlSecret": { - "type": "string", - "description": "This is the secret you can set that Vapi will send with every request to your server. Will be sent as a header called x-vapi-secret." + "server": { + "description": "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.\n\nThe order of precedence is:\n\n1. assistant.server\n2. phoneNumber.server\n3. org.server", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] }, "concurrencyLimit": { "type": "number", "description": "This is the concurrency limit for the org. This is the maximum number of calls that can be active at any given time. To go beyond 10, please contact us at support@vapi.ai.", "minimum": 1, "maximum": 10 + } + } + }, + "User": { + "type": "object", + "properties": { + "id": { + "type": "string", + "description": "This is the unique identifier for the profile or user." }, - "invitedByUserId": { - "type": "string" + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the profile was created." }, - "role": { + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the profile was last updated." + }, + "email": { + "type": "string", + "description": "This is the email of the user that is associated with the profile." + }, + "fullName": { "type": "string", + "description": "This is the full name of the user that is associated with the profile." 
+ } + }, + "required": [ + "id", + "createdAt", + "updatedAt", + "email" + ] + }, + "InviteUserDTO": { + "type": "object", + "properties": { + "emails": { + "maxItems": 100, + "type": "array", + "items": { + "type": "string" + } + }, + "role": { "enum": [ "admin", "editor", "viewer" - ] + ], + "type": "string" + }, + "redirectTo": { + "type": "string" } }, "required": [ - "id", - "createdAt", - "updatedAt" + "emails", + "role" ] }, "UpdateUserRoleDTO": { @@ -19538,9 +24429,11 @@ "deepgram", "lmnt", "neets", + "neuphonic", "openai", "playht", "rime-ai", + "smallest-ai", "tavus" ] }, @@ -19797,7 +24690,203 @@ } } }, - "CreateToolTemplateDTO": { + "CreateToolTemplateDTO": { + "type": "object", + "properties": { + "details": { + "oneOf": [ + { + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" + }, + { + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" + }, + { + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" + }, + { + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferCallTool" + } + ] + }, + "providerDetails": { + "oneOf": [ + { + "$ref": "#/components/schemas/MakeToolProviderDetails", + "title": "MakeToolProviderDetails" + }, + { + "$ref": "#/components/schemas/GhlToolProviderDetails", + "title": "GhlToolProviderDetails" + }, + { + "$ref": "#/components/schemas/FunctionToolProviderDetails", + "title": "FunctionToolProviderDetails" + } + ] + }, + "metadata": { + "$ref": "#/components/schemas/ToolTemplateMetadata" + }, + "visibility": { + "type": "string", + "default": "private", + "enum": [ + "public", + "private" + ] + }, + "type": { + "type": "string", + "default": "tool", + "enum": [ + "tool" + ] + }, + "name": { + "type": "string", + "description": "The name of the template. 
This is just for your own reference.", + "maxLength": 40 + }, + "provider": { + "type": "string", + "enum": [ + "make", + "gohighlevel", + "function" + ] + } + }, + "required": [ + "type" + ] + }, + "Template": { + "type": "object", + "properties": { + "details": { + "oneOf": [ + { + "$ref": "#/components/schemas/CreateDtmfToolDTO", + "title": "DtmfTool" + }, + { + "$ref": "#/components/schemas/CreateEndCallToolDTO", + "title": "EndCallTool" + }, + { + "$ref": "#/components/schemas/CreateVoicemailToolDTO", + "title": "VoicemailTool" + }, + { + "$ref": "#/components/schemas/CreateFunctionToolDTO", + "title": "FunctionTool" + }, + { + "$ref": "#/components/schemas/CreateGhlToolDTO", + "title": "GhlTool" + }, + { + "$ref": "#/components/schemas/CreateMakeToolDTO", + "title": "MakeTool" + }, + { + "$ref": "#/components/schemas/CreateTransferCallToolDTO", + "title": "TransferCallTool" + } + ] + }, + "providerDetails": { + "oneOf": [ + { + "$ref": "#/components/schemas/MakeToolProviderDetails", + "title": "MakeToolProviderDetails" + }, + { + "$ref": "#/components/schemas/GhlToolProviderDetails", + "title": "GhlToolProviderDetails" + }, + { + "$ref": "#/components/schemas/FunctionToolProviderDetails", + "title": "FunctionToolProviderDetails" + } + ] + }, + "metadata": { + "$ref": "#/components/schemas/ToolTemplateMetadata" + }, + "visibility": { + "default": "private", + "enum": [ + "public", + "private" + ], + "type": "string" + }, + "type": { + "type": "string", + "default": "tool", + "enum": [ + "tool" + ] + }, + "name": { + "type": "string", + "description": "The name of the template. This is just for your own reference.", + "maxLength": 40 + }, + "provider": { + "enum": [ + "make", + "gohighlevel", + "function" + ], + "type": "string" + }, + "id": { + "type": "string", + "description": "The unique identifier for the template." + }, + "orgId": { + "type": "string", + "description": "The unique identifier for the organization that this template belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "The ISO 8601 date-time string of when the template was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "The ISO 8601 date-time string of when the template was last updated." + } + }, + "required": [ + "type", + "id", + "orgId", + "createdAt", + "updatedAt" + ] + }, + "UpdateToolTemplateDTO": { "type": "object", "properties": { "details": { @@ -19884,348 +24973,696 @@ "type" ] }, - "Template": { + "TokenRestrictions": { "type": "object", "properties": { - "details": { - "oneOf": [ - { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" - }, - { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" - }, - { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" - }, - { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" - }, - { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" - }, - { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" - }, + "enabled": { + "type": "boolean", + "description": "This determines whether the token is enabled or disabled. Default is true, it's enabled." + }, + "allowedOrigins": { + "description": "This determines the allowed origins for this token. Validates the `Origin` header. 
Default is any origin.\n\nOnly relevant for `public` tokens.", + "type": "array", + "items": { + "type": "string" + } + }, + "allowedAssistantIds": { + "description": "This determines which assistantIds can be used when creating a call. Default is any assistantId.\n\nOnly relevant for `public` tokens.", + "type": "array", + "items": { + "type": "string" + } + }, + "allowTransientAssistant": { + "type": "boolean", + "description": "This determines whether transient assistants can be used when creating a call. Default is true.\n\nIf `allowedAssistantIds` is provided, this is automatically false.\n\nOnly relevant for `public` tokens." + } + } + }, + "CreateTokenDTO": { + "type": "object", + "properties": { + "tag": { + "type": "string", + "description": "This is the tag for the token. It represents its scope.", + "enum": [ + "private", + "public" + ] + }, + "name": { + "type": "string", + "description": "This is the name of the token. This is just for your own reference.", + "maxLength": 40 + }, + "restrictions": { + "description": "These are the restrictions for the token.", + "allOf": [ { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferCallTool" + "$ref": "#/components/schemas/TokenRestrictions" } ] + } + } + }, + "Token": { + "type": "object", + "properties": { + "tag": { + "type": "string", + "description": "This is the tag for the token. It represents its scope.", + "enum": [ + "private", + "public" + ] }, - "providerDetails": { - "oneOf": [ - { - "$ref": "#/components/schemas/MakeToolProviderDetails", - "title": "MakeToolProviderDetails" - }, - { - "$ref": "#/components/schemas/GhlToolProviderDetails", - "title": "GhlToolProviderDetails" - }, + "id": { + "type": "string", + "description": "This is the unique identifier for the token." + }, + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this token belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the token was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the token was last updated." + }, + "value": { + "type": "string", + "description": "This is the token key." + }, + "name": { + "type": "string", + "description": "This is the name of the token. This is just for your own reference.", + "maxLength": 40 + }, + "restrictions": { + "description": "These are the restrictions for the token.", + "allOf": [ { - "$ref": "#/components/schemas/FunctionToolProviderDetails", - "title": "FunctionToolProviderDetails" + "$ref": "#/components/schemas/TokenRestrictions" } ] + } + }, + "required": [ + "id", + "orgId", + "createdAt", + "updatedAt", + "value" + ] + }, + "UpdateTokenDTO": { + "type": "object", + "properties": { + "tag": { + "type": "string", + "description": "This is the tag for the token. It represents its scope.", + "enum": [ + "private", + "public" + ] }, - "metadata": { - "$ref": "#/components/schemas/ToolTemplateMetadata" + "name": { + "type": "string", + "description": "This is the name of the token. 
This is just for your own reference.", + "maxLength": 40 }, - "visibility": { - "default": "private", + "restrictions": { + "description": "These are the restrictions for the token.", + "allOf": [ + { + "$ref": "#/components/schemas/TokenRestrictions" + } + ] + } + } + }, + "SyncVoiceLibraryDTO": { + "type": "object", + "properties": { + "providers": { + "type": "array", + "description": "List of providers you want to sync.", "enum": [ - "public", - "private" + "11labs", + "azure", + "cartesia", + "custom-voice", + "deepgram", + "lmnt", + "neets", + "neuphonic", + "openai", + "playht", + "rime-ai", + "smallest-ai", + "tavus" ], - "type": "string" + "items": { + "type": "string", + "enum": [ + "11labs", + "azure", + "cartesia", + "custom-voice", + "deepgram", + "lmnt", + "neets", + "neuphonic", + "openai", + "playht", + "rime-ai", + "smallest-ai", + "tavus" + ] + } + } + } + }, + "TestSuite": { + "type": "object", + "properties": { + "id": { + "type": "string", + "description": "This is the unique identifier for the test suite." }, - "type": { + "orgId": { + "type": "string", + "description": "This is the unique identifier for the org that this test suite belongs to." + }, + "createdAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the test suite was created." + }, + "updatedAt": { + "format": "date-time", + "type": "string", + "description": "This is the ISO 8601 date-time string of when the test suite was last updated." + }, + "name": { + "type": "string", + "description": "This is the name of the test suite.", + "maxLength": 80 + }, + "phoneNumberId": { + "type": "string", + "description": "This is the phone number ID associated with this test suite." + } + }, + "required": [ + "id", + "orgId", + "createdAt", + "updatedAt" + ] + }, + "TestSuitesPaginatedResponse": { + "type": "object", + "properties": { + "results": { + "type": "array", + "items": { + "$ref": "#/components/schemas/TestSuite" + } + }, + "metadata": { + "$ref": "#/components/schemas/PaginationMeta" + } + }, + "required": [ + "results", + "metadata" + ] + }, + "CreateTestSuiteDto": { + "type": "object", + "properties": { + "name": { "type": "string", - "default": "tool", - "enum": [ - "tool" - ] + "description": "This is the name of the test suite.", + "maxLength": 80 }, + "phoneNumberId": { + "type": "string", + "description": "This is the phone number ID associated with this test suite." + } + } + }, + "UpdateTestSuiteDto": { + "type": "object", + "properties": { "name": { "type": "string", - "description": "The name of the template. This is just for your own reference.", - "maxLength": 40 + "description": "This is the name of the test suite.", + "maxLength": 80 }, - "provider": { + "phoneNumberId": { + "type": "string", + "description": "This is the phone number ID associated with this test suite." + } + } + }, + "TestSuiteTestVoice": { + "type": "object", + "properties": { + "scorers": { + "type": "array", + "description": "These are the scorers used to evaluate the test.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/TestSuiteTestScorerAI", + "title": "AI" + } + ] + } + }, + "type": { + "type": "string", + "description": "This is the type of the test, which must be voice.", "enum": [ - "make", - "gohighlevel", - "function" + "voice" ], - "type": "string" + "maxLength": 100 }, "id": { "type": "string", - "description": "The unique identifier for the template." + "description": "This is the unique identifier for the test."
+ }, + "testSuiteId": { + "type": "string", + "description": "This is the unique identifier for the test suite this test belongs to." }, "orgId": { "type": "string", - "description": "The unique identifier for the organization that this template belongs to." + "description": "This is the unique identifier for the organization this test belongs to." }, "createdAt": { "format": "date-time", "type": "string", - "description": "The ISO 8601 date-time string of when the template was created." + "description": "This is the ISO 8601 date-time string of when the test was created." }, "updatedAt": { "format": "date-time", "type": "string", - "description": "The ISO 8601 date-time string of when the template was last updated." + "description": "This is the ISO 8601 date-time string of when the test was last updated." + }, + "name": { + "type": "string", + "description": "This is the name of the test.", + "maxLength": 80 + }, + "script": { + "type": "string", + "description": "This is the script to be used for the voice test.", + "maxLength": 10000 + }, + "numAttempts": { + "type": "number", + "description": "This is the number of attempts allowed for the test.", + "minimum": 1, + "maximum": 10 } }, "required": [ + "scorers", "type", "id", + "testSuiteId", "orgId", "createdAt", - "updatedAt" + "updatedAt", + "script" ] }, - "UpdateToolTemplateDTO": { + "CreateTestSuiteTestVoiceDto": { "type": "object", "properties": { - "details": { - "oneOf": [ - { - "$ref": "#/components/schemas/CreateDtmfToolDTO", - "title": "DtmfTool" - }, - { - "$ref": "#/components/schemas/CreateEndCallToolDTO", - "title": "EndCallTool" - }, - { - "$ref": "#/components/schemas/CreateVoicemailToolDTO", - "title": "VoicemailTool" - }, - { - "$ref": "#/components/schemas/CreateFunctionToolDTO", - "title": "FunctionTool" - }, - { - "$ref": "#/components/schemas/CreateGhlToolDTO", - "title": "GhlTool" - }, - { - "$ref": "#/components/schemas/CreateMakeToolDTO", - "title": "MakeTool" - }, - { - "$ref": "#/components/schemas/CreateTransferCallToolDTO", - "title": "TransferCallTool" - } - ] - }, - "providerDetails": { - "oneOf": [ - { - "$ref": "#/components/schemas/MakeToolProviderDetails", - "title": "MakeToolProviderDetails" - }, - { - "$ref": "#/components/schemas/GhlToolProviderDetails", - "title": "GhlToolProviderDetails" - }, - { - "$ref": "#/components/schemas/FunctionToolProviderDetails", - "title": "FunctionToolProviderDetails" - } - ] - }, - "metadata": { - "$ref": "#/components/schemas/ToolTemplateMetadata" + "scorers": { + "type": "array", + "description": "These are the scorers used to evaluate the test.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/TestSuiteTestScorerAI", + "title": "AI" + } + ] + } }, - "visibility": { + "type": { "type": "string", - "default": "private", + "description": "This is the type of the test, which must be voice.", "enum": [ - "public", - "private" - ] + "voice" + ], + "maxLength": 100 }, - "type": { + "script": { "type": "string", - "default": "tool", - "enum": [ - "tool" - ] + "description": "This is the script to be used for the voice test.", + "maxLength": 10000 + }, + "numAttempts": { + "type": "number", + "description": "This is the number of attempts allowed for the test.", + "minimum": 1, + "maximum": 10 }, "name": { "type": "string", - "description": "The name of the template. 
This is just for your own reference.", - "maxLength": 40 + "description": "This is the name of the test.", + "maxLength": 80 + } + }, + "required": [ + "scorers", + "type", + "script" + ] + }, + "UpdateTestSuiteTestVoiceDto": { + "type": "object", + "properties": { + "scorers": { + "type": "array", + "description": "These are the scorers used to evaluate the test.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/TestSuiteTestScorerAI", + "title": "AI" + } + ] + } }, - "provider": { + "name": { + "type": "string", + "description": "This is the name of the test.", + "maxLength": 80 + }, + "script": { "type": "string", + "description": "This is the script to be used for the voice test.", + "maxLength": 10000 + }, + "numAttempts": { + "type": "number", + "description": "This is the number of attempts allowed for the test.", + "minimum": 1, + "maximum": 10 + } + } + }, + "TestSuiteTestScorerAI": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the type of the scorer, which must be AI.", "enum": [ - "make", - "gohighlevel", - "function" - ] + "ai" + ], + "maxLength": 100 + }, + "rubric": { + "type": "string", + "description": "This is the rubric used by the AI scorer.", + "maxLength": 1000 } }, "required": [ - "type" + "type", + "rubric" ] }, - "TokenRestrictions": { + "TestSuiteTestsPaginatedResponse": { "type": "object", "properties": { - "enabled": { - "type": "boolean", - "description": "This determines whether the token is enabled or disabled. Default is true, it's enabled." - }, - "allowedOrigins": { - "description": "This determines the allowed origins for this token. Validates the `Origin` header. Default is any origin.\n\nOnly relevant for `public` tokens.", + "results": { "type": "array", + "description": "A list of test suite tests.", "items": { - "type": "string" + "oneOf": [ + { + "$ref": "#/components/schemas/TestSuiteTestVoice" + } + ] } }, - "allowedAssistantIds": { - "description": "This determines which assistantIds can be used when creating a call. 
Default is any assistantId.\n\nOnly relevant for `public` tokens.", + "metadata": { + "description": "Metadata about the pagination.", + "allOf": [ + { + "$ref": "#/components/schemas/PaginationMeta" + } + ] + } + }, + "required": [ + "results", + "metadata" + ] + }, + "TestSuiteRunScorerAI": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the type of the scorer, which must be AI.", + "enum": [ + "ai" + ], + "maxLength": 100 + }, + "result": { + "type": "string", + "description": "This is the result of the test suite.", + "enum": [ + "pass", + "fail" + ], + "maxLength": 100 + }, + "reasoning": { + "type": "string", + "description": "This is the reasoning provided by the AI scorer.", + "maxLength": 10000 + }, + "rubric": { + "type": "string", + "description": "This is the rubric used by the AI scorer.", + "maxLength": 1000 + } + }, + "required": [ + "type", + "result", + "reasoning", + "rubric" + ] + }, + "TestSuiteRunTestAttemptCall": { + "type": "object", + "properties": { + "artifact": { + "description": "This is the artifact associated with the call.", + "allOf": [ + { + "$ref": "#/components/schemas/Artifact" + } + ] + } + }, + "required": [ + "artifact" + ] + }, + "TestSuiteRunTestAttempt": { + "type": "object", + "properties": { + "scorerResults": { "type": "array", + "description": "These are the results of the scorers used to evaluate the test attempt.", "items": { - "type": "string" + "oneOf": [ + { + "$ref": "#/components/schemas/TestSuiteRunScorerAI", + "title": "AI" + } + ] } }, - "allowTransientAssistant": { - "type": "boolean", - "description": "This determines whether transient assistants can be used when creating a call. Default is true.\n\nIf `allowedAssistantIds` is provided, this is automatically false.\n\nOnly relevant for `public` tokens." + "call": { + "description": "This is the call made during the test attempt.", + "allOf": [ + { + "$ref": "#/components/schemas/TestSuiteRunTestAttemptCall" + } + ] } - } + }, + "required": [ + "scorerResults", + "call" + ] }, - "CreateTokenDTO": { + "TestSuiteRunTestResult": { "type": "object", "properties": { - "tag": { - "type": "string", - "description": "This is the tag for the token. It represents its scope.", - "enum": [ - "private", - "public" - ] - }, - "name": { - "type": "string", - "description": "This is the name of the token. This is just for your own reference.", - "maxLength": 40 - }, - "restrictions": { - "description": "This are the restrictions for the token.", + "test": { + "description": "This is the test that was run.", + "oneOf": [ + { + "$ref": "#/components/schemas/TestSuiteTestVoice", + "title": "TestSuiteTestVoice" + } + ], "allOf": [ { - "$ref": "#/components/schemas/TokenRestrictions" + "$ref": "#/components/schemas/TestSuiteTestVoice" } ] + }, + "attempts": { + "description": "These are the attempts made for this test.", + "type": "array", + "items": { + "$ref": "#/components/schemas/TestSuiteRunTestAttempt" + } } - } + }, + "required": [ + "test", + "attempts" + ] }, - "Token": { + "TestSuiteRun": { "type": "object", "properties": { - "tag": { + "status": { "type": "string", - "description": "This is the tag for the token. It represents its scope.", + "description": "This is the current status of the test suite run.", "enum": [ - "private", - "public" + "queued", + "in-progress", + "completed" ] }, "id": { "type": "string", - "description": "This is the unique identifier for the token." 
+ "description": "This is the unique identifier for the test suite run." }, "orgId": { "type": "string", - "description": "This is unique identifier for the org that this token belongs to." + "description": "This is the unique identifier for the organization this run belongs to." + }, + "testSuiteId": { + "type": "string", + "description": "This is the unique identifier for the test suite this run belongs to." }, "createdAt": { "format": "date-time", "type": "string", - "description": "This is the ISO 8601 date-time string of when the token was created." + "description": "This is the ISO 8601 date-time string of when the test suite run was created." }, "updatedAt": { "format": "date-time", "type": "string", - "description": "This is the ISO 8601 date-time string of when the token was last updated." + "description": "This is the ISO 8601 date-time string of when the test suite run was last updated." }, - "value": { - "type": "string", - "description": "This is the token key." + "testResults": { + "description": "These are the results of the tests in this test suite run.", + "type": "array", + "items": { + "$ref": "#/components/schemas/TestSuiteRunTestResult" + } }, "name": { "type": "string", - "description": "This is the name of the token. This is just for your own reference.", - "maxLength": 40 - }, - "restrictions": { - "description": "This are the restrictions for the token.", - "allOf": [ - { - "$ref": "#/components/schemas/TokenRestrictions" - } - ] + "description": "This is the name of the test suite run.", + "maxLength": 80 } }, "required": [ + "status", "id", "orgId", + "testSuiteId", "createdAt", "updatedAt", - "value" + "testResults" ] }, - "SyncVoiceLibraryDTO": { + "TestSuiteRunsPaginatedResponse": { "type": "object", "properties": { - "providers": { + "results": { "type": "array", - "description": "List of providers you want to sync.", - "enum": [ - "11labs", - "azure", - "cartesia", - "custom-voice", - "deepgram", - "lmnt", - "neets", - "openai", - "playht", - "rime-ai", - "tavus" - ], "items": { - "type": "string", - "enum": [ - "11labs", - "azure", - "cartesia", - "custom-voice", - "deepgram", - "lmnt", - "neets", - "openai", - "playht", - "rime-ai", - "tavus" - ] + "$ref": "#/components/schemas/TestSuiteRun" } + }, + "metadata": { + "$ref": "#/components/schemas/PaginationMeta" + } + }, + "required": [ + "results", + "metadata" + ] + }, + "CreateTestSuiteRunDto": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "This is the name of the test suite run.", + "maxLength": 80 + } + } + }, + "UpdateTestSuiteRunDto": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "This is the name of the test suite run.", + "maxLength": 80 } } }, + "ClientMessageWorkflowNodeStarted": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the type of the message. \"workflow.node.started\" is sent when the active node changes.", + "enum": [ + "workflow.node.started" + ] + }, + "node": { + "type": "object", + "description": "This is the active node." + } + }, + "required": [ + "type", + "node" + ] + }, "ClientMessageConversationUpdate": { "type": "object", "properties": { @@ -20372,7 +25809,8 @@ "type": "string", "description": "This is the type of the message. 
\"transcript\" is sent as transcriber outputs partial or final transcript.", "enum": [ - "transcript" + "transcript", + "transcript[transcriptType=\"final\"]" ] }, "role": { @@ -20645,6 +26083,10 @@ "message": { "description": "These are all the messages that can be sent to the client-side SDKs during the call. Configure the messages you'd like to receive in `assistant.clientMessages`.", "oneOf": [ + { + "$ref": "#/components/schemas/ClientMessageWorkflowNodeStarted", + "title": "WorkflowNodeStarted" + }, { "$ref": "#/components/schemas/ClientMessageConversationUpdate", "title": "ConversationUpdate" @@ -20915,6 +26357,31 @@ "type": "string", "description": "This is the reason the call ended. This can also be found at `call.endedReason` on GET /call/:id.", "enum": [ + "assistant-not-valid", + "assistant-not-provided", + "call-start-error-neither-assistant-nor-server-set", + "assistant-request-failed", + "assistant-request-returned-error", + "assistant-request-returned-unspeakable-error", + "assistant-request-returned-invalid-assistant", + "assistant-request-returned-no-assistant", + "assistant-request-returned-forwarding-phone-number", + "assistant-ended-call", + "assistant-said-end-call-phrase", + "assistant-ended-call-with-hangup-task", + "assistant-forwarded-call", + "assistant-join-timed-out", + "customer-busy", + "customer-ended-call", + "customer-did-not-answer", + "customer-did-not-give-microphone-permission", + "assistant-said-message-with-end-call-enabled", + "exceeded-max-duration", + "manually-canceled", + "phone-call-provider-closed-websocket", + "db-error", + "assistant-not-found", + "license-check-failed", "pipeline-error-openai-voice-failed", "pipeline-error-cartesia-voice-failed", "pipeline-error-deepgram-voice-failed", @@ -20924,9 +26391,14 @@ "pipeline-error-azure-voice-failed", "pipeline-error-rime-ai-voice-failed", "pipeline-error-neets-voice-failed", - "db-error", - "assistant-not-found", - "license-check-failed", + "pipeline-error-smallest-ai-voice-failed", + "pipeline-error-neuphonic-voice-failed", + "pipeline-error-deepgram-transcriber-failed", + "pipeline-error-gladia-transcriber-failed", + "pipeline-error-speechmatics-transcriber-failed", + "pipeline-error-assembly-ai-transcriber-failed", + "pipeline-error-talkscriber-transcriber-failed", + "pipeline-error-azure-speech-transcriber-failed", "pipeline-error-vapi-llm-failed", "pipeline-error-vapi-400-bad-request-validation-failed", "pipeline-error-vapi-401-unauthorized", @@ -20946,36 +26418,15 @@ "vapifault-web-call-worker-setup-failed", "vapifault-transport-connected-but-call-not-active", "vapifault-call-started-but-connection-to-transport-missing", - "pipeline-error-deepgram-transcriber-failed", - "pipeline-error-gladia-transcriber-failed", - "pipeline-error-assembly-ai-transcriber-failed", "pipeline-error-openai-llm-failed", "pipeline-error-azure-openai-llm-failed", "pipeline-error-groq-llm-failed", "pipeline-error-google-llm-failed", "pipeline-error-xai-llm-failed", + "pipeline-error-mistral-llm-failed", "pipeline-error-inflection-ai-llm-failed", - "assistant-not-invalid", - "assistant-not-provided", - "call-start-error-neither-assistant-nor-server-set", - "assistant-request-failed", - "assistant-request-returned-error", - "assistant-request-returned-unspeakable-error", - "assistant-request-returned-invalid-assistant", - "assistant-request-returned-no-assistant", - "assistant-request-returned-forwarding-phone-number", - "assistant-ended-call", - "assistant-said-end-call-phrase", - "assistant-forwarded-call", - 
"assistant-join-timed-out", - "customer-busy", - "customer-ended-call", - "customer-did-not-answer", - "customer-did-not-give-microphone-permission", - "assistant-said-message-with-end-call-enabled", - "exceeded-max-duration", - "manually-canceled", - "phone-call-provider-closed-websocket", + "pipeline-error-cerebras-llm-failed", + "pipeline-error-deep-seek-llm-failed", "pipeline-error-openai-400-bad-request-validation-failed", "pipeline-error-openai-401-unauthorized", "pipeline-error-openai-403-model-access-denied", @@ -20991,11 +26442,21 @@ "pipeline-error-xai-403-model-access-denied", "pipeline-error-xai-429-exceeded-quota", "pipeline-error-xai-500-server-error", + "pipeline-error-mistral-400-bad-request-validation-failed", + "pipeline-error-mistral-401-unauthorized", + "pipeline-error-mistral-403-model-access-denied", + "pipeline-error-mistral-429-exceeded-quota", + "pipeline-error-mistral-500-server-error", "pipeline-error-inflection-ai-400-bad-request-validation-failed", "pipeline-error-inflection-ai-401-unauthorized", "pipeline-error-inflection-ai-403-model-access-denied", "pipeline-error-inflection-ai-429-exceeded-quota", "pipeline-error-inflection-ai-500-server-error", + "pipeline-error-deep-seek-400-bad-request-validation-failed", + "pipeline-error-deep-seek-401-unauthorized", + "pipeline-error-deep-seek-403-model-access-denied", + "pipeline-error-deep-seek-429-exceeded-quota", + "pipeline-error-deep-seek-500-server-error", "pipeline-error-azure-openai-400-bad-request-validation-failed", "pipeline-error-azure-openai-401-unauthorized", "pipeline-error-azure-openai-403-model-access-denied", @@ -21006,6 +26467,11 @@ "pipeline-error-groq-403-model-access-denied", "pipeline-error-groq-429-exceeded-quota", "pipeline-error-groq-500-server-error", + "pipeline-error-cerebras-400-bad-request-validation-failed", + "pipeline-error-cerebras-401-unauthorized", + "pipeline-error-cerebras-403-model-access-denied", + "pipeline-error-cerebras-429-exceeded-quota", + "pipeline-error-cerebras-500-server-error", "pipeline-error-anthropic-400-bad-request-validation-failed", "pipeline-error-anthropic-401-unauthorized", "pipeline-error-anthropic-403-model-access-denied", @@ -21093,6 +26559,8 @@ "pipeline-error-playht-429-exceeded-quota", "pipeline-error-playht-502-gateway-error", "pipeline-error-playht-504-gateway-error", + "pipeline-error-tavus-video-failed", + "pipeline-error-custom-transcriber-failed", "pipeline-error-deepgram-returning-403-model-access-denied", "pipeline-error-deepgram-returning-401-invalid-credentials", "pipeline-error-deepgram-returning-404-not-found", @@ -21100,7 +26568,6 @@ "pipeline-error-deepgram-returning-500-invalid-json", "pipeline-error-deepgram-returning-502-network-error", "pipeline-error-deepgram-returning-502-bad-gateway-ehostunreach", - "pipeline-error-custom-transcriber-failed", "silence-timed-out", "sip-gateway-failed-to-connect-call", "twilio-failed-to-connect-call", @@ -21694,6 +27161,31 @@ "type": "string", "description": "This is the reason the call ended. 
This is only sent if the status is \"ended\".", "enum": [ + "assistant-not-valid", + "assistant-not-provided", + "call-start-error-neither-assistant-nor-server-set", + "assistant-request-failed", + "assistant-request-returned-error", + "assistant-request-returned-unspeakable-error", + "assistant-request-returned-invalid-assistant", + "assistant-request-returned-no-assistant", + "assistant-request-returned-forwarding-phone-number", + "assistant-ended-call", + "assistant-said-end-call-phrase", + "assistant-ended-call-with-hangup-task", + "assistant-forwarded-call", + "assistant-join-timed-out", + "customer-busy", + "customer-ended-call", + "customer-did-not-answer", + "customer-did-not-give-microphone-permission", + "assistant-said-message-with-end-call-enabled", + "exceeded-max-duration", + "manually-canceled", + "phone-call-provider-closed-websocket", + "db-error", + "assistant-not-found", + "license-check-failed", "pipeline-error-openai-voice-failed", "pipeline-error-cartesia-voice-failed", "pipeline-error-deepgram-voice-failed", @@ -21703,9 +27195,14 @@ "pipeline-error-azure-voice-failed", "pipeline-error-rime-ai-voice-failed", "pipeline-error-neets-voice-failed", - "db-error", - "assistant-not-found", - "license-check-failed", + "pipeline-error-smallest-ai-voice-failed", + "pipeline-error-neuphonic-voice-failed", + "pipeline-error-deepgram-transcriber-failed", + "pipeline-error-gladia-transcriber-failed", + "pipeline-error-speechmatics-transcriber-failed", + "pipeline-error-assembly-ai-transcriber-failed", + "pipeline-error-talkscriber-transcriber-failed", + "pipeline-error-azure-speech-transcriber-failed", "pipeline-error-vapi-llm-failed", "pipeline-error-vapi-400-bad-request-validation-failed", "pipeline-error-vapi-401-unauthorized", @@ -21724,37 +27221,16 @@ "vapifault-transport-never-connected", "vapifault-web-call-worker-setup-failed", "vapifault-transport-connected-but-call-not-active", - "vapifault-call-started-but-connection-to-transport-missing", - "pipeline-error-deepgram-transcriber-failed", - "pipeline-error-gladia-transcriber-failed", - "pipeline-error-assembly-ai-transcriber-failed", - "pipeline-error-openai-llm-failed", - "pipeline-error-azure-openai-llm-failed", - "pipeline-error-groq-llm-failed", - "pipeline-error-google-llm-failed", - "pipeline-error-xai-llm-failed", - "pipeline-error-inflection-ai-llm-failed", - "assistant-not-invalid", - "assistant-not-provided", - "call-start-error-neither-assistant-nor-server-set", - "assistant-request-failed", - "assistant-request-returned-error", - "assistant-request-returned-unspeakable-error", - "assistant-request-returned-invalid-assistant", - "assistant-request-returned-no-assistant", - "assistant-request-returned-forwarding-phone-number", - "assistant-ended-call", - "assistant-said-end-call-phrase", - "assistant-forwarded-call", - "assistant-join-timed-out", - "customer-busy", - "customer-ended-call", - "customer-did-not-answer", - "customer-did-not-give-microphone-permission", - "assistant-said-message-with-end-call-enabled", - "exceeded-max-duration", - "manually-canceled", - "phone-call-provider-closed-websocket", + "vapifault-call-started-but-connection-to-transport-missing", + "pipeline-error-openai-llm-failed", + "pipeline-error-azure-openai-llm-failed", + "pipeline-error-groq-llm-failed", + "pipeline-error-google-llm-failed", + "pipeline-error-xai-llm-failed", + "pipeline-error-mistral-llm-failed", + "pipeline-error-inflection-ai-llm-failed", + "pipeline-error-cerebras-llm-failed", + 
"pipeline-error-deep-seek-llm-failed", "pipeline-error-openai-400-bad-request-validation-failed", "pipeline-error-openai-401-unauthorized", "pipeline-error-openai-403-model-access-denied", @@ -21770,11 +27246,21 @@ "pipeline-error-xai-403-model-access-denied", "pipeline-error-xai-429-exceeded-quota", "pipeline-error-xai-500-server-error", + "pipeline-error-mistral-400-bad-request-validation-failed", + "pipeline-error-mistral-401-unauthorized", + "pipeline-error-mistral-403-model-access-denied", + "pipeline-error-mistral-429-exceeded-quota", + "pipeline-error-mistral-500-server-error", "pipeline-error-inflection-ai-400-bad-request-validation-failed", "pipeline-error-inflection-ai-401-unauthorized", "pipeline-error-inflection-ai-403-model-access-denied", "pipeline-error-inflection-ai-429-exceeded-quota", "pipeline-error-inflection-ai-500-server-error", + "pipeline-error-deep-seek-400-bad-request-validation-failed", + "pipeline-error-deep-seek-401-unauthorized", + "pipeline-error-deep-seek-403-model-access-denied", + "pipeline-error-deep-seek-429-exceeded-quota", + "pipeline-error-deep-seek-500-server-error", "pipeline-error-azure-openai-400-bad-request-validation-failed", "pipeline-error-azure-openai-401-unauthorized", "pipeline-error-azure-openai-403-model-access-denied", @@ -21785,6 +27271,11 @@ "pipeline-error-groq-403-model-access-denied", "pipeline-error-groq-429-exceeded-quota", "pipeline-error-groq-500-server-error", + "pipeline-error-cerebras-400-bad-request-validation-failed", + "pipeline-error-cerebras-401-unauthorized", + "pipeline-error-cerebras-403-model-access-denied", + "pipeline-error-cerebras-429-exceeded-quota", + "pipeline-error-cerebras-500-server-error", "pipeline-error-anthropic-400-bad-request-validation-failed", "pipeline-error-anthropic-401-unauthorized", "pipeline-error-anthropic-403-model-access-denied", @@ -21872,6 +27363,8 @@ "pipeline-error-playht-429-exceeded-quota", "pipeline-error-playht-502-gateway-error", "pipeline-error-playht-504-gateway-error", + "pipeline-error-tavus-video-failed", + "pipeline-error-custom-transcriber-failed", "pipeline-error-deepgram-returning-403-model-access-denied", "pipeline-error-deepgram-returning-401-invalid-credentials", "pipeline-error-deepgram-returning-404-not-found", @@ -21879,7 +27372,6 @@ "pipeline-error-deepgram-returning-500-invalid-json", "pipeline-error-deepgram-returning-502-network-error", "pipeline-error-deepgram-returning-502-bad-gateway-ehostunreach", - "pipeline-error-custom-transcriber-failed", "silence-timed-out", "sip-gateway-failed-to-connect-call", "twilio-failed-to-connect-call", @@ -21976,6 +27468,10 @@ "type": "string", "description": "This is the transcript of the call. This is only sent if the status is \"forwarding\"." }, + "summary": { + "type": "string", + "description": "This is the summary of the call. This is only sent if the status is \"forwarding\"." + }, "inboundPhoneCallDebuggingArtifacts": { "type": "object", "description": "This is the inbound phone call debugging artifacts. This is only sent if the status is \"ended\" and there was an error accepting the inbound phone call.\n\nThis will include any errors related to the \"assistant-request\" if one was made." @@ -22315,7 +27811,8 @@ "type": "string", "description": "This is the type of the message. 
\"transcript\" is sent as transcriber outputs partial or final transcript.", "enum": [ - "transcript" + "transcript", + "transcript[transcriptType=\"final\"]" ] }, "timestamp": { @@ -23073,6 +28570,21 @@ } } }, + "ClientInboundMessageEndCall": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the type of the message. Send \"end-call\" message to end the call.", + "enum": [ + "end-call" + ] + } + }, + "required": [ + "type" + ] + }, "ClientInboundMessageTransfer": { "type": "object", "properties": { @@ -23095,6 +28607,10 @@ "title": "SipTransferDestination" } ] + }, + "content": { + "type": "string", + "description": "This is the content to say." } }, "required": [ @@ -23119,6 +28635,10 @@ "$ref": "#/components/schemas/ClientInboundMessageSay", "title": "Say" }, + { + "$ref": "#/components/schemas/ClientInboundMessageEndCall", + "title": "EndCall" + }, { "$ref": "#/components/schemas/ClientInboundMessageTransfer", "title": "Transfer" @@ -23712,6 +29232,263 @@ "metadata" ] }, + "BashToolWithToolCall": { + "type": "object", + "properties": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { + "type": "string", + "enum": [ + "bash" + ], + "description": "The type of tool. \"bash\" for Bash tool." + }, + "subType": { + "type": "string", + "enum": [ + "bash_20241022" + ], + "description": "The sub type of tool." + }, + "toolCall": { + "$ref": "#/components/schemas/ToolCall" + }, + "name": { + "type": "string", + "description": "The name of the tool, fixed to 'bash'", + "default": "bash", + "enum": [ + "bash" + ] + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. 
One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + } + }, + "required": [ + "type", + "subType", + "toolCall", + "name" + ] + }, + "ComputerToolWithToolCall": { + "type": "object", + "properties": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { + "type": "string", + "enum": [ + "computer" + ], + "description": "The type of tool. \"computer\" for Computer tool." + }, + "subType": { + "type": "string", + "enum": [ + "computer_20241022" + ], + "description": "The sub type of tool." + }, + "toolCall": { + "$ref": "#/components/schemas/ToolCall" + }, + "name": { + "type": "string", + "description": "The name of the tool, fixed to 'computer'", + "default": "computer", + "enum": [ + "computer" + ] + }, + "displayWidthPx": { + "type": "number", + "description": "The display width in pixels" + }, + "displayHeightPx": { + "type": "number", + "description": "The display height in pixels" + }, + "displayNumber": { + "type": "number", + "description": "Optional display number" + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. 
One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + } + }, + "required": [ + "type", + "subType", + "toolCall", + "name", + "displayWidthPx", + "displayHeightPx" + ] + }, + "TextEditorToolWithToolCall": { + "type": "object", + "properties": { + "async": { + "type": "boolean", + "description": "This determines if the tool is async.\n\nIf async, the assistant will move forward without waiting for your server to respond. This is useful if you just want to trigger something on your server.\n\nIf sync, the assistant will wait for your server to respond. This is useful if want assistant to respond with the result from your server.\n\nDefaults to synchronous (`false`).", + "example": false + }, + "messages": { + "type": "array", + "description": "These are the messages that will be spoken to the user as the tool is running.\n\nFor some tools, this is auto-filled based on special fields like `tool.destinations`. For others like the function tool, these can be custom configured.", + "items": { + "oneOf": [ + { + "$ref": "#/components/schemas/ToolMessageStart", + "title": "ToolMessageStart" + }, + { + "$ref": "#/components/schemas/ToolMessageComplete", + "title": "ToolMessageComplete" + }, + { + "$ref": "#/components/schemas/ToolMessageFailed", + "title": "ToolMessageFailed" + }, + { + "$ref": "#/components/schemas/ToolMessageDelayed", + "title": "ToolMessageDelayed" + } + ] + } + }, + "type": { + "type": "string", + "enum": [ + "textEditor" + ], + "description": "The type of tool. \"textEditor\" for Text Editor tool." + }, + "subType": { + "type": "string", + "enum": [ + "text_editor_20241022" + ], + "description": "The sub type of tool." + }, + "toolCall": { + "$ref": "#/components/schemas/ToolCall" + }, + "name": { + "type": "string", + "description": "The name of the tool, fixed to 'str_replace_editor'", + "default": "str_replace_editor", + "enum": [ + "str_replace_editor" + ] + }, + "function": { + "description": "This is the function definition of the tool.\n\nFor `endCall`, `transferCall`, and `dtmf` tools, this is auto-filled based on tool-specific fields like `tool.destinations`. But, even in those cases, you can provide a custom function definition for advanced use cases.\n\nAn example of an advanced use case is if you want to customize the message that's spoken for `endCall` tool. You can specify a function where it returns an argument \"reason\". Then, in `messages` array, you can have many \"request-complete\" messages. One of these messages will be triggered if the `messages[].conditions` matches the \"reason\" argument.", + "allOf": [ + { + "$ref": "#/components/schemas/OpenAIFunction" + } + ] + }, + "server": { + "description": "This is the server that will be hit when this tool is requested by the model.\n\nAll requests will be sent with the call object among other things. 
You can find more details in the Server URL documentation.\n\nThis overrides the serverUrl set on the org and the phoneNumber. Order of precedence: highest tool.server.url, then assistant.serverUrl, then phoneNumber.serverUrl, then org.serverUrl.", + "allOf": [ + { + "$ref": "#/components/schemas/Server" + } + ] + } + }, + "required": [ + "type", + "subType", + "toolCall", + "name" + ] + }, "StepDestination": { "type": "object", "properties": { diff --git a/fern/assets/styles.css b/fern/assets/styles.css index d425d8b4f..389afc9dc 100644 --- a/fern/assets/styles.css +++ b/fern/assets/styles.css @@ -74,37 +74,59 @@ height: 100%; } -#\/api-reference\/webhooks\/server-message\#request h3 + div:first-of-type > div:first-of-type { +#\/api-reference\/webhooks\/server-message\#request + h3 + + div:first-of-type + > div:first-of-type { display: none; } -#\/api-reference\/webhooks\/server-message\#request h3 + div:first-of-type:before { +#\/api-reference\/webhooks\/server-message\#request + h3 + + div:first-of-type:before { content: "Vapi will make a request to your server with the following object:"; - font-size: .875rem; + font-size: 0.875rem; } -:is(.light) #\/api-reference\/webhooks\/server-message\#request h3 + div:first-of-type:before { +:is(.light) + #\/api-reference\/webhooks\/server-message\#request + h3 + + div:first-of-type:before { color: #0008059f; } -:is(.dark) #\/api-reference\/webhooks\/server-message\#request h3 + div:first-of-type:before { +:is(.dark) + #\/api-reference\/webhooks\/server-message\#request + h3 + + div:first-of-type:before { color: #f6f5ffb6; } -#\/api-reference\/webhooks\/client-message\#request h3 + div:first-of-type > div:first-of-type { +#\/api-reference\/webhooks\/client-message\#request + h3 + + div:first-of-type + > div:first-of-type { display: none; } -#\/api-reference\/webhooks\/client-message\#request h3 + div:first-of-type:before { +#\/api-reference\/webhooks\/client-message\#request + h3 + + div:first-of-type:before { content: "Vapi will make a request to your server with the following object:"; - font-size: .875rem; + font-size: 0.875rem; } -:is(.light) #\/api-reference\/webhooks\/client-message\#request h3 + div:first-of-type:before { +:is(.light) + #\/api-reference\/webhooks\/client-message\#request + h3 + + div:first-of-type:before { color: #0008059f; } -:is(.dark) #\/api-reference\/webhooks\/client-message\#request h3 + div:first-of-type:before { +:is(.dark) + #\/api-reference\/webhooks\/client-message\#request + h3 + + div:first-of-type:before { color: #f6f5ffb6; } @@ -115,4 +137,21 @@ .clipped-background { top: 0 !important; -} \ No newline at end of file +} + +/* HEADER */ +.fern-theme-default.fern-container .fern-header, +.fern-theme-default.fern-container .fern-header-tabs { + background-color: light-dark(#fffaea, #0e0e13); +} + +/* CARD STYLES */ +.fern-card { + border-color: light-dark(#d9d3c2, #27272a); + background-color: light-dark(#fffaea, #0e0e13); +} + +/* SIDEBAR */ +.fern-theme-default.fern-container .fern-sidebar-container { + background-color: light-dark(#fffaea, #0e0e13); +} diff --git a/fern/assistants.mdx b/fern/assistants.mdx index 4bd17482d..44c1aa57b 100644 --- a/fern/assistants.mdx +++ b/fern/assistants.mdx @@ -1,13 +1,62 @@ --- -title: Introduction +title: Introduction to Assistants subtitle: The core building-block of voice agents on Vapi. slug: assistants --- +[**Assistant**](/api-reference/assistants/create) is a fancy word for an AI configuration that can be used across phone calls and Vapi clients. 
Your voice assistant can augment your customer support and experience for call centers, business websites, mobile apps, and much more. -**Assistant** is a fancy word for an AI configuration that can be used across phone calls and Vapi clients. Your voice assistant can augment your customer support and experience for call centers, business websites, mobile apps, and much more. + -There are three core components: **Transcriber**, **Model**, and **Voice**. These can be configured, mixed, and matched for your use case.
There are also various other configurable properties you can find [here](/api-reference/assistants/create-assistant) Below, check out some ways you can layer in powerful customizations and features to meet any use case.
+## Core Components
+
+There are three core components that make up an assistant:
+
+- **Transcriber**: Converts spoken audio into text
+- **Model**: The AI model that processes the text and generates responses
+- **Voice**: The voice that speaks the AI's responses
+
+These components can be configured, mixed, and matched for your specific use case.
+
+  View all configurable properties in the [API Reference](/api-reference/assistants/create-assistant).
+
+## Key Features
+
+### Dynamic Variables
+Personalize your assistant's responses using variables that can be customized for each call. This allows you to:
+- Insert dynamic content like dates, times, and user information
+- Customize greetings and responses
+- Maintain context across conversations
+
+### Call Analysis
+Get detailed insights into each conversation through:
+- Call summaries
+- Structured data extraction
+- Success evaluation metrics
+- Custom analysis rubrics
+
+### Persistence Options
+Choose between:
+- **Persistent Assistants**: Reusable configurations stored via the `/assistant` endpoint
+- **Temporary Assistants**: One-time configurations specified when starting a call
+
+## Prompting Best Practices
+
+Effective prompt engineering is crucial for creating successful voice AI agents. Learn how to:
+- Structure prompts for voice interactions
+- Add personality and natural speech patterns
+- Handle errors gracefully
+- Improve response quality
+
+  Learn best practices for engineering voice AI prompts
+
 ## Advanced Concepts
diff --git a/fern/assistants/voice-formatting-plan.mdx b/fern/assistants/voice-formatting-plan.mdx
new file mode 100644
index 000000000..fdb77ce95
--- /dev/null
+++ b/fern/assistants/voice-formatting-plan.mdx
@@ -0,0 +1,246 @@
+## What is Voice Input Formatted?
+
+When interacting with voice assistants, you might notice terms like `Voice Input Formatted` in call logs or system outputs. This article explains what it means, how it works, and why it's important for delivering clear and natural voice interactions.
+
+Voice Input Formatted is a function that takes raw text from a language model (LLM) and cleans it up so the text-to-speech (TTS) provider can read it more naturally. It’s **on by default** in your assistant’s voice provider settings, because it helps turn things like:
+
+- `$42.50` → `forty two dollars and fifty cents`
+- `ST` → `STREET`
+- phone numbers → spaced digits (“1 2 3 4 5 6 7 8 9 0”).
+
+If you prefer the raw, unchanged text, you can **turn off** these transformations, which we’ll show you later.
+
+### Log Example
+
+![Screenshot 2025-01-21 at 10.23.19.png](https://img.notionusercontent.com/s3/prod-files-secure%2Ffdafdda2-774c-49e6-8896-a352ff4d44f3%2Ff603f2bd-36cf-4085-a3bc-f76c89a1ef75%2FScreenshot_2025-01-21_at_10.23.19.png/size/w=2000?exp=1737581744&sig=yoEEQF-BcTTgEVBNdcZh9MWHye2moRsbUcxGPjATNX8)
+
+## 1. Step-by-Step Transformations
+
+When `Voice Input Formatted` runs, it calls a series of helper functions in a row. Each one focuses on a different kind of text pattern. The entire process happens in this order:
+
+1. **removeAngleBracketContent**
+2. **removeMarkdownSymbols**
+3. **removePhrasesInAsterisks**
+4. **replaceNewLinesWithPeriods**
+5. **replaceColonsWithPeriods**
+6. **formatAcronyms**
+7. **formatDollarAmounts**
+8. **formatEmails**
+9. **formatDates**
+10. **formatTimes**
+11. **formatDistances, formatUnits, formatPercentages, formatPhoneNumbers**
+12. **formatNumbers**
+13. **Applying Replacements**
+
+We’ll walk you through them using a **shorter example** than before.
+
+### 1.1 Our Simpler Example Input
+
+```
+Hello world
+**Wanted** to say *hi*
+We have NASA and .NET here,
+call me at 123-456-7890,
+price: $42.50
+and the date is 2023 05 10
+and time is 14:00
+Distance is 5km
+We might see 9999
+the address is 320 ST 21 RD
+my email is JOHN.DOE@example.COM
+
+```
+
+### 1.2 removeAngleBracketContent
+
+- **What it does**: Removes `` unless it’s ``, ``, or double angle brackets `<< >>`.
+- **Example effect**: `` gets removed.
+
+**Result so far**:
+
+```
+Hello world
+**Wanted** to say *hi*
+We have NASA and .NET here,
+call me at 123-456-7890,
+price: $42.50
+and the date is 2023 05 10
+and time is 14:00
+Distance is 5km
+We might see 9999
+the address is 320 ST 21 RD
+my email is JOHN.DOE@example.COM
+
+```
+
+### 1.3 removeMarkdownSymbols
+
+- **What it does**: Removes `_`, backticks, or `~`. Some versions also remove double asterisks, but that might happen in a later step (next function).
+
+In this example, there’s `**Wanted**`, which *might* remain if we strictly only remove `_`, backticks, and tildes. If the code does remove `**` as well, it’ll vanish here or in the next step. Let’s assume it doesn’t remove them in this step.
+
+**Result**: *No real change if the code only targets `_`, backticks, and `~`.*
+
+```
+Hello world
+**Wanted** to say *hi*
+...
+
+```
+
+### 1.4 removePhrasesInAsterisks
+
+- **What it does**: Looks for `*some text*` or `**some text**` and cuts it out.
+
+In our text, we have `**Wanted**` and `*hi*`. Both get removed if the function is broad enough to remove single- and double-asterisk blocks.
+
+**Result**:
+
+```
+Hello world
+ to say
+We have NASA and .NET here,
+call me at 123-456-7890,
+price: $42.50
+and the date is 2023 05 10
+and time is 14:00
+Distance is 5km
+We might see 9999
+the address is 320 ST 21 RD
+my email is JOHN.DOE@example.COM
+
+```
+
+### 1.5 replaceNewLinesWithPeriods
+
+- **What it does**: Turns line breaks into periods and merges repeated periods.
+
+Let’s say the above text has line breaks. After this step, it’s more of a single line (or fewer lines), each newline replaced by a period.
+
+**Result** (roughly):
+
+```
+Hello world . to say . We have NASA and .NET here, call me at 123-456-7890, price: $42.50 and the date is 2023 05 10 and time is 14:00 Distance is 5km We might see 9999 the address is 320 ST 21 RD my email is JOHN.DOE@example.COM
+
+```
+
+### 1.6 replaceColonsWithPeriods
+
+- **What it does**: `:` → `.`
+
+Our text has `price: $42.50`. That becomes `price. $42.50`.
+
+**Result**:
+
+```
+Hello world . to say . We have NASA and .NET here, call me at 123-456-7890, price. $42.50 ...
+
+```
+
+### 1.7 formatAcronyms
+
+- **What it does**:
+  - If something is in a known “to-lower” list (like `NASA`, `.NET`), it becomes lowercase (`nasa`, `.net`).
+  - If it’s all-caps but not recognized, it might get spaced letters. If it has vowels, it’s left alone.
+
+In the example:
+
+- `NASA` → `nasa`
+- `.NET` → `.net`
+
+### 1.8 formatDollarAmounts
+
+- **What it does**: `$42.50` → “forty two dollars and fifty cents.”
+
+### 1.9 formatEmails
+
+- **What it does**: Replaces `@` with “ at ” and `.` with “ dot ” in emails.
+- `JOHN.DOE@example.COM` → `JOHN dot DOE at example dot COM`
+
+### 1.10 formatDates
+
+- **What it does**: `YYYY MM DD` → e.g. “Wednesday, May 10, 2023” (if valid).
+- `2023 05 10` becomes “Wednesday, May 10, 2023” (the day name depends on how the code calculates it).
+
+### 1.11 formatTimes
+
+- **What it does**: `14:00` → `14` (since the minutes are “00,” it removes them).
+- If it was `14:30`, it might become `14 30`.
+
+### 1.12 formatDistances, formatUnits, formatPercentages, formatPhoneNumbers
+
+- **Distances**: `5km` → “5 kilometers.”
+- **Units**: e.g. `43 lb` → “forty three pounds.”
+- **Percentages**: `50%` → “50 percent.”
+- **PhoneNumbers**: `123-456-7890` → `1 2 3 4 5 6 7 8 9 0`.
+
+### 1.13 formatNumbers
+
+- **What it does**:
+  - Skips year-like numbers if they’re below the current year (2025).
+  - For large numbers above a cutoff (e.g. 1000 or 5000), it reads them out as digits.
+  - Negative numbers: `-9` → “minus nine.”
+  - Decimals: `2.5` → “two point five.”
+
+In our case, `9999` might be big enough to be spelled out (nine thousand nine hundred ninety nine) or read as spaced digits, depending on the cutoff.
+
+`2023` used with `05 10` might get turned into a date, so it’s handled by the date logic, not the plain number logic.
+
+### 1.14 Applying Replacements (street-suffix expansions)
+
+- **Runs last**. If you have user-defined replacements like `\bST\b` → `STREET` and `\bRD\b` → `ROAD`, they are applied after all the other steps.
+- So `320 ST 21 RD` → `320 STREET 21 ROAD`.
+
+**End Result**: A single line of text with all the helpful expansions and transformations done.
+
+## 2. Formatting Plan: Customization Options
+
+The **Formatting Plan** governs how Voice Input Formatted works. Here are the main settings you can customize:
+
+### 2.1 Enabled
+
+Determines whether the formatting is applied.
+
+- **Default**: `true`
+- To disable: set `voice.chunkPlan.formatPlan.enabled = false`.
+
+### 2.2 Number-to-Digits Cutoff
+
+This decides when numbers are read as digits instead of words.
+
+- **Default**: `2025` (the current year).
+- The code generally **doesn’t** convert numbers below the current year into spelled-out words, so an obvious year stays as digits.
+- If a number is bigger than the cutoff (`numberToDigitsCutoff`), it is read out as digits.
+- Negative numbers become “minus,” decimals get “point,” etc.
+- Example: with a cutoff of `2025`, numbers like `12345` will remain digits.
+- To ensure larger numbers are spelled out, set the cutoff higher, like `300000`. For example:
+  - `30003` → “thirty thousand and three” (with a cutoff of `300000`).
+
+### 2.3 Replacements
+
+Allows exact or regex-based substitutions in text.
+
+- **Example 1**: Replace `hello` with `hi`: `{ type: 'exact', key: 'hello', value: 'hi' }`.
+- **Example 2**: Replace words matching a pattern: `{ type: 'regex', regex: '\\b[a-zA-Z]{5}\\b', value: 'hi' }`.
+
+### Note
+
+Currently, only **replacements** and the **number-to-digits cutoff** are exposed for customization. Other options, such as acronym replacement, cannot currently be toggled.
+
+## 3. How to Turn It Off
+
+By default, the entire pipeline is **on** because it helps TTS read better. To **turn it off**, set:
+
+```
+voice.chunkPlan.enabled = false;
+// or
+voice.chunkPlan.formatPlan.enabled = false;
+```
+
+Either of those flags being `false` means we **skip** calling `Voice Input Formatted`.
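+Putting the exposed options together, here is a minimal sketch of where these fields live on an assistant’s voice configuration. The `provider` and `voiceId` values are placeholders; the `chunkPlan` and `formatPlan` fields are the ones described above.
+
+```json
+{
+  "voice": {
+    "provider": "11labs",
+    "voiceId": "your-voice-id",
+    "chunkPlan": {
+      "enabled": true,
+      "formatPlan": {
+        "enabled": true,
+        "numberToDigitsCutoff": 300000,
+        "replacements": [
+          { "type": "exact", "key": "hello", "value": "hi" },
+          { "type": "regex", "regex": "\\bST\\b", "value": "STREET" }
+        ]
+      }
+    }
+  }
+}
+```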
+## 4. Conclusion
+
+- `Voice Input Formatted` orchestrates a chain of mini-functions that together fix punctuation, expand abbreviations, and make text more readable out loud.
+- You can keep it **on** for better TTS results or **off** if you need the raw LLM output.
+- The final transformations, especially the user-supplied replacements (like street expansions), happen **last**, so keep in mind that they rely on the expansions applied in the earlier steps.
diff --git a/fern/quickstart/billing.mdx b/fern/billing/billing-faq.mdx
similarity index 99%
rename from fern/quickstart/billing.mdx
rename to fern/billing/billing-faq.mdx
index 832fa0ba0..fab17eb17 100644
--- a/fern/quickstart/billing.mdx
+++ b/fern/billing/billing-faq.mdx
@@ -1,7 +1,7 @@
 ---
 title: Billing FAQ
 subtitle: How billing with Vapi works.
-slug: quickstart/billing
+slug: billing/billing-faq
 ---
 
 ### Overview
diff --git a/fern/billing/billing-limits.mdx b/fern/billing/billing-limits.mdx
index 3b92f19d3..7dadaa746 100644
--- a/fern/billing/billing-limits.mdx
+++ b/fern/billing/billing-limits.mdx
@@ -9,9 +9,16 @@ You can set billing limits in the billing section of your dashboard.
   You can access your billing settings at
-  [dashboard.vapi.ai/billing](https://dashboard.vapi.ai/billing)
+  [dashboard.vapi.ai/org/billing](https://dashboard.vapi.ai/org/billing)
 
+### Concurrency Limits
+Vapi has concurrency limits on both inbound and outbound calls. These limits define the maximum number of simultaneous calls your account can handle. Exceeding your concurrency limit causes new requests to queue or be rejected until existing calls finish.
+
+- The default concurrency limit is 10 simultaneous calls (inbound and outbound combined). This limit applies to your entire account and does not depend on the number of users or organizations associated with it.
+
+- To increase your concurrency limit beyond the default of 10, you must purchase additional concurrent lines in the billing section of the dashboard.
+
 ### Setting a Monthly Billing Limit
 
 In your billing settings you can set a monthly billing limit:
diff --git a/fern/blocks.mdx b/fern/blocks.mdx
index 1f00745ed..f19bf5860 100644
--- a/fern/blocks.mdx
+++ b/fern/blocks.mdx
@@ -1,12 +1,14 @@
 ---
-title: Introduction
+title: Introduction to Blocks
 subtitle: Breaking down bot conversations into smaller, more manageable prompts
 slug: blocks
 ---
 
+
+  **Blocks** is being deprecated in favor of [Workflows](/workflows). We recommend using Workflows for all new development as it provides a more powerful and flexible way to structure conversational AI. We're working on migration tools to help transition existing Blocks implementations to Workflows.
+
-
-We're currently running a beta for **Blocks**, an upcoming feature from [Vapi.ai](http://vapi.ai/) aimed at improving bot conversations. The problem we've noticed is that single LLM prompts are prone to hallucinations, unreliable tool calls, and can’t handle many-step complex instructions.
+We're currently running a beta for [**Blocks**](/api-reference/blocks/create), an upcoming feature from [Vapi.ai](http://vapi.ai/) aimed at improving bot conversations. The problem we've noticed is that single LLM prompts are prone to hallucinations, unreliable tool calls, and can’t handle many-step complex instructions.
 
 **By breaking the conversation into smaller, more manageable prompts**, we can guarantee the bot will do this, then that, or if this happens, then that happens. It’s like having a checklist for conversations — less room for error, more room for getting things right.
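+If you want to try the beta, blocks are created through the API. The sketch below is illustrative only: it assumes a `conversation` block type with an `instruction` field, so check the [Blocks API reference](/api-reference/blocks/create) for the authoritative schema before relying on it.
+
+```bash
+# Hypothetical sketch of creating a simple conversation block.
+# Field names ("type", "name", "instruction") are assumptions; verify them
+# against /api-reference/blocks/create.
+curl -X POST https://api.vapi.ai/block \
+  -H "Authorization: Bearer insert-private-key-here" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "type": "conversation",
+    "name": "collectOrderNumber",
+    "instruction": "Ask the caller for their order number and confirm it back to them."
+  }'
+```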
diff --git a/fern/blocks/block-types.mdx b/fern/blocks/block-types.mdx
index f3563afdb..5a948ac4c 100644
--- a/fern/blocks/block-types.mdx
+++ b/fern/blocks/block-types.mdx
@@ -4,6 +4,9 @@ subtitle: 'Building the Logic and Actions for Each Step in Your Conversation '
 slug: blocks/block-types
 ---
 
+
+  **Blocks** is being deprecated in favor of [Workflows](/workflows). We recommend using Workflows for all new development as it provides a more powerful and flexible way to structure conversational AI. We're working on migration tools to help transition existing Blocks implementations to Workflows.
+
 [**Blocks**](https://api.vapi.ai/api#/Blocks/BlockController_create) are the functional units within a Step, defining what action happens at each stage of a conversation. Each Step can contain only one Block, and there are three main types of Blocks, each designed to handle different aspects of conversation flow.
diff --git a/fern/blocks/steps.mdx b/fern/blocks/steps.mdx
index b5391bf3e..918893c2f 100644
--- a/fern/blocks/steps.mdx
+++ b/fern/blocks/steps.mdx
@@ -4,13 +4,12 @@ subtitle: Building and Controlling Conversation Flow for Your Assistants
 slug: blocks/steps
 ---
 
+
+  **Blocks** is being deprecated in favor of [Workflows](/workflows). We recommend using Workflows for all new development as it provides a more powerful and flexible way to structure conversational AI. We're working on migration tools to help transition existing Blocks implementations to Workflows.
+
 [**Steps**](https://api.vapi.ai/api#:~:text=HandoffStep) are the core building blocks that dictate how conversations progress in a bot interaction. Each Step represents a distinct point in the conversation where the bot performs an action, gathers information, or decides where to go next. Think of Steps as checkpoints in a conversation that guide the flow, manage user inputs, and determine outcomes.
 
-
-  Blocks is currently in beta. We're excited to have you try this new feature and welcome your [feedback](https://discord.com/invite/pUFNcf2WmH) as we continue to refine and improve the experience.
-
-
 #### Features
 
 - **Output:** The data or response expected from the step, as outlined in the block's `outputSchema`.
diff --git a/fern/call-forwarding.mdx b/fern/call-forwarding.mdx
index 0cd5b5b2d..7079ffeaa 100644
--- a/fern/call-forwarding.mdx
+++ b/fern/call-forwarding.mdx
@@ -259,4 +259,50 @@ Here is a full example of a `transferCall` payload using the warm transfer with
 }
 ```
 
+#### 3. Warm Transfer with TwiML
+
+In this mode, Vapi executes TwiML instructions on the destination call leg before connecting the destination number.
+
+* **Configuration:**
+  * Set the `mode` to `"warm-transfer-with-twiml"`.
+  * Provide the TwiML instructions in the `twiml` property.
+  * Supports only `Play`, `Say`, `Gather`, `Hangup`, and `Pause` verbs.
+  * Maximum TwiML length is 4096 characters.
+
+* **Example:**
+
+```json
+"transferPlan": {
+  "mode": "warm-transfer-with-twiml",
+  "twiml": "<Say>Hello, transferring a customer to you.</Say><Say>They called about billing questions.</Say>"
+}
+```
+
+Here is a full example of a `transferCall` payload using the warm transfer with TwiML mode:
+
+```json
+{
+  "type": "transferCall",
+  "messages": [
+    {
+      "type": "request-start",
+      "content": "I'll transfer you to someone who can help."
+    }
+  ],
+  "destinations": [
+    {
+      "type": "number",
+      "number": "+14155551234",
+      "description": "Transfer to customer support",
+      "transferPlan": {
+        "mode": "warm-transfer-with-twiml",
+        "twiml": "<Say>Hello, this is an incoming call from a customer.</Say><Say>They have questions about their recent order.</Say><Say>Connecting you now.</Say>",
+        "sipVerb": "refer"
+      }
+    }
+  ]
+}
+```
+
+ **Note:** In all warm transfer modes, the `{{transcript}}` variable contains the full transcript of the call and can be used within the `summaryPlan`.
diff --git a/fern/calls/call-dynamic-transfers.mdx b/fern/calls/call-dynamic-transfers.mdx
new file mode 100644
index 000000000..84beccc8d
--- /dev/null
+++ b/fern/calls/call-dynamic-transfers.mdx
@@ -0,0 +1,126 @@
+## Introduction to Transfer Destinations
+
+Transferring calls dynamically based on context is an essential feature for handling user interactions effectively. This guide walks you through creating a custom transfer tool, linking it to your assistant, and handling transfer requests with detailed examples. Whether the destination is a phone number, SIP, or another assistant, you'll learn how to configure it seamlessly.
+
+## Step 1: Create a Custom Transfer Tool
+
+To get started, create a transfer tool with an empty `destinations` array:
+
+```bash
+curl -X POST https://api.vapi.ai/tool \
+  -H "Authorization: Bearer insert-private-key-here" \
+  -H "Content-Type: application/json" \
+  -d '{
+  "type": "transferCall",
+  "destinations": [],
+  "function": {
+    "name": "dynamicDestinationTransferCall"
+  }
+}'
+```
+
+This tool acts as a placeholder, allowing dynamic destinations to be defined at runtime.
+
+## Step 2: Link the Tool to Your Assistant
+
+After creating the tool, link it to your assistant. This connection enables the assistant to trigger the tool during calls.
+
+## Step 3: Configure the Server Event
+
+Select the `transfer-destination-request` server event in your assistant settings. This event sends a webhook to your server whenever a transfer is requested, giving you the flexibility to dynamically determine the destination.
+
+## Step 4: Set Up Your Server
+
+Ensure your server is ready to handle incoming requests. Update the assistant's server URL to point to your server, which will process transfer requests and respond with the appropriate destination or error.
+
+## Step 5: Trigger the Tool and Process Requests
+
+Use the following prompt to trigger the transfer tool:
+
+```
+[TASK]
+trigger the dynamicDestinationTransferCall tool
+```
+
+When triggered, the assistant sends a `transfer-destination-request` webhook to your server. This webhook contains all the necessary call details, such as transcripts and messages, allowing your server to process the request dynamically.
+
+**Sample Request Payload:**
+
+```json
+{
+  "type": "transfer-destination-request",
+  "artifact": {
+    "messages": [...],
+    "transcript": "Hello, how can I help you?",
+    "messagesOpenAIFormatted": [...]
+  },
+  "assistant": { "id": "assistant123" },
+  "phoneNumber": "+14155552671",
+  "customer": { "id": "customer456" },
+  "call": { "id": "call789", "status": "ongoing" }
+}
+```
+
+## Step 6: Respond to Transfer Requests
+
+Your server should respond with either a valid `destination` or an `error` to indicate why the transfer cannot be completed.
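+A minimal sketch of such a handler (Express-style; the routing logic and phone number are placeholders, and the payload shape follows the sample above):
+
+```js
+// Hypothetical webhook endpoint that answers transfer-destination-request events.
+app.post("/webhook", (req, res) => {
+  const message = req.body;
+  if (message.type !== "transfer-destination-request") {
+    return res.sendStatus(200); // ignore other server events
+  }
+
+  // Decide where to route the caller based on call context (transcript, customer, etc.).
+  return res.json({
+    destination: {
+      type: "number",
+      number: "+14155552671", // placeholder support line
+      message: "Connecting you now.",
+    },
+  });
+});
+```
+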
+ +### Transfer Destination Request Response Payload + +#### Number Destination + +```json +{ + "destination": { + "type": "number", + "message": "Connecting you to our support line.", + "number": "+14155552671", + "numberE164CheckEnabled": true, + "callerId": "+14155551234", + "extension": "101" + } +} +``` + +Transfers the call to a specific phone number, with options for caller ID and extensions. + +#### SIP Destination + +```json +{ + "destination": { + "type": "sip", + "message": "Connecting your call via SIP.", + "sipUri": "sip:customer-support@domain.com", + "sipHeaders": { + "X-Custom-Header": "value" + } + } +} +``` + +Transfers the call to a SIP URI with optional custom headers. + +### Error Response + +If the transfer cannot be completed, respond with an error: + +```json +{ + "error": "Invalid destination specified." +} +``` + +- **Field**: `error` +- **Description**: Provides a clear reason why the transfer failed. + +## Destination or Error in Response + +Every response to a transfer-destination-request must include either a `destination` or an `error`. These indicate the outcome of the transfer request: + +- **Destination**: Provides details for transferring the call. +- **Error**: Explains why the transfer cannot be completed. + +## Conclusion + +Dynamic call transfers empower your assistant to route calls efficiently based on real-time data. By implementing this flow, you can ensure seamless interactions and provide a better experience for your users. diff --git a/fern/calls/call-handling-with-vapi-and-twilio.mdx b/fern/calls/call-handling-with-vapi-and-twilio.mdx new file mode 100644 index 000000000..12bf06f58 --- /dev/null +++ b/fern/calls/call-handling-with-vapi-and-twilio.mdx @@ -0,0 +1,274 @@ +This document explains how to handle a scenario where a user is on hold while the system attempts to connect them to a specialist. If the specialist does not pick up within X seconds or if the call hits voicemail, we take an alternate action (like playing an announcement or scheduling an appointment). This solution integrates Vapi.ai for AI-driven conversations and Twilio for call bridging. + +## Problem + +Vapi.ai does not provide a built-in way to keep the user on hold, dial a specialist, and handle cases where the specialist is unavailable. We want: + +1. The user already talking to the AI (Vapi). +2. The AI offers to connect them to a specialist. +3. The user is placed on hold or in a conference room. +4. We dial the specialist to join. +5. If the specialist answers, everyone is merged. +6. If the specialist does not answer (within X seconds or goes to voicemail), we want to either announce "Specialist not available" or schedule an appointment. + +## Solution + +1. An inbound call arrives from Vapi or from the user directly. +2. We store its details (e.g., Twilio CallSid). +3. We send TwiML (or instructions) to put the user in a Twilio conference (on hold). +4. We place a second call to the specialist, also directed to join the same conference. +5. If the specialist picks up, Twilio merges the calls. +6. If not, we handle the no-answer event by playing a message or returning control to the AI for scheduling. + +## Steps to Solve the Problem + +1. **Receive Inbound Call** + + - Twilio posts data to your `/inbound_call`. + - You store the call reference. + - You might also invoke Vapi for initial AI instructions. + +2. **Prompt User via Vapi** + + - The user decides whether they want the specialist. + - If yes, you call an endpoint (e.g., `/connect`). + +3. 
**Create/Join Conference**
+
+   - In `/connect`, you update the inbound call to go into a conference route.
+   - The user is effectively on hold.
+
+4. **Dial Specialist**
+
+   - You create a second call leg to the specialist’s phone.
+   - A `statusCallback` can detect no-answer or voicemail.
+
+5. **Detect Unanswered**
+
+   - If Twilio sees a no-answer or failure, your callback logic plays an announcement or signals the AI to schedule an appointment.
+
+6. **Merge or Exit**
+
+   - If the specialist answers, they join the user.
+   - If not, the user is taken off hold and the call ends or goes back to the AI.
+
+7. **Use Ephemeral Call (Optional)**
+   - If you need an in-conference announcement, create a short-lived Twilio call that `<Say>`s the message to everyone, then ends the conference.
+
+## Code Example
+
+Below is a minimal Express.js server implementing the on-hold specialist transfer flow with Vapi and Twilio.
+
+1. **Express Setup and Environment**
+
+```js
+const express = require("express");
+const bodyParser = require("body-parser");
+const axios = require("axios");
+const twilio = require("twilio");
+
+const app = express();
+app.use(bodyParser.urlencoded({ extended: true }));
+app.use(bodyParser.json());
+
+// Load important env vars
+const {
+  TWILIO_ACCOUNT_SID,
+  TWILIO_AUTH_TOKEN,
+  FROM_NUMBER,
+  TO_NUMBER,
+  VAPI_BASE_URL,
+  PHONE_NUMBER_ID,
+  ASSISTANT_ID,
+  PRIVATE_API_KEY,
+} = process.env;
+
+// Create a Twilio client
+const client = twilio(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN);
+
+// We'll store the inbound call SID here for simplicity
+let globalCallSid = "";
+```
+
+2. **`/inbound_call` - Handling the Inbound Call**
+
+```js
+app.post("/inbound_call", async (req, res) => {
+  try {
+    globalCallSid = req.body.CallSid;
+    const caller = req.body.Caller;
+
+    // Example: We call Vapi.ai to get initial TwiML
+    const response = await axios.post(
+      `${VAPI_BASE_URL || "https://api.vapi.ai"}/call`,
+      {
+        phoneNumberId: PHONE_NUMBER_ID,
+        phoneCallProviderBypassEnabled: true,
+        customer: { number: caller },
+        assistantId: ASSISTANT_ID,
+      },
+      {
+        headers: {
+          Authorization: `Bearer ${PRIVATE_API_KEY}`,
+          "Content-Type": "application/json",
+        },
+      }
+    );
+
+    const returnedTwiml = response.data.phoneCallProviderDetails.twiml;
+    return res.type("text/xml").send(returnedTwiml);
+  } catch (err) {
+    console.error(err); // Surface the failure instead of swallowing it
+    return res.status(500).send("Internal Server Error");
+  }
+});
+```
+
+3. **`/connect` - Putting User on Hold and Dialing Specialist**
+
+```js
+app.post("/connect", async (req, res) => {
+  try {
+    const protocol =
+      req.headers["x-forwarded-proto"] === "https" ? "https" : "http";
+    const baseUrl = `${protocol}://${req.get("host")}`;
+    const conferenceUrl = `${baseUrl}/conference`;
+
+    // 1) Update inbound call to fetch TwiML from /conference
+    await client.calls(globalCallSid).update({
+      url: conferenceUrl,
+      method: "POST",
+    });
+
+    // 2) Dial the specialist
+    const statusCallbackUrl = `${baseUrl}/participant-status`;
+
+    await client.calls.create({
+      to: TO_NUMBER,
+      from: FROM_NUMBER,
+      url: conferenceUrl,
+      method: "POST",
+      statusCallback: statusCallbackUrl,
+      statusCallbackMethod: "POST",
+    });
+
+    return res.json({ status: "Specialist call initiated" });
+  } catch (err) {
+    console.error(err); // Surface the failure instead of swallowing it
+    return res.status(500).json({ error: "Failed to connect specialist" });
+  }
+});
+```
+
+4. 
**`/conference` - Placing Callers Into a Conference**
+
+```js
+app.post("/conference", (req, res) => {
+  const VoiceResponse = twilio.twiml.VoiceResponse;
+  const twiml = new VoiceResponse();
+
+  // Put the caller(s) into a conference
+  const dial = twiml.dial();
+  dial.conference(
+    {
+      startConferenceOnEnter: true,
+      endConferenceOnExit: true,
+    },
+    "my_conference_room"
+  );
+
+  return res.type("text/xml").send(twiml.toString());
+});
+```
+
+5. **`/participant-status` - Handling No-Answer or Busy**
+
+```js
+app.post("/participant-status", async (req, res) => {
+  const callStatus = req.body.CallStatus;
+  if (["no-answer", "busy", "failed"].includes(callStatus)) {
+    console.log("Specialist did not pick up:", callStatus);
+    // Additional logic: schedule an appointment, ephemeral call, etc.
+  }
+  return res.sendStatus(200);
+});
+```
+
+6. **`/announce` (Optional) - Ephemeral Announcement**
+
+```js
+app.post("/announce", (req, res) => {
+  const VoiceResponse = twilio.twiml.VoiceResponse;
+  const twiml = new VoiceResponse();
+  twiml.say("Specialist is not available. Ending call now.");
+
+  // Join the conference, then end it.
+  twiml.dial().conference(
+    {
+      startConferenceOnEnter: true,
+      endConferenceOnExit: true,
+    },
+    "my_conference_room"
+  );
+
+  return res.type("text/xml").send(twiml.toString());
+});
+```
+
+7. **Starting the Server**
+
+```js
+app.listen(3000, () => {
+  console.log("Server running on port 3000");
+});
+```
+
+## How to Test
+
+1. **Environment Variables**
+   Set `TWILIO_ACCOUNT_SID`, `TWILIO_AUTH_TOKEN`, `FROM_NUMBER`, `TO_NUMBER`, `VAPI_BASE_URL`, `PHONE_NUMBER_ID`, `ASSISTANT_ID`, and `PRIVATE_API_KEY`.
+
+2. **Expose Your Server**
+
+   - Use a tool like `ngrok` to create a public URL to port 3000.
+   - Configure your Twilio phone number to call `/inbound_call` when a call comes in.
+
+3. **Place a Real Call**
+
+   - Dial your Twilio number from a phone.
+   - Twilio hits `/inbound_call`, which runs the Vapi logic.
+   - Trigger `/connect` to conference the user and dial the specialist.
+   - If the specialist answers, they join the same conference.
+   - If they never answer, Twilio eventually calls `/participant-status`.
+
+4. **Use cURL for Testing**
+   - **Simulate Inbound**:
+     ```bash
+     curl -X POST https://your-server-url/inbound_call \
+       -F "CallSid=CA12345" \
+       -F "Caller=+15551112222"
+     ```
+   - **Connect**:
+     ```bash
+     curl -X POST https://your-server-url/connect \
+       -H "Content-Type: application/json" \
+       -d "{}"
+     ```
+
+## Note on Replacing "Connect" with Vapi Tools
+
+Vapi offers built-in functions and custom tool calls for placing a second call or transferring; you can replace the manual `/connect` call with that Vapi functionality. The flow remains the same: the user is put in a Twilio conference, the specialist is dialed, and any no-answer events are handled.
+
+## Notes & Limitations
+
+1. **Voicemail**
+   If a phone’s voicemail picks up, Twilio sees it as answered. Consider advanced detection or a fallback.
+
+2. **Concurrent Calls**
+   Multiple calls at once require storing separate `CallSid`s or similar references.
+
+3. **Conference Behavior**
+   `startConferenceOnEnter: true` merges participants immediately; `endConferenceOnExit: true` ends the conference when that participant leaves.
+
+4. **X Seconds**
+   Decide how you detect no-answer. Typically, Twilio sets a final `callStatus` if the remote side never picks up.
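+One way to pin down the "X seconds" window is Twilio's `timeout` parameter on the outbound specialist call, which caps ringing time before Twilio reports `no-answer`. A sketch of the adjusted dial from `/connect` (the 20-second value is an arbitrary example):
+
+```js
+// Ring the specialist for at most 20 seconds; after that Twilio fires
+// /participant-status with CallStatus "no-answer".
+await client.calls.create({
+  to: TO_NUMBER,
+  from: FROM_NUMBER,
+  url: conferenceUrl,
+  method: "POST",
+  timeout: 20, // seconds to wait for an answer
+  statusCallback: statusCallbackUrl,
+  statusCallbackMethod: "POST",
+});
+```
+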
+ +With these steps and code, you can integrate Vapi Assistant while using Twilio’s conferencing features to hold, dial out to a specialist, and handle an unanswered or unavailable specialist scenario. diff --git a/fern/calls/voice-mail-detection.mdx b/fern/calls/voice-mail-detection.mdx new file mode 100644 index 000000000..448a67cf6 --- /dev/null +++ b/fern/calls/voice-mail-detection.mdx @@ -0,0 +1,131 @@ +Voicemail is basically a digital answering machine. When you can’t pick up, callers can leave a message so you don’t miss anything important. It’s especially handy if you’re in a meeting, driving, or just can’t get to the phone in time. + +### **The Main Problem** + +If a lot of your calls are landing in voicemail, you could be spending too much time and money on calls that never connect to a real person. This leads to wasted resources, and sometimes missed business opportunities. + +### **The Solution: Early Voicemail Detection** + +By detecting voicemail right away, your VAPI Assistant can either hang up (if leaving a message isn’t necessary) or smoothly play a recorded message. This cuts down on useless call time and makes your entire call flow more efficient. + +## **Two Ways to Detect Voicemail** + +### **1. Using Twilio’s Voicemail Detection** + +Twilio has built-in features to detect when a machine picks up. You configure these settings in your VAPI Assistant so it knows when a voicemail system has answered instead of a live person. + +```jsx +voicemailDetection: { + provider: "twilio", + voicemailDetectionTypes: [ + "machine_start", + "machine_end_beep", + "machine_end_silence", + "unknown", + "machine_end_other" + ], + enabled: true, + machineDetectionTimeout: 15, + machineDetectionSpeechThreshold: 2500, + machineDetectionSpeechEndThreshold: 2050, + machineDetectionSilenceTimeout: 2000 +} + +``` + +- **provider**: Tells VAPI to use Twilio’s system. +- **voicemailDetectionTypes**: Defines the events that mean “voicemail.” +- **machineDetectionTimeout**: How many seconds to wait to confirm a machine. +- The other settings let you fine-tune how quickly or accurately Twilio identifies a machine based on speech or silence. + +#### Quick Reference + +| Setting | Type | Valid Range | Default | +| ---------------------------------- | ------ | ------------- | -------- | +| machineDetectionTimeout | number | 3 – 59 (sec) | 30 (sec) | +| machineDetectionSpeechThreshold | number | 1000–6000 ms | 2400 ms | +| machineDetectionSpeechEndThreshold | number | 500–5000 ms | 1200 ms | +| machineDetectionSilenceTimeout | number | 2000–10000 ms | 5000 ms | + +### **2. Using VAPI’s Built-In Voicemail Tool** + +VAPI also has an LLM-powered tool that listens for typical voicemail greetings or prompts in the call’s audio transcription. If you prefer an approach that relies more on phrasing and context clues, this is a great option. + +```jsx +{ + ...yourExistingSettings, + "model": { + "tools": [{ type: "voicemail" }] + } +} + +``` + +Here, `tools: [{ type: "voicemail" }]` signals that your VAPI Assistant should look for keywords or patterns indicating a voicemail greeting. 
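+As a rough sketch, you could attach this tool by updating your assistant through the API (the assistant ID and key below are placeholders, and you should merge the tool into your existing `model` settings rather than replacing them):
+
+```js
+// Hypothetical example: enable the built-in voicemail tool on an existing assistant.
+await fetch("https://api.vapi.ai/assistant/your-assistant-id", {
+  method: "PATCH",
+  headers: {
+    Authorization: "Bearer your-vapi-private-api-key",
+    "Content-Type": "application/json",
+  },
+  body: JSON.stringify({
+    model: {
+      // ...include your existing model configuration here
+      tools: [{ type: "voicemail" }],
+    },
+  }),
+});
+```
+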
+
+## **Combining Both Approaches**
+
+For the best of both worlds, you can enable Twilio’s detection **and** the built-in voicemail tool at the same time:
+
+```jsx
+{
+  ...yourExistingSettings,
+  voicemailDetection: {
+    provider: "twilio",
+    voicemailDetectionTypes: [
+      "machine_start",
+      "machine_end_beep",
+      "unknown"
+    ],
+    enabled: true,
+    machineDetectionTimeout: 15
+  },
+  model: {
+    tools: [{ type: "voicemail" }]
+  }
+}
+```
+
+When one method doesn’t catch it, the other might—boosting your overall detection accuracy.
+
+## **Tips for Better Voicemail Handling**
+
+1. **Adjust Detection Timing**
+
+   Lower `machineDetectionTimeout` (e.g., to 5 seconds) if you want the system to decide faster. But remember, shorter timeouts can lead to occasional false positives.
+
+2. **Fine-Tune Speech and Silence Thresholds**
+
+   For example:
+
+   ```jsx
+   {
+     "provider": "twilio",
+     "enabled": true,
+     "machineDetectionTimeout": 5,
+     "machineDetectionSpeechThreshold": 2400,
+     "machineDetectionSpeechEndThreshold": 1000,
+     "machineDetectionSilenceTimeout": 3000
+   }
+   ```
+
+   These values tweak how quickly Twilio “listens” for human speech or background silence.
+
+3. **Think Through Your Call Flow**
+   - **Give It Time**: If you’re leaving a message, you might want to increase `startSpeakingPlan.waitSeconds` so the detection has enough time before the tone.
+   - **firstMessageMode**: Setting it to `assistant-waits-for-user` can also give you smoother call handling—your assistant won’t barge in if someone unexpectedly picks up late.
+
+## **What Happens When a Call Ends?**
+
+- **Detected Voicemail + No Message**: The call will end, and you’ll see a reason like `customer-did-not-answer`.
+- **Detected Voicemail + Have a Message**: Your assistant leaves the recorded message, and the call ends with a reason like `voicemail`.
+
+## **Testing and Next Steps**
+
+1. **Make a Test Call**: Dial a known voicemail number and watch how quickly (and accurately) your VAPI Assistant identifies the machine.
+2. **Tweak Settings**: Adjust your timeout and threshold values based on real-world performance.
+3. **Repeat**: Keep testing until you’re confident your configuration is catching voicemail reliably without cutting off real people.
+
+By following these steps, you’ll save time, improve call-handling efficiency, and ensure your system feels more professional. If you need to fine-tune or add new features later, you can always revisit these settings and make quick adjustments.
diff --git a/fern/changelog/2024-12-10.mdx b/fern/changelog/2024-12-10.mdx
new file mode 100644
index 000000000..04cc304cc
--- /dev/null
+++ b/fern/changelog/2024-12-10.mdx
@@ -0,0 +1,3 @@
+1. **Claude Computer Use Tools Available**: You can now use [Claude computer use tools](https://www.anthropic.com/news/3-5-models-and-computer-use) like `BashTool`, `ComputerTool`, and `TextEditorTool` when building your Vapi assistant. Create these tools with `CreateBashToolDTO` (enables shell command execution), `CreateComputerToolDTO` (use desktop functionality with customizable display dimensions using `displayWidthPx`, `displayHeightPx`), and `CreateTextEditorToolDTO` (text editing operations), respectively.
+
+Refer to our [API docs](https://api.vapi.ai/api) to learn more about how to use Claude computer use tools.
\ No newline at end of file
diff --git a/fern/changelog/2024-12-11.mdx b/fern/changelog/2024-12-11.mdx
new file mode 100644
index 000000000..bef409f70
--- /dev/null
+++ b/fern/changelog/2024-12-11.mdx
@@ -0,0 +1,3 @@
+1. 
**Use OpenAI Chat Completions in your Assistant**: you can now more easily integrate your Assistant with OpenAI's [chat completions sessions](https://platform.openai.com/docs/api-reference/chat) by specifying `messages` (an array of `OpenAIMessage` objects) and an `assistantId` (a string). Each `OpenAIMessage` in turn consists of a `content` (a string between 1 and 100,000,000 characters) and a `role` (one of *assistant*, *function*, *user*, *system*, or *tool*). This makes it easier to manage chat sessions associated with a specific assistant. Refer to the `ChatDTO` and `OpenAIMessage` schemas in [our API docs](https://api.vapi.ai/api) to learn more.
+
+2. **Update Subscription Email on Billing Page**: you can now customize which email address appears on your Vapi invoices through the updated billing page, [under payment history](https://dashboard.vapi.ai/org/billing). You can specify an email address (in addition to a physical address and tax ID) - read more in [our docs](https://docs.vapi.ai/quickstart/billing#how-do-i-download-invoices-for-my-credit-purchases).
\ No newline at end of file
diff --git a/fern/changelog/2024-12-13.mdx b/fern/changelog/2024-12-13.mdx
new file mode 100644
index 000000000..1ae525295
--- /dev/null
+++ b/fern/changelog/2024-12-13.mdx
@@ -0,0 +1,3 @@
+1. **Azure Speech Transcriber Support**: You can now use Azure's speech-to-text service by specifying `AzureSpeechTranscriber` as an option for `transcriber`. This allows you to leverage Azure's speech-to-text capabilities when creating or updating your assistant.
+
+Refer to our [API docs](https://api.vapi.ai/api) to learn more.
\ No newline at end of file
diff --git a/fern/changelog/2024-12-14.mdx b/fern/changelog/2024-12-14.mdx
new file mode 100644
index 000000000..a0440734a
--- /dev/null
+++ b/fern/changelog/2024-12-14.mdx
@@ -0,0 +1,3 @@
+1. **Removal of `'gemma-7b-it'` from `GroqModel` Options:** The `'gemma-7b-it'` model is no longer available when selecting Groq as a model provider. Update your applications to use other valid options provided by the API.
+
+Refer to the [`GroqModel` schema](https://api.vapi.ai/api) or the [Vapi dashboard](https://dashboard.vapi.ai/assistants) for a list of supported Groq models.
\ No newline at end of file
diff --git a/fern/changelog/2024-12-19.mdx b/fern/changelog/2024-12-19.mdx
new file mode 100644
index 000000000..ef147ffec
--- /dev/null
+++ b/fern/changelog/2024-12-19.mdx
@@ -0,0 +1 @@
+1. **Azure Region Renamed to `swedencentral` (from *sweden*)**: Azure Speech Services customers using the Sweden data center should now specify `swedencentral` as their Azure Speech Services region instead of `sweden`. Update your region in your code and on the updated [provider keys page](https://dashboard.vapi.ai/keys) > *Azure Speech*.
\ No newline at end of file
diff --git a/fern/changelog/2024-12-21.mdx b/fern/changelog/2024-12-21.mdx
new file mode 100644
index 000000000..43be45fb6
--- /dev/null
+++ b/fern/changelog/2024-12-21.mdx
@@ -0,0 +1,7 @@
+**Expanded Voice Compatibility with Realtime Models**: You can use the voices ash, ballad, coral, sage, and verse with any realtime model, giving you more flexibility in voice synthesis options.
+
+**Access to New OpenAI Models**:
+ You can now specify the new models `gpt-4o-realtime-preview-2024-12-17` and `gpt-4o-mini-realtime-preview-2024-12-17` when configuring `OpenAIModel.model` and `OpenAIModel.fallbackModels`.
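+
+For instance, a sketch of an `Assistant.model` payload using the new realtime models (field placement follows the `OpenAIModel` schema named above; surrounding assistant fields omitted):
+
+```js
+// Point the assistant at the new realtime model, with the mini variant as a fallback.
+const model = {
+  provider: "openai",
+  model: "gpt-4o-realtime-preview-2024-12-17",
+  fallbackModels: ["gpt-4o-mini-realtime-preview-2024-12-17"],
+};
+```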
+
+**New ElevenLabs Voice Models Available**:
+ The new voice models `eleven_flash_v2` and `eleven_flash_v2_5` are now available for use in `ElevenLabsVoice` and `FallbackElevenLabsVoice`, offering potential improvements in voice performance.
\ No newline at end of file
diff --git a/fern/changelog/2024-12-30.mdx b/fern/changelog/2024-12-30.mdx
new file mode 100644
index 000000000..fa7da9ce8
--- /dev/null
+++ b/fern/changelog/2024-12-30.mdx
@@ -0,0 +1,31 @@
+1. **Addition of *AzureSpeechTranscriber*:**
+   You can now configure assistants to use Azure's speech transcription service by setting
+`AzureSpeechTranscriber.provider` to `azure`. Additionally, you will receive Azure transcriber errors like `pipeline-error-azure-speech-transcriber-failed` in `Call.endReason`, `ServerMessageEndOfCallReport.endReason`, and `ServerMessageStatusUpdate.endReason`.
+
+2. **Combined `serverUrl` and `serverUrlSecret` into `server` Property**:
+   The `serverUrl` and `serverUrlSecret` properties have been replaced by a new `server` property in multiple schemas. This lets you configure webhook endpoints using the `server` object, allowing for more detailed and flexible setup, including URL and authentication, in a single place. These schemas include:
+- ByoPhoneNumber
+- BuyPhoneNumberDTO
+- CreateByoPhoneNumberDTO
+- CreateOrgDTO
+- CreateTwilioPhoneNumberDTO
+- CreateVapiPhoneNumberDTO
+- CreateVonagePhoneNumberDTO
+- ImportTwilioPhoneNumberDTO
+- ImportVonagePhoneNumberDTO
+- Org
+- OrgWithOrgUser
+- TwilioPhoneNumber
+- UpdateOrgDTO
+- UpdatePhoneNumberDTO
+- VapiPhoneNumber
+- VonagePhoneNumber
+
+3. **Introduction of New OpenAI Models**:
+You can now use `o1-preview`, `o1-preview-2024-09-12`, `o1-mini`, and `o1-mini-2024-09-12` in `OpenAIModel.model`.
+
+4. **Introduction of *'sonic' Voice Models* in Voice Schemas:**
+   You can now use the `sonic` and `sonic-preview` models in `CartesiaVoice.model` and `FallbackCartesiaVoice.model` configurations.
+
+5. **Removal of Deprecated *GroqModel* Models:**
+   The models `llama3-groq-8b-8192-tool-use-preview` and `llama3-groq-70b-8192-tool-use-preview` have been removed from `GroqModel.model`. You should switch to supported models to avoid any disruptions.
diff --git a/fern/changelog/2025-01-05.mdx b/fern/changelog/2025-01-05.mdx
new file mode 100644
index 000000000..7b1d1734b
--- /dev/null
+++ b/fern/changelog/2025-01-05.mdx
@@ -0,0 +1,9 @@
+1. **New Transfer Plan Mode Added**: You can now include call summaries in the SIP header during blind transfers without assistant involvement with `blind-transfer-add-summary-to-sip-header` (a new `TransferPlan.mode` option). Doing so will make `ServerMessageStatusUpdate` include a `summary` when the call status is `forwarding`, which means you can access call summaries for real-time display or logging purposes in your SIP calls.
+
+2. **Azure Speech Transcription Support**: You can now specify a new property called `AzureSpeechTranscriber.language` in Azure's Speech-to-Text service to improve the accuracy of processing spoken input.
+
+3. **New Groq Model Available**: You can now use `'llama-3.3-70b-versatile'` in `GroqModel.model`.
+
diff --git a/fern/changelog/2025-01-07.mdx b/fern/changelog/2025-01-07.mdx
new file mode 100644
index 000000000..f59ed3fad
--- /dev/null
+++ b/fern/changelog/2025-01-07.mdx
@@ -0,0 +1,9 @@
+# New Gemini 2.0 Models, Realtime Updates, and Configuration Options
+
+1. 
**New Gemini 2.0 Models**: You can now use two new models in `Assistant.model[model='GoogleModel']`: `gemini-2.0-flash-exp` and `gemini-2.0-flash-realtime-exp`, which give you access to the latest real-time capabilities and experimental features.
+
+2. **Support for Real-time Configuration with Gemini 2.0 Models**: Developers can now fine-tune real-time settings for the Gemini 2.0 Multimodal Live API using `Assistant.model[model='GoogleModel'].realtimeConfig`, enabling more control over text generation and speech output.
+
+3. **Customize Speech Output for Gemini Multimodal Live APIs**: You can now customize the assistant's voice using the `speechConfig` and `voiceConfig` properties, with options like `"Puck"`, `"Charon"`, and more.
+
+4. **Advanced Gemini Text Generation Parameters**: You can also tune advanced hyperparameters such as `topK`, `topP`, `presencePenalty`, and `frequencyPenalty` to control how the assistant generates responses, leading to more natural and dynamic conversations.
\ No newline at end of file
diff --git a/fern/changelog/2025-01-11.mdx b/fern/changelog/2025-01-11.mdx
new file mode 100644
index 000000000..ace6a6ba2
--- /dev/null
+++ b/fern/changelog/2025-01-11.mdx
@@ -0,0 +1,98 @@
+1. **Integration of Smallest AI Voices**: Assistants can now utilize voices from Smallest AI by setting the voice provider to `Assistant.voice[provider="smallest-ai"]`, allowing selection from a variety of 25 preset voices and customization of voice attributes.
+
+2. **Support for DeepSeek Language Models**: Developers can now configure assistants to use DeepSeek LLMs by setting `Assistant.model[provider="deep-seek"]` and `Assistant.model[model="deepseek-chat"]`. You can also specify custom credentials by passing the following payload:
+
+```json
+{
+  "credentials": [
+    {
+      "provider": "deep-seek",
+      "apiKey": "YOUR_API_KEY",
+      "name": "YOUR_CREDENTIAL_NAME"
+    }
+  ],
+  "model": {
+    "provider": "deep-seek",
+    "model": "deepseek-chat"
+  }
+}
+```
+
+3. **Additional Call Ended Reasons for DeepSeek and Cerebras**: New `Call.endedReason` values have been added to handle specific DeepSeek and Cerebras call termination scenarios, allowing developers to better manage error handling.
+
+4. **New API Endpoint to Delete Logs**: A new `DELETE /logs` endpoint has been added, enabling developers to programmatically delete logs and manage log data.
+
+5. **Enhanced Call Transfer Options with SIP Verb**: You can now specify a `sipVerb` when defining a `TransferPlan` with `Assistant.model.tools[type=transferCall].destinations[type=sip].transferPlan`, giving you the ability to specify the SIP verb (`refer` or `bye`) used during call transfers for greater control over call flow.
+
+6. **Azure Credentials and Blob Storage Support**: You can now configure Azure credentials with support for the `AzureCredential.service[service=blob_storage]` service and use `AzureBlobStorageBucketPlan` with `AzureCredential.bucketPlan`, enabling you to store call artifacts directly in Azure Blob Storage.
+
+7. **Add Authentication Support for Azure OpenAI API Management with the 'Ocp-Apim-Subscription-Key' Header**: When configuring Azure OpenAI credentials, you can now include `AzureOpenAICredential.ocpApimSubscriptionKey` to authenticate with Azure's OpenAI services through the API Management proxy in place of an API key.
+
+8. **New CloudflareR2BucketPlan**: You can now use `CloudflareR2BucketPlan` to configure storage with Cloudflare R2 buckets, enabling you to store call artifacts directly.
+
+9. 
**Enhanced Credential Support**: It is now simpler to configure provider credentials in `Assistant.credentials`. Additionally, credentials can be overridden with `AssistantOverride.credentials`, which enables granular credential management per assistant. Our backend improvements add type safety and autocompletion for all supported credential types in the SDKs, making it easier to configure and maintain credentials for the following providers:
+
+- S3Credential
+- GcpCredential
+- XAiCredential
+- GroqCredential
+- LmntCredential
+- MakeCredential
+- AzureCredential
+- TavusCredential
+- GladiaCredential
+- GoogleCredential
+- OpenAICredential
+- PlayHTCredential
+- RimeAICredential
+- RunpodCredential
+- TrieveCredential
+- TwilioCredential
+- VonageCredential
+- WebhookCredential
+- AnyscaleCredential
+- CartesiaCredential
+- DeepgramCredential
+- LangfuseCredential
+- CerebrasCredential
+- DeepSeekCredential
+- AnthropicCredential
+- CustomLLMCredential
+- DeepInfraCredential
+- SmallestAICredential
+- AssemblyAICredential
+- CloudflareCredential
+- ElevenLabsCredential
+- OpenRouterCredential
+- TogetherAICredential
+- AzureOpenAICredential
+- ByoSipTrunkCredential
+- GoHighLevelCredential
+- InflectionAICredential
+- PerplexityAICredential
+
+10. **Specify Type When Updating Tools, Blocks, Phone Numbers, and Knowledge Bases**: You should now specify the type in the request body when [updating tools](https://api.vapi.ai/api#/Tools/ToolController_update), [blocks](https://api.vapi.ai/api#/Blocks/BlockController_update), [phone numbers](https://api.vapi.ai/api#/Phone%20Numbers/PhoneNumberController_update), or [knowledge bases](https://api.vapi.ai/api#/Knowledge%20Base/KnowledgeBaseController_update) using the appropriate payload for each type. Specifying the type now provides type safety and autocompletion in the SDKs. Refer to [the schemas](https://api.vapi.ai/api) to see the expected payload for the following types:
+
+- UpdateBashToolDTO
+- UpdateComputerToolDTO
+- UpdateDtmfToolDTO
+- UpdateEndCallToolDTO
+- UpdateFunctionToolDTO
+- UpdateGhlToolDTO
+- UpdateMakeToolDTO
+- UpdateOutputToolDTO
+- UpdateTextEditorToolDTO
+- UpdateTransferCallToolDTO
+- BashToolWithToolCall
+- ComputerToolWithToolCall
+- TextEditorToolWithToolCall
+- UpdateToolCallBlockDTO
+- UpdateWorkflowBlockDTO
+- UpdateConversationBlockDTO
+- UpdateByoPhoneNumberDTO
+- UpdateTwilioPhoneNumberDTO
+- UpdateVonagePhoneNumberDTO
+- UpdateVapiPhoneNumberDTO
+- UpdateCustomKnowledgeBaseDTO
+- UpdateTrieveKnowledgeBaseDTO
+
diff --git a/fern/changelog/2025-01-14.mdx b/fern/changelog/2025-01-14.mdx
new file mode 100644
index 000000000..6f03b2ef8
--- /dev/null
+++ b/fern/changelog/2025-01-14.mdx
@@ -0,0 +1 @@
+**End Call Message Support in ClientInboundMessage**: Developers can now programmatically end a call by sending an `end-call` message type within `ClientInboundMessage`. To use this feature, include a message with the `type` property set to `"end-call"` when sending inbound messages to the client.
\ No newline at end of file
diff --git a/fern/changelog/2025-01-15.mdx b/fern/changelog/2025-01-15.mdx
new file mode 100644
index 000000000..017124e6b
--- /dev/null
+++ b/fern/changelog/2025-01-15.mdx
@@ -0,0 +1,5 @@
+1. **Updated Log Endpoints:**
+Both the `GET /logs` and `DELETE /logs` endpoints have been simplified by removing the `orgId` parameter.
+
+2. 
**Updated Log Schema:**
+The following fields in the Log schema are no longer required: `requestDurationSeconds`, `requestStartedAt`, `requestFinishedAt`, `requestBody`, `requestHttpMethod`, `requestUrl`, `requestPath`, and `responseHttpCode`.
\ No newline at end of file
diff --git a/fern/changelog/2025-01-20.mdx b/fern/changelog/2025-01-20.mdx
new file mode 100644
index 000000000..3bd56344b
--- /dev/null
+++ b/fern/changelog/2025-01-20.mdx
@@ -0,0 +1,16 @@
+# Workflow Steps, Trieve Knowledge Base Updates, and Concurrent Calls Tracking
+
+1. **Use Workflow Blocks to Simplify Blocks Steps:** You can now compose complicated Blocks steps with smaller, reusable [Workflow blocks](https://api.vapi.ai/api#:~:text=Workflow) that manage conversations and take actions in external systems.
+
+In addition to normal operations inside [Block steps](https://docs.vapi.ai/blocks/steps), you can now [Say messages](https://api.vapi.ai/api#:~:text=Say), [Gather information](https://api.vapi.ai/api#:~:text=Gather), or connect to other workflow [Edges](https://api.vapi.ai/api#:~:text=Edge) based on an [LLM evaluating a condition](https://api.vapi.ai/api#:~:text=SemanticEdgeCondition) or a more [logic-based condition](https://api.vapi.ai/api#:~:text=ProgrammaticEdgeCondition). Workflows can be used through `Assistant.model["VapiModel"]` to create custom call workflows.
+
+2. **Trieve Knowledge Base Integration Improvements:** You should now configure [Trieve knowledge bases](https://api.vapi.ai/api#:~:text=TrieveKnowledgeBase) using the new `createPlan` and `searchPlan` fields instead of specifying the raw vector plans directly. The new plans allow you to create or import Trieve plans directly and specify the type of search more precisely than before.
+
+3. **Updated Concurrency Tracking:** Your subscriptions now track active calls with `concurrencyCounter`, replacing `concurrencyLimit`. This does not affect how you reserve concurrent calls through [billing add-ons](https://dashboard.vapi.ai/org/billing/add-ons).
+
+4. **Define Allowed Values with `type` using `JsonSchema`:** You can restrict model outputs to specific types inside Blocks or tool calls using the new `type` property in [JsonSchema](https://api.vapi.ai/api#:~:text=JsonSchema). Supported types include `string`, `number`, `integer`, `boolean`, `array` (which also needs `items` to be defined), and `object` (which also needs `properties` to be defined).
+
diff --git a/fern/changelog/2025-01-21.mdx b/fern/changelog/2025-01-21.mdx
new file mode 100644
index 000000000..a5df2257c
--- /dev/null
+++ b/fern/changelog/2025-01-21.mdx
@@ -0,0 +1,7 @@
+# Updated Azure Regions for Credentials
+
+1. **Updated Azure Regions for Credentials**: You can now specify `canadacentral`, `japaneast`, and `japanwest` as valid regions when specifying your Azure credentials. Additionally, the region `canada` has been renamed to `canadaeast`, and `japan` has been replaced with `japaneast` and `japanwest`; please update your configurations accordingly.
+
diff --git a/fern/changelog/2025-01-22.mdx b/fern/changelog/2025-01-22.mdx
new file mode 100644
index 000000000..eb07492e8
--- /dev/null
+++ b/fern/changelog/2025-01-22.mdx
@@ -0,0 +1,8 @@
+# Tool Calling Updates, Final Transcripts, and DeepSeek Reasoner
+1. 
**Migrate `ToolCallFunction` to `ToolCall`**: You should update your client and server tool calling code to use the [`ToolCall` schema](https://api.vapi.ai/api#:~:text=ToolCall) instead of `ToolCallFunction`, which includes properties like `name`, `tool`, and `toolBody` for more detailed tool call specifications. `ToolCallFunction` has been removed.
+
+2. **Include `ToolCall` Nodes in Workflows**: You can now incorporate [`ToolCall` nodes](https://api.vapi.ai/api#:~:text=ToolCall) directly into workflow block steps, enabling tools to be invoked as part of the workflow execution.
+
+3. **New Model Option `deepseek-reasoner`**: You can now select `deepseek-reasoner` as a model option inside your assistants with `Assistant.model["deep-seek"].model["deepseek-reasoner"]`, offering enhanced reasoning capabilities for your applications.
+
+4. **Support for Final Transcripts in Server Messages**: The API now supports `'transcript[transcriptType="final"]'` in server messages, allowing your application to handle and process end-of-conversation transcripts.
\ No newline at end of file
diff --git a/fern/changelog/2025-01-29.mdx b/fern/changelog/2025-01-29.mdx
new file mode 100644
index 000000000..f1613df8e
--- /dev/null
+++ b/fern/changelog/2025-01-29.mdx
@@ -0,0 +1,20 @@
+# New workflow nodes, improved call handling, better phone number management, and expanded tool calling capabilities
+
+1. **New Hangup Workflow Node**: You can now include a [`Hangup`](https://api.vapi.ai/api#:~:text=Hangup) node in your workflows to end calls programmatically.
+
+2. **New HttpRequest Workflow Node**: Workflows can now make HTTP requests using the new [`HttpRequest`](https://api.vapi.ai/api#:~:text=HttpRequest) node, enabling integration with external APIs during workflow execution.
+
+3. **Updates to Tool Calls**: The [`ToolCall`](https://api.vapi.ai/api#:~:text=ToolCall) schema has been revamped; you should update your tool calls to use the new `function` property with `id` and `function` details (instead of the older `tool` and `toolBody` properties). A sketch of the new shape appears at the end of this entry.
+
+4. **Improvements to [Say](https://api.vapi.ai/api#:~:text=Say), [Edge](https://api.vapi.ai/api#:~:text=Edge), [Gather](https://api.vapi.ai/api#:~:text=Gather), and [Workflow](https://api.vapi.ai/api#:~:text=Workflow) Nodes**:
+- The `name`, `to`, and `from` properties in these nodes now support up to 80 characters, letting you use more descriptive identifiers.
+- A `metadata` property has been added to these nodes, allowing you to store additional information.
+- The [`Gather`](https://api.vapi.ai/api#:~:text=Gather) node now supports a `confirmContent` option to confirm collected data with users.
+
+5. **Regex Validation with JSON Outputs**: You can now validate inputs and outputs from your conversations, tool calls, and OpenAI structured outputs against regular expressions using the `regex` property in the [`JsonSchema`](https://api.vapi.ai/api#:~:text=JsonSchema) node.
+
+6. **New Assistant Transfer Mode**: A new [transfer mode](https://api.vapi.ai/api#:~:text=TransferPlan) `swap-system-message-in-history-and-remove-transfer-tool-messages` allows more control over conversation history during assistant transfers.
+
+7. **Area Code Selection for Vapi Phone Numbers**: You can now specify a desired area code when creating Vapi phone numbers using `numberDesiredAreaCode`.
+
+8. **Chat Completions Support**: You can now handle chat messages and their metadata within your applications using familiar chat completion messages in your workflow nodes.
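+
+As referenced in item 3 above, here is a sketch of the reshaped tool call (the id and function details are illustrative, not real values):
+
+```js
+// Old shape (removed): { tool: ..., toolBody: ... }
+// New shape: an `id` plus a `function` object carrying the call details.
+const toolCall = {
+  id: "call_abc123", // hypothetical tool call id
+  function: {
+    name: "lookupOrder",
+    arguments: { orderId: "42" },
+  },
+};
+```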
diff --git a/fern/changelog/2025-02-01.mdx b/fern/changelog/2025-02-01.mdx
new file mode 100644
index 000000000..a2930abb3
--- /dev/null
+++ b/fern/changelog/2025-02-01.mdx
@@ -0,0 +1,25 @@
+# API Request Node, Improved Retries, and Enhanced Message Controls
+
+1. **HttpRequest Node Renamed to ApiRequest**: The `HttpRequest` workflow node has been renamed to [`ApiRequest`](https://api.vapi.ai/api#:~:text=ApiRequest), and can be accessed through `Assistant.model.workflow.nodes[type="api-request"]`. Key changes:
+   - New support for POST requests with customizable headers and body
+   - New async request support with `isAsync` flag
+   - Task status messages for waiting, starting, failure and success states
+
+The `HttpRequest` node is now deprecated and will be removed in a future release. Please migrate to the new `ApiRequest` node.
+
+2. **New Backoff and Retry Controls**: You can now configure [`Assistant.model.tools[type=dtmf].server.backoffPlan`](https://api.vapi.ai/api#:~:text=BackoffPlan) to handle failed requests with customizable retry strategies and delays.
+   - Supports fixed or exponential backoff strategies
+   - Configure `maxRetries` (up to 10) and `baseDelaySeconds` (up to 10 seconds)
+   - Available in server configurations via the `backoffPlan` property
+
+3. **Enhanced Gather Node**: The [`Assistant.model.workflow.nodes[type=gather]`](https://api.vapi.ai/api#:~:text=Gather) node has been improved with the following changes:
+   - Added `maxRetries` property to control retry attempts
+   - Now accepts a single JsonSchema instead of an array
+   - Removed the default value for the `confirmContent` property
+
+4. **Improved Message Controls**: [`Assistant.messagePlan`](https://api.vapi.ai/api#:~:text=MessagePlan) has been improved with the following changes:
+   - Increased `idleTimeoutSeconds` maximum from 30 to 60 seconds
+   - Added `silenceTimeoutMessage` to customize the message played before a call ends due to silence
+
+5. **New Distilled DeepSeek Model with Groq**: You can now select `deepseek-r1-distill-llama-70b` when using [Groq](https://api.vapi.ai/api#:~:text=Groq) as the provider in [`Assistant.model[provider='groq']`](https://api.vapi.ai/api#:~:text=UpdateCallDTO-,Assistant,-UpdateAssistantDTO).
+
+6. **Edge Condition Updates**: Edge conditions now require explicit matching criteria to improve workflow control and readability. Semantic edges must specify a `matches` property, while programmatic edges require a `booleanExpression` property to define transition logic.
diff --git a/fern/changelog/2025-02-04.md b/fern/changelog/2025-02-04.md
new file mode 100644
index 000000000..2e51c9944
--- /dev/null
+++ b/fern/changelog/2025-02-04.md
@@ -0,0 +1,8 @@
+# Hooks, PCI Compliance, and Blocking Messages
+
+1. **Introduction of `Hook`s in Workflows**: You can now use [`Hooks`](https://api.vapi.ai/api#:~:text=Hook) in your workflows to automatically execute actions when specific events occur, like task start or confirmation. Hooks are now available in [`ApiRequest`](https://api.vapi.ai/api#:~:text=ApiRequest) and [`Gather`](https://api.vapi.ai/api#:~:text=Gather) workflow nodes.
+
+2. **Make your Assistant PCI Compliant**: You can now configure [`Assistant.pciEnabled`](https://api.vapi.ai/api#:~:text=UpdateCallDTO-,Assistant,-UpdateAssistantDTO) to indicate if your assistant deals with sensitive cardholder data that requires PCI compliance, helping you meet security standards for financial information.
+
+3. 
**Blocking Messages before Tool Calls**: You can now configure your tool calls to wait until a message is fully spoken before starting with [`ToolMessageStart.blocking=true`](https://api.vapi.ai/api#:~:text=ToolMessageStart) (default is `false`). + diff --git a/fern/changelog/2025-02-10.mdx b/fern/changelog/2025-02-10.mdx new file mode 100644 index 000000000..1e7b313eb --- /dev/null +++ b/fern/changelog/2025-02-10.mdx @@ -0,0 +1,35 @@ +# API Enhancements, Call Features, and Workflow Improvements + +1. **`POST` requests to `/analytics` (migrate from `GET`)**: You should now make `POST` requests (instead of `GET`) to the [`/analytics`](https://api.vapi.ai/api#/Analytics/AnalyticsController_query) endpoint. Structure your analytics query as a JSON payload using [`AnalyticsQuery`](https://api.vapi.ai/api#/Analytics/AnalyticsQuery) in the request body. + +2. **Use `SayHook` to Intercept and Modify Text for Assistant Speech**: You can use [`SayHook`](https://api.vapi.ai/api#/Hooks/SayHook) to intercept and modify text before it's spoken by your assistant. Specify the text to be spoken using the `exact` or `prompt` properties. + +3. **Call Transfer Support**: The `Transfer` node type is now available in workflows. Configure the `destination` property to define the transfer target. + +4. **Workflow Edge Condition Updates**: [`AIEdgeCondition`](https://api.vapi.ai/api#:~:text=AIEdgeCondition) (which replaces `SemanticEdgeCondition`) enables AI-powered routing decisions by analyzing conversation context and intent, while [`LogicEdgeCondition`](https://api.vapi.ai/api#:~:text=LogicEdgeCondition) (which replaces `ProgrammaticEdgeCondition`) allows for rule-based routing using custom logical expressions. The previous `SemanticEdgeCondition` and `ProgrammaticEdgeCondition` are now deprecated, and a new `FailedEdgeCondition` has been added to handle node failures in workflows. + +5. **`Gather` Node: Data Collection Refactor**: The [`Gather` node](https://api.vapi.ai/api#:~:text=Gather) now requires an `output` property to define the expected data schema. The `instruction` and `schema` properties have been removed. + +6. **Call Packet Capture (PCAP) Configuration**: Your call [`Artifact`](https://api.vapi.ai/api#:~:text=Artifact)s now support links to download a call's network packet capture (PCAP) file, providing you with detailed network traffic analysis and troubleshooting for calls. PCAP is only supported by `vapi` and `byo-phone-number` providers. Enable PCAP through `pcapEnabled`, automatically upload to S3 bucket with `pcapS3PathPrefix`, and access via `pcapUrl`. + +7. **`ApiRequest` Node Improvements**: [`ApiRequest`](https://api.vapi.ai/api#:~:text=ApiRequest) now supports `GET` requests. You can also define the expected response schema. You can make API requests as `blocking` or run in the `background` with `ApiRequest.mode`. + +8. **`Call` and `ServerMessage` `endedReason` Updates**: The `assistant-not-invalid` `Call.endedReason` has been corrected to `"assistant-not-valid"`. Also added `"assistant-ended-call-with-hangup-task"` to the `Call.endedReason`. + +9. **New Azure OpenAI Model `gpt-4o-2024-08-06-ptu`**: You can now use `gpt-4o-2024-08-06-ptu` from Azure OpenAI inside your [Assistant](https://dashboard.vapi.ai/assistants/2ec63711-f867-4066-8c54-7833346783b1). + + + Azure OpenAI Model GPT-4o-2024-08-06-ptu + + + +10. 
**Deprecated Schemas and Properties**: The following properties and schemas are now deprecated in the [API reference](https://api.vapi.ai/api/):
+   * `SemanticEdgeCondition`
+   * `ProgrammaticEdgeCondition`
+   * `Workflow.type`
+   * `ApiRequest.waitTaskMessage`
+   * `ApiRequest.startTaskMessage`
+   * `ApiRequest.failureTaskMessage`
+   * `ApiRequest.successTaskMessage`
+   * `OpenAIModel.semanticCachingEnabled`
+   * `CreateWorkflowDTO.type`
diff --git a/fern/changelog/2025-02-17.mdx b/fern/changelog/2025-02-17.mdx
new file mode 100644
index 000000000..df4d03749
--- /dev/null
+++ b/fern/changelog/2025-02-17.mdx
@@ -0,0 +1,66 @@
+## What's New
+
+### Compliance & Security Enhancements
+- **New [CompliancePlan](https://api.vapi.ai/api#:~:text=CompliancePlan) Consolidates HIPAA and PCI Compliance Settings**: You should now enable HIPAA and PCI compliance settings with `Assistant.compliancePlan.hipaaEnabled` and `Assistant.compliancePlan.pciEnabled`, which both default to `false` (replacing the old HIPAA and PCI flags on `Assistant` and `AssistantOverrides`).
+
+- **Phone Number Status Tracking**: You can now view your phone number `status` with `GET /phone-number/{id}` for all phone number types ([Bring Your Own Number](https://api.vapi.ai/api#:~:text=ByoPhoneNumber), [Vapi](https://api.vapi.ai/api#:~:text=VapiPhoneNumber), [Twilio](https://api.vapi.ai/api#:~:text=TwilioPhoneNumber), [Vonage](https://api.vapi.ai/api#:~:text=VonagePhoneNumber)) for better monitoring.
+
+### Advanced Call Control
+
+- **Assistant Hooks System**: You can now use [`AssistantHooks`](https://api.vapi.ai/api#:~:text=AssistantHooks) to support `call.ending` events with customizable filters and actions.
+  - Enable transfer actions through [`TransferAssistantHookAction`](https://api.vapi.ai/api#:~:text=TransferAssistantHookAction). For example:
+```javascript
+{
+  "hooks": [{
+    "on": "call.ending",
+    "do": [{
+      "type": "transfer",
+      "destination": {
+        // Your transfer configuration
+      }
+    }]
+  }]
+}
+```
+
+  - Conditionally execute hooks with `Assistant.hooks.filter`. For example, trigger different hooks when a call completes, on system errors, or on customer hangup or transfer:
+
+```json
+{
+  "assistant": {
+    "hooks": [{
+      "filters": [{
+        "type": "oneOf",
+        "key": "call.endedReason",
+        "oneOf": ["pipeline-error-custom-llm-500-server-error", "pipeline-error-custom-llm-llm-failed"]
+      }]
+    }
+    ]
+  }
+}
+```
+
+### Model & Voice Updates
+
+- **New Models Added**: You can now use new models inside `Assistant.model[provider="google", "openai", "xai"]` and `Assistant.fallbackModels[provider="google", "openai", "xai"]`:
+  - Google: Gemini 2.0 series (`flash-thinking-exp`, `pro-exp-02-05`, `flash`, `flash-lite-preview`)
+  - OpenAI: o3 mini `o3-mini`
+  - xAI: Grok 2 `grok-2`
+
+
+  New Assistant Models
+
+
+- **New `PlayDialog` Model for [PlayHT Voices](https://api.vapi.ai/api#:~:text=PlayHTVoice)**: You can now use the `PlayDialog` model in `Assistant.voice[provider="playht"].model["PlayDialog"]`.
+
+- **New `nova-3` and `nova-3-general` Models for [Deepgram Transcriber](https://api.vapi.ai/api#:~:text=DeepgramTranscriber)**: You can now use the `nova-3` and `nova-3-general` models in `Assistant.transcriber[provider="deepgram"].model["nova-3", "nova-3-general"]`.
+
+### API Improvements
+
+- **Workflow Updates**: You can now receive a [`workflow.node.started`](https://api.vapi.ai/api#:~:text=ClientMessageWorkflowNodeStarted) message that marks the start of a workflow node, for better call flow tracking.
+
+- **Analytics Enhancement**: Added subscription table and concurrency columns in [POST /analytics](https://api.vapi.ai/api#/Analytics/AnalyticsController_query) for richer queries about your subscriptions and concurrent calls.
+
+### Deprecations
+
+The `/logs` endpoints are now marked as deprecated - plan to update your implementation accordingly.
diff --git a/fern/changelog/2025-02-20.mdx b/fern/changelog/2025-02-20.mdx
new file mode 100644
index 000000000..f98e323f9
--- /dev/null
+++ b/fern/changelog/2025-02-20.mdx
@@ -0,0 +1,25 @@
+## What's New
+1. **Configure 16 text normalization processors in [FormatPlan](https://api.vapi.ai/api#:~:text=FormatPlan)**: You can now control how text is transcribed and spoken for currency, dates, etc. by setting the `formattersEnabled` array in `Assistant.voice.chunkPlan.formatPlan` (not specifying `formattersEnabled` defaults to all formatters being enabled). See all available formatters in the [FormatPlan.formattersEnabled reference](https://api.vapi.ai/api#:~:text=FormatPlan).
+
+2. **Deepgram [Keyterm Prompting](https://developers.deepgram.com/docs/keyterm)**: The `keyterm` array in [DeepgramTranscriber](https://api.vapi.ai/api#:~:text=DeepgramTranscriber) implements Deepgram's [Keyterm Prompting](https://developers.deepgram.com/docs/keyterm) technology, boosting recall for domain-specific terminology. Compared to the existing `keywords` field:
+
+| Feature | `keywords` | `keyterm` |
+|------------------|--------------------|--------------------|
+| Recall Boost | 15-20% | Up to 90% |
+| Format | Word:Weight | Raw phrases |
+| Use Case | General vocabulary | Critical terms |
+
+You should reserve `keyterm` for compliance-sensitive terms like medical codes while using `keywords` for proper nouns and brand names.
+
+3. **Subscription usage tracking improvements**: The `minutesUsedNextResetAt` timestamp now appears in all subscription tiers (not just enterprise), exposed at `subscription.minutesUsedNextResetAt` for predictable billing cycle integration. Combine it with the existing `minutesUsed` and `minutesIncluded` metrics to build custom usage dashboards, regardless of subscription tier.
+
+4. **Neuphonic voice synthesis**: You can now configure Neuphonic as a voice provider with `Assistant.voice[provider="neuphonic"]`. Handle appropriate errors with `pipeline-error-neuphonic-voice-failed`. Test latency thresholds, as Neuphonic requires 200ms of additional processing time compared to ElevenLabs.
+
+
+  Neuphonic Voice Synthesis
+
+
+5. **Support for pre-transfer announcements in [ClientInboundMessageTransfer](https://api.vapi.ai/api#:~:text=ClientInboundMessageTransfer)**: The `content` field in `ClientInboundMessageTransfer` now supports pre-transfer announcements ("Connecting you to billing...") before SIP/number routing. Implement this via WebSocket messages using `type: "transfer"` with a `destination` object.
+
+### Deprecation Notice
+**OrgWithOrgUser** is now deprecated; this impacts endpoints returning organization-user composites.
This has been replaced with separate [`Org`](https://api.vapi.ai/api#:~:text=Org) and [`User`](https://api.vapi.ai/api#:~:text=User) schemas for better clarity and consistency. \ No newline at end of file diff --git a/fern/changelog/2025-02-25.mdx b/fern/changelog/2025-02-25.mdx new file mode 100644 index 000000000..be8964c3d --- /dev/null +++ b/fern/changelog/2025-02-25.mdx @@ -0,0 +1,54 @@ +## Test Suite APIs, Enhanced Call Transfers, Voice Model Enhancements + +1. **Introducing Test Suite Management APIs:** You can now test your assistant conversations before deploying them by creating [end-to-end tests](https://docs.vapi.ai/test/voice-testing#step-1-create-a-new-test-suite), [adding test cases](https://docs.vapi.ai/test/voice-testing#step-3-add-test-cases), and [running and reviewing test suites](https://docs.vapi.ai/test/voice-testing#step-5-run-and-review-tests). You can configure these tests through the [Test Suites dashboard page](https://dashboard.vapi.ai/test-suites) and [Test Suite APIs](https://docs.vapi.ai/api-reference/test-suites/test-suite-controller-find-all-paginated), and learn more in the [docs](https://docs.vapi.ai/test/voice-testing). + + + Test Suite Management APIs + + + +2. **Enhanced Call Transfers with TwiML Control:** You can now use `twiml` ([Twilio Markup Language](https://www.twilio.com/docs/voice/twiml)) in [`Assistant.model.tools[type=transferCall].destinations[].transferPlan[mode=warm-transfer-twiml]`](https://api.vapi.ai/api#:~:text=TransferPlan) to execute TwiML instructions before connecting the call, allowing for pre-transfer announcements or data collection with Twilio. + +3. **New Voice Models and Experimental Controls:** + * **`mistv2` Rime AI Voice:** You can now use the `mistv2` model in [`Assistant.voice[provider="rime-ai"].model[model="mistv2"]`](https://api.vapi.ai/api#:~:text=RimeAIVoice). + * **OpenAI Models:** You can now use `chatgpt-4o-latest` model in [`Assistant.model[provider="openai"].model[model="chatgpt-4o-latest"]`](https://api.vapi.ai/api#:~:text=OpenAIModel). + +4. **Experimental Controls for Cartesia Voices:** You can now specify your Cartesia voice speed (string) and emotional range (array) with [`Assistant.voice[provider="cartesia"].experimentalControls`](https://api.vapi.ai/api#:~:text=CartesiaExperimentalControls). For example: + +```json +{ + "speed": "fast", + "emotion": [ + "anger:lowest", + "curiosity:high" + ] +} +``` + +| Property | Option | +|----------|--------| +| speed | slowest | +| | slow | +| | normal (default) | +| | fast | +| | fastest | +| emotion | anger:lowest | +| | anger:low | +| | anger:high | +| | anger:highest | +| | positivity:lowest | +| | positivity:low | +| | positivity:high | +| | positivity:highest | +| | surprise:lowest | +| | surprise:low | +| | surprise:high | +| | surprise:highest | +| | sadness:lowest | +| | sadness:low | +| | sadness:high | +| | sadness:highest | +| | curiosity:lowest | +| | curiosity:low | +| | curiosity:high | +| | curiosity:highest | diff --git a/fern/changelog/2025-02-27.mdx b/fern/changelog/2025-02-27.mdx new file mode 100644 index 000000000..d8cf366b1 --- /dev/null +++ b/fern/changelog/2025-02-27.mdx @@ -0,0 +1,49 @@ +# Phone Keypad Input Support, OAuth2 and Analytics Improvements + +1. **Keypad Input Support for Phone Calls:** A new [`keypadInputPlan`](https://api.vapi.ai/api#:~:text=KeypadInputPlan) feature has been added to enable handling of DTMF (touch-tone) keypad inputs during phone calls. 
This allows your voice assistant to collect numeric input from callers, like account numbers, menu selections, or confirmation codes. + +Configuration options: +```json +{ + "keypadInputPlan": { + "enabled": true, // Default: false + "delimiters": "#", // Options: "#", "*", or "" (empty string) + "timeoutSeconds": 2 // Range: 0.5-10 seconds, Default: 2 + } +} +``` + +The feature can be configured in: +- `assistant.keypadInputPlan` +- `call.squad.members.assistant.keypadInputPlan` +- `call.squad.members.assistantOverrides.keypadInputPlan` + +2. **OAuth2 Authentication Enhancement:** The [`OAuth2AuthenticationPlan`](https://api.vapi.ai/api#:~:text=OAuth2AuthenticationPlan) now includes a `scope` property to specify access scopes when authenticating. This allows more granular control over permissions when integrating with OAuth2-based services. + +```json +{ + "credentials": [ + { + "authenticationPlan": { + "type": "oauth2", + "url": "https://example.com/oauth2/token", + "clientId": "your-client-id", + "clientSecret": "your-client-secret", + "scope": "read:data" // New property, max length: 1000 characters + } + } + ] +} +``` + +The scope property can be configured at: +- `assistant.credentials.authenticationPlan` +- `call.squad.members.assistant.credentials.authenticationPlan` + +3. **New Analytics Metric: Minutes Used** The [`AnalyticsOperation`](https://api.vapi.ai/api#:~:text=AnalyticsOperation) schema now includes a new column option: `minutesUsed`. This metric allows you to track and analyze the duration of calls in your usage reports and analytics dashboards. + + +4. **Removed TrieveKnowledgeBaseCreate Schema:** Removed `TrieveKnowledgeBaseCreate` schema from +- `TrieveKnowledgeBase.createPlan` +- `CreateTrieveKnowledgeBaseDTO.createPlan` +- `UpdateTrieveKnowledgeBaseDTO.createPlan` diff --git a/fern/changelog/2025-03-02.mdx b/fern/changelog/2025-03-02.mdx new file mode 100644 index 000000000..ca8e42a48 --- /dev/null +++ b/fern/changelog/2025-03-02.mdx @@ -0,0 +1,49 @@ +## Claude 3.7 Sonnet and GPT 4.5 preview, New Hume AI Voice Provider, New Supabase Storage Provider, Enhanced Call Transfer Options + +1. **Claude 3.7 Sonnet with Thinking Configuration Support**: +You can now use the latest claude-3-7-sonnet-20250219 model with a new "thinking" feature via the [`AnthropicThinkingConfig`](https://api.vapi.ai/api#:~:text=AnthropicThinkingConfig) schema. +Configure it in `assistant.model` or `call.squad.members.assistant.model`: +```json +{ + "model": "claude-3-7-sonnet-20250219", + "provider": "anthropic", + "thinking": { + "type": "enabled", + "budgetTokens": 5000 // min 1024, max 100000 + } +} +``` + +2. **OpenAI GPT-4.5-Preview Support**: +You can now use the latest gpt-4.5-preview model as a primary model or fallback option via the [`OpenAIModel`](https://api.vapi.ai/api#:~:text=OpenAIModel) schema. +Configure it in `assistant.model` or `call.squad.members.assistant.model`: +```json +{ + "model": "gpt-4.5-preview", + "provider": "openai" +} +``` + +3. **New Hume Voice Provider**: +Integrated Hume AI as a new voice provider with the "octave" model for text-to-speech synthesis. + + + Hume Voice Provider + + +4. **Supabase Storage Integration**: +New Supabase S3-compatible storage support for file operations. This integration lets developers configure buckets and paths across 16 regions, enabling structured file storage with proper authentication. 
+Configure [`SupabaseBucketPlan`](https://api.vapi.ai/api#:~:text=SupabaseBucketPlan) in `assistant.credentials.bucketPlan`, `call.squad.members.assistant.credentials.bucketPlan`.
+
+5. **Voice Speed Control**
+Added a `speed` parameter to ElevenLabs voices ranging from 0.7 (slower) to 1.2 (faster) in [`ElevenLabsVoice`](https://api.vapi.ai/api#:~:text=ElevenLabsVoice). This enhancement gives developers more control over speech cadence for more natural-sounding conversations.
+
+6. **Enhanced Call Transfer Options in TransferPlan**
+Added a new `dial` option to the `sipVerb` parameter for call transfers. This complements the existing `refer` (default) and `bye` options, providing more flexibility in call handling.
+- 'dial': Uses SIP DIAL to transfer the call
+
+7. **Zero-Value Minimum Subscription Minutes**
+Changed the minimum value for `minutesUsed` and `minutesIncluded` from 1 to 0. This supports tracking of new subscriptions and free tiers with no included minutes.
+
+8. **Zero-Value Minimum KeypadInputPlan Timeout**
+Adjusted the `KeypadInputPlan.timeoutSeconds` minimum from 0.5 to 0.
diff --git a/fern/changelog/overview.mdx b/fern/changelog/overview.mdx index 712e15e23..53a4763c2 100644 --- a/fern/changelog/overview.mdx +++ b/fern/changelog/overview.mdx @@ -1,3 +1,90 @@ --- slug: changelog --- + document.querySelector('input[type="email"]').focus()}>Get the (almost) daily changelog}
+ icon="envelope"
+ iconType="solid"
+>
+
{ + const emailInput = document.getElementById('email_input'); + const emailValue = emailInput.value; + const emailPattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; + if (!emailPattern.test(emailValue)) { + e.preventDefault(); + alert('Please enter a valid email address.'); + } + }} + > +
+ + + +
+
+
\ No newline at end of file diff --git a/fern/community/expert-directory.mdx b/fern/community/expert-directory.mdx index 2183134c0..ca182866d 100644 --- a/fern/community/expert-directory.mdx +++ b/fern/community/expert-directory.mdx @@ -10,6 +10,27 @@ Want to maximize your Voice AI? Vapi, a certified consultant, specializes in bui Whether you need help deciding what to automate or assistance in building it, Vapi Experts have proven their expertise by supporting users and creating valuable video content for the community. Find the right fit here.
+ + +
+

Qonvo

+

Qonvo is the best way to stop wasting your time on the phone on repetitive tasks and low-value inbound requests. Allow yourself to better invest your time thanks to custom-built voice AI agents.

+
+
+ ' \ --data '{ - "name": "knowledge-base-test", + "name": "v2", "provider": "trieve", - "vectorStoreSearchPlan": { - "scoreThreshold": 0.1, - "searchType": "hybrid" + "searchPlan": { + "searchType": "semantic", + "topK": 3, + "removeStopWords": true, + "scoreThreshold": 0.7 }, - "vectorStoreCreatePlan": { - "fileIds": [""] + "createPlan": { + "type": "create", + "chunkPlans": [ + { + "fileIds": ["", ""], + "websites": ["", ""], + "targetSplitsPerChunk": 50, + "splitDelimiters": [".!?\n"], + "rebalanceChunks": true + } + ] } }' ``` +#### Configuration Options + +##### Search Plan Options + +- **searchType** (required): The search method used for finding relevant chunks. Available options: + - `fulltext`: Traditional text search + - `semantic`: Semantic similarity search + - `hybrid`: Combines fulltext and semantic search + - `bm25`: BM25 ranking algorithm +- **topK** (optional): Number of top chunks to return. Default varies by implementation +- **removeStopWords** (optional): When true, removes common stop words from the search query. Default: `false` +- **scoreThreshold** (optional): Filters out chunks based on their similarity score: + - For cosine distance: Excludes chunks below the threshold + - For Manhattan Distance, Euclidean Distance, and Dot Product: Excludes chunks above the threshold + - Set to 0 or omit for no threshold + +##### Chunk Plan Options + +- **fileIds** (optional): Array of file IDs to include in the vector store +- **websites** (optional): Array of website URLs to crawl and include in the vector store +- **targetSplitsPerChunk** (optional): Number of splits per chunk. Default: `20` +- **splitDelimiters** (optional): Array of delimiters used to split text before chunking. Default: `[".!?\n"]` +- **rebalanceChunks** (optional): When true, evenly distributes remainder splits across chunks. For example, 66 splits with `targetSplitsPerChunk: 20` will create 3 chunks with 22 splits each. Default: `true` + ### **Step 3: Create an Assistant** Create a new assistant in Vapi and, on the right sidebar menu. Add the Knowledge Base to your assistant via the PATCH endpoint. Also make sure you customize your assistant's system prompt to utilize the Knowledge Base for responding to user queries. diff --git a/fern/customization/custom-llm/tool-calling-integration.mdx b/fern/customization/custom-llm/tool-calling-integration.mdx new file mode 100644 index 000000000..096c6c877 --- /dev/null +++ b/fern/customization/custom-llm/tool-calling-integration.mdx @@ -0,0 +1,422 @@ +## What Is a Custom LLM and Why Use It? + +A **Custom LLM** is more than just a text generator—it’s a conversational assistant that can call external functions, trigger processes, and handle special logic, all while chatting with your users. Think of it as your smart helper that not only answers questions but also takes actions. + +**Why use a Custom LLM?** +- **Enhanced Functionality:** It mixes natural language responses with actionable functions. +- **Flexibility:** You can combine built-in functions, attach external tools via Vapi, or even add custom endpoints. +- **Dynamic Interactions:** The assistant can return structured instructions—like transferring a call or running a custom process—when needed. +- **Seamless Integration:** Vapi lets you plug these custom endpoints into your assistant quickly and easily. + +--- + +## Setting Up Your Custom LLM for Response Generation + +Before adding tool calls, let’s start with the basics: setting up your Custom LLM to simply generate conversation responses. 
In this mode, your LLM receives conversation details, asks the model for a reply, and streams that text back. + +### How It Works +- **Request Reception:** Your endpoint (e.g., `/chat/completions`) gets a payload with the model, messages, temperature, and (optionally) tools. +- **Content Generation:** The code builds an OpenAI API request that includes the conversation context. +- **Response Streaming:** The generated reply is sent back as Server-Sent Events (SSE). + +### Sample Code Snippet + +```typescript +app.post("/chat/completions", async (req: Request, res: Response) => { + // Log the incoming request. + logEvent("Request received at /chat/completions", req.body); + const payload = req.body; + + // Prepare the API request to OpenAI. + const requestArgs: any = { + model: payload.model, + messages: payload.messages, + temperature: payload.temperature ?? 1.0, + stream: true, + tools: payload.tools || [], + tool_choice: "auto", + }; + + // Optionally merge in native tool definitions. + const modelTools = payload.tools || []; + requestArgs.tools = [...modelTools, ...ourTools]; + + logEvent("Calling OpenAI API for content generation"); + const openAIResponse = await openai.chat.completions.create(requestArgs); + logEvent("OpenAI API call successful. Streaming response."); + + // Set up streaming headers. + res.setHeader("Content-Type", "text/event-stream"); + res.setHeader("Cache-Control", "no-cache"); + res.setHeader("Connection", "keep-alive"); + + // Stream the response chunks back. + for await (const chunk of openAIResponse as unknown as AsyncIterable) { + res.write(`data: ${JSON.stringify(chunk)}\n\n`); + } + res.write("data: [DONE]\n\n"); + res.end(); +}); +``` + +### Attaching Custom LLM Without Tools to an Existing Assistant in Vapi + +If you just want response generation (without tool calls), update your Vapi model with a PATCH request like this: + +```bash +curl -X PATCH https://api.vapi.ai/assistant/insert-your-assistant-id-here \ + -H "Authorization: Bearer insert-your-private-key-here" \ + -H "Content-Type: application/json" \ + -d '{ + "model": { + "provider": "custom-llm", + "model": "gpt-4o", + "url": "https://custom-llm-url/chat/completions", + "messages": [ + { + "role": "system", + "content": "[TASK] Ask the user if they want to transfer the call; if not, continue the conversation." + } + ] + }, + "transcriber": { + "provider": "azure", + "language": "en-CA" + } +}' +``` + +--- + +## Adding Tools Calling with Your Custom LLM + +Now that you’ve got response generation working, let’s expand your assistant’s abilities. Your Custom LLM can trigger external actions in three different ways. + +### a. Native LLM Tools + +These tools are built right into your LLM integration. For example, a native function like `get_payment_link` can return a payment URL. + +**How It Works:** +1. **Detection:** The LLM’s streaming response includes a tool call for `get_payment_link`. +2. **Execution:** The integration parses the arguments and calls the native function. +3. **Response:** The result is packaged into a follow-up API call and streamed back. + +**Code Snippet:** + +```typescript +// Variables to accumulate tool call information. +let argumentsStr = ""; +let toolCallInfo: { name?: string; id?: string } | null = null; + +// Process streaming chunks. 
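+// Note: the model streams tool-call arguments as JSON fragments spread across
+// many chunks, so we concatenate them into argumentsStr and only JSON.parse
+// the result once finish_reason === "tool_calls" signals the call is complete.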
+for await (const chunk of openAIResponse as unknown as AsyncIterable) { + const choice = chunk.choices && chunk.choices[0]; + const delta = choice?.delta || {}; + const toolCalls = delta.tool_calls; + + if (toolCalls && toolCalls.length > 0) { + for (const toolCall of toolCalls) { + const func = toolCall.function; + if (func && func.name) { + toolCallInfo = { name: func.name, id: toolCall.id }; + } + if (func && func.arguments) { + argumentsStr += func.arguments; + } + } + } + + const finishReason = choice?.finish_reason; + if (finishReason === "tool_calls" && toolCallInfo) { + let parsedArgs = {}; + try { + parsedArgs = JSON.parse(argumentsStr); + } catch (err) { + console.error("Failed to parse arguments:", err); + } + if (tool_functions[toolCallInfo.name!]) { + const result = await tool_functions[toolCallInfo.name!](parsedArgs); + const functionMessage = { + role: "function", + name: toolCallInfo.name, + content: JSON.stringify(result) + }; + + const followUpResponse = await openai.chat.completions.create({ + model: requestArgs.model, + messages: [...requestArgs.messages, functionMessage], + temperature: requestArgs.temperature, + stream: true, + tools: requestArgs.tools, + tool_choice: "auto" + }); + + for await (const followUpChunk of followUpResponse) { + res.write(`data: ${JSON.stringify(followUpChunk)}\n\n`); + } + argumentsStr = ""; + toolCallInfo = null; + continue; + } + } + res.write(`data: ${JSON.stringify(chunk)}\n\n`); +} +``` + +### b. Vapi-Attached Tools + +These tools come pre-attached via your Vapi configuration. For example, the `transferCall` tool: + +**How It Works:** +1. **Detection:** When a tool call for `transferCall` appears with a destination in the payload, the function isn’t executed. +2. **Response:** The integration immediately sends a function call payload with the destination back to Vapi. + +**Code Snippet:** + +```typescript +if (functionName === "transferCall" && payload.destination) { + const functionCallPayload = { + function_call: { + name: "transferCall", + arguments: { + destination: payload.destination, + }, + }, + }; + logEvent("Special handling for transferCall", { functionCallPayload }); + res.write(`data: ${JSON.stringify(functionCallPayload)}\n\n`); + // Skip further processing for this chunk. + continue; +} +``` + +### c. Custom Tools + +Custom tools are unique to your application and are handled by a dedicated endpoint. For example, a custom function named `processOrder`. + +**How It Works:** +1. **Dedicated Endpoint:** Requests for custom tools go to `/chat/completions/custom-tool`. +2. **Detection:** The payload includes a tool call list. If the function name is `"processOrder"`, a hardcoded result is returned. +3. **Response:** A JSON response is sent back with the result. + +**Code Snippet (Custom Endpoint):** + +```typescript +app.post("/chat/completions/custom-tool", async (req: Request, res: Response) => { + logEvent("Received request at /chat/completions/custom-tool", req.body); + // Expect the payload to have a "message" with a "toolCallList" array. + const vapiPayload = req.body.message; + + // Process tool call. 
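+  // Vapi delivers custom-tool requests as message.toolCallList; each entry
+  // carries an id that must be echoed back as toolCallId so Vapi can match
+  // our result to the originating tool call.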
+ for (const toolCall of vapiPayload.toolCallList) { + if (toolCall.function?.name === "processOrder") { + const hardcodedResult = "CustomTool processOrder With CustomLLM Always Works"; + logEvent("Returning hardcoded result for 'processOrder'", { toolCallId: toolCall.id }); + return res.json({ + results: [ + { + toolCallId: toolCall.id, + result: hardcodedResult, + }, + ], + }); + } + } +}); +``` + +--- + +## Testing Tool Calling with cURL + +Once your endpoints are set up, try testing them with these cURL commands. + +### a. Native Tool Calling (`get_payment_link`) + +```bash +curl -X POST https://custom-llm-url/chat/completions \ + -H "Content-Type: application/json" \ + -d '{ + "model": "gpt-3.5-turbo", + "messages": [ + {"role": "user", "content": "I need a payment link."} + ], + "temperature": 0.7, + "tools": [ + { + "type": "function", + "function": { + "name": "get_payment_link", + "description": "Get a payment link", + "parameters": {} + } + } + ] + }' +``` + +*Expected Response:* +Streaming chunks eventually include the result (e.g., a payment link) returned by the native tool function. + +### b. Vapi-Attached Tool Calling (`transferCall`) + +```bash +curl -X POST https://custom-llm-url/chat/completions \ + -H "Content-Type: application/json" \ + -d '{ + "model": "gpt-3.5-turbo", + "messages": [ + {"role": "user", "content": "Please transfer my call."} + ], + "temperature": 0.7, + "tools": [ + { + "type": "function", + "function": { + "name": "transferCall", + "description": "Transfer call to a specified destination", + "parameters": {} + } + } + ], + "destination": "555-1234" + }' +``` + +*Expected Response:* +Immediately returns a function call payload that instructs Vapi to transfer the call to the specified destination. + +### c. Custom Tool Calling (`processOrder`) + +```bash +curl -X POST https://custom-llm-url/chat/completions/custom-tool \ + -H "Content-Type: application/json" \ + -d '{ + "message": { + "toolCallList": [ + { + "id": "12345", + "function": { + "name": "processOrder", + "arguments": { + "param": "value" + } + } + } + ] + } + }' +``` + +*Expected Response:* +```json +{ + "results": [ + { + "toolCallId": "12345", + "result": "CustomTools With CustomLLM Always Works" + } + ] +} +``` + +--- + +## Integrating Tools with Vapi + +After testing locally, integrate your Custom LLM with Vapi. Choose the configuration that fits your needs. + +### a. Without Tools (Response Generation Only) + +```bash +curl -X PATCH https://api.vapi.ai/assistant/insert-your-assistant-id-here \ + -H "Authorization: Bearer insert-your-private-key-here" \ + -H "Content-Type: application/json" \ + -d '{ + "model": { + "provider": "custom-llm", + "model": "gpt-4o", + "url": "https://custom-llm-url/chat/completions", + "messages": [ + { + "role": "system", + "content": "[TASK] Ask the user if they want to transfer the call; if not, continue chatting." + } + ] + }, + "transcriber": { + "provider": "azure", + "language": "en-CA" + } +}' +``` + +### b. 
With Tools (Including `transferCall` and `processOrder`) + +```bash +curl -X PATCH https://api.vapi.ai/assistant/insert-your-assistant-id-here \ + -H "Authorization: Bearer insert-your-private-key-here" \ + -H "Content-Type: application/json" \ + -d '{ + "model": { + "provider": "custom-llm", + "model": "gpt-4o", + "url": "https://custom-llm-url/chat/completions", + "messages": [ + { + "role": "system", + "content": "[TASK] Ask the user if they want to transfer the call; if they agree, trigger the transferCall tool; if not, continue the conversation. Also, if the user asks about the custom function processOrder, trigger that tool." + } + ], + "tools": [ + { + "type": "transferCall", + "destinations": [ + { + "type": "number", + "number": "+xxxxxx", + "numberE164CheckEnabled": false, + "message": "Transferring Call To Customer Service Department" + } + ] + }, + { + "type": "function", + "async": false, + "function": { + "name": "processOrder", + "description": "it's a custom tool function named processOrder according to vapi.ai custom tools guide" + }, + "server": { + "url": "https://custom-llm-url/chat/completions/custom-tool" + } + } + ] + }, + "transcriber": { + "provider": "azure", + "language": "en-CA" + } +}' +``` + +--- + +## Conclusion + +A Custom LLM turns a basic conversational assistant into an interactive helper that can: +- **Generate everyday language responses,** +- **Call native tools** (like fetching a payment link), +- **Use Vapi-attached tools** (like transferring a call), and +- **Leverage custom tools** (like processing orders). + +By building each layer step by step and testing with cURL, you can fine-tune your integration before rolling it out in production. + +--- + +## Complete Code + +For your convenience, you can find the complete source code for this Custom LLM integration here: + +**[Custom LLM with Vapi Integration – Complete Code](https://codesandbox.io/p/devbox/gfwztp)** +``` \ No newline at end of file diff --git a/fern/customization/custom-transcriber.mdx b/fern/customization/custom-transcriber.mdx new file mode 100644 index 000000000..d6912af3e --- /dev/null +++ b/fern/customization/custom-transcriber.mdx @@ -0,0 +1,438 @@ +## Introduction + +Vapi supports several transcription providers, but sometimes you may need to use your own transcription service. This guide shows you how to integrate Deepgram as your custom transcriber. The solution streams raw stereo PCM audio (16‑bit) from Vapi via WebSocket to your server, which then forwards the audio to Deepgram. Deepgram returns real‑time partial and final transcripts that are processed (including channel detection) and sent back to Vapi. + +## Why Use a Custom Transcriber? + +- **Flexibility:** Integrate with your preferred transcription service. +- **Control:** Implement specialized processing that isn’t available with built‑in providers. +- **Cost Efficiency:** Leverage your existing transcription infrastructure while maintaining full control over the pipeline. +- **Customization:** Tailor the handling of audio data, transcript formatting, and buffering according to your specific needs. + +## How It Works + +1. **Connection Initialization:** + Vapi connects to your custom transcriber endpoint (e.g. `/api/custom-transcriber`) via WebSocket. It sends an initial JSON message like this: + ```json + { + "type": "start", + "encoding": "linear16", + "container": "raw", + "sampleRate": 16000, + "channels": 2 + } + ``` +2. **Audio Streaming:** + Vapi then streams binary PCM audio to your server. + +3. 
**Transcription Processing:** + Your server forwards the audio to Deepgram(Chooseen Transcriber for Example) using its SDK. Deepgram processes the audio and returns transcript events that include a `channel_index` (e.g. `[0, ...]` for customer, `[1, ...]` for assistant). The service buffers the incoming data, processes the transcript events (with debouncing and channel detection), and emits a final transcript. + +4. **Response:** + The final transcript is sent back to Vapi as a JSON message: + ```json + { + "type": "transcriber-response", + "transcription": "The transcribed text", + "channel": "customer" // or "assistant" + } + ``` + +## Implementation Steps + +### 1. Project Setup + +Create a new Node.js project and install the required dependencies: + +```bash +mkdir vapi-custom-transcriber +cd vapi-custom-transcriber +npm init -y +npm install ws express dotenv @deepgram/sdk +``` + +Create a `.env` file with the following content: + +```env +DEEPGRAM_API_KEY=your_deepgram_api_key +PORT=3001 +``` + +### 2. Code Files + +Below are the individual code files you need for the integration. + +#### transcriptionService.js + +This service creates a live connection to Deepgram, processes incoming audio, handles transcript events (including channel detection), and emits the final transcript back to the caller. + +```js +const { createClient, LiveTranscriptionEvents } = require("@deepgram/sdk"); +const EventEmitter = require("events"); + +const PUNCTUATION_TERMINATORS = [".", "!", "?"]; +const MAX_RETRY_ATTEMPTS = 3; +const DEBOUNCE_DELAY_IN_SECS = 3; +const DEBOUNCE_DELAY = DEBOUNCE_DELAY_IN_SECS * 1000; +const DEEPGRAM_API_KEY = process.env["DEEPGRAM_API_KEY"] || ""; + +class TranscriptionService extends EventEmitter { + constructor(config, logger) { + super(); + this.config = config; + this.logger = logger; + this.flowLogger = require("./fileLogger").createNamedLogger( + "transcriber-flow.log" + ); + if (!DEEPGRAM_API_KEY) { + throw new Error("Missing Deepgram API Key"); + } + this.deepgramClient = createClient(DEEPGRAM_API_KEY); + this.logger.logDetailed( + "INFO", + "Initializing Deepgram live connection", + "TranscriptionService", + { + model: "nova-2", + sample_rate: 16000, + channels: 2, + } + ); + this.deepgramLive = this.deepgramClient.listen.live({ + encoding: "linear16", + channels: 2, + sample_rate: 16000, + model: "nova-2", + smart_format: true, + interim_results: true, + endpointing: 800, + language: "en", + multichannel: true, + }); + this.finalResult = { customer: "", assistant: "" }; + this.audioBuffer = []; + this.retryAttempts = 0; + this.lastTranscriptionTime = Date.now(); + this.pcmBuffer = Buffer.alloc(0); + + this.deepgramLive.addListener(LiveTranscriptionEvents.Open, () => { + this.logger.logDetailed( + "INFO", + "Deepgram connection opened", + "TranscriptionService" + ); + this.deepgramLive.on(LiveTranscriptionEvents.Close, () => { + this.logger.logDetailed( + "INFO", + "Deepgram connection closed", + "TranscriptionService" + ); + this.emitTranscription(); + this.audioBuffer = []; + }); + this.deepgramLive.on(LiveTranscriptionEvents.Metadata, (data) => { + this.logger.logDetailed( + "DEBUG", + "Deepgram metadata received", + "TranscriptionService", + data + ); + }); + this.deepgramLive.on(LiveTranscriptionEvents.Transcript, (event) => { + this.handleTranscript(event); + }); + this.deepgramLive.on(LiveTranscriptionEvents.Error, (err) => { + this.logger.logDetailed( + "ERROR", + "Deepgram error received", + "TranscriptionService", + { error: err } + ); + 
this.emit("transcriptionerror", err); + }); + }); + } + + send(payload) { + if (payload instanceof Buffer) { + this.pcmBuffer = + this.pcmBuffer.length === 0 + ? payload + : Buffer.concat([this.pcmBuffer, payload]); + } else { + this.logger.warn("TranscriptionService: Received non-Buffer data chunk."); + } + if (this.deepgramLive.getReadyState() === 1 && this.pcmBuffer.length > 0) { + this.sendBufferedData(this.pcmBuffer); + this.pcmBuffer = Buffer.alloc(0); + } + } + + sendBufferedData(bufferedData) { + try { + this.logger.logDetailed( + "INFO", + "Sending buffered data to Deepgram", + "TranscriptionService", + { bytes: bufferedData.length } + ); + this.deepgramLive.send(bufferedData); + this.audioBuffer = []; + this.retryAttempts = 0; + } catch (error) { + this.logger.logDetailed( + "ERROR", + "Error sending buffered data", + "TranscriptionService", + { error } + ); + this.retryAttempts++; + if (this.retryAttempts <= MAX_RETRY_ATTEMPTS) { + setTimeout(() => { + this.sendBufferedData(bufferedData); + }, 1000); + } else { + this.logger.logDetailed( + "ERROR", + "Max retry attempts reached, discarding data", + "TranscriptionService" + ); + this.audioBuffer = []; + this.retryAttempts = 0; + } + } + } + + handleTranscript(transcription) { + if (!transcription.channel || !transcription.channel.alternatives?.[0]) { + this.logger.logDetailed( + "WARN", + "Invalid transcript format", + "TranscriptionService", + { transcription } + ); + return; + } + const text = transcription.channel.alternatives[0].transcript.trim(); + if (!text) return; + const currentTime = Date.now(); + const channelIndex = transcription.channel_index + ? transcription.channel_index[0] + : 0; + const channel = channelIndex === 0 ? "customer" : "assistant"; + this.logger.logDetailed( + "INFO", + "Received transcript", + "TranscriptionService", + { channel, text } + ); + if (transcription.is_final || transcription.speech_final) { + this.finalResult[channel] += ` ${text}`; + this.emitTranscription(); + } else { + this.finalResult[channel] += ` ${text}`; + if (currentTime - this.lastTranscriptionTime >= DEBOUNCE_DELAY) { + this.logger.logDetailed( + "INFO", + `Emitting transcript after ${DEBOUNCE_DELAY_IN_SECS}s inactivity`, + "TranscriptionService" + ); + this.emitTranscription(); + } + } + this.lastTranscriptionTime = currentTime; + } + + emitTranscription() { + for (const chan of ["customer", "assistant"]) { + if (this.finalResult[chan].trim()) { + const transcript = this.finalResult[chan].trim(); + this.logger.logDetailed( + "INFO", + "Emitting transcription", + "TranscriptionService", + { channel: chan, transcript } + ); + this.emit("transcription", transcript, chan); + this.finalResult[chan] = ""; + } + } + } +} + +module.exports = TranscriptionService; +``` + +--- + +#### server.js + +This file creates an Express server, attaches the custom transcriber WebSocket at `/api/custom-transcriber`, and starts the HTTP server. 
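+
+Before the server code, one note: both `transcriptionService.js` and `server.js` require a small `./fileLogger` helper that this guide doesn't show. Any object exposing `logDetailed`, `error`, `warn`, and a `createNamedLogger` factory will do; here is a minimal console-backed sketch (an assumption, not part of the original project) you can drop in as `fileLogger.js`:
+
+```js
+// fileLogger.js - minimal console-backed stand-in for the logger used here.
+class FileLogger {
+  logDetailed(level, message, component, data) {
+    console.log(`[${level}] [${component}] ${message}`, data ?? "");
+  }
+  error(message, err, component) {
+    console.error(`[ERROR] [${component || "app"}] ${message}`, err);
+  }
+  warn(message) {
+    console.warn(`[WARN] ${message}`);
+  }
+}
+
+// transcriptionService.js calls createNamedLogger(...) for its flow log.
+FileLogger.createNamedLogger = () => new FileLogger();
+
+module.exports = FileLogger;
+```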
+ +```js +const express = require("express"); +const http = require("http"); +const TranscriptionService = require("./transcriptionService"); +const FileLogger = require("./fileLogger"); +require("dotenv").config(); + +const app = express(); +app.use(express.json()); +app.use(express.urlencoded({ extended: true })); + +app.get("/", (req, res) => { + res.send("Custom Transcriber Service is running"); +}); + +const server = http.createServer(app); + +const config = { + DEEPGRAM_API_KEY: process.env.DEEPGRAM_API_KEY, + PORT: process.env.PORT || 3001, +}; + +const logger = new FileLogger(); +const transcriptionService = new TranscriptionService(config, logger); + +transcriptionService.setupWebSocketServer = function (server) { + const WebSocketServer = require("ws").Server; + const wss = new WebSocketServer({ server, path: "/api/custom-transcriber" }); + wss.on("connection", (ws) => { + logger.logDetailed( + "INFO", + "New WebSocket client connected on /api/custom-transcriber", + "Server" + ); + ws.on("message", (data, isBinary) => { + if (!isBinary) { + try { + const msg = JSON.parse(data.toString()); + if (msg.type === "start") { + logger.logDetailed( + "INFO", + "Received start message from client", + "Server", + { sampleRate: msg.sampleRate, channels: msg.channels } + ); + } + } catch (err) { + logger.error("JSON parse error", err, "Server"); + } + } else { + transcriptionService.send(data); + } + }); + ws.on("close", () => { + logger.logDetailed("INFO", "WebSocket client disconnected", "Server"); + if ( + transcriptionService.deepgramLive && + transcriptionService.deepgramLive.getReadyState() === 1 + ) { + transcriptionService.deepgramLive.finish(); + } + }); + ws.on("error", (error) => { + logger.error("WebSocket error", error, "Server"); + }); + transcriptionService.on("transcription", (text, channel) => { + const response = { + type: "transcriber-response", + transcription: text, + channel, + }; + ws.send(JSON.stringify(response)); + logger.logDetailed("INFO", "Sent transcription to client", "Server", { + channel, + text, + }); + }); + transcriptionService.on("transcriptionerror", (err) => { + ws.send( + JSON.stringify({ type: "error", error: "Transcription service error" }) + ); + logger.error("Transcription service error", err, "Server"); + }); + }); +}; + +transcriptionService.setupWebSocketServer(server); + +server.listen(config.PORT, () => { + console.log(`Server is running on http://localhost:${config.PORT}`); +}); +``` + +--- + +## Testing Your Integration + +### Code Examples – How to Test + +1. **Deploy Your Server:** + Run your server with: + + ```bash + node server.js + ``` + +2. **Expose Your Server:** + If you want to test externally, use a tool like ngrok to expose your server via HTTPS/WSS. + +3. **Initiate a Call with Vapi:** + Use the following CURL command (update the placeholders with your actual values): + ```bash + curl -X POST https://api.vapi.ai/call \ + -H "Authorization: Bearer YOUR_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "phoneNumberId": "YOUR_PHONE_NUMBER_ID", + "customer": { + "number": "CUSTOMER_PHONE_NUMBER" + }, + "assistant": { + "transcriber": { + "provider": "custom-transcriber", + "server": { + "url": "wss://your-server.ngrok.io/api/custom-transcriber" + }, + "secret": "your_optional_secret_value" + }, + "firstMessage": "Hello! I am using a custom transcriber with Deepgram." 
+ }, + "name": "CustomTranscriberTest" + }' + ``` + +### Expected Behavior + +- Vapi connects via WebSocket to your custom transcriber at `/api/custom-transcriber`. +- The `"start"` message initializes the Deepgram session. +- PCM audio data is forwarded to Deepgram. +- Deepgram returns transcript events, which are processed with channel detection and debouncing. +- The final transcript is sent back as a JSON message: + ```json + { + "type": "transcriber-response", + "transcription": "The transcribed text", + "channel": "customer" // or "assistant" + } + ``` + +## Notes and Limitations + +- **Streaming Support Requirement:** + The custom transcriber must support streaming. Vapi sends continuous audio data over the WebSocket, and your server must handle this stream in real time. +- **Secret Header:** + The custom transcriber configuration accepts an optional field called **`secret`**. When set, Vapi will send this value with every request as an HTTP header named `x-vapi-secret`. This can also be configured via a headers field. + +- **Buffering:** + The solution buffers PCM audio and performs simple validation (e.g. ensuring stereo PCM data length is a multiple of 4). If the audio data is malformed, it is trimmed to a valid length. + +- **Channel Detection:** + Transcript events from Deepgram include a `channel_index` array. The service uses the first element to determine whether the transcript is from the customer (`0`) or the assistant (`1`). Ensure Deepgram’s response format remains consistent with this logic. + +--- + +## Conclusion + +Using a custom transcriber with Vapi gives you the flexibility to integrate any transcription service into your call flows. This guide walked you through the setup, usage, and testing of a solution that streams real-time audio, processes transcripts with multi‑channel detection, and returns formatted responses back to Vapi. Follow the steps above and use the provided code examples to build your custom transcriber solution. diff --git a/fern/customization/speech-configuration.mdx b/fern/customization/speech-configuration.mdx index 28fe4eb73..cdb209865 100644 --- a/fern/customization/speech-configuration.mdx +++ b/fern/customization/speech-configuration.mdx @@ -10,26 +10,75 @@ The Speaking Plan and Stop Speaking Plan are essential configurations designed t **Note**: At the moment these configurations can currently only be made via API. ## Start Speaking Plan +This plan defines the parameters for when the assistant begins speaking after the customer pauses or finishes. -- **Wait Time Before Speaking**: You can set how long the assistant waits before speaking after the customer finishes. The default is 0.4 seconds, but you can increase it if the assistant is speaking too soon, or decrease it if there’s too much delay. - -- **Smart Endpointing**: This feature uses advanced processing to detect when the customer has truly finished speaking, especially if they pause mid-thought. It’s off by default but can be turned on if needed. -- **Transcription-Based Detection**: Customize how the assistant determines that the customer has stopped speaking based on what they’re saying. This offers more control over the timing. +- **Wait Time Before Speaking**: You can set how long the assistant waits before speaking after the customer finishes. The default is 0.4 seconds, but you can increase it if the assistant is speaking too soon, or decrease it if there’s too much delay. 
+**Example:** For tech support calls, set `waitSeconds` to more than 1.0 seconds to give customers time to complete their thoughts, even if they pause in between.
+
+- **Smart Endpointing**: This feature uses advanced processing to detect when the customer has truly finished speaking, especially if they pause mid-thought. It's off by default but can be turned on if needed. **Example:** In insurance claims, `smartEndpointingEnabled` helps avoid interruptions while customers think through and formulate their responses. For instance, the assistant asks "Do you want a loan?", triggering the system to check the customer's response. If the customer responds with "yes" (matching the `customerRegex` pattern "yes|no"), the system waits 1.1 seconds (`timeoutSeconds`) before proceeding, allowing time for further clarification. For responses requiring number sequences, like "What's your account number?", set longer timeouts (5 seconds or more) to accommodate pauses between digits.
+
+- **Transcription-Based Detection**: Customize how the assistant determines that the customer has stopped speaking based on what they're saying. This offers more control over the timing. **Example:** When a customer says, "My account number is 123456789, I want to transfer $500."
+  - The system detects the number "123456789" and waits for 0.5 seconds (`onNumberSeconds`) to ensure the customer isn't still speaking.
+  - If the customer finishes with an additional line, "I want to transfer $500.", the system uses `onPunctuationSeconds` to confirm the end of speech and then proceeds with processing the request.
+  - If the customer has been silent for a while and has already finished speaking, but the transcriber is not confident enough to punctuate the transcription, `onNoPunctuationSeconds` fires after 1.5 seconds.
+
+Here's a code snippet for the Start Speaking Plan:
+
+```json
+  "startSpeakingPlan": {
+    "waitSeconds": 0.4,
+    "smartEndpointingEnabled": false,
+    "customEndpointingRules": [
+      {
+        "type": "both",
+        "assistantRegex": "customEndpointingRules",
+        "customerRegex": "customEndpointingRules",
+        "timeoutSeconds": 1.1
+      }
+    ],
+    "transcriptionEndpointingPlan": {
+      "onPunctuationSeconds": 0.1,
+      "onNoPunctuationSeconds": 1.5,
+      "onNumberSeconds": 0.5
+    }
+  }
+```

## Stop Speaking Plan
+The Stop Speaking Plan defines when the assistant stops talking after detecting customer speech.

-- **Words to Stop Speaking**: Define how many words the customer needs to say before the assistant stops talking. If you want immediate reaction, set this to 0. Increase it to avoid interruptions by brief acknowledgments like "okay" or "right".
+- **Words to Stop Speaking**: Define how many words the customer needs to say before the assistant stops talking. If you want immediate reaction, set this to 0. Increase it to avoid interruptions by brief acknowledgments like "okay" or "right". **Example:** While setting an appointment with a clinic, set `numWords` to 2-3 words to allow customers to finish brief clarifications without triggering interruptions.

- **Voice Activity Detection**: Adjust how long the customer needs to be speaking before the assistant stops. The default is 0.2 seconds, but you can tweak this to balance responsiveness and avoid false triggers.
+**Example:** For a banking call center, setting a higher `voiceSeconds` value ensures accuracy by reducing false positives. This avoids interruptions caused by background sounds, even if it slightly delays the detection of speech onset.
This tradeoff is essential to ensure the assistant processes only correct and intended information. + - **Pause Before Resuming**: Control how long the assistant waits before starting to talk again after being interrupted. The default is 1 second, but you can adjust it depending on how quickly the assistant should resume. +**Example:** For quick queries (e.g., "What’s the total order value in my cart?"), set `backoffSeconds` to 1 second. + +Here's a code snippet for Start Speaking Plan - + +```json + "stopSpeakingPlan": { + "numWords": 0, + "voiceSeconds": 0.2, + "backoffSeconds": 1 + } +``` + ## Considerations for Configuration - **Customer Style**: Think about whether the customer pauses mid-thought or provides continuous speech. Adjust wait times and enable smart endpointing as needed. -- **Background Noise**: If there’s a lot of background noise, you may need to tweak the settings to avoid false triggers. +- **Background Noise**: If there’s a lot of background noise, you may need to tweak the settings to avoid false triggers. Default for phone calls is ‘office’ and default for web calls is ‘off’. + + + +```json + "backgroundSound": "off", +``` - **Conversation Flow**: Aim for a balance where the assistant is responsive but not intrusive. Test different settings to find the best fit for your needs. diff --git a/fern/docs.yml b/fern/docs.yml index 75b0dc2e7..1db9d0f24 100644 --- a/fern/docs.yml +++ b/fern/docs.yml @@ -13,22 +13,22 @@ announcement: message: "🚀 Vapi now provides server SDKs! Check out the [supported languages](/server-sdks)." title: Vapi -favicon: static/images/favicon.png +favicon: static/images/favicon.ico logo: - light: static/images/logo/logo-light.png - dark: static/images/logo/logo-dark.png + light: static/images/logo/logo-light.svg + dark: static/images/logo/logo-dark.svg href: / - height: 28 + height: 22 colors: accentPrimary: - dark: "#94ffd2" - light: "#37aa9d" + light: "#62F6B5" + dark: "#62F6B5" background: - dark: "#121418" - light: "#FFFFFF" + light: "#FFFAEA" + dark: "#0E0E13" header-background: - dark: "#121418" - light: "#FFFFFF" + light: "#FFFAEA" + dark: "#0E0E13" experimental: mdx-components: - snippets @@ -49,11 +49,9 @@ navbar-links: - type: minimal text: Status href: https://status.vapi.ai/ - - type: minimal - text: Changelog - href: /changelog - - type: minimal + - type: outlined text: Support + rightIcon: fa-solid fa-headset href: /support - type: filled text: Dashboard @@ -75,69 +73,42 @@ tabs: display-name: Documentation icon: book slug: documentation + changelog: + display-name: Changelog + slug: changelog + changelog: ./changelog + icon: history layout: tabs-placement: header searchbar-placement: header header-height: 80px +analytics: + posthog: + api-key: ${POSTHOG_PROJECT_API_KEY} + endpoint: https://us.i.posthog.com navigation: - tab: documentation layout: - - page: Introduction - path: introduction.mdx - - section: General + - section: Getting Started contents: + - section: Quickstart + path: introduction.mdx + contents: + - page: Dashboard Quickstart + path: quickstart/dashboard.mdx + - page: Inbound Call Quickstart + path: quickstart/inbound.mdx + - page: Outbound Call Quickstart + path: quickstart/outbound.mdx + - page: Web Call Quickstart + path: quickstart/web.mdx - section: How Vapi Works contents: - page: Core Models path: quickstart.mdx - page: Orchestration Models path: how-vapi-works.mdx - - page: Knowledge Base - path: knowledgebase.mdx - - section: Pricing - contents: - - page: Overview - path: pricing.mdx - - page: Cost Routing 
- path: billing/cost-routing.mdx - - page: Billing Limits - path: billing/billing-limits.mdx - - page: Estimating Costs - path: billing/estimating-costs.mdx - - page: Billing Examples - path: billing/examples.mdx - - section: Enterprise - contents: - - page: Vapi Enterprise - path: enterprise/plans.mdx - - page: On-Prem Deployments - path: enterprise/onprem.mdx - - changelog: ./changelog - - page: Support - path: support.mdx - - link: Status - href: https://status.vapi.ai/ - - section: Quickstart - contents: - - page: Dashboard - path: quickstart/dashboard.mdx - - page: Inbound Calling - path: quickstart/inbound.mdx - - page: Outbound Calling - path: quickstart/outbound.mdx - - page: Web Calling - path: quickstart/web.mdx - - section: Client SDKs - contents: - - page: Overview - path: sdks.mdx - - page: Web SDK - path: sdk/web.mdx - - page: Web Snippet - path: examples/voice-widget.mdx - - page: Server SDKs - path: server-sdks.mdx - - section: Examples + - section: Use Cases contents: - page: Outbound Sales path: examples/outbound-sales.mdx @@ -147,48 +118,13 @@ navigation: path: examples/pizza-website.mdx - page: Python Outbound Snippet path: examples/outbound-call-python.mdx - - page: Billing FAQ - path: quickstart/billing.mdx - - page: Code Resources - path: resources.mdx - - section: Customization - contents: - - page: Provider Keys - path: customization/provider-keys.mdx - - section: Custom LLM - contents: - - page: Fine-tuned OpenAI models - path: customization/custom-llm/fine-tuned-openai-models.mdx - - page: Custom LLM - path: customization/custom-llm/using-your-server.mdx - - section: Custom Voices - contents: - - page: Introduction - path: customization/custom-voices/custom-voice.mdx - - page: Elevenlabs - path: customization/custom-voices/elevenlabs.mdx - - page: PlayHT - path: customization/custom-voices/playht.mdx - - page: Tavus - path: customization/custom-voices/tavus.mdx - - page: Custom Keywords - path: customization/custom-keywords.mdx - - page: Knowledge Base - path: customization/knowledgebase.mdx - - page: Multilingual - path: customization/multilingual.mdx - - page: JWT Authentication - path: customization/jwt-authentication.mdx - - page: Speech Configuration - path: customization/speech-configuration.mdx - - section: Core Concepts + - section: Build contents: - section: Assistants + path: assistants.mdx contents: - - page: Introduction - path: assistants.mdx - - page: Function Calling - path: assistants/function-calling.mdx + - page: Voice AI Prompting Guide + path: prompting-guide.mdx - page: Persistent Assistants path: assistants/persistent-assistants.mdx - page: Dynamic Variables @@ -197,62 +133,194 @@ navigation: path: assistants/call-analysis.mdx - page: Background Messages path: assistants/background-messages.mdx + - page: Voice Formatting Plan + path: assistants/voice-formatting-plan.mdx + - section: Workflows + path: workflows.mdx + contents: + - section: Tasks + contents: + - page: Say + path: workflows/tasks/say.mdx + - page: Gather + path: workflows/tasks/gather.mdx + - page: API Request + path: workflows/tasks/api-request.mdx + - page: Transfer + path: workflows/tasks/transfer.mdx + - page: Hangup + path: workflows/tasks/hangup.mdx + - section: Conditions + contents: + - page: Logical Conditions + path: workflows/logical-conditions.mdx + - page: AI Conditions + path: workflows/ai-conditions.mdx - section: Blocks + path: blocks.mdx contents: - - page: Introduction - path: blocks.mdx - page: Steps path: blocks/steps.mdx - page: Block Types path: 
blocks/block-types.mdx - - section: Server URL + - section: Tools + path: tools/introduction.mdx contents: - - page: Introduction - path: server-url.mdx - - page: Setting Server URLs - path: server-url/setting-server-urls.mdx - - page: Server Events - path: server-url/events.mdx - - page: Developing Locally - path: server-url/developing-locally.mdx - - section: Phone Calling + - page: Default Tools + path: tools/default-tools.mdx + - page: Custom Tools + path: tools/custom-tools.mdx + - page: Make & GHL Tools + path: GHL.mdx + - section: Knowledge Base + path: knowledge-base/knowledge-base.mdx contents: - - page: Introduction - path: phone-calling.mdx + - page: Integrating with Trieve + path: knowledge-base/integrating-with-trieve.mdx - section: Squads + path: squads.mdx contents: - - page: Introduction - path: squads.mdx - page: Example path: squads-example.mdx - - section: Advanced Concepts + - page: Silent Transfers + path: squads/silent-transfers.mdx + - section: Phone Numbers + contents: + - page: Free Telephony + path: phone-numbers/free-telephony.mdx + - section: Test + contents: + - page: Manual Testing + hidden: true + path: test/manual-testing.mdx + - page: Voice AI Testing + path: test/voice-testing.mdx + - section: Deploy contents: - section: Calls + path: phone-calling.mdx contents: - page: Call Forwarding path: call-forwarding.mdx + - page: Dynamic Call Transfers + path: calls/call-dynamic-transfers.mdx - page: Ended Reason path: calls/call-ended-reason.mdx - - page: SIP - path: advanced/calls/sip.mdx - page: Live Call Control path: calls/call-features.mdx - - page: Make & GHL Integration - path: GHL.mdx - - page: Tools Calling - path: tools-calling.mdx - - page: Prompting Guide - path: prompting-guide.mdx - - page: Voice Fallback Plan - path: voice-fallback-plan.mdx - - page: OpenAI Realtime - path: openai-realtime.mdx + - page: On-Hold Specialist Transfer + path: calls/call-handling-with-vapi-and-twilio.mdx + - page: Voice Mail Detection + path: calls/voice-mail-detection.mdx + - section: Vapi SDKs + path: sdks.mdx + contents: + - section: Client SDKs + contents: + - page: Web SDK + path: sdk/web.mdx + - page: Web Snippet + path: examples/voice-widget.mdx + - page: Server SDKs + path: server-sdks.mdx + - page: Code Resources + path: resources.mdx + - section: Server URLs + path: server-url.mdx + contents: + - page: Setting Server URLs + path: server-url/setting-server-urls.mdx + - page: Server Events + path: server-url/events.mdx + - page: Developing Locally + path: server-url/developing-locally.mdx + - section: SIP Telephony + path: advanced/sip/sip.mdx + contents: + - page: SIP Trunking + path: advanced/sip/sip-trunk.mdx + - page: Telnyx Integration + path: advanced/sip/sip-telnyx.mdx + - page: Zadarma Integration + path: advanced/sip/sip-zadarma.mdx + - section: Advanced Concepts + contents: + - page: Multilingual Support + path: customization/multilingual.mdx + - section: Advanced Voice Features + contents: + - page: Speech Configuration + path: customization/speech-configuration.mdx + - page: Voice Fallback Plan + path: voice-fallback-plan.mdx + - page: OpenAI Realtime Speech-to-Speech + path: openai-realtime.mdx + - section: Customization + contents: + - page: Provider Keys + path: customization/provider-keys.mdx + - section: Custom LLM + contents: + - page: Fine-tuned OpenAI models + path: customization/custom-llm/fine-tuned-openai-models.mdx + - page: Custom LLM + path: customization/custom-llm/using-your-server.mdx + - page: Custom LLM Tool Calling Integration + 
path: customization/custom-llm/tool-calling-integration.mdx + - section: Custom Voices + contents: + - page: Introduction + path: customization/custom-voices/custom-voice.mdx + - page: Elevenlabs + path: customization/custom-voices/elevenlabs.mdx + - page: PlayHT + path: customization/custom-voices/playht.mdx + - page: Tavus + path: customization/custom-voices/tavus.mdx + - page: Custom Keywords + path: customization/custom-keywords.mdx + - page: Custom Transcriber + path: customization/custom-transcriber.mdx + - page: JWT Authentication + path: customization/jwt-authentication.mdx - section: Glossary contents: - page: Definitions path: glossary.mdx - page: FAQ path: faq.mdx + - section: Support & Billing + contents: + - page: Support + path: support.mdx + - section: Billing + contents: + - page: Billing Overview + path: pricing.mdx + - page: Cost Routing + path: billing/cost-routing.mdx + - page: Billing FAQ + path: billing/billing-faq.mdx + - page: Billing Limits + path: billing/billing-limits.mdx + - page: Estimating Costs + path: billing/estimating-costs.mdx + - page: Billing Examples + path: billing/examples.mdx + - section: Enterprise + contents: + - page: Vapi Enterprise + path: enterprise/plans.mdx + - page: On-Prem Deployments + path: enterprise/onprem.mdx + - page: HIPAA Compliance + path: security-and-privacy/hipaa.mdx + - page: PCI Compliance + path: security-and-privacy/PCI.mdx + - link: SOC-2 Compliance + href: https://security.vapi.ai/ + - link: Status + href: https://status.vapi.ai/ - section: Community contents: - section: Videos @@ -287,13 +355,13 @@ navigation: path: community/television.mdx - page: Usecase path: community/usecase.mdx - - page: My Vapi - path: community/myvapi.mdx - page: Expert Directory path: community/expert-directory.mdx - - section: Providers + - link: Discord + href: https://discord.com/invite/pUFNcf2WmH + - section: Providers on Vapi contents: - - section: Voice + - section: Voices (Text-to-Speech) contents: - page: ElevenLabs path: providers/voice/elevenlabs.mdx @@ -313,11 +381,11 @@ navigation: path: providers/voice/rimeai.mdx - page: Deepgram path: providers/voice/deepgram.mdx - - section: Video - contents: - - page: Tavus - path: providers/video/tavus.mdx - - section: Models + - section: Video Models + contents: + - page: Tavus + path: providers/video/tavus.mdx + - section: Large Language Models contents: - page: OpenAI path: providers/model/openai.mdx @@ -333,7 +401,7 @@ navigation: path: providers/model/togetherai.mdx - page: OpenRouter path: providers/model/openrouter.mdx - - section: Transcription + - section: Transcribers (Speech-to-Text) contents: - page: Deepgram path: providers/transcriber/deepgram.mdx @@ -351,18 +419,18 @@ navigation: path: providers/cloud/gcp.mdx - page: Cloudflare R2 path: providers/cloud/cloudflare.mdx + - page: Supabase + path: providers/cloud/supabase.mdx - section: Observability contents: - page: Langfuse path: providers/observability/langfuse.mdx - page: Voiceflow path: providers/voiceflow.mdx - - section: Security & Privacy + - section: Legal contents: - - page: HIPAA Compliance - path: security-and-privacy/hipaa.mdx - - link: SOC-2 Compliance - href: https://security.vapi.ai/ + - page: TCPA Consent Guidelines + path: tcpa-consent.mdx - link: Privacy Policy href: https://vapi.ai/privacy - link: Terms of Service @@ -388,10 +456,9 @@ navigation: href: https://api.vapi.ai/api - link: OpenAPI href: https://api.vapi.ai/api-json + - tab: changelog redirects: - - source: /customization/knowledgebase - destination: 
/knowledgebase - source: /developer-documentation destination: /introduction - source: /documentation/general/changelog @@ -542,3 +609,19 @@ redirects: destination: "/community/appointment-scheduling" - source: "/enterprise" destination: "/enterprise/plans" + - source: "/tools-calling" + destination: "/assistants/custom-tools" + - source: "/knowledgebase" + destination: "/knowledge-base" + - source: "/customization/bring-your-own-vectors/trieve" + destination: "/knowledge-base/integrating-with-trieve" + - source: "/quickstart/billing" + destination: "/billing/billing-faq" + - source: /assistants/default-tools + destination: /tools/default-tools + - source: /assistants/function-calling + destination: /tools/default-tools + - source: /assistants/custom-tools + destination: /tools/custom-tools + - source: /GHL + destination: /tools/GHL diff --git a/fern/examples/inbound-support.mdx b/fern/examples/inbound-support.mdx index 8b2d869ef..d2196b7c2 100644 --- a/fern/examples/inbound-support.mdx +++ b/fern/examples/inbound-support.mdx @@ -51,13 +51,14 @@ As a bonus, we also want the assistant to remember by the phone number of the ca For this example, we're going to store the conversation on our server between calls and use the [Server URL's `assistant-request`](/server-url#retrieving-assistants) to fetch a new configuration based on the caller every time someone calls. - - We'll buy a phone number for inbound calls using the [Phone Numbers API](/api-reference/phone-numbers/buy-phone-number). + + We'll create a phone number for inbound calls using the [Phone Numbers API](/api-reference/phone-numbers/create). ```json { "id": "c86b5177-5cd8-447f-9013-99e307a8a7bb", "orgId": "aa4c36ba-db21-4ce0-9c6e-99e307a8a7bb", + "provider": "vapi", "number": "+11234567890", "createdAt": "2023-09-29T21:44:37.946Z", "updatedAt": "2023-12-08T00:57:24.706Z", diff --git a/fern/examples/outbound-sales.mdx b/fern/examples/outbound-sales.mdx index a2e769256..27fccf311 100644 --- a/fern/examples/outbound-sales.mdx +++ b/fern/examples/outbound-sales.mdx @@ -72,13 +72,14 @@ We want this agent to be able to call a list of leads and schedule appointments. We'll then make a POST request to the [Create Assistant](/api-reference/assistants/create-assistant) endpoint to create the assistant. - - We'll buy a phone number for outbound calls using the [Phone Numbers API](/phone-calling#set-up-a-phone-number). + + We'll create a phone number for outbound calls using the [Phone Numbers API](/phone-calling#set-up-a-phone-number). 
```json { "id": "c86b5177-5cd8-447f-9013-99e307a8a7bb", "orgId": "aa4c36ba-db21-4ce0-9c6e-99e307a8a7bb", + "provider": "vapi", "number": "+11234567890", "createdAt": "2023-09-29T21:44:37.946Z", "updatedAt": "2023-12-08T00:57:24.706Z", diff --git a/fern/fern.config.json b/fern/fern.config.json index e3c5af7b8..9d928e525 100644 --- a/fern/fern.config.json +++ b/fern/fern.config.json @@ -1,4 +1,4 @@ { "organization": "vapi", - "version": "0.45.3" + "version": "0.53.17" } \ No newline at end of file diff --git a/fern/info-hierarchy.mdx b/fern/info-hierarchy.mdx new file mode 100644 index 000000000..e899cabc7 --- /dev/null +++ b/fern/info-hierarchy.mdx @@ -0,0 +1,73 @@ +### Information Hierarchy + +#### Current +* Overview +* Platform + * Assistants + * Phone Numbers + * Files + * Tools + * Blocks + * Squads +* Voice Library +* Logs + * Calls + * API Requests + * Webhooks + +#### Proposed Dashboard Hierarchy +* Overview +* Build + * Assistants + * Workflows + * Phone Numbers + * Tools + * Files + * Squads +* Test + * Voice Test Suites +* Observe + * Call Logs + * API Logs + * Webhook Logs +* Community + * Task Library + * Workflow Library + * Voice Library + * Model Library +* Profile +* Organizations + * LIST +* Admin + * Billing + * Members + * Settings + * API Keys + * Provider Credentials +* Light/Dark Toggle +* Log Out + +#### Docs Hierarchy +* Getting Started +* Build + * Assistants + * Workflows <-- + * Tools + * Knowledge Base + * Squads +* Test + * Voice Testing <-- +* Deploy + * Phone Numbers + * Calls +* Community + * Tasks + * Workflows + * Voices + * Models + * Transcribers +* Admin + * Billing + * Org -- Enterprise + * Org management + * Provider Keys \ No newline at end of file diff --git a/fern/introduction.mdx b/fern/introduction.mdx index aa46ab21e..cc7df12c4 100644 --- a/fern/introduction.mdx +++ b/fern/introduction.mdx @@ -1,5 +1,5 @@ --- -title: Introduction +title: Introduction to Vapi subtitle: Vapi is the Voice AI platform for developers. slug: introduction --- @@ -174,42 +174,48 @@ Gain a deep understanding of key concepts in Vapi, as well as how Vapi works: Our SDKs are open source, and available on [our GitHub](https://github.com/VapiAI): - - Add a Vapi assistant to your web application. - - - Add a Vapi assistant to your iOS app. - - - Add a Vapi assistant to your Flutter app. - - - Add a Vapi assistant to your React Native app. - - - Multi-platform. Mac, Windows, and Linux. - - + + Add a Vapi assistant to your web application. + + + Add a Vapi assistant to your iOS app. + + + Add a Vapi assistant to your Flutter app. + + + Add a Vapi assistant to your React Native app. + + + Multi-platform. Mac, Windows, and Linux. + + ## FAQ diff --git a/fern/knowledge-base/integrating-with-trieve.mdx b/fern/knowledge-base/integrating-with-trieve.mdx new file mode 100644 index 000000000..a3d92eb92 --- /dev/null +++ b/fern/knowledge-base/integrating-with-trieve.mdx @@ -0,0 +1,321 @@ +--- +title: Bring your own chunks/vectors from Trieve +subtitle: Using Trieve for improved RAG with Vapi +slug: knowledge-base/integrating-with-trieve +--- + +# Using Trieve with Vapi + +Vapi integrates with [Trieve](https://trieve.ai) through the BYOD (Bring Your Own Dataset) approach, allowing you to use your Trieve API key to import your existing Trieve datasets into Vapi. + +## Integrating with Trieve + +The BYOD approach offers flexibility and control over your datasets. 
You can: + +- Fully manage your datasets in Trieve's native interface +- Use Trieve's advanced features like: + - Custom chunking rules + - Search playground testing + - Manual chunk editing + - Website crawling + - Dataset visualization + +### Step 1: Set Up Trieve Dataset + +1. Create an account at [Trieve](https://trieve.ai) +2. Create a new dataset using Trieve's dashboard + +![Create dataset in Trieve](../static/images/knowledge-base/create-dataset.png) + +When creating your dataset in Trieve, selecting the right embedding model is crucial for optimizing performance and accuracy. Here are some of the available options: + +### jina-base-en + +- **Provider**: Jina AI (Hosted by Trieve) +- **Performance**: Fast +- **Description**: This model is designed for speed and efficiency, making it suitable for applications where quick response times are critical. It provides a good balance of performance and accuracy for general use cases. + +### text-embedding-3-small + +- **Provider**: OpenAI +- **Performance**: Moderate +- **Description**: A smaller model from OpenAI that offers a compromise between speed and accuracy. It is suitable for applications that require a balance between computational efficiency and the quality of embeddings. + +### text-embedding-3-large + +- **Provider**: OpenAI +- **Performance**: Slow +- **Description**: This larger model provides the highest accuracy among the options but at the cost of slower processing times. It is ideal for applications where the quality of embeddings is prioritized over speed. + +3. Add content through various methods: + +#### Upload Documents + +Upload documents directly through Trieve's interface: + +![Upload files in Trieve](../static/images/knowledge-base/upload-files.png) + +When uploading files, you can configure advanced chunking options: + +![Upload files advanced options in Trieve](../static/images/knowledge-base/upload-files-advanced.png) + +#### Edit Individual Chunks + +After uploading documents, you can edit individual chunks to refine their content: + +![Edit chunk interface in Trieve](../static/images/knowledge-base/edit-chunk.png) + +##### Editing Options + +- **Chunk Content**: Modify the text directly in the rich text editor + + - Fix formatting issues + - Correct errors or typos + - Split or combine chunks manually + - Add or remove content + +- **Metadata Fields**: + - Date: Update document timestamps + - Number Value: Adjust numeric metadata for filtering + - Location: Set or modify geographical coordinates + - Weight: Fine-tune search relevance with custom weights + - Fulltext Boost: Add terms to enhance search visibility + - Semantic Boost: Adjust vector embedding influence + +##### Best Practices for Chunk Editing + +1. **Content Length** + + - Keep chunks between 200-1000 tokens + - Maintain logical content boundaries + - Ensure complete thoughts within each chunk + +2. **Metadata Optimization** + + - Use consistent date formats + - Add relevant numeric values for filtering + - Apply weights strategically for important content + +3. 
**Search Enhancement** + - Use boost terms for critical keywords + - Balance semantic and fulltext boosts + - Test search results after significant edits + +### Advanced Chunking Options + +#### Metadata + +- Add custom metadata as JSON to associate with your chunks + - Useful for filtering and organizing content (e.g., `{"author": "John Doe", "category": "technical"}`) + - Keep metadata concise and relevant to avoid storage overhead + - Use consistent keys across related documents for better searchability + +#### Date + +- Specify the creation or relevant date for the document + - Important for version control and content freshness + - Helps with filtering outdated information + - Use actual document creation dates when possible + +#### Split Delimiters + +- Define custom delimiters (e.g., ".,?\n") to control where chunks are split + - Recommended defaults: ".,?\n" for general content + - Add semicolons (;) for technical documentation + - Use "\n\n" for markdown or structured content + - Avoid over-aggressive splitting that might break context + +#### Target Splits Per Chunk + +- Set the desired number of splits per chunk + - Default: 20 splits + - Recommended ranges: + - 15-25 for general content + - 10-15 for technical documentation + - 25-30 for narrative content + - Lower values create more granular chunks, better for precise retrieval + - Higher values maintain more context but may retrieve irrelevant information + +#### Rebalance Chunks + +- Enable to redistribute content evenly across chunks + - Recommended for documents with varying section lengths + - Helps maintain consistent chunk sizes + - May slightly impact natural content boundaries + - Best used with technical documentation or structured content + +#### Use gpt4o chunking + +- Enable GPT-4 optimized chunking for improved semantic coherence + - Recommended for: + - Complex technical documentation + - Content with intricate relationships + - Documents where context preservation is crucial + - Note: Increases processing time and cost + - Best for high-value content where accuracy is paramount + +#### Heading Based Chunking + +- Split content based on document headings + - Ideal for well-structured documents (e.g., documentation, reports) + - Works best with consistent heading hierarchy + - Consider enabling for: + - Technical documentation + - User manuals + - Research papers + - May create uneven chunk sizes based on section lengths + +#### System Prompt + +- Provide custom instructions for the chunking process + - Optional but powerful for specific use cases + - Example prompts: + - "Preserve code blocks as single chunks" + - "Keep API endpoint descriptions together" + - "Maintain question-answer pairs in the same chunk" + - Keep prompts clear and specific + - Test different prompts with sample content to optimize results + +#### Website Crawling + +Trieve offers powerful website crawling capabilities with extensive configuration options: + +![Website crawling in Trieve](../static/images/knowledge-base/crawl.png) + +##### Crawl Configuration Options + +- **Crawl Interval**: Set how often to refresh content + + - Options: Daily, Weekly, Monthly + - Recommended: Daily for frequently updated content + +- **Page Limit**: Control the maximum number of pages to crawl + + - Default: 1000 pages + - Adjust based on your site size and content relevance + +- **URL Patterns** + + - Include/Exclude specific URL patterns using regex + - Example includes: `https://docs.example.com/*` + - Example excludes: `https://example.com/internal/*` 
+ +- **Query Selectors** + + - Include specific HTML elements for targeted content extraction + - Exclude navigation, footers, and other non-content elements + - Common excludes: `navbar`, `footer`, `aside`, `nav`, `form` + +- **Special Content Types** + + - OpenAPI Spec: Toggle for API documentation crawling + - Shopify: Enable for e-commerce content + - YouTube Channel: Include video transcripts and descriptions + +- **Advanced Options** + - Boost Titles: Increase weight of page titles in search results + - Allow External Links: Include content from linked domains + - Ignore Sitemap: Skip sitemap-based crawling + - Remove Strings: Clean up headers and body content + +##### Best Practices for Crawling + +1. **Start Small** + + - Begin with a low page limit + - Test with specific sections of your site + - Gradually expand coverage + +2. **Optimize Selectors** + + - Remove navigation and UI elements + - Focus on main content areas + - Use browser inspector to identify key selectors + +3. **Monitor Performance** + - Check crawl logs regularly + - Adjust patterns based on results + - Balance frequency with server load + +### Step 2: Test and Refine + +Use Trieve's search playground to: + +- Test semantic search queries +- Adjust chunk sizes +- Edit chunks manually + +- Visualize vector embeddings +- Fine-tune relevance scores + +![Search playground in Trieve](../static/images/knowledge-base/search-playground.png) + +### Step 3: Import to Vapi + +1. Create your Trieve API key from [Trieve's dashboard](https://dashboard.trieve.ai/org/keys) +2. Add your Trieve API key to Vapi [Provider Credentials](https://dashboard.vapi.ai/keys) + ![Add Trieve API key in Vapi](../static/images/knowledge-base/trieve-credential.png) +3. Once your dataset is optimized in Trieve, import it to Vapi: + +```json +{ + "name": "trieve-dataset", + "provider": "trieve", + "searchPlan": { + "scoreThreshold": 0.2, + "searchType": "semantic" + }, + "createPlan": { + "type": "import", + "providerId": "" + } +} +``` + +## Best Practices + +1. **Dataset Organization** + + - Segment datasets by domain knowledge boundaries + - Use semantic-based dataset naming (e.g., "api-docs-v2", "user-guides-2024") + - Version control chunking configurations in your codebase + +2. **Content Quality** + + - Implement text normalization (Unicode normalization, whitespace standardization) + - Use regex patterns to clean formatting artifacts + - Validate chunk semantic coherence through embedding similarity scores + +3. **Performance Optimization** + + - Target chunk sizes: 200-1000 tokens (optimal for current embedding models) + - Configure hybrid search with BM25 boost = 0.3 for technical content + - Set score thresholds dynamically based on embedding model (0.2 for text-embedding-3-small, 0.25 for text-embedding-3-large) + +4. **Maintenance** + - Implement automated content refresh cycles via Trieve's API + - Track search result relevance metrics (MRR, NDCG) + - Rotate API keys on 90-day cycles + +## Troubleshooting + +Common issues and solutions: + +1. **Search Relevance Issues** + + - Implement cross-encoder reranking for critical queries + - Fine-tune BM25 vs semantic weights (recommended ratio: 0.3:0.7) + - Analyze chunk boundary overlap percentage (aim for 15-20%) + +2. **Integration Errors** + + - Validate dataset permissions (READ_DATASET scope required) + - Check for dataset ID format compliance (UUID v4) + - Monitor rate limits (default: 100 requests/min) + +3. 
**Performance Optimization** - Implement chunk size normalization (max variance: 20%) - Enable query caching for frequent searches - Use batch operations for bulk updates (max 100 chunks/request) Need help? Contact [support@vapi.ai](mailto:support@vapi.ai) for assistance. diff --git a/fern/customization/knowledgebase.mdx b/fern/knowledge-base/knowledge-base.mdx similarity index 54% rename from fern/customization/knowledgebase.mdx rename to fern/knowledge-base/knowledge-base.mdx index 35a80c02b..0740ed4a1 100644 --- a/fern/customization/knowledgebase.mdx +++ b/fern/knowledge-base/knowledge-base.mdx @@ -1,14 +1,14 @@ --- -title: Creating Custom Knowledge Bases for Your Voice AI Assistants +title: Introduction to Knowledge Bases subtitle: >- Learn how to create and integrate custom knowledge bases into your voice AI assistants. -slug: knowledgebase +slug: knowledge-base --- ## **What is Vapi's Knowledge Base?** -A Knowledge Base is a collection of custom files that contain information on specific topics or domains. By integrating a Knowledge Base into your voice AI assistant, you can enable it to provide more accurate and informative responses to user queries. This is currently available in Vapi via the API, and will be on the dashboard soon. +A [**Knowledge Base**](/api-reference/knowledge-bases/create) is a collection of custom files that contain information on specific topics or domains. By integrating a Knowledge Base into your voice AI assistant, you can enable it to provide more accurate and informative responses to user queries. This is currently available in Vapi via the API, and will be on the dashboard soon. ### **Why Use a Knowledge Base?** @@ -18,6 +18,10 @@ Using a Knowledge Base with your voice AI assistant offers several benefits: - **Enhanced capabilities**: A Knowledge Base enables your assistant to answer complex queries and provide detailed responses to user inquiries. - **Customization**: With a Knowledge Base, you can tailor your assistant's responses to specific domains or topics, making it more effective and informative. + + Knowledge Bases are configured through the API; view all configurable properties in the [API Reference](/api-reference/knowledge-bases/create-knowledge-base). + + ## **How to Create a Knowledge Base** To create a Knowledge Base, follow these steps: @@ -43,25 +47,62 @@ curl --location 'https://api.vapi.ai/file' \ ### **Step 2: Create a Knowledge Base** -Use the ID of the uploaded file to create a Knowledge Base. Currently we support [trieve](https://trieve.ai) as a provider. +Use the ID of the uploaded file to create a Knowledge Base along with the KB configurations. +
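If you no longer have the file ID from the upload response, you can look it up by listing your uploaded files first. This is a minimal sketch that assumes the Files API exposes a plain GET listing; see the API reference for the exact response shape.

```bash
# Sketch: list uploaded files to find the IDs to reference in fileIds below.
# Assumes GET /file returns your org's files; see the Files API reference.
curl --location 'https://api.vapi.ai/file' \
  --header 'Authorization: Bearer <your-private-api-key>'
```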
+1. Provider: [trieve](https://trieve.ai) ```bash -curl --location 'https://api.vapi.ai/knowledge-base' \ +curl --location 'https://api.vapi.ai/knowledge-base' \ --header 'Content-Type: text/plain' \ --header 'Authorization: Bearer <your-private-api-key>' \ --data '{ - "name": "knowledge-base-test", + "name": "v2", "provider": "trieve", - "vectorStoreSearchPlan": { - "scoreThreshold": 0.1, - "searchType": "hybrid" + "searchPlan": { + "searchType": "semantic", + "topK": 3, + "removeStopWords": true, + "scoreThreshold": 0.7 }, - "vectorStoreCreatePlan": { - "fileIds": [""] + "createPlan": { + "type": "create", + "chunkPlans": [ + { + "fileIds": ["<file-id-1>", "<file-id-2>"], + "websites": ["<website-url-1>", "<website-url-2>"], + "targetSplitsPerChunk": 50, + "splitDelimiters": [".!?\n"], + "rebalanceChunks": true + } + ] } }' ``` +#### Configuration Options + +##### Search Plan Options + +- **searchType** (required): The search method used for finding relevant chunks. Available options: + - `fulltext`: Traditional text search + - `semantic`: Semantic similarity search + - `hybrid`: Combines fulltext and semantic search + - `bm25`: BM25 ranking algorithm +- **topK** (optional): Number of top chunks to return. Default varies by implementation +- **removeStopWords** (optional): When true, removes common stop words from the search query. Default: `false` +- **scoreThreshold** (optional): Filters out chunks based on their similarity score: + - For cosine distance: Excludes chunks below the threshold + - For Manhattan distance, Euclidean distance, and dot product: Excludes chunks above the threshold + - Set to 0 or omit for no threshold + +##### Chunk Plan Options + +- **fileIds** (optional): Array of file IDs to include in the vector store +- **websites** (optional): Array of website URLs to crawl and include in the vector store +- **targetSplitsPerChunk** (optional): Number of splits per chunk. Default: `20` +- **splitDelimiters** (optional): Array of delimiters used to split text before chunking. Default: `[".!?\n"]` +- **rebalanceChunks** (optional): When true, evenly distributes remainder splits across chunks. For example, 66 splits with `targetSplitsPerChunk: 20` will create 3 chunks with 22 splits each. Default: `true` + ### **Step 3: Create an Assistant** Create a new assistant in Vapi. Then, on the right sidebar menu, add the Knowledge Base to your assistant via the PATCH endpoint. Also make sure you customize your assistant's system prompt to utilize the Knowledge Base for responding to user queries. diff --git a/fern/knowledgebase.mdx b/fern/knowledgebase.mdx index 1764fc6c2..a67c92667 100644 --- a/fern/knowledgebase.mdx +++ b/fern/knowledgebase.mdx @@ -20,49 +20,54 @@ Using a Knowledge Base with your voice AI assistant offers several benefits: ## **How to Create a Knowledge Base** -To create a Knowledge Base, follow these steps: +To create a Knowledge Base with Trieve, follow these steps: -### **Step 1: Upload Your Files** +### **Step 1: Create a Knowledge Base with Trieve** -Navigate to Platform > Files and upload your custom files in Markdown, PDF, plain text, or Microsoft Word (.doc and .docx) format to Vapi's Knowledge Base. - - - Adding files to your Knowledge Base - - -Alternatively you can upload your files via the API. - -```bash -curl --location 'https://api.vapi.ai/file' \ ---header 'Authorization: Bearer ' \ ---form 'file=@""' -``` - -### **Step 2: Create a Knowledge Base** - -Use the ID of the uploaded file to create a Knowledge Base. Currently we support [trieve](https://trieve.ai) as a provider. 
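Step 3 above mentions attaching the Knowledge Base to your assistant via the PATCH endpoint but does not show a request. Here is a minimal sketch; the `model.knowledgeBaseId` property is an assumption, so confirm the exact field name in the Assistant API reference before using it.

```bash
# Sketch: attach a knowledge base to an existing assistant.
# model.knowledgeBaseId is an assumed property name; confirm it in the
# Assistant API reference.
curl --location --request PATCH 'https://api.vapi.ai/assistant/<your-assistant-id>' \
  --header 'Authorization: Bearer <your-private-api-key>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": {
      "provider": "openai",
      "model": "gpt-4o",
      "knowledgeBaseId": "<your-knowledge-base-id>"
    }
  }'
```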
+ +Vapi integrates with [Trieve](https://trieve.ai) using the BYOD (Bring Your Own Dataset) approach. First, create and optimize your dataset in Trieve (see our [Integrating with Trieve guide](/knowledge-base/integrating-with-trieve) for detailed instructions), then import it to Vapi: ```bash curl --location 'https://api.vapi.ai/knowledge-base' \ --header 'Content-Type: text/plain' \ --header 'Authorization: Bearer <your-private-api-key>' \ --data '{ - "name": "knowledge-base-test", + "name": "trieve-dataset", "provider": "trieve", - "vectorStoreSearchPlan": { - "scoreThreshold": 0.1, - "searchType": "hybrid" + "searchPlan": { + "searchType": "semantic", + "topK": 3, + "removeStopWords": true, + "scoreThreshold": 0.7 }, - "vectorStoreCreatePlan": { - "fileIds": [""] + "createPlan": { + "type": "import", + "providerId": "<your-trieve-dataset-id>" } }' ``` -### **Step 3: Create an Assistant** +#### Configuration Options + +##### Search Plan Options + +- **searchType** (required): The search method used for finding relevant chunks. Available options: + - `fulltext`: Traditional text search + - `semantic`: Semantic similarity search + - `hybrid`: Combines fulltext and semantic search + - `bm25`: BM25 ranking algorithm +- **topK** (optional): Number of top chunks to return. Default varies by implementation +- **removeStopWords** (optional): When true, removes common stop words from the search query. Default: `false` +- **scoreThreshold** (optional): Filters out chunks based on their similarity score: + - For cosine distance: Excludes chunks below the threshold + - For Manhattan distance, Euclidean distance, and dot product: Excludes chunks above the threshold + - Set to 0 or omit for no threshold + +##### Import Options + +- **providerId** (required): The ID of your Trieve dataset that you want to import +- **type** (required): Must be set to "import" for the BYOD approach + +### **Step 2: Create an Assistant** Create a new assistant in Vapi. Then, on the right sidebar menu, add the Knowledge Base to your assistant via the PATCH endpoint. Also make sure you customize your assistant's system prompt to utilize the Knowledge Base for responding to user queries. diff --git a/fern/openai-realtime.mdx b/fern/openai-realtime.mdx index 3e0e66218..4cd0cf952 100644 --- a/fern/openai-realtime.mdx +++ b/fern/openai-realtime.mdx @@ -10,7 +10,7 @@ slug: openai-realtime OpenAI’s Realtime API enables developers to use a native speech-to-speech model. Unlike other Vapi configurations which orchestrate a transcriber, model and voice API to simulate speech-to-speech, OpenAI’s Realtime API natively processes audio in and audio out. -To start using it with your Vapi assistants, select `gpt-4o-realtime-preview-2024-10-01` as your model. +To start using it with your Vapi assistants, select `gpt-4o-realtime-preview-2024-12-17` as your model. - Please note that only OpenAI voices may be selected while using this model. The voice selection will not act as a TTS (text-to-speech) model, but rather as the voice used within the speech-to-speech model. -- Also note that we don’t currently support Knowledge Bases with the Realtime API. Furthermore, advanced functionality is currently limited with the latest voices Ash, Ballad, Coral, Sage and Verse. -- Lastly, note that our Realtime integration still retains the rest of Vapi's orchestration layer such as the endpointing and interruption models to enable a reliable conversational flow. \ No newline at end of file +- Also note that we don’t currently support Knowledge Bases with the Realtime API. 
+- Lastly, note that our Realtime integration still retains the rest of Vapi's orchestration layer such as Endpointing and Interruption models to enable a reliable conversational flow. \ No newline at end of file diff --git a/fern/phone-calling.mdx b/fern/phone-calling.mdx index a3dd1ad28..7a4a0ba91 100644 --- a/fern/phone-calling.mdx +++ b/fern/phone-calling.mdx @@ -7,9 +7,9 @@ slug: phone-calling -You can set up a phone number to place and receive phone calls. Phone numbers can be bought directly through Vapi, or you can use your own from Twilio. +You can set up a phone number to place and receive phone calls. Phone numbers can be created directly through Vapi, or you can use your own from Twilio. -You can buy a phone number through the dashboard or use the [`/phone-numbers/buy`](/api-reference/phone-numbers/buy-phone-number)` endpoint. +You can create a free phone number through the dashboard or use the [`/phone-numbers`](/api-reference/phone-numbers/create) endpoint. If you want to use your own phone number, you can also use the dashboard or the [`/phone-numbers/import`](/api-reference/phone-numbers/import-twilio-number) endpoint. This will use your Twilio credentials to verify the number and configure it with Vapi services. diff --git a/fern/phone-numbers.mdx b/fern/phone-numbers.mdx new file mode 100644 index 000000000..e69de29bb diff --git a/fern/phone-numbers/free-telephony.mdx b/fern/phone-numbers/free-telephony.mdx new file mode 100644 index 000000000..ef8822e13 --- /dev/null +++ b/fern/phone-numbers/free-telephony.mdx @@ -0,0 +1,56 @@ +--- +title: Creating Free Phone Numbers +subtitle: Creating free phone numbers on the Vapi platform. +slug: free-telephony +--- + +This guide details how to create free phone numbers on the Vapi platform, which you can use with your assistants or squads. + + + ### Head to the “Phone Numbers” tab in your Vapi dashboard. + + + + + + ### Click on “Create a Phone Number” + + + + + + ### Within the "Free Vapi Number" tab, enter your desired area code + + + Currently, only US phone numbers can be directly created through Vapi. + + + + + + + ### Vapi will automatically allot you a random phone number — free of charge! + + + It takes a couple of minutes for the phone number to be fully activated. During this period, calls will not be functional. + + + + + + + + +### Frequently Asked Questions + + + + For now, each wallet can have up to 10 free numbers. This limit ensures we can continue offering reliable, high-quality service to everyone. + + + Not at this time. You can still bring in global numbers using our phone number import feature. + + + None at all. We’re simply passing on the cost efficiencies we’ve gained through robust engineering and volume partnerships. + + \ No newline at end of file diff --git a/fern/pricing.mdx b/fern/pricing.mdx index f11dfc281..d71695dc8 100644 --- a/fern/pricing.mdx +++ b/fern/pricing.mdx @@ -1,18 +1,18 @@ --- -title: Pricing Overview -subtitle: Only pay for the minutes you use. +title: Startup Pricing +subtitle: This is an overview of our pricing for developers and startups. For Enterprise pricing, please contact sales. slug: pricing --- - +
- + Vapi itself charges $0.05 per minute for calls, prorated to the second. Bring your own API keys for providers; Vapi makes requests on your behalf. - - Phone numbers purchased through Vapi bill at $2/mo. - ### Starter Credits diff --git a/fern/prompting-guide.mdx b/fern/prompting-guide.mdx index b11d4eded..496d3bc07 100644 --- a/fern/prompting-guide.mdx +++ b/fern/prompting-guide.mdx @@ -1,5 +1,5 @@ --- -title: Prompting Guide +title: Voice AI Prompting Guide slug: prompting-guide --- @@ -44,7 +44,7 @@ To enhance clarity and maintainability, it's recommended to break down system pr **Example:** -``` +```md wordWrap [Identity] You are a helpful and knowledgeable virtual assistant for a travel booking platform. @@ -69,7 +69,7 @@ You are a helpful and knowledgeable virtual assistant for a travel booking platf For complex interactions, breaking down the task into a sequence of steps enhances the agent's understanding and ensures a structured conversation flow. Incorporate conditional logic to guide the agent's responses based on user input. Example: -``` +```md wordWrap [Task] 1. Welcome the user to the technical support service. 2. Inquire about the nature of the technical issue. @@ -82,7 +82,7 @@ Example: To prevent the agent from rushing through the conversation, explicitly indicate when to wait for the user's response before proceeding to the next step. -``` +```md wordWrap [Task] 1. Inform the user about the purpose of the call. 2. Ask for the user's name and account information. @@ -95,18 +95,20 @@ To prevent the agent from rushing through the conversation, explicitly indicate Specify when and how the agent should utilize external tools or APIs. Reference the tools by their designated names and describe their functions to ensure accurate invocation. Example: -``` +```md wordWrap [Task] ... 3. If the user wants to know about something, use the get_data function with the parameter 'query', which will contain the user's question to initiate the process. 4. Guide the user through the password reset steps provided by the API.... ``` +### Silent Transfers +If the AI determines that the user needs to be transferred, do not send any text response back to the user. Instead, silently call the appropriate tool for transferring the call. This ensures a seamless user experience and avoids confusion. ### Include Fallback and Error Handling Mechanisms Always include fallback options and error-handling mechanisms in your prompts. This ensures that the Agent can gracefully handle unexpected user inputs or system errors. -``` +```md wordWrap [Error Handling] If the customer's response is unclear, ask clarifying questions. If you encounter any issues, inform the customer politely and ask to repeat. ``` @@ -132,7 +134,7 @@ When working with numbers that need to be expressed in writing, it's often prefe Add personality and tone to the assistant to make it sound more like a human being. Here are some examples: -``` +```md wordWrap - Tone: conversational, spartan, use less corporate jargon - Act as Iva, a sales agent who is a human being and has real feelings. ``` @@ -141,7 +143,7 @@ Add personality and provide tone to the assistant to make it sound more like a h ### Appointment Setter -``` +```md wordWrap [Role] You're Susan, an AI assistant for xxx. Your primary task is to interact with the customer, ask questions, and gather information for appointment booking. @@ -166,6 +168,7 @@ Present time in a clear format (e.g. 
Four Thirty PM) like: 11 pm can be spelled: Speak dates gently using English words instead of numbers. Never say the word 'function' nor 'tools' nor the name of the Available functions. Never say ending the call. +If you think you are about to transfer the call, do not send any text response. Simply trigger the tool silently. This is crucial for maintaining a smooth call experience. [Error Handling] If the customer's response is unclear, ask clarifying questions. If you encounter any issues, inform the customer politely and ask to repeat. diff --git a/fern/providers/cloud/supabase.mdx b/fern/providers/cloud/supabase.mdx new file mode 100644 index 000000000..74e9d11bd --- /dev/null +++ b/fern/providers/cloud/supabase.mdx @@ -0,0 +1,29 @@ +--- +title: Supabase S3 Storage +subtitle: Store recordings of chat conversations in Supabase Storage +slug: providers/cloud/supabase +--- + +Your assistants can be configured to record chat conversations and upload +the recordings to a bucket in Supabase Storage when the conversation ends. You will +need to configure the credential and bucket settings in the "Cloud Providers" +section of the "Provider Credentials" page in the Vapi dashboard. + +See these [instructions](https://supabase.com/docs/guides/storage/s3/authentication) for generating Supabase tokens and access keys, and finding your endpoint and region. + +## Credential Settings + +Setting | Description +------------------------- | ------------------------------------------------------- +Bucket Name | The name of the bucket in Supabase Storage to upload recordings to +Storage Region | The region of the Supabase project +Storage Endpoint | The endpoint of the Supabase Storage to upload recordings to +Bucket Path Prefix | An optional path prefix for recordings uploaded to the bucket +Storage Access Key ID | The access key id for Supabase Storage +Storage Secret Access Key | The secret access key for Supabase Storage, associated with the access key id + +## Example + + + + diff --git a/fern/providers/observability/langfuse.mdx b/fern/providers/observability/langfuse.mdx index 9e62986e8..6381fe5e9 100644 --- a/fern/providers/observability/langfuse.mdx +++ b/fern/providers/observability/langfuse.mdx @@ -4,20 +4,22 @@ description: Integrate Vapi with Langfuse for enhanced voice AI telemetry monito slug: providers/observability/langfuse --- -# Vapi Integration +# Langfuse Integration Vapi natively integrates with Langfuse, allowing you to send traces directly to Langfuse for enhanced telemetry monitoring. This integration enables you to gain deeper insights into your voice AI applications and improve their performance and reliability. -
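If you prefer to manage credentials programmatically rather than through the dashboard, the same keys can likely be stored via the Credentials API. This is a sketch only; the `langfuse` provider value and field names are assumptions, so confirm the schema in the API reference first.

```bash
# Sketch: store Langfuse keys as a Vapi provider credential.
# The provider value and field names are assumptions; verify them in the
# Credentials API reference.
curl --location 'https://api.vapi.ai/credential' \
  --header 'Authorization: Bearer <your-private-api-key>' \
  --header 'Content-Type: application/json' \
  --data '{
    "provider": "langfuse",
    "publicKey": "<your-langfuse-public-key>",
    "apiKey": "<your-langfuse-secret-key>",
    "apiUrl": "https://cloud.langfuse.com"
  }'
```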