fern/calls/call-ended-reason.mdx (1 addition, 1 deletion)
@@ -33,7 +33,7 @@ These relate to issues within the AI processing pipeline or the Large Language M
 -**call.in-progress.error-vapifault-\***: Various error codes indicate specific failures within the processing pipeline, such as function execution, LLM responses, or external service integration. Examples include OpenAI, Azure OpenAI, Together AI, and several other LLM or voice providers.
 -**call.in-progress.error-providerfault-\***: Similar to **call.in-progress.error-vapifault-\***. However, these error codes are surfaced when Vapi receives an error that occurred on the provider's side. Examples include internal server errors or service unavailability.
 -**pipeline-error-\***: Similar to **call.in-progress.error-vapifault-\***. However, these error codes are surfaced when you are using your own provider keys.
--**pipeline-no-available-llm-model**: No suitable LLM was available to process the request.
+-**pipeline-no-available-llm-model**: No suitable LLM was available to process the request. Previously **pipeline-no-available-model**.
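
For context, here is a minimal TypeScript sketch of how a consumer might branch on these prefixes when inspecting a call's ended reason. The function name, type names, and the example reason strings are hypothetical; only the prefixes and the `pipeline-no-available-llm-model` code come from the documentation above.

```typescript
// Hypothetical helper: classify an endedReason string by the documented prefixes.
type EndedReasonCategory =
  | "vapi-pipeline-fault"    // call.in-progress.error-vapifault-*
  | "provider-fault"         // call.in-progress.error-providerfault-*
  | "own-key-pipeline-error" // pipeline-error-* (when using your own provider keys)
  | "no-available-llm"       // pipeline-no-available-llm-model
  | "other";

function categorizeEndedReason(endedReason: string): EndedReasonCategory {
  // Check the exact code first, then fall back to prefix matching.
  if (endedReason === "pipeline-no-available-llm-model") return "no-available-llm";
  if (endedReason.startsWith("call.in-progress.error-vapifault-")) return "vapi-pipeline-fault";
  if (endedReason.startsWith("call.in-progress.error-providerfault-")) return "provider-fault";
  if (endedReason.startsWith("pipeline-error-")) return "own-key-pipeline-error";
  return "other";
}

// Example usage with a made-up reason string:
console.log(categorizeEndedReason("call.in-progress.error-vapifault-openai")); // "vapi-pipeline-fault"
console.log(categorizeEndedReason("pipeline-no-available-llm-model"));         // "no-available-llm"
```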