
Commit 5740351

Merge branch 'main' of github.com:VapiAI/docs into fix-flux-docs-2
2 parents 25c0aaf + 0ebd8f2

File tree: 2 files changed (+274 -1 lines)

fern/docs.yml

Lines changed: 6 additions & 1 deletion

```diff
@@ -289,14 +289,19 @@ navigation:
         icon: fa-light fa-chart-line
       - section: Structured outputs
         icon: fa-light fa-database
-        path: assistants/structured-outputs.mdx
         contents:
           - page: Quickstart
             path: assistants/structured-outputs-quickstart.mdx
             icon: fa-light fa-rocket
           - page: Examples
             path: assistants/structured-outputs-examples.mdx
             icon: fa-light fa-code
+      - section: Scorecard
+        icon: fa-light fa-star
+        contents:
+          - page: Quickstart
+            path: observability/scorecard-quickstart.mdx
+            icon: fa-light fa-rocket

       - section: Squads
         contents:
```
observability/scorecard-quickstart.mdx (new file)

Lines changed: 268 additions & 0 deletions

---
title: Scorecard quickstart
subtitle: Automatically grade calls against KPIs using structured outputs
slug: observability/scorecard-quickstart
---

## Overview

This quickstart shows how to create a scorecard that automatically grades calls based on your structured outputs. You'll create a scorecard, attach it to an assistant, run a call, and retrieve the score. For now, this is done through the API only.

### What are scorecards?

Scorecards compute objective quality metrics from the call's structured outputs after the call ends:

1. Evaluate against metrics - Compare structured output values to conditions you define
2. Allocate points - Grant points when conditions are met, per metric
3. Filter calls - Filter calls based on the scorecard's metrics
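
These pieces map directly onto the request body you'll send in Step 1. As a rough sketch of the shapes involved (TypeScript inferred from the example request below; illustrative only, not the official SDK types):

```typescript
// Rough shapes inferred from the create-scorecard request in Step 1.
// Illustrative only; not the official Vapi SDK types.
type Comparator = "=" | "!=" | ">" | "<" | ">=" | "<=";

interface MetricCondition {
  type: "comparator";
  comparator: Comparator;     // boolean outputs support "=" only
  value: number | boolean;    // compared against the structured output value
  points: number;             // awarded when the condition is met
}

interface ScorecardMetric {
  structuredOutputId: string; // must reference a number/integer or boolean output
  conditions: MetricCondition[];
}

interface Scorecard {
  name: string;
  description?: string;
  metrics: ScorecardMetric[];
  assistantIds?: string[];    // optional: link the scorecard to assistants on creation
}
```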
### When are scorecards generated?

- After call completion
- Typically within seconds of call end
- Available in `call.artifact.scorecards`

### What data do scorecards use?

- Structured outputs attached to the assistant (`artifactPlan.structuredOutputIds`)
- Each metric references one structured output (must be number/integer or boolean)

### Why use scorecards?

- Grade calls consistently and automatically, giving a quick overview of call quality
- Power dashboards and automated workflows with numeric scores

## What you'll build

A scorecard that:

- Awards points if the call objective was achieved (boolean structured output)
- Awards points if CSAT is above a threshold (number structured output)
- Filters calls based on the scorecard's metrics

## Prerequisites

<Note>
  You need: a Vapi account and API key, an assistant with a model and voice
  configured, and structured outputs created and attached that your scorecard
  will use.
</Note>
<Warning>
  Metrics can only reference structured outputs of type number/integer or boolean.
</Warning>

## Step 1: Create a scorecard

Create a scorecard with metrics. Each metric references a structured output ID and one or more conditions.

```bash title="cURL"
curl -X POST https://api.vapi.ai/observability/scorecard \
  -H "Authorization: Bearer $VAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Support QA Scorecard",
    "description": "Grades calls based on success and CSAT",
    "metrics": [
      {
        "structuredOutputId": "UUID-OF-BOOLEAN-STRUCTURED-OUTPUT",
        "conditions": [
          {
            "type": "comparator",
            "comparator": "=",
            "value": true,
            "points": 70
          }
        ]
      },
      {
        "structuredOutputId": "UUID-OF-NUMBER-STRUCTURED-OUTPUT",
        "conditions": [
          {
            "type": "comparator",
            "comparator": ">=",
            "value": 8,
            "points": 30
          }
        ]
      }
    ],
    "assistantIds": ["your-assistant-id"]
  }'
```

Notes:

- Supported comparators: `=`, `!=`, `>`, `<`, `>=`, `<=`
- Boolean metrics support only the `=` comparator
- Points across all conditions must sum to 100
- If you pass `assistantIds`, the scorecard is linked to those assistants
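
Before sending the request, you can sanity-check the payload against these rules. A minimal TypeScript sketch (illustrative only; `validateScorecard` is a hypothetical helper, not part of any SDK):

```typescript
// Sketch: client-side checks mirroring the rules above, for catching mistakes
// before calling POST /observability/scorecard. Illustrative only.
type Condition = { type: "comparator"; comparator: string; value: number | boolean; points: number };
type Metric = { structuredOutputId: string; conditions: Condition[] };

const COMPARATORS = ["=", "!=", ">", "<", ">=", "<="];

function validateScorecard(metrics: Metric[]): string[] {
  const errors: string[] = [];
  let totalPoints = 0;

  for (const metric of metrics) {
    for (const c of metric.conditions) {
      if (!COMPARATORS.includes(c.comparator)) {
        errors.push(`Unsupported comparator "${c.comparator}"`);
      }
      // Boolean metrics support only "=".
      if (typeof c.value === "boolean" && c.comparator !== "=") {
        errors.push(`Boolean condition on ${metric.structuredOutputId} must use "="`);
      }
      // Points must be 0-100 per condition.
      if (c.points < 0 || c.points > 100) {
        errors.push(`Points must be between 0 and 100 (got ${c.points})`);
      }
      totalPoints += c.points;
    }
  }

  // Points across all conditions must sum to 100.
  if (totalPoints !== 100) {
    errors.push(`Points sum to ${totalPoints}, expected 100`);
  }
  return errors;
}
```

For the example request above this reports no errors, since 70 + 30 = 100.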

## Step 2: Optional - Attach to an assistant

Passing `assistantIds` when creating the scorecard links it to those assistants, so the scorecard is automatically applied to every call they make. If you did not set `assistantIds` on the scorecard, you can still PATCH the assistant to add it:

```bash title="cURL"
curl -X PATCH https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID \
  -H "Authorization: Bearer $VAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "artifactPlan": {
      "structuredOutputIds": [
        "UUID-OF-BOOLEAN-STRUCTURED-OUTPUT",
        "UUID-OF-NUMBER-STRUCTURED-OUTPUT"
      ],
      "scorecardIds": [
        "YOUR_SCORECARD_ID"
      ]
    }
  }'
```

<Tip>
  If you included `assistantIds` when creating the scorecard, you can still
  PATCH later to add more assistants.
</Tip>
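
PATCH typically replaces the fields you send, so if the assistant already has other structured outputs or scorecards attached you may want to merge rather than overwrite. A minimal sketch under that assumption, using Node 18+ `fetch` against GET and PATCH `/assistant/:id` (illustrative, not the official SDK):

```typescript
// Sketch: merge new scorecard/structured output IDs into an assistant's
// artifactPlan instead of overwriting it. Assumes Node 18+ fetch and that
// PATCH replaces the fields you send (verify against the API reference).
const VAPI_API_KEY = process.env.VAPI_API_KEY!;
const ASSISTANT_ID = "YOUR_ASSISTANT_ID";

async function attachScorecard(scorecardId: string, structuredOutputIds: string[]) {
  const headers = {
    Authorization: `Bearer ${VAPI_API_KEY}`,
    "Content-Type": "application/json",
  };

  // Read the current assistant so existing IDs are preserved.
  const assistant = await (
    await fetch(`https://api.vapi.ai/assistant/${ASSISTANT_ID}`, { headers })
  ).json();
  const plan = assistant.artifactPlan ?? {};

  const merged = {
    artifactPlan: {
      ...plan,
      structuredOutputIds: [
        ...new Set([...(plan.structuredOutputIds ?? []), ...structuredOutputIds]),
      ],
      scorecardIds: [...new Set([...(plan.scorecardIds ?? []), scorecardId])],
    },
  };

  // Write the merged plan back.
  await fetch(`https://api.vapi.ai/assistant/${ASSISTANT_ID}`, {
    method: "PATCH",
    headers,
    body: JSON.stringify(merged),
  });
}
```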

## Step 3: Create and test a call

Start a call with your assistant.

```bash title="cURL"
# For phone calls:
curl -X POST https://api.vapi.ai/call \
  -H "Authorization: Bearer $VAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "assistantId": "your-assistant-id",
    "phoneNumberId": "your-phone-number-id",
    "customer": { "number": "+1234567890" }
  }'
```
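
The create-call response includes the call's `id`, which you'll need to retrieve the score in Step 4. A minimal sketch using Node 18+ `fetch` (illustrative, not the official SDK):

```typescript
// Sketch: start an outbound phone call and keep the returned call id for Step 4.
// Assumes Node 18+ fetch; field names mirror the cURL request above.
const res = await fetch("https://api.vapi.ai/call", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    assistantId: "your-assistant-id",
    phoneNumberId: "your-phone-number-id",
    customer: { number: "+1234567890" },
  }),
});

const call = await res.json();
console.log("Call id:", call.id); // pass this to Step 4
```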

## Step 4: Retrieve the score

Wait a few seconds after the call ends, then fetch the call:

```bash title="cURL"
curl -X GET "https://api.vapi.ai/call/YOUR_CALL_ID" \
  -H "Authorization: Bearer $VAPI_API_KEY"
```

Scorecards live at `call.artifact.scorecards`:

```json
{
  "c5bd0d4e-0c7f-4b0a-82e9-5d0baf6e4a01": {
    "name": "Support QA Scorecard",
    "score": 70,
    "scoreNormalized": 70,
    "metricPoints": {
      "UUID-OF-BOOLEAN-STRUCTURED-OUTPUT": 70,
      "UUID-OF-NUMBER-STRUCTURED-OUTPUT": 0
    }
  }
}
```

### Expected output

- Root keys are scorecard IDs (UUIDs)
- `score`: sum of awarded points
- `scoreNormalized`: `ceil(score / maxPoints * 100)`
- `metricPoints`: points awarded per structured output

<Note>
  Read extracted data values from `call.artifact.structuredOutputs`. Evaluation
  results are under `call.artifact.scorecards[scorecardId]`.
</Note>
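
Putting Steps 3 and 4 together, here is a small polling sketch using Node 18+ `fetch` (illustrative, not the official SDK) that fetches the call until `artifact.scorecards` is populated, then prints each scorecard's normalized score and per-metric points:

```typescript
// Sketch: poll GET /call/:id until scorecards are available, then print them.
// Assumes Node 18+ fetch; the scorecards shape matches the JSON example above.
const headers = { Authorization: `Bearer ${process.env.VAPI_API_KEY}` };

async function waitForScorecards(callId: string, attempts = 10) {
  for (let i = 0; i < attempts; i++) {
    const call = await (
      await fetch(`https://api.vapi.ai/call/${callId}`, { headers })
    ).json();

    const scorecards = call.artifact?.scorecards;
    if (scorecards && Object.keys(scorecards).length > 0) {
      for (const [scorecardId, result] of Object.entries<any>(scorecards)) {
        console.log(`${result.name} (${scorecardId}): ${result.scoreNormalized}/100`);
        console.log("  per-metric points:", result.metricPoints);
      }
      return scorecards;
    }

    // Scorecards are generated shortly after the call ends; wait and retry.
    await new Promise((r) => setTimeout(r, 3000));
  }
  throw new Error("Scorecards not available yet");
}

waitForScorecards("YOUR_CALL_ID");
```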

## Next steps

- Attach multiple scorecards to the same assistant
- Add more metrics and conditions
- Track normalized score trends over time

## Common patterns

### Multiple scorecards

```bash title="cURL"
curl -X PATCH https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID \
  -H "Authorization: Bearer $VAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "artifactPlan": {
      "structuredOutputIds": [
        "STRUCTURED-OUTPUT-UUID-1",
        "STRUCTURED-OUTPUT-UUID-2",
        "STRUCTURED-OUTPUT-UUID-3"
      ],
      "scorecardIds": [
        "SCORECARD-UUID-1",
        "SCORECARD-UUID-2"
      ]
    }
  }'
```

### Transient scorecards (inline)

You can embed “inline” scorecards for rapid testing (no saved scorecard ID) using `artifactPlan.scorecards`:

```bash title="cURL"
curl -X PATCH https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID \
  -H "Authorization: Bearer $VAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "artifactPlan": {
      "structuredOutputIds": ["SO-UUID-BOOLEAN", "SO-UUID-NUMBER"],
      "scorecards": [
        {
          "name": "Inline Test Scorecard",
          "metrics": [
            {
              "structuredOutputId": "SO-UUID-BOOLEAN",
              "conditions": [
                { "type": "comparator", "comparator": "=", "value": true, "points": 30 }
              ]
            },
            {
              "structuredOutputId": "SO-UUID-NUMBER",
              "conditions": [
                { "type": "comparator", "comparator": ">=", "value": 9, "points": 70 }
              ]
            }
          ]
        }
      ]
    }
  }'
```
241+
242+
## Validation rules
243+
244+
- Referenced structured outputs must be type number/integer or boolean
245+
- Number conditions accept: `=`, `!=`, `>`, `<`, `>=`, `<=`
246+
- Boolean conditions accept: `=` only
247+
- Points must be 0–100 per condition
248+
249+
## Tips for success
250+
251+
- Keep metrics focused; use multiple conditions per metric to tier points
252+
- Use normalized score for consistent reporting across different scorecards
253+
- Ensure all referenced structured outputs are attached to the assistant
254+
255+
## Troubleshooting
256+
257+
| Issue | Solution |
258+
| ------------------------- | ---------------------------------------------------------------------------------- |
259+
| Scorecard missing in call | Ensure `artifactPlan.scorecardIds` or `scorecards` is set on the assistant |
260+
| Score always 0 | Verify the structured outputs exist and match metric types; check comparator/value |
261+
| No structured outputs | Make sure `artifactPlan.structuredOutputIds` includes the required outputs |
262+
| Slow updates | Allow a few seconds after call end for processing |
263+
264+
## Get help
265+
266+
- [API Documentation](/api-reference)
267+
- [Discord Community](https://discord.gg/pUFNcf2WmH)
268+
- [Support](mailto:[email protected])
