Commit 8f3dbaf

schwaaamp and claude committed
Fix AI narration truncation: reduce chunk size to 8, increase maxTokens to 8192

Chunk 1 returned 9 of 10 expected items; the response was truncated at 4096 tokens. Reducing the chunk size from 10 to 8 and doubling maxTokens to 8192 gives adequate headroom for the conversational prompt style, which produces longer narratives per discovery.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent fc7ebac commit 8f3dbaf

1 file changed: +2 additions, -2 deletions

supabase/functions/ai-engine/engines/pattern-spotter.ts

Lines changed: 2 additions & 2 deletions

@@ -1037,7 +1037,7 @@ export async function spotPatterns(
   const allToNarrate = [...discoveriesToSurface, ...observationsToSurface];
   const newPatterns: Array<Record<string, unknown>> = [];
   let aiCallSucceeded = false;
-  const NARRATION_CHUNK_SIZE = 10; // Max discoveries per AI call to avoid truncation
+  const NARRATION_CHUNK_SIZE = 8; // Max discoveries per AI call to avoid response truncation

   if (allToNarrate.length > 0) {
     const systemPrompt = buildPatternDetectionSystemPrompt();
@@ -1069,7 +1069,7 @@
       systemPrompt,
       userPrompt,
       temperature: 0.3,
-      maxTokens: 4096,
+      maxTokens: 8192,
       responseFormat: 'json',
     });

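The commit caps how many discoveries each AI call must narrate, so no single response has to fit more narratives than the token budget allows. A minimal sketch of the chunking loop, with a hypothetical `narrateChunk` helper standing in for the real AI call (the actual call signature in pattern-spotter.ts is not shown in this diff):

```typescript
// Chunk size chosen so each response stays well under the maxTokens cap.
const NARRATION_CHUNK_SIZE = 8;

// Split an array into consecutive chunks of at most `size` items.
function chunk<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Hypothetical stand-in for the AI narration call: one narrative per item.
async function narrateChunk(items: string[]): Promise<string[]> {
  return items.map((item) => `narrative for ${item}`);
}

// Narrate everything, one bounded chunk at a time.
async function narrateAll(allToNarrate: string[]): Promise<string[]> {
  const narratives: string[] = [];
  for (const part of chunk(allToNarrate, NARRATION_CHUNK_SIZE)) {
    narratives.push(...(await narrateChunk(part)));
  }
  return narratives;
}
```

With 10 items and a chunk size of 8, this produces two calls (8 items, then 2), matching the failure mode described in the commit message where a single 10-item chunk overran the 4096-token response limit.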