MineRatings.ts: ARG_MAX risk when passing large prompts as CLI args to Inference.ts #905
Summary
MineRatings.ts shells out to `bun Inference.ts`, passing both the system prompt and the user prompt as CLI positional arguments. With 750+ rating entries, the JSON summary in the user prompt can exceed OS argument-length limits (ARG_MAX; 256KB per argument on macOS), causing a silent failure: the process exits with zero output and no error.
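Back-of-envelope arithmetic for the threshold: assuming roughly 350 bytes of JSON per entry (an illustrative figure, not measured from the real summary), 750 entries is just past the 256 KiB (262,144-byte) per-argument cap:

```typescript
// Per-argument cap cited above for macOS: 256 KiB.
const ARG_MAX_PER_ARG = 256 * 1024; // 262,144 bytes

// Illustrative estimate; 350 bytes/entry is an assumption, not a measurement.
function estimatePromptBytes(entryCount: number, bytesPerEntry = 350): number {
  return entryCount * bytesPerEntry;
}

console.log(estimatePromptBytes(750)); // → 262500
console.log(estimatePromptBytes(750) > ARG_MAX_PER_ARG); // → true
```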
Reproduction
```
bun ~/.claude/skills/Utilities/PAIUpgrade/Tools/MineRatings.ts --all
```

With a sufficiently large ratings.jsonl (750+ entries), the spawned `bun Inference.ts <system_prompt> <user_prompt>` call can silently fail due to argument length.
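To reproduce without a real ratings history, a synthetic ratings.jsonl can be generated. The entry fields below (`id`, `rating`, `comment`) are invented for illustration; only the line count and total size matter:

```typescript
import { writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical entry shape — the real ratings.jsonl schema may differ.
function makeEntry(i: number): string {
  return JSON.stringify({
    id: i,
    rating: (i % 5) + 1,
    comment: `synthetic rating entry number ${i} `.repeat(10),
  });
}

// 800 entries comfortably exceeds the 750-entry threshold described above.
const path = join(tmpdir(), "ratings.jsonl");
const lines = Array.from({ length: 800 }, (_, i) => makeEntry(i));
writeFileSync(path, lines.join("\n") + "\n");
console.log(path, lines.length);
```

Pointing MineRatings.ts at a file like this should trigger the silent failure on an affected build.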
Root Cause
MineRatings.ts `runInference()` passes both prompts as argv:

```typescript
const proc = spawn('bun', [
  INFERENCE_PATH,
  '--level', 'standard',
  '--timeout', '300000',
  systemPrompt, // ~350 words
  userPrompt,   // JSON summary of all entries — grows with dataset
], { ... });
```

Meanwhile, Inference.ts itself already handles ARG_MAX correctly: it pipes the user prompt via stdin to `claude --print` (lines 96–98). The problem is in MineRatings' invocation of Inference.ts, not in Inference.ts itself.
Fix
Replace the subprocess spawn with a direct import of `inference()`:

```typescript
import { inference } from '../../../../PAI/Tools/Inference.ts';

async function runInference(systemPrompt: string, userPrompt: string) {
  const result = await inference({
    systemPrompt,
    userPrompt,
    level: 'standard',
    timeout: 300000,
  });
  return {
    success: result.success,
    output: result.success ? result.output : (result.error || 'Inference failed'),
  };
}
```

This eliminates the subprocess entirely and reuses the stdin piping that `inference()` already does.
Priority
Low — only manifests with large datasets (750+ entries) and is already fixed in virtualian/pai#49.