Summary
Updated root cause - this is NOT about reasoning models.
Any condition that causes the opencode server to return an empty 200 response body surfaces to consumers as `SyntaxError: Unexpected end of JSON input` with no actionable context. The most common trigger is an invalid or unavailable model ID, but the same failure mode applies to any empty-body response class.

The original hypothesis (gpt-5 emits `reasoning` parts that `extractTextFromParts` drops) is incorrect. The error happens before `doGenerate` ever reaches the parts-extraction code - it crashes inside `@opencode-ai/sdk` while parsing the HTTP response body.
Real stack (captured with local instrumentation)
```
SyntaxError: Unexpected end of JSON input
    at JSON.parse (<anonymous>)
    at parseJSONFromBytes (node:internal/deps/undici/undici:5880:19)
    at successSteps (node:internal/deps/undici/undici:5861:27)
    at fullyReadBody (node:internal/deps/undici/undici:4753:9)
    at consumeBody (node:internal/deps/undici/undici:5870:7)
    at request (@opencode-ai/sdk/dist/gen/client/client.gen.js:83:28)
    at OpencodeLanguageModel.doGenerate (ai-sdk-provider-opencode-sdk/dist/index.js:1130:22)
```
`@opencode-ai/sdk` line 83 blindly calls `response.json()` regardless of body presence. When the body is empty (which the opencode server returns for conditions like unknown model IDs), undici's `JSON.parse('')` throws.
Reproduce
Any invalid model ID triggers it:
```shell
# Direct CLI - tells you exactly what's wrong:
$ opencode run --model openai/gpt-5.2 "test"
Error: Model not found: openai/gpt-5.2

$ opencode run --model github-copilot/gpt-5 "test"
Error: Model not found: github-copilot/gpt-5. Did you mean: gpt-5.2, gpt-5.4, gpt-5.1?

# Same models via this SDK - cryptic JSON error:
$ task-master models --set-main openai/gpt-5.2 --opencode
$ task-master parse-prd prd.md --num-tasks=5
[ERROR] OpenCode object generation failed: Unexpected end of JSON input
```
Triggers confirmed
- `openai/gpt-5.2` with no direct OpenAI auth configured in opencode (provider prefix invalid for the user's setup)
- `github-copilot/gpt-5` (the real Copilot model is `gpt-5.2`, not `gpt-5`)
- Any other unknown provider/model combination

`github-copilot/gpt-5.2` and `github-copilot/gpt-4.1` work fine - the models resolve, the body comes back populated, no crash.
Why the original hypothesis was wrong
The empty-body failure happens at the HTTP layer inside `@opencode-ai/sdk`, before `doGenerate` even gets response data. The `extractTextFromParts` code was a red herring because the request never reaches it - it throws during the initial `client.session.prompt()` call.
Impact
Developer UX only. When a user misconfigures the model ID (easy to do given the `provider/model` format and the fact that Copilot's gpt-5 family is named `gpt-5.2`, not `gpt-5`), they get an opaque JSON parse error instead of "model not found". This surfaces downstream as `AI_APICallError: Unexpected end of JSON input` with no model context, which is what initially misled me in this issue.
Proposed fix (in `ai-sdk-provider-opencode-sdk`)
Wrap the `client.session.prompt()` call in `doGenerate` (and the equivalent in `doStream`) with a narrow catch:
```js
try {
  const result = await client.session.prompt({ path: { id: sessionId }, body: requestBody });
  // ... existing handling
} catch (error) {
  if (error instanceof SyntaxError && /Unexpected end of JSON input/.test(error.message)) {
    throw new APICallError({
      message: `OpenCode returned an empty response body for model "${this.modelId}". This usually means the model is unavailable or the ID is incorrect. Run \`opencode models\` to see valid IDs. (Original: ${error.message})`,
      url: 'opencode://session',
      requestBodyValues: requestBody,
      cause: error,
    });
  }
  throw error;
}
```
This preserves all successful paths and only transforms the specific empty-body symptom into an actionable diagnostic.
Upstream fix (in `@opencode-ai/sdk`, separate concern)
The deeper fix belongs in the generated client at `client.gen.js:83`: before calling `response.json()`, check for a zero-length body, or wrap the call and return `null`/throw a typed error with the status code. That surfaces the real HTTP-level problem to all consumers, not just ones going through this provider.
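A defensive version of that parse step could look like the sketch below; the helper name and error shape are illustrative, not the SDK's actual generated code:

```javascript
// Hypothetical defensive replacement for the blind response.json() call in the
// generated client. Reads the body as text first, so an empty body can be
// reported with HTTP status context instead of surfacing as a bare SyntaxError.
async function parseResponseBody(response) {
  const text = await response.text();
  if (text.length === 0) {
    const error = new Error(
      `Empty response body (HTTP ${response.status} ${response.statusText})`
    );
    error.status = response.status;
    throw error;
  }
  return JSON.parse(text);
}
```

Checking the actual body length rather than the `Content-Length` header also covers chunked responses that omit the header entirely.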
Happy to PR the narrow fix here. The upstream fix I'd leave as a separate issue against @opencode-ai/sdk / opencode's main repo.
Environment
- SDK: `ai-sdk-provider-opencode-sdk@0.0.3` (ai-sdk-v5 tag)
- AI SDK: `ai@^5.0.51`
- Node: 20
- OpenCode: 1.4.3
- Shell verified: `opencode run --model github-copilot/gpt-5.2` succeeds; the same model via this SDK also works (earlier test in claude-task-master#1685)