Conversation

@matifali (Member)

Summary

This PR adds support for the max_completion_tokens parameter, which newer OpenAI models (gpt-5, gpt-4o) require in place of max_tokens.

Changes

  • Added MaxCompletionTokens field to ChatCompletionNewParamsWrapper
  • Updated MarshalJSON to include max_completion_tokens in the extras map when set (a sketch of the wrapper and this method follows the list)
  • Updated UnmarshalJSON to extract and set max_completion_tokens from incoming JSON
  • Added comprehensive test coverage for the new parameter, including:
    • Unmarshaling from JSON
    • Marshaling to JSON
    • Ensuring parameter is not present when nil
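
For reference, here is a minimal sketch of what the wrapper and its MarshalJSON might look like. The package name, the upstream SDK import path, and the use of an *int64 pointer for the new field are assumptions for illustration, not the actual contents of openai.go:

    package bridge // hypothetical package name

    import (
        "encoding/json"

        openai "github.com/openai/openai-go" // assumed upstream SDK import
    )

    type ChatCompletionNewParamsWrapper struct {
        openai.ChatCompletionNewParams

        // max_completion_tokens, required by newer models (gpt-4o, gpt-5)
        // in place of max_tokens; nil means the caller did not set it.
        MaxCompletionTokens *int64
    }

    func (w ChatCompletionNewParamsWrapper) MarshalJSON() ([]byte, error) {
        // Marshal the embedded params first, then splice the extra field
        // into a generic map so it appears alongside the standard fields.
        base, err := json.Marshal(w.ChatCompletionNewParams)
        if err != nil {
            return nil, err
        }
        extras := map[string]any{}
        if err := json.Unmarshal(base, &extras); err != nil {
            return nil, err
        }
        if w.MaxCompletionTokens != nil {
            extras["max_completion_tokens"] = *w.MaxCompletionTokens
        }
        return json.Marshal(extras)
    }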

Test Plan

  • All existing tests pass
  • New test TestMaxCompletionTokens (sketched after this list) verifies:
    • Parameter is correctly unmarshaled from JSON
    • Parameter is correctly marshaled to JSON
    • Parameter is not included when nil
  • Code formatted with make fmt
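
A condensed sketch of what such a test could look like, assuming the wrapper sketched under Changes, a custom UnmarshalJSON for max_completion_tokens (one possible shape is sketched further down), and the usual encoding/json, strings, and testing imports; the fixtures and assertions are illustrative:

    func TestMaxCompletionTokens(t *testing.T) {
        t.Run("unmarshal", func(t *testing.T) {
            var w ChatCompletionNewParamsWrapper
            in := []byte(`{"model":"gpt-4o","max_completion_tokens":256}`)
            if err := json.Unmarshal(in, &w); err != nil {
                t.Fatal(err)
            }
            if w.MaxCompletionTokens == nil || *w.MaxCompletionTokens != 256 {
                t.Fatalf("expected 256, got %v", w.MaxCompletionTokens)
            }
        })
        t.Run("marshal", func(t *testing.T) {
            v := int64(256)
            out, err := json.Marshal(ChatCompletionNewParamsWrapper{MaxCompletionTokens: &v})
            if err != nil {
                t.Fatal(err)
            }
            if !strings.Contains(string(out), `"max_completion_tokens":256`) {
                t.Fatalf("field missing from %s", out)
            }
        })
        t.Run("nil omitted", func(t *testing.T) {
            out, err := json.Marshal(ChatCompletionNewParamsWrapper{})
            if err != nil {
                t.Fatal(err)
            }
            if strings.Contains(string(out), "max_completion_tokens") {
                t.Fatalf("unexpected field in %s", out)
            }
        })
    }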

Related Issues

Fixes #40

🤖 Generated with Claude Code

Add support for the max_completion_tokens parameter required by newer
OpenAI models (gpt-5, gpt-4o). This parameter is used instead of
max_tokens for these models.

Changes:
- Added MaxCompletionTokens field to ChatCompletionNewParamsWrapper
- Updated MarshalJSON to include max_completion_tokens in extras
- Updated UnmarshalJSON to extract and set max_completion_tokens
- Added comprehensive test coverage for the new parameter

Fixes #40

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

Copilot AI left a comment

Pull Request Overview

This PR adds support for the max_completion_tokens parameter required by newer OpenAI models (gpt-4o, gpt-5) as an alternative to the older max_tokens parameter.

Key Changes:

  • Added MaxCompletionTokens field to ChatCompletionNewParamsWrapper struct
  • Updated JSON marshaling/unmarshaling logic to handle the new parameter
  • Added comprehensive test coverage for unmarshaling, marshaling, and nil handling scenarios

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

Reviewed files:

  • openai.go: Adds the MaxCompletionTokens field and updates the marshal/unmarshal methods to handle the parameter
  • openai_test.go: Adds a test suite covering the unmarshal, marshal, and nil cases for the new parameter


matifali and others added 2 commits October 28, 2025 14:29
Address review feedback from GitHub Copilot. The previous implementation
would skip setting max_completion_tokens when the value was 0, which was
incorrect, as 0 is a valid value that should be passed explicitly to the
OpenAI API.

Changes:
- Check for field existence in JSON rather than value > 0
- Handle explicit 0 values correctly
- Add test cases for zero value marshaling/unmarshaling
- Support multiple numeric types (float64, int, int64) for robustness
  (sketched after this commit message)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
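
The existence check and multi-type handling described in this commit might look roughly like the following inside the wrapper's UnmarshalJSON. This builds on the earlier wrapper sketch and is an illustration, not the actual diff:

    func (w *ChatCompletionNewParamsWrapper) UnmarshalJSON(data []byte) error {
        // Decode the standard fields into the embedded params first.
        if err := json.Unmarshal(data, &w.ChatCompletionNewParams); err != nil {
            return err
        }
        // Re-decode into a raw map so the key's presence can be tested
        // directly, rather than testing the decoded value; this way an
        // explicit 0 in the incoming JSON is still observed.
        var raw map[string]json.RawMessage
        if err := json.Unmarshal(data, &raw); err != nil {
            return err
        }
        if rawVal, ok := raw["max_completion_tokens"]; ok {
            var v any
            if err := json.Unmarshal(rawVal, &v); err != nil {
                return err
            }
            switch n := v.(type) {
            case float64: // encoding/json decodes JSON numbers as float64
                tokens := int64(n)
                w.MaxCompletionTokens = &tokens
            case int: // defensive; not produced by encoding/json itself
                tokens := int64(n)
                w.MaxCompletionTokens = &tokens
            case int64: // defensive; covers values injected programmatically
                w.MaxCompletionTokens = &n
            }
        }
        return nil
    }
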
Revert the previous change that accepted zero values. Research shows
that the OpenAI API requires positive integers for token limits, and
there is no documentation supporting zero as a valid value.

Changes:
- Only accept max_completion_tokens > 0
- Add validation to reject 0 and negative values
- Update tests to verify invalid values are ignored
- Add test for negative value handling
- Keep the field existence check, which improves on the original
  implementation

The GitHub Copilot review suggested accepting 0, but without any
documentation proving that 0 is valid, we should validate the input to
match the API requirements (a sketch of the resulting check follows this
commit message).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
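
The resulting guard could replace the type switch in the UnmarshalJSON sketch above with something like this; again a sketch under the same assumptions, not the actual change:

    if rawVal, ok := raw["max_completion_tokens"]; ok {
        var n float64
        if err := json.Unmarshal(rawVal, &n); err != nil {
            return err
        }
        // Only positive token limits are accepted; 0 and negative values
        // are ignored so they never reach the OpenAI API.
        if n > 0 {
            tokens := int64(n)
            w.MaxCompletionTokens = &tokens
        }
    }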

Development

Successfully merging this pull request may close these issues.

gpt-5, gpt-4o do not work when using with opencode
