feat: add openai/gpt-oss-120b model to Chutes provider #6974
Conversation
- Added openai/gpt-oss-120b to the ChutesModelId type definition
- Added model configuration with 128k context window and 32k max tokens
- Added test coverage for the new model
- Fixes #6973
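The configuration described above can be sketched as follows. This is a hedged illustration only: the `ModelInfo` field names and the `chutesModels` map mirror what the diff excerpts in this PR suggest, but are assumptions, not the repository's actual code.

```typescript
// Hypothetical shape of a Chutes model entry (field names are assumptions
// inferred from the diff excerpts quoted in this PR):
interface ModelInfo {
	maxTokens: number
	contextWindow: number
	supportsImages: boolean
	supportsPromptCache: boolean
	inputPrice: number
	outputPrice: number
	description: string
}

const chutesModels: Record<string, ModelInfo> = {
	"openai/gpt-oss-120b": {
		maxTokens: 32768, // 32k max output tokens, per the PR description
		contextWindow: 131072, // 128k context window
		supportsImages: false,
		supportsPromptCache: false,
		inputPrice: 0, // values from the PR; reviewers questioned whether Chutes is actually free
		outputPrice: 0,
		description: "OpenAI GPT OSS 120B model - latest open source coding model.",
	},
}
```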
I reviewed my own code and found issues. This is fine. Everything is fine.
```typescript
	outputPrice: 0.5926,
	description: "Moonshot AI Kimi K2 Instruct model with 75k context window.",
},
"openai/gpt-oss-120b": {
```
I notice that openai/gpt-oss-120b already exists in other providers (Groq and IO Intelligence) with different configurations:
- Groq: maxTokens=32766, has pricing ($0.15/$0.75)
- IO Intelligence: maxTokens=8192
- Here in Chutes: maxTokens=32768, no pricing
Is this intentional? Different providers might have different limits, but we should verify these values are correct for the Chutes API specifically.
```typescript
	contextWindow: 131072,
	supportsImages: false,
	supportsPromptCache: false,
	inputPrice: 0,
```
The pricing is set to 0 for both input and output. Could we verify if Chutes actually offers this model for free, or if we need to add the correct pricing? Groq charges $0.15/$0.75 for the same model.
```typescript
	supportsPromptCache: false,
	inputPrice: 0,
	outputPrice: 0,
	description: "OpenAI GPT OSS 120B model - latest open source coding model.",
```
The description differs from other providers:
- Groq: "GPT-OSS 120B is OpenAI's flagship open source model, built on a Mixture-of-Experts (MoE) architecture with 20 billion parameters and 128 experts."
- IO Intelligence: "OpenAI GPT-OSS 120B model"
- Here: "OpenAI GPT OSS 120B model - latest open source coding model."
Should we align these descriptions for consistency?
```typescript
	| "zai-org/GLM-4.5-Air"
	| "zai-org/GLM-4.5-FP8"
	| "moonshotai/Kimi-K2-Instruct-75k"
	| "openai/gpt-oss-120b"
```
Has it been confirmed that the openai/gpt-oss-120b model is actually available on the Chutes API? The naming pattern differs from other Chutes models which typically use organization prefixes like "deepseek-ai/", "unsloth/", "chutesai/", etc. We should verify this model exists on their API before merging.
Closing, this is outside the scope of the issue.
This PR adds the openai/gpt-oss-120b model to the Chutes provider as requested in issue #6973.
Changes Made
- Added openai/gpt-oss-120b to the ChutesModelId type definition
- Added model configuration with 128k context window and 32k max tokens
- Added test coverage for the new model

Testing
Fixes #6973
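The test coverage mentioned above might look roughly like the following. This is a hedged, stand-alone sketch: the real `chutes.spec.ts` presumably uses the project's test framework (describe/it blocks), and the `chutesModels` map here is a stand-in for the actual export, not the repository's code.

```typescript
// Stand-in for the exported Chutes model map (names are assumptions):
const chutesModels: Record<
	string,
	{ maxTokens: number; contextWindow: number; supportsImages: boolean }
> = {
	"openai/gpt-oss-120b": {
		maxTokens: 32768,
		contextWindow: 131072,
		supportsImages: false,
	},
}

// Plain-check equivalents of the spec's assertions:
const model = chutesModels["openai/gpt-oss-120b"]
if (model === undefined) throw new Error("openai/gpt-oss-120b should be registered")
if (model.contextWindow !== 131072) throw new Error("expected 128k context window")
if (model.maxTokens !== 32768) throw new Error("expected 32k max output tokens")
```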
Important
Add openai/gpt-oss-120b model to Chutes provider with configuration and test coverage.

- Add openai/gpt-oss-120b to ChutesModelId in chutes.ts.
- Add model configuration in chutes.ts.
- Add test for openai/gpt-oss-120b in chutes.spec.ts to verify model configuration.

This description was created by for a578952. You can customize this summary. It will automatically update as commits are pushed.