-
Notifications
You must be signed in to change notification settings - Fork 2.6k
feat: add zai-org/GLM-4.5-turbo model to Chutes provider #8156
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I reviewed my own code and found it suspiciously lacking in test coverage. Classic me.
| outputPrice: 0, | ||
| description: | ||
| "GLM-4.5-turbo model with 128k token context window, optimized for fast inference and coding tasks.", | ||
| }, |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The new model looks good and follows the established pattern! However, I noticed that while other GLM models (GLM-4.5-Air and GLM-4.5-FP8) have corresponding test cases in src/api/providers/__tests__/chutes.spec.ts, the newly added GLM-4.5-turbo doesn't have one yet. Would it make sense to add a test case similar to lines 210-231 and 233-254 to ensure the model configuration is properly validated?
| }, | |
| it("should return zai-org/GLM-4.5-turbo model with correct configuration", () => { | |
| const testModelId: ChutesModelId = "zai-org/GLM-4.5-turbo" | |
| const handlerWithModel = new ChutesHandler({ | |
| apiModelId: testModelId, | |
| chutesApiKey: "test-chutes-api-key", | |
| }) | |
| const model = handlerWithModel.getModel() | |
| expect(model.id).toBe(testModelId) | |
| expect(model.info).toEqual( | |
| expect.objectContaining({ | |
| maxTokens: 32768, | |
| contextWindow: 131072, | |
| supportsImages: false, | |
| supportsPromptCache: false, | |
| inputPrice: 0, | |
| outputPrice: 0, | |
| description: | |
| "GLM-4.5-turbo model with 128k token context window, optimized for fast inference and coding tasks.", | |
| temperature: 0.5, // Default temperature for non-DeepSeek models | |
| }), | |
| ) | |
| }) |
This PR attempts to address Issue #8155 by adding support for the zai-org/GLM-4.5-turbo model to the Chutes API provider.
Changes
zai-org/GLM-4.5-turboto theChutesModelIdtype definitionchutesModelsobject with appropriate metadata:Testing
Context
This follows the established pattern for adding models to the Chutes provider, similar to how other GLM models (GLM-4.5-Air and GLM-4.5-FP8) are integrated.
Closes #8155
Feedback and guidance are welcome!
Important
Add
zai-org/GLM-4.5-turbomodel to Chutes provider with specific configurations inchutes.ts.zai-org/GLM-4.5-turbotoChutesModelIdinchutes.ts.zai-org/GLM-4.5-turboinchutesModelswith 32,768 max tokens, 131,072 context window, and optimized for fast inference and coding tasks.This description was created by
for c405679. You can customize this summary. It will automatically update as commits are pushed.