Commit ed45d1c

Authored by daniel-lxs, kevint-cerebras, and mrubens
feat: add zai-glm-4.6 model to Cerebras and set gpt-oss-120b as default (#8920)
* feat: add zai-glm-4.6 model and update gpt-oss-120b for Cerebras
  - Add zai-glm-4.6 with 128K context window and 40K max tokens
  - Set zai-glm-4.6 as default Cerebras model
  - Update gpt-oss-120b to 128K context and 40K max tokens

* feat: add zai-glm-4.6 model to Cerebras provider
  - Add zai-glm-4.6 with 128K context window and 40K max tokens
  - Set zai-glm-4.6 as default Cerebras model
  - Model provides ~2000 tokens/s for general-purpose tasks

* add [SOON TO BE DEPRECATED] warning for Q3C

* chore: set gpt-oss-120b as default Cerebras model

* Fix cerebras test: update expected default model to gpt-oss-120b

* Apply suggestion from @mrubens

Co-authored-by: Matt Rubens <[email protected]>

---------

Co-authored-by: kevint-cerebras <[email protected]>
Co-authored-by: Matt Rubens <[email protected]>
1 parent bd5807b commit ed45d1c

File tree

2 files changed: +13 / -4 lines


packages/types/src/providers/cerebras.ts (12 additions, 3 deletions)

@@ -3,9 +3,18 @@ import type { ModelInfo } from "../model.js"
 // https://inference-docs.cerebras.ai/api-reference/chat-completions
 export type CerebrasModelId = keyof typeof cerebrasModels
 
-export const cerebrasDefaultModelId: CerebrasModelId = "qwen-3-coder-480b-free"
+export const cerebrasDefaultModelId: CerebrasModelId = "gpt-oss-120b"
 
 export const cerebrasModels = {
+	"zai-glm-4.6": {
+		maxTokens: 16_384,
+		contextWindow: 128000,
+		supportsImages: false,
+		supportsPromptCache: false,
+		inputPrice: 0,
+		outputPrice: 0,
+		description: "Highly intelligent general-purpose model with ~2000 tokens/s",
+	},
 	"qwen-3-coder-480b-free": {
 		maxTokens: 40000,
 		contextWindow: 64000,
@@ -14,7 +23,7 @@ export const cerebrasModels = {
 		inputPrice: 0,
 		outputPrice: 0,
 		description:
-			"SOTA coding model with ~2000 tokens/s ($0 free tier)\n\n• Use this if you don't have a Cerebras subscription\n• 64K context window\n• Rate limits: 150K TPM, 1M TPH/TPD, 10 RPM, 100 RPH/RPD\n\nUpgrade for higher limits: [https://cloud.cerebras.ai/?utm=roocode](https://cloud.cerebras.ai/?utm=roocode)",
+			"[SOON TO BE DEPRECATED] SOTA coding model with ~2000 tokens/s ($0 free tier)\n\n• Use this if you don't have a Cerebras subscription\n• 64K context window\n• Rate limits: 150K TPM, 1M TPH/TPD, 10 RPM, 100 RPH/RPD\n\nUpgrade for higher limits: [https://cloud.cerebras.ai/?utm=roocode](https://cloud.cerebras.ai/?utm=roocode)",
 	},
 	"qwen-3-coder-480b": {
 		maxTokens: 40000,
@@ -24,7 +33,7 @@ export const cerebrasModels = {
 		inputPrice: 0,
 		outputPrice: 0,
 		description:
-			"SOTA coding model with ~2000 tokens/s ($50/$250 paid tiers)\n\n• Use this if you have a Cerebras subscription\n• 131K context window with higher rate limits",
+			"[SOON TO BE DEPRECATED] SOTA coding model with ~2000 tokens/s ($50/$250 paid tiers)\n\n• Use this if you have a Cerebras subscription\n• 131K context window with higher rate limits",
 	},
 	"qwen-3-235b-a22b-instruct-2507": {
 		maxTokens: 64000,
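The diff's one-line default switch is safe because of the `keyof typeof` pattern in this file: the model-id union is derived from the keys of the model map, so the default id is checked at compile time. A minimal standalone sketch of that pattern (the `gpt-oss-120b` entry's values below are placeholders for illustration, not the repo's actual numbers):

```typescript
// Sketch of the pattern used in cerebras.ts: derive the model-id union
// from the model map so the default id cannot drift out of sync with it.
const cerebrasModels = {
	"zai-glm-4.6": {
		maxTokens: 16_384,
		contextWindow: 128000,
		description: "Highly intelligent general-purpose model with ~2000 tokens/s",
	},
	"gpt-oss-120b": {
		maxTokens: 40000, // placeholder value
		contextWindow: 128000, // placeholder value
		description: "placeholder entry for the new default model",
	},
} as const

// Union of the map's keys: "zai-glm-4.6" | "gpt-oss-120b"
type CerebrasModelId = keyof typeof cerebrasModels

// A typo here (e.g. "gpt-oss-120") would be a compile-time error,
// which is what makes the one-line default change in the diff low-risk.
const cerebrasDefaultModelId: CerebrasModelId = "gpt-oss-120b"

console.log(cerebrasDefaultModelId)
console.log(cerebrasModels[cerebrasDefaultModelId].contextWindow)
```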

src/api/providers/__tests__/cerebras.spec.ts (1 addition, 1 deletion)

@@ -56,7 +56,7 @@ describe("CerebrasHandler", () => {
 	it("should fallback to default model when apiModelId is not provided", () => {
 		const handlerWithoutModel = new CerebrasHandler({ cerebrasApiKey: "test" })
 		const { id } = handlerWithoutModel.getModel()
-		expect(id).toBe("qwen-3-coder-480b") // cerebrasDefaultModelId (routed)
+		expect(id).toBe("gpt-oss-120b") // cerebrasDefaultModelId
 	})
 })
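The fallback behavior this test asserts can be sketched as follows (a hypothetical minimal handler, not the real `CerebrasHandler` implementation; only the fallback path from the test is modeled):

```typescript
// Hypothetical sketch of the fallback the updated test exercises:
// when no apiModelId is configured, getModel() returns the package default.
const cerebrasDefaultModelId = "gpt-oss-120b"

interface CerebrasHandlerOptions {
	cerebrasApiKey: string
	apiModelId?: string
}

class CerebrasHandler {
	constructor(private options: CerebrasHandlerOptions) {}

	getModel(): { id: string } {
		// Fall back to the default model id when none is provided.
		return { id: this.options.apiModelId ?? cerebrasDefaultModelId }
	}
}

const handlerWithoutModel = new CerebrasHandler({ cerebrasApiKey: "test" })
console.log(handlerWithoutModel.getModel().id) // prints "gpt-oss-120b"
```

Changing `cerebrasDefaultModelId` in the types package is what made the old expectation (`"qwen-3-coder-480b"`) fail, hence the one-line test update.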

0 commit comments