Add new kimi model to groq #7692
```diff
@@ -11,10 +11,11 @@ export type GroqModelId =
 	| "qwen/qwen3-32b"
 	| "deepseek-r1-distill-llama-70b"
 	| "moonshotai/kimi-k2-instruct"
+	| "moonshotai/kimi-k2-instruct-0905"
 	| "openai/gpt-oss-120b"
 	| "openai/gpt-oss-20b"

-export const groqDefaultModelId: GroqModelId = "llama-3.3-70b-versatile" // Defaulting to Llama3 70B Versatile
+export const groqDefaultModelId: GroqModelId = "moonshotai/kimi-k2-instruct-0905"

 export const groqModels = {
 	// Models based on API response: https://api.groq.com/openai/v1/models

@@ -100,6 +101,16 @@ export const groqModels = {
 		cacheReadsPrice: 0.5, // 50% discount for cached input tokens
 		description: "Moonshot AI Kimi K2 Instruct 1T model, 128K context.",
 	},
+	"moonshotai/kimi-k2-instruct-0905": {
+		maxTokens: 16384,
+		contextWindow: 262144,
+		supportsImages: false,
+		supportsPromptCache: true,
+		inputPrice: 0.6,
```
|
Contributor: Is the significant price reduction intentional? The new model has:

Could you confirm these prices are correct according to Groq's pricing?
```diff
+		outputPrice: 2.5,
+		cacheReadsPrice: 0.15,
```
Contributor: The cache reads price of $0.15 represents a 75% discount from the input price ($0.60), while the existing kimi-k2-instruct model has a 50% discount. Is this aggressive discount structure correct?
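As a sanity check on the discount math this comment describes, the effective cache discount can be computed directly from the two prices in the diff (a minimal sketch; the price values are the ones proposed for `moonshotai/kimi-k2-instruct-0905`):

```typescript
// Prices in USD per million tokens, taken from the diff above.
const inputPrice = 0.6;
const cacheReadsPrice = 0.15;

// Effective discount for cached input tokens relative to the normal input price.
const cacheDiscount = 1 - cacheReadsPrice / inputPrice;

console.log(`${cacheDiscount * 100}% discount`); // 75% discount
```

By comparison, the existing `moonshotai/kimi-k2-instruct` entry prices cache reads at half its input price (a 50% discount), which is what prompts the question above.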
```diff
+		description: "Moonshot AI Kimi K2 Instruct 1T model, 256K context.",
```
Contributor: Minor inconsistency: the description says "256K context" but the
```diff
+	},
 	"openai/gpt-oss-120b": {
 		maxTokens: 32766,
 		contextWindow: 131072,
```
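Pulled out of the diff, the new entry can be sketched in isolation to check its fields for internal consistency (`GroqModelInfo` here is a hypothetical stand-in for the project's model-info type; the field names and values mirror the diff):

```typescript
// Hypothetical local type; field names mirror the entry added in this PR.
interface GroqModelInfo {
	maxTokens: number;
	contextWindow: number;
	supportsImages: boolean;
	supportsPromptCache: boolean;
	inputPrice: number; // USD per million input tokens
	outputPrice: number; // USD per million output tokens
	cacheReadsPrice: number; // USD per million cached input tokens
	description: string;
}

const kimiK2Instruct0905: GroqModelInfo = {
	maxTokens: 16384,
	contextWindow: 262144, // 256 * 1024 tokens, i.e. the "256K context" in the description
	supportsImages: false,
	supportsPromptCache: true,
	inputPrice: 0.6,
	outputPrice: 2.5,
	cacheReadsPrice: 0.15,
	description: "Moonshot AI Kimi K2 Instruct 1T model, 256K context.",
};

console.log(kimiK2Instruct0905.contextWindow / 1024); // 256
```

Note that `262144 / 1024` is exactly 256, so the `contextWindow` value and the "256K context" wording in the description do agree.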
Contributor: Changing the default from `llama-3.3-70b-versatile` to `moonshotai/kimi-k2-instruct-0905` will affect all users who rely on the default model. They may experience different behavior, costs, and capabilities. Could we consider:
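One way to quantify the cost side of the concern above is to estimate a per-request price under the new default (a rough sketch; `estimateCost` is an illustrative helper, not part of the codebase, and the prices are the ones proposed in this diff):

```typescript
// Illustrative pricing shape; values in USD per million tokens, from the diff above.
interface ModelPricing {
	inputPrice: number;
	outputPrice: number;
}

const newDefaultPricing: ModelPricing = {
	inputPrice: 0.6,
	outputPrice: 2.5,
};

// Hypothetical helper: cost of one request given token counts.
function estimateCost(model: ModelPricing, inputTokens: number, outputTokens: number): number {
	return (
		(inputTokens / 1_000_000) * model.inputPrice +
		(outputTokens / 1_000_000) * model.outputPrice
	);
}

// Example: a request with 10K input tokens and 1K output tokens on the new default.
const cost = estimateCost(newDefaultPricing, 10_000, 1_000);
console.log(cost.toFixed(4)); // "0.0085"
```

Users who want to keep the previous behavior can still select `llama-3.3-70b-versatile` explicitly rather than relying on `groqDefaultModelId`.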