11 changes: 11 additions & 0 deletions packages/types/src/providers/deepseek.ts
@@ -17,6 +17,17 @@ export const deepSeekModels = {
cacheReadsPrice: 0.07, // $0.07 per million tokens (cache hit) - Updated Sept 5, 2025
description: `DeepSeek-V3 achieves a significant breakthrough in inference speed over previous models. It tops the leaderboard among open-source models and rivals the most advanced closed-source models globally.`,
},
"deepseek-coder": {
maxTokens: 8192, // 8K max output
contextWindow: 128_000,
supportsImages: false,
supportsPromptCache: true,
inputPrice: 0.56, // $0.56 per million tokens (cache miss) - Updated Sept 5, 2025
Contributor Author:

Is this pricing accurate for the coder model? The comment says "Updated Sept 5, 2025" which seems to be a future date, and I'm using the same pricing as deepseek-chat. Should we verify the actual pricing from DeepSeek's platform to ensure billing accuracy?
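
To make the stakes concrete, here is a rough sketch of how per-million-token fields like these typically feed into a cost estimate. This is illustrative only, not Roo Code's actual cost-tracking code, and the token counts in the example are made up:

```typescript
// Illustrative only — not Roo Code's actual cost-tracking code.
// Shows how the per-million-token fields above typically map to a request cost.
interface UsageSample {
	cacheMissInputTokens: number // billed at inputPrice (cache miss)
	cacheHitInputTokens: number // billed at cacheReadsPrice (cache hit)
	outputTokens: number // billed at outputPrice
}

const coderPricing = { inputPrice: 0.56, cacheReadsPrice: 0.07, outputPrice: 1.68 }

function estimateCostUsd(usage: UsageSample, p = coderPricing): number {
	return (
		(usage.cacheMissInputTokens * p.inputPrice +
			usage.cacheHitInputTokens * p.cacheReadsPrice +
			usage.outputTokens * p.outputPrice) /
		1_000_000
	)
}

// Example (made-up numbers): 100k prompt tokens at an 80% cache-hit rate plus 4k output tokens:
// 20_000 * 0.56/1M + 80_000 * 0.07/1M + 4_000 * 1.68/1M ≈ $0.024 per request,
// so a copied-over pricing typo would directly skew every cost estimate shown to users.
```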

outputPrice: 1.68, // $1.68 per million tokens - Updated Sept 5, 2025
cacheWritesPrice: 0.56, // $0.56 per million tokens (cache miss) - Updated Sept 5, 2025
cacheReadsPrice: 0.07, // $0.07 per million tokens (cache hit) - Updated Sept 5, 2025
description: `DeepSeek-Coder-V3 is specifically optimized for code generation, completion, and understanding tasks. It excels at programming challenges across multiple languages.`,
Contributor Author:

The description mentions "DeepSeek-Coder-V3" but the model ID is just "deepseek-coder". Should we verify if V3 is the actual version being accessed via the API? This could be important for users to know which specific version they're using.
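
One quick way to check would be to list what the API actually serves. The sketch below assumes DeepSeek's OpenAI-compatible models endpoint at https://api.deepseek.com and the openai npm client; the model IDs in the comment are illustrative:

```typescript
import OpenAI from "openai"

// List the model IDs exposed by the API (sketch only; assumes the
// OpenAI-compatible /models endpoint is available at DeepSeek's base URL).
const client = new OpenAI({
	baseURL: "https://api.deepseek.com",
	apiKey: process.env.DEEPSEEK_API_KEY,
})

async function listDeepSeekModels(): Promise<void> {
	for await (const model of client.models.list()) {
		console.log(model.id) // e.g. "deepseek-chat", "deepseek-reasoner", ...
	}
}
```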

},
"deepseek-reasoner": {
maxTokens: 65536, // 64K max output for reasoning mode
contextWindow: 128_000,
14 changes: 14 additions & 0 deletions src/api/providers/__tests__/deepseek.spec.ts
@@ -160,6 +160,20 @@ describe("DeepSeekHandler", () => {
expect(model.info.supportsPromptCache).toBe(true) // Should be true now
})

it("should return correct model info for deepseek-coder", () => {
const handlerWithCoder = new DeepSeekHandler({
...mockOptions,
apiModelId: "deepseek-coder",
})
const model = handlerWithCoder.getModel()
expect(model.id).toBe("deepseek-coder")
expect(model.info).toBeDefined()
expect(model.info.maxTokens).toBe(8192) // deepseek-coder has 8K max
expect(model.info.contextWindow).toBe(128_000)
expect(model.info.supportsImages).toBe(false)
expect(model.info.supportsPromptCache).toBe(true)
})
Contributor Author:

Good test coverage for the model configuration! Consider also adding an integration test that verifies the actual API handler behavior with the deepseek-coder model to ensure end-to-end functionality works as expected.
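
A rough sketch of what that follow-up could look like. It assumes the handler exposes a createMessage(systemPrompt, messages) async stream and that this spec file already mocks the underlying chat-completions call as mockCreate; neither is visible in this diff, so treat those names as placeholders:

```typescript
it("should pass the deepseek-coder model id through to the API", async () => {
	const handlerWithCoder = new DeepSeekHandler({
		...mockOptions,
		apiModelId: "deepseek-coder",
	})
	// createMessage and mockCreate are assumed from the rest of the spec file.
	const stream = handlerWithCoder.createMessage("You are a coding assistant.", [
		{ role: "user", content: "Write a hello-world function." },
	])
	for await (const _chunk of stream) {
		// Drain the stream so the request is actually issued.
	}
	// Verify the configured model id reached the mocked API call.
	expect(mockCreate.mock.calls[0][0].model).toBe("deepseek-coder")
})
```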


it("should return correct model info for deepseek-reasoner", () => {
const handlerWithReasoner = new DeepSeekHandler({
...mockOptions,