Commit 6a4653a
fix: ensure OpenAI-compatible providers use custom max tokens setting
The issue was that OpenAI-compatible providers (Chutes, Groq) read model.info.maxTokens directly instead of calling getModelMaxOutputTokens(), so the user's custom modelMaxTokens setting was ignored.
Fixed by:
- Updating BaseOpenAiCompatibleProvider to use getModelMaxOutputTokens()
- Updating ChutesHandler's getCompletionParams to use getModelMaxOutputTokens()
This ensures that when users set a custom max output tokens value in the settings, it will be properly applied to API requests for all OpenAI-compatible providers.

1 parent 83522e2 · commit 6a4653a
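The shape of the fix can be sketched as follows. This is a minimal illustration, not the repository's actual code: the interfaces and the signature of getModelMaxOutputTokens() here are simplified assumptions based only on the commit message.

```typescript
// Hypothetical sketch of the fix described above. Before the change,
// providers read model.info.maxTokens directly, which ignored the
// user's modelMaxTokens override; after the change, they resolve the
// value through a shared helper. All names below are illustrative.

interface ModelInfo {
  maxTokens?: number // model's default max output tokens
}

interface ProviderSettings {
  modelMaxTokens?: number // user's custom override from the settings UI
}

function getModelMaxOutputTokens(
  info: ModelInfo,
  settings: ProviderSettings,
): number | undefined {
  // Prefer the user's explicit override; fall back to the model default.
  return settings.modelMaxTokens ?? info.maxTokens
}

const info: ModelInfo = { maxTokens: 8192 }

// User override of 4096 now wins over the model default of 8192.
console.log(getModelMaxOutputTokens(info, { modelMaxTokens: 4096 })) // 4096

// With no override, the model default is still used.
console.log(getModelMaxOutputTokens(info, {})) // 8192
```

A provider's request builder would then pass the helper's result as the request's max-tokens parameter instead of reading model.info.maxTokens inline, which is what the two file changes below accomplish for BaseOpenAiCompatibleProvider and ChutesHandler.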
File tree
2 files changed in src/api/providers: +17 −11 lines
[Diff table for the first changed file (per the commit message, the BaseOpenAiCompatibleProvider update): only line-number and +/- markers were captured; the code content is not recoverable.]
[Diff table for the second changed file (per the commit message, the ChutesHandler getCompletionParams update): only line-number and +/- markers were captured; the code content is not recoverable.]