add ALLOWED_OPENAI_PARAMS to openai pipeline #107
base: smart-window
```diff
@@ -26,6 +26,8 @@ let _logLevel = "Error";
  */
 const lazy = {};

+const DEFAULT_ALLOWED_OPENAI_PARAMS = Object.freeze(["tools", "tool_choice"]);
+
 ChromeUtils.defineLazyGetter(lazy, "console", () => {
   return console.createInstance({
     maxLogLevel: _logLevel, // we can't use maxLogLevelPref in workers.
@@ -68,6 +70,9 @@ export class OpenAIPipeline {
     let config = {};
     options.applyToConfig(config);
     config.backend = config.backend || "openai";
+    if (!config.allowedOpenAIParams) {
+      config.allowedOpenAIParams = DEFAULT_ALLOWED_OPENAI_PARAMS;
+    }

     // reapply logLevel if it has changed.
     if (lazy.console.logLevel != config.logLevel) {
@@ -334,6 +339,11 @@ export class OpenAIPipeline {
     });
     const stream = request.streamOptions?.enabled || false;
     const tools = request.tools || [];
+    const allowedOpenAIParams =
+      request.allowed_openai_params ??
+      request.allowedOpenAIParams ??
+      this.#options.allowedOpenAIParams ??
+      DEFAULT_ALLOWED_OPENAI_PARAMS;

     const completionParams = {
       model: modelId,
```
Author
@tarekziade I'm not sure whether the `this.#options` / `config.allowedOpenAIParams` checks are necessary to let future clients set this value. Most of the time we probably don't want to add this payload to every client call anyway. Long term, LiteLLM should ideally support custom model configurations that allow passing through model-specific request parameters. Would love to hear your thoughts on this.
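For reference, a client-side override might look roughly like the sketch below if the option were exposed end to end. This is only a sketch: `createEngine` is the entry point the review comment further down refers to, the extra `response_format` entry is purely illustrative, and whether the `allowedOpenAIParams` key would survive the allowed-keys filter there is exactly the open question.

```js
// Hypothetical client opt-in (sketch only). The option name mirrors the
// config key the pipeline reads; today the allowed-keys filter in
// EngineProcess.sys.mjs may strip it before it reaches applyToConfig().
const engine = await createEngine({
  backend: "openai",
  allowedOpenAIParams: ["tools", "tool_choice", "response_format"],
});
```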
```diff
@@ -342,6 +352,13 @@
       tools,
     };

+    if (
+      Array.isArray(allowedOpenAIParams) &&
+      allowedOpenAIParams.length > 0
+    ) {
+      completionParams.allowed_openai_params = [...allowedOpenAIParams];
+    }
+
     const args = {
       client,
       completionParams,
```
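Taken together, the change means a tools-bearing request produces a completion payload along these lines. A minimal sketch, assuming a LiteLLM-style proxy backend; the model name and the `get_weather` tool are made up for illustration:

```js
// Sketch of the payload the patched pipeline would emit (illustrative values).
const completionParams = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather", // hypothetical tool, for illustration only
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
        },
      },
    },
  ],
  // Added by this patch: tells a LiteLLM-style proxy to pass these params
  // through to the underlying provider instead of dropping them.
  allowed_openai_params: ["tools", "tool_choice"],
};
```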
Reviewer
How do you expect this config to be set? If it comes from `createEngine`, the allowed-keys option filters it out anyway: https://searchfox.org/firefox-main/rev/77b6c9748bdd784eb5e0ee42603c408b34559d7d/toolkit/components/ml/content/EngineProcess.sys.mjs#769-770

At least for the fork, could we make this conditional on the modelId? A quick test shows that adding these params works for qwen, gpt4o, and together, but not for mistral.
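If we went the modelId-conditional route, the guard in the last hunk could be gated like the sketch below. The helper name and prefix list are hypothetical, seeded only by the quick test above (qwen, gpt4o, and together accepting the params; mistral not); `modelId`, `allowedOpenAIParams`, and `completionParams` come from the surrounding pipeline code in the diff.

```js
// Hypothetical gate (not in this patch): only forward allowed_openai_params
// for models observed to accept it. The prefix list is illustrative.
const PASSTHROUGH_MODEL_PREFIXES = ["qwen", "gpt-4o", "together"];

function modelSupportsPassthrough(modelId) {
  const id = String(modelId).toLowerCase();
  return PASSTHROUGH_MODEL_PREFIXES.some(prefix => id.startsWith(prefix));
}

if (
  modelSupportsPassthrough(modelId) &&
  Array.isArray(allowedOpenAIParams) &&
  allowedOpenAIParams.length > 0
) {
  completionParams.allowed_openai_params = [...allowedOpenAIParams];
}
```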