Expose the "store" parameter through ModelSettings #357
Changes from 4 commits
@@ -1,6 +1,6 @@
 from __future__ import annotations

-from dataclasses import dataclass
+from dataclasses import dataclass, asdict
 from typing import Literal
@@ -30,7 +30,7 @@ class ModelSettings:
     tool_choice: Literal["auto", "required", "none"] | str | None = None
     """The tool choice to use when calling the model."""

-    parallel_tool_calls: bool | None = False
+    parallel_tool_calls: bool | None = None
     """Whether to use parallel tool calls when calling the model."""

     truncation: Literal["auto", "disabled"] | None = None
@@ -39,18 +39,17 @@ class ModelSettings:
     max_tokens: int | None = None
     """The maximum number of output tokens to generate."""

+    store: bool | None = None
+    """Whether to store the generated model response for later retrieval."""

     def resolve(self, override: ModelSettings | None) -> ModelSettings:
         """Produce a new ModelSettings by overlaying any non-None values from the
         override on top of this instance."""
         if override is None:
             return self
-        return ModelSettings(
-            temperature=override.temperature or self.temperature,
-            top_p=override.top_p or self.top_p,
-            frequency_penalty=override.frequency_penalty or self.frequency_penalty,
-            presence_penalty=override.presence_penalty or self.presence_penalty,
-            tool_choice=override.tool_choice or self.tool_choice,
-            parallel_tool_calls=override.parallel_tool_calls or self.parallel_tool_calls,
-            truncation=override.truncation or self.truncation,
-            max_tokens=override.max_tokens or self.max_tokens,
-        )
+        new_values = {
+            k: getattr(override, k) if getattr(override, k) is not None else getattr(self, k)
+            for k in asdict(self)
+        }
+        return ModelSettings(**new_values)
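The rewritten resolve can be exercised on its own; a minimal sketch using a trimmed-down ModelSettings (only two of the real fields kept, for illustration):

```python
from __future__ import annotations

from dataclasses import dataclass, asdict


@dataclass
class ModelSettings:
    # Trimmed-down stand-in for the real dataclass in this diff.
    temperature: float | None = None
    max_tokens: int | None = None

    def resolve(self, override: ModelSettings | None) -> ModelSettings:
        # Overlay every field the override explicitly sets (is not None)
        # on top of this instance; unset fields fall back to self.
        if override is None:
            return self
        new_values = {
            k: getattr(override, k) if getattr(override, k) is not None else getattr(self, k)
            for k in asdict(self)
        }
        return ModelSettings(**new_values)


base = ModelSettings(temperature=0.7, max_tokens=100)
resolved = base.resolve(ModelSettings(temperature=0.0))
print(resolved.temperature)  # 0.0 -- a falsy-but-set override still wins
print(resolved.max_tokens)   # 100 -- unset fields fall back to the base
```

Because the merge iterates over asdict(self), any field added to the dataclass later (such as store) is picked up automatically, with no per-field line to forget.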
@rm-openai this seemed like a typo, since it meant that if you tried to override a different setting at the Runner.run level, it would also reset this back to False.
Actually, never mind: since `or` was used below, it's fine. But we should not use `or` below, because 0 is falsy in Python, so setting `temperature=0` at the Runner.run level would fail to override the temperature set at the agent level.
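The falsiness pitfall described above can be shown directly (standalone values for illustration, not the library's API):

```python
agent_temperature = 0.7
override_temperature = 0.0  # explicitly requested at the Runner.run level

# The old `or`-based merge: 0.0 is falsy, so the override is silently dropped.
merged_with_or = override_temperature or agent_temperature
print(merged_with_or)  # 0.7 -- wrong, the explicit override was ignored

# The `is not None` check from this PR keeps a falsy-but-set value.
merged_with_is_none = (
    override_temperature if override_temperature is not None else agent_temperature
)
print(merged_with_is_none)  # 0.0 -- correct
```

The same failure mode applies to any field where a falsy value is meaningful, e.g. parallel_tool_calls=False or max_tokens=0.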