Replies: 2 comments
-
From what I could observe, if you want `PARALLEL_TOOLS`, do:

```python
# Parallel Tools
client = instructor.from_openai(
    openai.OpenAI(),
    mode=instructor.Mode.PARALLEL_TOOLS,
)
function_calls = client.chat.completions.create(
    model="gpt-3.5-turbo",
    # response_model=Union[UserExtract, MovieExtract],
    # response_model=Iterable[UserExtract | MovieExtract],
    response_model=Iterable[Union[UserExtract, MovieExtract]],
    # response_model=Union[Iterable[UserExtract], Iterable[MovieExtract]],
    messages=[
        {"role": "user", "content": "Extract jason is 25 years old"},
    ],
    # tool_choice="auto"
)
```
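For context, the snippets in this thread assume `UserExtract` and `MovieExtract` are Pydantic models roughly like these (my own sketch; only the field names and values come from the outputs shown in this thread):

```python
from pydantic import BaseModel


class UserExtract(BaseModel):
    """A person extracted from free text."""
    name: str
    age: int


class MovieExtract(BaseModel):
    """A movie extracted from free text."""
    movie: str
    year: int
```

With these in scope, `Iterable[Union[UserExtract, MovieExtract]]` tells Instructor the model may emit any mix of the two tools.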
And you get:

```python
for fc in function_calls:
    print(fc)
# name='Jason' age=25
```

If you want to access the full response, then do:
```python
# Normal Tools
client = instructor.from_openai(
    openai.OpenAI(),
    mode=instructor.Mode.TOOLS,
)

class Output(BaseModel):
    results: Union[UserExtract, MovieExtract]

user, completion = client.chat.completions.create_with_completion(
    model="gpt-3.5-turbo",
    response_model=Output,
    messages=[
        {"role": "user", "content": "Extract jason is 25 years old"},
    ],
    # tool_choice="auto"
)
```

Note: if you instead try to read the raw response off the parsed model (e.g. `user._raw_response`), you get `AttributeError: 'UserExtract' object has no attribute '_raw_response'`; use `create_with_completion`.

```python
print(user)
print(completion)
```
```
results=UserExtract(name='Jason', age=25)
ChatCompletion(
│ id='chatcmpl-9ccOS3GN12c54Fn7aLfXPZoqZSgmt',
│ choices=[
│ │ Choice(
│ │ │ finish_reason='stop',
│ │ │ index=0,
│ │ │ logprobs=None,
│ │ │ message=ChatCompletionMessage(
│ │ │ │ content=None,
│ │ │ │ role='assistant',
│ │ │ │ function_call=None,
│ │ │ │ tool_calls=[
│ │ │ │ │ ChatCompletionMessageToolCall(
│ │ │ │ │ │ id='call_PPuKNfZtTlfsyzLAJgU',
│ │ │ │ │ │ function=Function(
│ │ │ │ │ │ │ arguments='{"results":{"name":"Jason","age":25}}',
│ │ │ │ │ │ │ name='Output'
│ │ │ │ │ │ ),
│ │ │ │ │ │ type='function'
│ │ │ │ │ )
│ │ │ │ ]
│ │ │ )
│ │ )
│ ],
│ created=17189076,
│ model='gpt-3.5-turbo-0125',
│ object='chat.completion',
│ service_tier=None,
│ system_fingerprint=None,
│ usage=CompletionUsage(
│ │ completion_tokens=11,
│ │ prompt_tokens=113,
│ │ total_tokens=124
│ )
)
```

+Info -> https://python.useinstructor.com/concepts/parallel/

By the way, that page makes a claim about supported models that is no longer true (I used 3.5); the reason is that OpenAI updated the models' capabilities. +Info: https://platform.openai.com/docs/guides/function-calling/supported-models

If there is a mismatch between the mode and the `response_model`, you get:

```
TypeError: Model should be with Iterable instead if <class 'instructor.dsl.simple_type.Response'>
```

If someone finds that anything above is not 100% correct, please let me know; I'd be glad to update it (I am learning too). Your other question (managing messages) deserves a new post, I think.
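As a side note, the `arguments` field in the raw completion above is a plain JSON string, so it can be decoded without Instructor at all; a minimal sketch using the exact string from that completion:

```python
import json

# The arguments string exactly as printed in the completion above
arguments = '{"results":{"name":"Jason","age":25}}'

# json.loads turns it back into a plain dict
payload = json.loads(arguments)
print(payload["results"]["name"])  # Jason
print(payload["results"]["age"])   # 25
```

This is handy when you only have logged raw completions and no longer have the Pydantic models around.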
-
If you want to merge both methods (multiple calls + access to the full response), it can be done:

```python
# If you want to access the full response,
# and be able to get multiple calls to Tools,
# then do:

# Normal Tools (to be able to get the full response)
client = instructor.from_openai(
    openai.OpenAI(),
    mode=instructor.Mode.TOOLS,
)

# Create a class that holds an Iterable of classes (to allow multiple tools)
class Output(BaseModel):
    # results: Union[UserExtract, MovieExtract]  # before: just one or the other
    results: Iterable[Union[UserExtract, MovieExtract]]  # now: any number

# Call the API
output, assistant_message_response = client.chat.completions.create_with_completion(
    model="gpt-3.5-turbo",
    response_model=Output,
    messages=[
        {"role": "user", "content": "Jason is a guy of 25 years old. Today he is watching `Titanic (2002)`"},
    ],
)

# Display
for call in output.results:
    print(call)
print(assistant_message_response)
```

And you get:

```
UserExtract(name='Jason', age=25)
MovieExtract(movie='Titanic', year=2002)
ChatCompletion(
│ id='chatcmpl-9cdHdhyF9PYWmsoZwIKODwfa5iLVL',
│ choices=[
│ │ Choice(
│ │ │ finish_reason='stop',
│ │ │ index=0,
│ │ │ logprobs=None,
│ │ │ message=ChatCompletionMessage(
│ │ │ │ content=None,
│ │ │ │ role='assistant',
│ │ │ │ function_call=None,
│ │ │ │ tool_calls=[
│ │ │ │ │ ChatCompletionMessageToolCall(
│ │ │ │ │ │ id='call_HZrHepSverpxFM4WPel',
│ │ │ │ │ │ function=Function(
│ │ │ │ │ │ │ arguments='{"results":[{"name":"Jason","age":25},{"movie":"Titanic","year":2002}]}',
│ │ │ │ │ │ │ name='Output'
│ │ │ │ │ │ ),
│ │ │ │ │ │ type='function'
│ │ │ │ │ )
│ │ │ │ ]
│ │ │ )
│ │ )
│ ],
│ created=17184497,
│ model='gpt-3.5-turbo-0125',
│ object='chat.completion',
│ service_tier=None,
│ system_fingerprint=None,
│ usage=CompletionUsage(
│ │ completion_tokens=22,
│ │ prompt_tokens=129,
│ │ total_tokens=151
│ )
)
```
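You can sanity-check that nested payload offline, without calling the API, by validating the `arguments` JSON against the same `Output` shape. A sketch, assuming the `UserExtract`/`MovieExtract` models used in this thread, with `List` instead of `Iterable` so the validated results stay indexable:

```python
from typing import List, Union

from pydantic import BaseModel


class UserExtract(BaseModel):
    name: str
    age: int


class MovieExtract(BaseModel):
    movie: str
    year: int


class Output(BaseModel):
    # List instead of Iterable: validated results stay concrete and indexable
    results: List[Union[UserExtract, MovieExtract]]


# Arguments string copied from the completion above
raw = '{"results":[{"name":"Jason","age":25},{"movie":"Titanic","year":2002}]}'
output = Output.model_validate_json(raw)
print(output.results[0])  # name='Jason' age=25
print(output.results[1])  # movie='Titanic' year=2002
```

Pydantic's union validation picks the right class for each element by its fields, which is exactly what makes the `Iterable[Union[...]]` trick work in the first place.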
-
Does Instructor have any utils for managing messages? Specifically, I'm asking how to format an assistant message that invoked a tool.
I'd like to store assistant messages that used tools in the conversation history; it should end up something like this.
I know this data can be found in the raw response, but when allowing for multiple tools, this doesn't work.
More generally, I'm wondering how Instructor users work with chat message history.
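For reference, the assistant message OpenAI expects in the history is plain dict data, so it can be rebuilt by hand from a raw completion. This is the standard Chat Completions message shape, not an Instructor util; the id and argument values below are illustrative stand-ins:

```python
import json

# Hypothetical stand-ins for values you would read from a real completion:
# completion.choices[0].message.tool_calls[0].id and .function.arguments
tool_call_id = "call_example123"
arguments = json.dumps({"results": {"name": "Jason", "age": 25}})

# Assistant message recording the tool invocation
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": tool_call_id,
            "type": "function",
            "function": {"name": "Output", "arguments": arguments},
        }
    ],
}

# Each tool_call id must be answered by a matching "tool" message
tool_message = {
    "role": "tool",
    "tool_call_id": tool_call_id,
    "content": "acknowledged",
}

history = [assistant_message, tool_message]
```

With multiple tools you would append one entry per element of `tool_calls`, each answered by its own `tool` message carrying the same id.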