When an assistant calls a tool, give the option to pass the thread_id so that the tool can add files to the code interpreter #4392
owengo started this conversation in Feature Requests & Suggestions
An issue with custom GPTs is that tools have no way to transfer data files.
Say you create a tool that generates a big .csv file which you then want to process with the code interpreter: the only way to pass it is inside the tool response, which is extremely slow, inefficient, and limited by the size of the context.
A solution for this would be to provide the tool (the function described in the OpenAPI spec) with the thread_id of the conversation, for example via a custom header. This way the remote function would be able to:
- generate and upload the data file itself (e.g. via the Files API),
- attach it to the thread so the code interpreter can access it,
- return only a short message pointing to the file instead of the data itself.
This feature would effectively let actions pass big data files to the assistant's code interpreter efficiently, without depending on the model's context size. It would be unique to LibreChat since, as of today, AFAIK, OpenAI does not support this, even for enterprise customers.
Example:
Imagine the following API with a get_monthly_data action. The action takes a query parameter "date" and requires the header "x-openai-thread-id".
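To make the shape of the action concrete, here is a minimal sketch using FastAPI (an assumption; any framework that exposes an OpenAPI spec would do). The endpoint name, parameter and header match the description above; everything else is illustrative.

```python
# Minimal FastAPI sketch of the hypothetical get_monthly_data action; FastAPI
# generates the corresponding OpenAPI spec with the "date" query parameter and
# the required "x-openai-thread-id" header. All names are illustrative.
from fastapi import FastAPI, Header, Query

app = FastAPI()

@app.get("/get_monthly_data")
def get_monthly_data(
    date: str = Query(..., description="Month to export, e.g. 2024-06"),
    # FastAPI maps x_openai_thread_id to the "x-openai-thread-id" header.
    x_openai_thread_id: str = Header(...),
) -> str:
    # The actual CSV generation and upload are sketched after the next paragraph.
    return f"data for {date} is in /mnt/data/monthly_data_at_{date}.csv"
```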
The implementation would generate the CSV data, upload it as "monthly_data_at_{date}.csv", update the thread passed in x-openai-thread-id, and return a response like: "data for {date} is in /mnt/data/monthly_data_at_{date}.csv".
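A minimal sketch of what that implementation could do with the thread_id, assuming the OpenAI Assistants v2 Python SDK (the file name and CSV contents are placeholders):

```python
# Sketch: generate a CSV, upload it, attach it to the thread's code
# interpreter, and return only a short pointer message.
import csv
import io

from openai import OpenAI

client = OpenAI()

def handle_get_monthly_data(date: str, thread_id: str) -> str:
    # 1. Generate the (potentially huge) CSV outside the model's context.
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["day", "value"])
    writer.writerow([f"{date}-01", 42])  # placeholder rows

    # 2. Upload it so the code interpreter can read it.
    uploaded = client.files.create(
        file=(f"monthly_data_at_{date}.csv", buffer.getvalue().encode()),
        purpose="assistants",
    )

    # 3. Attach it to the thread's code interpreter resources
    #    (a real implementation would merge with any file_ids already attached).
    client.beta.threads.update(
        thread_id,
        tool_resources={"code_interpreter": {"file_ids": [uploaded.id]}},
    )

    # 4. Only a short pointer goes back through the tool response.
    return f"data for {date} is in /mnt/data/monthly_data_at_{date}.csv"
```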
LibreChat would just have to detect that the action requires the header x-openai-thread-id and fill it in when executing the action.
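On the LibreChat side the detection could look roughly like this (LibreChat itself is written in Node.js; Python is used here only to keep the examples in one language, and every name is hypothetical):

```python
# Hypothetical sketch: inspect an action's OpenAPI operation and inject the
# current conversation's thread_id when the special header is declared.
THREAD_ID_HEADER = "x-openai-thread-id"

def build_action_headers(openapi_operation: dict, thread_id: str) -> dict:
    headers = {}
    for param in openapi_operation.get("parameters", []):
        # The action opts in simply by declaring the header in its spec.
        if param.get("in") == "header" and param.get("name") == THREAD_ID_HEADER:
            headers[THREAD_ID_HEADER] = thread_id
    return headers
```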