Replies: 4 comments 6 replies
- The Assistants API is deprecated in favor of the Responses API, so nothing will be done here; the Responses API should be used. Or are you talking about something else?
- I can't find a way to limit this in the Responses API, at least... max_num_result is not working for me there.
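  (For reference, the documented option on the Responses API `file_search` tool is the plural `max_num_results`, so the spelling may be part of the problem. A minimal sketch, with a placeholder vector store ID:)

  ```python
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # Per the OpenAI docs, the file_search tool in the Responses API
  # accepts max_num_results (note the plural "results").
  # "vs_abc123" is a placeholder vector store ID.
  response = client.responses.create(
      model="gpt-4o-mini",
      input="What does the uploaded manual say about refunds?",
      tools=[{
          "type": "file_search",
          "vector_store_ids": ["vs_abc123"],
          "max_num_results": 5,  # cap the number of retrieved chunks
      }],
  )
  print(response.output_text)
  ```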
- https://platform.openai.com/docs/guides/conversation-state Maybe this will help?
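  (The core pattern from that guide is chaining turns with `previous_response_id`; a minimal sketch:)

  ```python
  from openai import OpenAI

  client = OpenAI()

  # First turn.
  first = client.responses.create(
      model="gpt-4o-mini",
      input="My name is Ada.",
  )

  # Second turn: previous_response_id lets the API carry the
  # conversation state instead of resending the full history.
  second = client.responses.create(
      model="gpt-4o-mini",
      previous_response_id=first.id,
      input="What is my name?",
  )
  print(second.output_text)
  ```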
- I have nothing to do with all this stuff :)
- Hi @remdex,
I am writing to request an update for the assistant_stream logic, which hasn't been updated in a while.
1. The Economic Context (Why this is important now)
OpenAI has a "Data Sharing" program where many accounts (even Tier 1) receive "Complimentary Daily Tokens".
For example, we now get up to 2.5 million tokens per day for free on models like gpt-4o-mini. This makes the Assistants API incredibly viable for production.
The issue with using the standard Responses API (Chat Completions) for similar tasks is that Vector Store / File Search usage there is paid and billed separately — it is explicitly NOT included in the "Complimentary Daily Tokens".
Therefore, to maximize the benefit of the free tier (which covers the model inference), the Assistants API is the most efficient choice.
However, the current implementation in assistant_stream is lagging behind the capabilities needed to fully utilize this.

2. The Request: Multimodal Support
Currently, the assistant_stream implementation does not seem to handle Images or Voice messages, whereas the standard "Bot" implementation in LHC already supports them. Could you please port the logic from the standard Bot integration into the assistant_stream? A rough sketch of the target behavior follows, then the two features in detail.
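(This is a Python sketch rather than the actual LHC PHP code; the assistant ID and image URL are placeholders, and image input assumes a vision-capable model on the assistant. The v2 Assistants API accepts image content parts on thread messages, and runs can be streamed:)

```python
from openai import OpenAI

client = OpenAI()

thread = client.beta.threads.create()

# Assistants API v2 accepts image content parts alongside text,
# so a ported assistant_stream could attach visitor images directly.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content=[
        {"type": "text", "text": "What is shown in this screenshot?"},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/screenshot.png"}},
    ],
)

# Stream the run so tokens can be relayed to the chat widget as they arrive.
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id="asst_abc123",  # placeholder assistant ID
) as stream:
    for text in stream.text_deltas:
        print(text, end="", flush=True)
```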
Images (Vision):
In the standard bot (as per docs here), images are passed via file_body_embed / base64.
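(For illustration, roughly what that base64 flow looks like against Chat Completions; this is a sketch, not the actual file_body_embed code, and the file name is a placeholder:)

```python
import base64
from openai import OpenAI

client = OpenAI()

# Read the uploaded image and embed it as a base64 data URL,
# which is roughly what the standard bot's file_body_embed achieves.
with open("upload.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```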
Voice (Audio):
In the standard bot (as per docs here), audio is sent to Whisper for transcription, and the text is then sent to the LLM.
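(A sketch of that two-step pipeline, with a placeholder file name:)

```python
from openai import OpenAI

client = OpenAI()

# Step 1: transcribe the visitor's voice message with Whisper.
with open("voice_message.ogg", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=f,
    )

# Step 2: send the transcribed text to the LLM like any other message.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": transcript.text}],
)
print(response.choices[0].message.content)
```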
Since the gpt-4o-mini model is now free for many users (up to 2.5M tokens/day), updating assistant_stream to support Images and Voice would make it a powerful, all-in-one solution for Live Helper Chat users.

Thank you for considering this!