Lack of Streaming Response for Azure o1-mini Model #6177
genaiservice asked this question in Troubleshooting (Unanswered)
Replies: 2 comments, 1 reply

- same for o1-preview and o1
What happened?
When using the o1-mini AzureOpenAI model in LibreChat, the model generates correct responses; however, streaming does not work as expected. With other models, streaming is functional and responses are displayed progressively (token by token). The inconsistency appears to be specific to the o1-mini model, even though the corresponding configuration has been added to the librechat.yml file. The following configuration is being used for the AzureOpenAI o1-mini model in librechat.yml:
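For reference, a minimal sketch of the shape of an azureOpenAI endpoint definition in librechat.yml; the group, instance, key, version, and deployment values below are placeholders, not the actual values from my setup:

```yaml
endpoints:
  azureOpenAI:
    groups:
      - group: "my-azure-group"            # placeholder group name
        apiKey: "${AZURE_API_KEY}"         # resolved from the .env file
        instanceName: "my-azure-instance"  # placeholder Azure resource name
        version: "2024-08-01-preview"      # example API version
        models:
          o1-mini:
            deploymentName: "o1-mini"      # placeholder deployment name
```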
Expected result:
When using the o1-mini model, the response should stream progressively, token by token, in the same way other supported models behave. This would provide a smoother and more interactive user experience.
Version Information
This is the docker-compose setup I'm currently using:
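A minimal sketch of the relevant services, assuming the stock compose layout from the repository where the api container mounts the custom config file; image tags, ports, and mount paths below are illustrative assumptions:

```yaml
services:
  api:
    image: ghcr.io/danny-avila/librechat-dev:latest  # assumed image tag
    ports:
      - "3080:3080"
    depends_on:
      - mongodb
    env_file:
      - .env
    volumes:
      # mount the custom config into the container (path is an assumption)
      - ./librechat.yml:/app/librechat.yaml
  mongodb:
    image: mongo
    volumes:
      - ./data-node:/data/db
```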
Steps to Reproduce
git clone https://github.com/danny-avila/LibreChat.git
cd LibreChat
docker compose up
What browsers are you seeing the problem on?
Chrome
Relevant log output
Screenshots
Grabando-20250304_165452.webm
Code of Conduct