o1 model with image failed due to gpt-4* vision model doesn't support max_completion_tokens #4564
Unanswered · cloud-25683 asked this question in Troubleshooting · 1 comment
I would fix this, but then o1 would never be used in the conversation, since it would switch to the vision model every time unless you disable "resend files" under "Parameters". Since it's a one-line fix, I will do it, but just note this behavior.
What happened?
Because the o1 model doesn't support chat with images, my default vision model is gpt-4o. However, due to the implementation of this PR, if the incoming request targets an o1 model, max_tokens is converted to max_completion_tokens; since max_completion_tokens is not supported by the gpt-4o model, the request fails. I understand we are not supposed to use the o1 model with images, but I recommend better error handling/messaging in the UI.
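For context, a minimal sketch of the kind of guard being discussed: convert max_tokens to max_completion_tokens only for o1-family models, so that a fallback vision model such as gpt-4o keeps receiving max_tokens. The function and type names here are assumptions for illustration, not the project's actual code.

```typescript
// Hypothetical sketch (names are assumptions): apply the
// max_tokens -> max_completion_tokens rename only for o1-family models,
// leaving other models (e.g. gpt-4o) untouched.

interface RequestOptions {
  model: string;
  max_tokens?: number;
  max_completion_tokens?: number;
}

function normalizeTokenParam(opts: RequestOptions): RequestOptions {
  // Matches "o1", "o1-preview", "o1-mini", etc., but not "gpt-4o".
  const isO1 = /^o1(-|$)/.test(opts.model);
  if (isO1 && opts.max_tokens !== undefined) {
    const { max_tokens, ...rest } = opts;
    return { ...rest, max_completion_tokens: max_tokens };
  }
  return opts;
}
```

Under this sketch, a request that falls back to gpt-4o would keep max_tokens and avoid the "unsupported parameter" error described above.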
Steps to Reproduce
What browsers are you seeing the problem on?
No response
Relevant log output
No response
Screenshots
No response
Code of Conduct