UI and API implementation for base64 encoded image input #1152
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1152
Note: Links to docs will display an error until the doc builds have completed.
❌ 2 New Failures as of commit 452d90f with merge base 03c9819. The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 974a250 to 7844931 (compare)
Force-pushed from 7844931 to c25e24f (compare)
torchchat/usages/server.py (Outdated)

```python
def main(args):
    app = create_app(args)
    # app.run(host="::", port=8085)
```
remove comment?
lmao
torchchat/usages/openai_api.py (Outdated)

```python
import torch

from _torchchat_test_script import flamingo_transform, padded_collate
```
Maybe we need to add a try/except here?
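For reference, a guarded import along the lines suggested above might look like this. The module and symbol names come from the diff; the helper name and error message are illustrative, not the actual torchchat behavior:

```python
# Sketch of a guarded import for the optional test-script dependency,
# so the server can still start when _torchchat_test_script is absent.
try:
    from _torchchat_test_script import flamingo_transform, padded_collate
except ImportError:
    flamingo_transform = None
    padded_collate = None


def require_flamingo_utils():
    """Fail with a clear message only when the utilities are actually needed."""
    if flamingo_transform is None or padded_collate is None:
        raise RuntimeError(
            "_torchchat_test_script is not importable; "
            "image input support requires it."  # illustrative message
        )
    return flamingo_transform, padded_collate
```

This defers the failure from import time to the first image request, which is one common way to handle optional dependencies.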
```python
response = st.write_stream(
    get_streamed_completion(
        client.chat.completions.create(
            model="llama3",
```
remind me why this is set to llama3?
For now, this could be set to anything - I just set the string so the API doesn't complain.
Since loading a model is very expensive, we just load whatever the server is launched with.
I could add a task for selecting a model from a list since the models endpoint is already implemented.
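If that picker task were done, the UI could populate its choices from the models endpoint instead of hard-coding a string. A hypothetical sketch of the client-side helper, assuming the endpoint returns the OpenAI-style list-models shape (the example model ids are made up):

```python
# Hypothetical sketch: derive the model picker's choices from an
# OpenAI-style "list models" response instead of hard-coding "llama3".
def model_ids(models_response: dict) -> list:
    """Extract selectable model ids from an OpenAI-style list response."""
    return [m["id"] for m in models_response.get("data", [])]


def pick_model(models_response: dict, default: str = "llama3") -> str:
    """Return the first advertised model, falling back to the placeholder."""
    ids = model_ids(models_response)
    return ids[0] if ids else default
```

The `default` fallback keeps today's behavior (an arbitrary string the API accepts) when the endpoint returns nothing.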
Not in this PR, but we should comment this
torchchat/usages/openai_api.py (Outdated)

```python
    f"{self.builder_args.device}_{self.builder_args.precision}"
)


def _openai_messages_to_torchtune(self, messages: List[_AbstractMessage]):
```
nit: return type
Force-pushed from 0b15f75 to 452d90f (compare)
As described, tested with a basic image input.
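For anyone reproducing the test, an OpenAI-style chat request carries the base64-encoded image as a `data:` URI inside an `image_url` content part. A minimal sketch of building that payload (no request is sent; the `"llama3"` model string mirrors the placeholder used in the browser UI):

```python
import base64


def image_message(image_bytes: bytes, prompt: str) -> dict:
    """Build an OpenAI-style user message carrying a base64-encoded image."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            },
        ],
    }


# The payload would then be POSTed to the server's chat completions endpoint.
payload = {
    "model": "llama3",  # placeholder; the server uses whatever it was launched with
    "messages": [image_message(b"\x89PNG...", "What is in this image?")],
}
```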