This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Conversation

@vmpuri (Contributor) commented on Sep 16, 2024:

As described, tested with a basic image input.

@vmpuri requested a review from Jack-Khuu on September 16, 2024 at 19:01.
@pytorch-bot (bot) commented on Sep 16, 2024:

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1152

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures as of commit 452d90f with merge base 03c9819.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@vmpuri requested a review from Gasoonjia on September 16, 2024 at 19:01.
@facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Sep 16, 2024.
@vmpuri force-pushed the openai_api_multimodal_ui branch from 974a250 to 7844931 on September 16, 2024 at 19:43.
@vmpuri force-pushed the openai_api_multimodal_ui branch from 7844931 to c25e24f on September 16, 2024 at 20:09.
@vmpuri marked this pull request as ready for review on September 16, 2024 at 20:23.

def main(args):
    app = create_app(args)
    # app.run(host="::", port=8085)
Contributor: remove comment?

Contributor: lmao


import torch

from _torchchat_test_script import flamingo_transform, padded_collate
Contributor: maybe we need to add try and except here?
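A guard along those lines might look like the following sketch; the error message and re-raise behavior here are illustrative, not lines from the PR:

import torch

try:
    from _torchchat_test_script import flamingo_transform, padded_collate
except ImportError as e:
    # Hypothetical guard: fail with a clearer message when the test
    # script isn't importable, instead of a bare ImportError.
    raise ImportError(
        "Multimodal input requires _torchchat_test_script; "
        "make sure it is on PYTHONPATH."
    ) from e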

response = st.write_stream(
    get_streamed_completion(
        client.chat.completions.create(
            model="llama3",
Contributor: remind me why this is set to llama3?

Contributor (author): For now, this could be set to anything - I just set the string so the API doesn't complain.

Since loading a model is very expensive, we just load whatever the server is launched with.

I could add a task for selecting a model from a list since the models endpoint is already implemented.

Contributor: Not in this PR, but we should comment this
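The task the author mentions could, for instance, pick the model from the already-implemented models endpoint instead of hard-coding the string. A minimal sketch, assuming the standard openai-python client pointed at the local server; the session-state message list and the stream flag are assumptions based on typical Streamlit chat code, not lines from this PR:

models = client.models.list()  # standard openai-python call to the /models endpoint
model_id = models.data[0].id   # whatever model the server was launched with

response = st.write_stream(
    get_streamed_completion(
        client.chat.completions.create(
            model=model_id,
            messages=st.session_state.messages,  # assumed chat history state
            stream=True,
        )
    )
)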

f"{self.builder_args.device}_{self.builder_args.precision}"
)

def _openai_messages_to_torchtune(self, messages: List[_AbstractMessage]):
Contributor: nit: return type
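One way to resolve the nit is an explicit annotation. A sketch, assuming the method converts to torchtune Message objects; the actual return type is not visible in this excerpt:

from typing import List

from torchtune.data import Message  # torchtune's message type; assumed here

def _openai_messages_to_torchtune(
    self, messages: List[_AbstractMessage]
) -> List[Message]:  # hypothetical annotation; match the real return value
    ...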

@vmpuri force-pushed the openai_api_multimodal_ui branch from 0b15f75 to 452d90f on September 17, 2024 at 00:58.
@Jack-Khuu merged commit 16b3d64 into main on Sep 17, 2024; 49 of 51 checks passed.

Labels

CLA Signed (managed by the Meta Open Source bot)

5 participants