
Conversation

@vmpuri (Contributor) commented Sep 13, 2024

Tested using:

python3 torchchat.py generate --checkpoint-path ~/path/to/consolidated.pth --tokenizer-path ~/path/to/tokenizer.model --params-path ~/path/to/flamingo.json --image-prompts ~/path/to/dog.jpg --prompt "Is the image of a cat?"

Chat does not function properly with image prompts.

@pytorch-bot bot commented Sep 13, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1140

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 751bcee with merge base 6fae164:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the "CLA Signed" label (managed by the Meta Open Source bot) on Sep 13, 2024

# ET changed the way it loads custom ops, so they're no longer included in portable_lib and have to be loaded separately.
from executorch.examples.models.llama2.custom_ops import sdpa_with_kv_cache  # noqa
from executorch.extension.pybindings import portable_lib as exec_lib
Contributor:

This introduced an issue previously -- we have to import exec_lib first, then sdpa_with_kv_cache.
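A minimal sketch of the ordering the reviewer describes (the same two modules as in the diff, just swapped):

```python
# Load portable_lib first so its registrations exist before the custom op is imported.
from executorch.extension.pybindings import portable_lib as exec_lib

# ET no longer bundles custom ops into portable_lib; they must be loaded explicitly.
from executorch.examples.models.llama2.custom_ops import sdpa_with_kv_cache  # noqa
```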

Contributor:

We need to bump this back up, or the imports fail.

We should add a comment here so people know in the future.

@Gasoonjia (Contributor) left a comment:

I think we can add flamingo as a new model name in the codebase to make the command easier. That is, instead of the current command:

python3 torchchat.py generate --checkpoint-path ~/path/to/consolidated.pth --tokenizer-path ~/path/to/tokenizer.model --params-path ~/path/to/flamingo.json --image-prompts ~/path/to/dog.jpg --prompt "Is the image of a cat?"

maybe we can have something like:

python3 torchchat.py generate flamingo --checkpoint-path ~/path/to/consolidated.pth --image-prompts ~/path/to/dog.jpg --prompt "Is the image of a cat?"

@vmpuri force-pushed the multimodal_cli_pipeline branch from 987269b to 621cf5d on September 13, 2024 23:09
@vmpuri marked this pull request as ready for review on September 13, 2024 23:09
text_args = model.config.transformer_args.get("text")
if text_args is None:
    # TODO: Will be refactored. Currently, the only model without "text" in transformer_args is Flamingo.
    use_tiktoken = model.config.model_type == ModelType.Flamingo
Contributor:

Just calling out that this might cause issues with other use cases that manually pass in checkpoints/params.

Not something that needs to be addressed in this diff, but we may need to revisit it sooner rather than later.

Contributor:

The reason we need this is that I didn't provide a unified config system on my side: the way we get the model config differs between tune-supported and chat-supported models. I've put this in my BE doc and will solve it whenever we can breathe.
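A hedged sketch of the failure mode being flagged; the `else` branch and the `use_tiktoken` attribute are assumptions for illustration, not code from this PR:

```python
# Hypothetical illustration of the reviewer's concern: a manually supplied
# params file may produce a config with no "text" entry and a non-Flamingo
# model_type, in which case use_tiktoken silently becomes False.
text_args = model.config.transformer_args.get("text")
if text_args is None:
    use_tiktoken = model.config.model_type == ModelType.Flamingo
else:
    # Attribute name is an assumption for this sketch.
    use_tiktoken = getattr(text_args, "use_tiktoken", False)
```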

elif batch is not None:
    logits = model(**batch)[:, -1]
    print("logits", logits)
    return sample(logits, 0, 500)
Contributor:

nit: Not required
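Presumably the nit refers to the debug print; a minimal sketch of the branch with it removed:

```python
elif batch is not None:
    logits = model(**batch)[:, -1]  # keep only the logits for the final position
    return sample(logits, 0, 500)
```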


# torchtune model definition dependencies
from torchtune.data import Message
from torchtune.generation._generation import sample
Contributor:

nit: not required

@vmpuri force-pushed the multimodal_cli_pipeline branch from 621cf5d to b86c77c on September 14, 2024 00:28
import torch._dynamo.config
import torch._inductor.config
import torch.nn as nn

Contributor:

Please add a comment describing why we need to pass on the import error.
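A minimal sketch of the kind of comment being requested, assuming the diff guards an import with try/except ImportError (the guarded module here is illustrative, taken from elsewhere in this PR):

```python
try:
    from torchtune.data import Message  # only needed for multimodal generation
except ImportError:
    # Deliberately ignored: torchtune is an optional dependency here. Text-only
    # generation paths never touch Message, so it's safe to continue without it.
    Message = None
```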

@Jack-Khuu merged commit 26c1d8b into main on Sep 15, 2024
51 checks passed