Torchchat CLI pipeline for Multimodal Models #1140
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1140
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 751bcee with merge base 6fae164.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
        
          
torchchat/model.py (Outdated)
    # ET changed the way it's loading the custom ops so it's not included in portable_lib but has to be loaded separately.
    from executorch.examples.models.llama2.custom_ops import sdpa_with_kv_cache  # no-qa
    from executorch.extension.pybindings import portable_lib as exec_lib
This will reintroduce the issue we hit previously -- we have to import exec_lib first, then sdpa_with_kv_cache.
We need to bump this import back up, or the imports fail.
We should add a comment here so people know about the ordering in the future.
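For reference, a minimal sketch of the ordering being asked for here (module paths copied from the diff; the try/except wrapper is an assumption about how an optional dependency would typically be guarded):

    try:
        # Import the core ExecuTorch pybindings first; the custom ops below rely on
        # these bindings already being loaded.
        from executorch.extension.pybindings import portable_lib as exec_lib

        # ET no longer bundles the custom ops into portable_lib, so sdpa_with_kv_cache
        # has to be imported separately to register the op.
        from executorch.examples.models.llama2.custom_ops import sdpa_with_kv_cache  # noqa: F401
    except ImportError:
        # ExecuTorch is optional; leave exec_lib unset and fall back to eager execution.
        exec_lib = None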
I think we can add flamingo as a new model name in the codebase to make the command easier. That is, instead of the current command:

    python3 torchchat.py generate --checkpoint-path ~/path/to/consolidated.pth --tokenizer-path ~/path/to/tokenizer.model --params-path ~/path/to/flamingo.json --image-prompts ~/path/to/dog.jpg --prompt "Is the image of a cat?"

maybe we could have something like:

    python3 torchchat.py generate flamingo --checkpoint-path ~/path/to/consolidated.pth --image-prompts ~/path/to/dog.jpg --prompt "Is the image of a cat?"

A sketch of what this could look like follows.
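A minimal sketch of the kind of name-to-defaults lookup this suggests, assuming a hypothetical KNOWN_MODEL_PARAMS registry and resolve_model_args helper (neither is the actual torchchat config machinery, and the paths are placeholders):

    # Hypothetical registry: maps a model name to the defaults the longer command
    # passes explicitly. Paths and keys here are illustrative assumptions.
    KNOWN_MODEL_PARAMS = {
        "flamingo": {
            "params_path": "model_params/flamingo.json",
            "tokenizer_path": "tokenizer/tiktoken.model",
        },
    }

    def resolve_model_args(model_name: str, cli_args: dict) -> dict:
        # Start from the registry defaults and let explicit CLI flags override them,
        # so `generate flamingo --checkpoint-path ...` needs no params/tokenizer flags.
        defaults = dict(KNOWN_MODEL_PARAMS.get(model_name, {}))
        defaults.update({k: v for k, v in cli_args.items() if v is not None})
        return defaults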
    text_args = model.config.transformer_args.get("text")
    if text_args is None:
        # TODO: Will be refactored. Currently, the only model without "text" in transformer_args is Flamingo.
        use_tiktoken = model.config.model_type == ModelType.Flamingo
Just calling out that this might cause issues with other use cases that manually pass in checkpoints/params.
Not something that needs to be addressed in this diff, but we may need to revisit sooner rather than later.
The reason we need this is that I haven't provided a unified config system on my side: the way we get the model config differs between tune-supported and chat-supported models.
I've put this in my BE doc and will address it whenever we can breathe.
        
          
torchchat/generate.py (Outdated)
    elif batch is not None:
        logits = model(**batch)[:, -1]
        print("logits", logits)
        return sample(logits, 0, 500)
nit: Not required
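If the nit refers to the leftover debug print, the branch could end up looking roughly like this (a sketch only; `sample` is the torchtune helper imported later in this PR, the helper name is hypothetical, and the hard-coded sampling arguments are carried over from the diff):

    from torchtune.generation._generation import sample

    def decode_from_batch(model, batch):
        # Mirrors the diff above with the print removed: run the model on the
        # multimodal batch and sample from the final position's logits.
        logits = model(**batch)[:, -1]
        return sample(logits, 0, 500)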
        
          
torchchat/generate.py (Outdated)
    # torchtune model definition dependencies
    from torchtune.data import Message
    from torchtune.generation._generation import sample
nit: not required
    import torch._dynamo.config
    import torch._inductor.config
    import torch.nn as nn
Please add a comment to describe why we need to pass on the import error.
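Something along these lines would address the ask (a sketch only; which imports are actually guarded, and the wording of the comment, are assumptions on my part):

    try:
        # torchtune is only needed for the multimodal (Flamingo) path; swallow the
        # ImportError so text-only torchchat keeps working without torchtune installed.
        from torchtune.data import Message
        from torchtune.generation._generation import sample
    except ImportError:
        pass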
Tested using:
Chat does not function properly with image prompts.