This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Conversation

@Jack-Khuu
Contributor

No compile, no warning:

python torchchat.py generate llama3.1
Using device=mps
Loading model...
Time to load model: 10.68 seconds
-----------------------------------------------------------

With compile, with warning:

python torchchat.py generate llama3.1 --compile
Warning: compilation is not available with device MPS, ignoring option to engage compilation
Using device=mps
Loading model...
Time to load model: 11.98 seconds
-----------------------------------------------------------
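The behavior shown in the logs above can be sketched as a simple device guard: if compilation is requested on an MPS device, warn and fall back to the uncompiled model. This is an illustrative sketch only; the function name `maybe_compile` and its parameters are assumptions, not torchchat's actual code.

```python
def maybe_compile(model_forward, device: str, compile_requested: bool):
    """Return a possibly-compiled callable, warning when compilation
    is unsupported on the requested device (hypothetical sketch)."""
    if compile_requested and device == "mps":
        # torch.compile is not supported on MPS, so ignore the flag
        print("Warning: compilation is not available with device MPS, "
              "ignoring option to engage compilation")
        return model_forward
    if compile_requested:
        import torch
        return torch.compile(model_forward)
    # No compilation requested: return the callable unchanged
    return model_forward
```

With this guard, `--compile` on MPS degrades gracefully instead of failing, matching the warning text in the log above.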

@pytorch-bot

pytorch-bot bot commented Oct 1, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1245

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 92f5f56 with merge base 0ddee99:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Oct 1, 2024
@Jack-Khuu Jack-Khuu merged commit 24d00ea into main Oct 1, 2024
52 checks passed
@Jack-Khuu Jack-Khuu deleted the mps-compile-msg branch October 5, 2024 02:37