This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Conversation

@manuelcandales (Contributor) commented Apr 7, 2025

This enables compile on MPS, modifying the warning message to say that compile on MPS is experimental and not ready for broad use yet.
We also adjust inductor's config to avoid inductor's decomposition of mm when running on the MPS backend.
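For context, a minimal sketch of the gating described above, assuming hypothetical names (this is illustrative, not torchchat's actual API; the exact inductor config option the PR touches is not shown in this thread):

```python
import warnings

def maybe_enable_compile(device: str, compile_requested: bool) -> bool:
    """Hypothetical helper: allow torch.compile on MPS, but warn that it
    is experimental. The real PR also tweaks torch._inductor's config so
    mm is not decomposed on the MPS backend (option name omitted here)."""
    if not compile_requested:
        return False
    if device == "mps":
        warnings.warn(
            "torch.compile on MPS is experimental and not ready for broad use yet"
        )
    return True
```

The point of the sketch is that MPS no longer short-circuits compilation; it only downgrades the hard error to a warning.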

python3 torchchat.py generate stories15M --dtype float16 --prompt "Once upon a time" --num-samples 3 --device mps --compile

@pytorch-bot (bot) commented Apr 7, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1523

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit d491b34 with merge base 1384f7d (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Apr 7, 2025
@Jack-Khuu (Contributor)

We added this for MPS Llama 3.2-Vision. Does this affect your work here?

os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
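A minimal sketch of how this fallback flag is typically applied; it is commonly set before `import torch` so the MPS backend is sure to see it (placement is the assumption here, the variable name itself is from the comment above):

```python
import os

# With the fallback enabled, ops that have no MPS implementation run on
# the CPU instead of raising a NotImplementedError. Set it early, before
# torch initializes, so the backend reliably picks it up.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # imported after the flag is set
```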

@manuelcandales (Contributor, Author)

> PYTORCH_ENABLE_MPS_FALLBACK

I don't think so. I will experiment with it, but it shouldn't have any effect on stories or text-only Llamas.

@manuelcandales manuelcandales merged commit 02d1a59 into main Apr 8, 2025
72 checks passed

4 participants