This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Conversation

@manuelcandales
Contributor

We accumulate tokens in generate mode before calling the callback on them.
This avoids synchronizing the GPU and CPU on each token, improving performance on the MPS backend.
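A minimal sketch of the idea, not torchchat's actual generate loop (the helper name, callback signature, and dummy sampler below are assumptions): generated tokens stay on the accelerator and the callback receives them in groups, so the host pays the GPU-to-CPU synchronization once per group instead of once per token.

```python
import torch

def generate_with_batched_callback(next_token_fn, callback, num_tokens, accumulate_tokens=8):
    # Hypothetical helper illustrating the batching pattern described above.
    pending = []
    for _ in range(num_tokens):
        token = next_token_fn()            # 1-element tensor that stays on the device
        pending.append(token)              # no .item()/.cpu() here, so no sync yet
        if len(pending) >= accumulate_tokens:
            callback(torch.cat(pending))   # one GPU->CPU transfer per group of tokens
            pending = []
    if pending:
        callback(torch.cat(pending))       # flush the remainder

# Illustrative usage with a dummy sampler:
device = "mps" if torch.backends.mps.is_available() else "cpu"
sample = lambda: torch.randint(0, 32000, (1,), device=device)
generate_with_batched_callback(sample, lambda toks: print(toks.tolist()), num_tokens=20)
```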

@pytorch-bot

pytorch-bot bot commented Apr 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1534

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit facb0b7 with merge base 359db61:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Apr 22, 2025
```python
sequential_prefill=sequential_prefill,
max_autotune=args.max_autotune,
is_torchtune_model=args.model and args.model.endswith("tune"),
accumulate_tokens=getattr(args, "accumulate_tokens", 8),
```
Contributor

Unrelated to this PR: I should fix this so we don't have duplicate defaults
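
A minimal illustration of the duplicate-defaults concern, assuming the flag is also declared with a default via argparse (the parser wiring here is a hypothetical sketch, not torchchat's actual CLI code): the value 8 lives in two places, so changing one without the other silently changes behavior.

```python
import argparse

# Hypothetical sketch: the same default appears twice.
parser = argparse.ArgumentParser()
parser.add_argument("--accumulate-tokens", dest="accumulate_tokens", type=int, default=8)
args = parser.parse_args([])

# The call site repeats the default as a getattr fallback. If the argparse default
# changes but this literal does not (or vice versa), code paths that build `args`
# without the parser will quietly use a different value.
accumulate_tokens = getattr(args, "accumulate_tokens", 8)
```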

@manuelcandales merged commit 5f8f35d into main Apr 23, 2025
72 checks passed
@manuelcandales deleted the manuel/batch-callbacks branch April 23, 2025 00:48
