share embedding and output #6800
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6800
Note: Links to docs will display an error until the docs builds have been completed.
❌ 3 New Failures as of commit 08cb3f7 with merge base 4b7a60f. The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D64189995
Summary: Pull Request resolved: pytorch#6800
Differential Revision: D64189995
dce7aa9 to 08cb3f7 (compare)
@chiwwang here is one thing we can do to optimize memory. For the 1B model, the embedding is shared with the last linear (output) weight, so ideally we can save additional memory when this sharing is enabled.
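For context, this is the standard weight-tying pattern in eager PyTorch. Below is a minimal sketch of the idea; the `TiedLM` module and the `vocab_size`/`dim` values are illustrative placeholders, not the actual model definition in this PR. The point is that the token-embedding table and the final vocab projection have the same `[vocab_size, dim]` shape, so a single tensor can back both:

```python
import torch
import torch.nn as nn

class TiedLM(nn.Module):
    """Toy decoder wrapper showing embedding/output weight tying.

    vocab_size/dim are illustrative placeholders. In a Llama-style 1B
    model the token-embedding table and the final linear projection are
    both [vocab_size, dim], so one tensor can serve both roles.
    """

    def __init__(self, vocab_size: int, dim: int):
        super().__init__()
        self.tok_embeddings = nn.Embedding(vocab_size, dim)
        self.output = nn.Linear(dim, vocab_size, bias=False)
        # Tie the output projection to the embedding table: after this
        # assignment both modules hold the *same* Parameter.
        self.output.weight = self.tok_embeddings.weight

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h = self.tok_embeddings(tokens)   # [batch, seq, dim]
        # ... transformer blocks would run here ...
        return self.output(h)             # [batch, seq, vocab_size]

model = TiedLM(vocab_size=32000, dim=2048)
# One storage backs both uses, so a checkpoint stores the matrix once.
assert model.output.weight.data_ptr() == model.tok_embeddings.weight.data_ptr()
```

Note that the saving only materializes end to end if the export/lowering path keeps the two uses aliased to one constant rather than duplicating the tensor per consumer.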
@shewu-quic for awareness. This might also help us reduce memory usage.
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as stale.