Update TorchAO README inference section before PTC #3206
Conversation
🔗 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3206
Force-pushed from b0fc829 to 081118f
```python
from torchao.quantization import Int4WeightOnlyConfig, quantize_
quantize_(model, Int4WeightOnlyConfig(group_size=32, version=1))
```
Compared to a `torch.compile`'d bf16 baseline, your quantized model should be significantly smaller and faster on a single A100 GPU:
Removing these since toy-model memory/latency numbers are not meaningful, and to keep our README shorter.
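For readers who still want to try the mechanics of that comparison locally, here is a minimal, hedged sketch; the toy model, shapes, and group size are illustrative only, not the benchmark the README previously cited:

```python
# Minimal sketch of the bf16-vs-int4 comparison discussed above.
# The toy model and input shape are illustrative only.
import copy

import torch
from torchao.quantization import Int4WeightOnlyConfig, quantize_

model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).cuda().to(torch.bfloat16)
x = torch.randn(16, 1024, device="cuda", dtype=torch.bfloat16)

baseline = torch.compile(copy.deepcopy(model))  # bf16 baseline, compiled
quantize_(model, Int4WeightOnlyConfig(group_size=32, version=1))
quantized = torch.compile(model)                # int4 weight-only, compiled

with torch.no_grad():
    y_ref, y_q = baseline(x), quantized(x)      # sanity-check that both paths run
```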
TorchAO is integrated into some of the leading open-source libraries, including:

* HuggingFace transformers with a [builtin inference backend](https://huggingface.co/docs/transformers/main/quantization/torchao) and [low bit optimizers](https://github.com/huggingface/transformers/pull/31865)
reordered a bit to put more commonly used ones earlier
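For context, a hedged sketch of the transformers integration mentioned in the first bullet; the checkpoint name is a placeholder, and `TorchAoConfig` is transformers' wrapper around torchao quantization configs:

```python
# Sketch: quantizing a causal LM on load via transformers' builtin torchao backend.
# The checkpoint name below is a placeholder; substitute any causal LM you have access to.
from transformers import AutoModelForCausalLM, TorchAoConfig

quant_config = TorchAoConfig("int4_weight_only", group_size=128)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",  # placeholder checkpoint
    torch_dtype="auto",
    device_map="auto",
    quantization_config=quant_config,
)
```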
Force-pushed from 081118f to d45a249
Force-pushed from d45a249 to f7762ad
docs/source/quick_start.rst (Outdated)
command instead::

-    pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu121
+    pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
should this be cu128 or cu129?
Sure, I can change it to cu128.
* Integration with [FBGEMM](https://github.com/pytorch/FBGEMM/tree/main/fbgemm_gpu/experimental/gen_ai) for SOTA kernels on server GPUs
* Integration with [ExecuTorch](https://github.com/pytorch/executorch/) for edge device deployment
* Axolotl for [QAT](https://docs.axolotl.ai/docs/qat.html) and [PTQ](https://docs.axolotl.ai/docs/quantize.html)
* TorchTitan for [float8 pre-training](https://github.com/pytorch/torchtitan/blob/main/docs/float8.md)
should we add unsloth too? Or are we still waiting for the blog post to link?
yeah I was waiting for you to add this
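As a rough illustration of the float8 pre-training item in the list above, here is a minimal sketch using torchao's own conversion API; the two-layer toy model is a stand-in, and float8 training assumes a recent server GPU:

```python
# Sketch: swapping nn.Linear modules for float8 training variants with torchao.
# The toy model is illustrative; real use targets transformer blocks, as in TorchTitan.
import torch
from torchao.float8 import convert_to_float8_training

model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.Linear(4096, 4096),
).cuda().to(torch.bfloat16)

convert_to_float8_training(model)  # in-place swap of nn.Linear -> Float8Linear
```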
### PyTorch-Native Training-to-Serving Model Optimization

- Pre-train Llama-3.1-70B **1.5x faster** with float8 training
- Recover **77% of quantized perplexity degradation** on Llama-3.2-3B with QAT
- Quantize Llama-3-8B to int4 for **1.89x faster** inference with **58% less memory**
can you also update this Jerry?
Yeah, I'm not sure about the latest numbers; I was planning for someone more familiar to update this.
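Until those numbers are refreshed, here is a hedged sketch of the QAT flow behind the second bullet, using torchao's int8-dynamic-activation / int4-weight QAT quantizer; the fine-tuning loop is elided, and `model` is assumed to be an existing module:

```python
# Sketch: prepare -> train -> convert QAT flow with torchao.
# prepare() inserts fake-quantization ops for training;
# convert() then swaps in actually-quantized int4 modules.
from torchao.quantization.qat import Int8DynActInt4WeightQATQuantizer

qat_quantizer = Int8DynActInt4WeightQATQuantizer()
model = qat_quantizer.prepare(model)   # insert fake quantization
# ... fine-tune `model` as usual ...
model = qat_quantizer.convert(model)   # materialize quantized weights
```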
Force-pushed from f7762ad to 6fdadba
docs/source/serving.rst (Outdated)
.. code-block:: bash

    pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
@jerryzh168 Should we update this to use pytorch's vllm build: https://download.pytorch.org/whl/nightly/vllm ?
OK sure
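For completeness, a hedged sketch of serving a torchao-quantized checkpoint once vLLM is installed; the model id below is a placeholder for any checkpoint quantized with torchao:

```python
# Sketch: loading a torchao-quantized checkpoint with vLLM's offline API.
# The model id is a placeholder; point it at a real torchao-quantized checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(model="your-org/your-model-int4wo")  # placeholder model id
outputs = llm.generate(
    ["Why is quantization useful?"],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```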
Force-pushed from 6fdadba to 8e4e53f
Force-pushed from 8e4e53f to 30f2dd8
Summary:
att
Test Plan:
visual inspection
Reviewers:
Subscribers:
Tasks:
Tags: