-
Notifications
You must be signed in to change notification settings - Fork 322
refactor common used toy model #2729
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
Integrates common used toy model and refactor across TorchAO (ao/test/benchmark/tutorial) - fix: pytorch#2078 Test Plan: CI
🔗 Helpful Links🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2729
Note: Links to docs will display an error until the docs builds have been completed. ❗ 1 Active SEVsThere are 1 currently active SEVs. If your PR is affected, please view them below: This comment was automatically generated by Dr. CI and updates every 15 minutes. |
@namgyu-youn thanks for taking up this effort |
self.linear1 = torch.nn.Linear(k, n, bias=False).to(dtype) | ||
self.linear1 = torch.nn.Linear(m, n, bias=False) | ||
self.linear2 = torch.nn.Linear(n, k, bias=False) | ||
self.linear3 = torch.nn.Linear(k, 1, bias=False) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Please create a separate model for two linear layers. This model for single linear layer is used in benchmarking run on CI.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@jainapurva I prefer to define ToySingleLinearModel
and ToyMultiLinearModel
for a future update as you mentioned, but how about reverting benchmark_aq.py
?
Unit tests (e.g., test_quant_api.py
, test_awq.py
) are using single/multiple layers in a mixed manner, and using only multiple layers might be an update. If this makes sense, benchmark_aq.py
would be the only case using single layers. Let me know which one aligns better.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
ToySingleLinearModel
and ToyMultiLinearModel
sounds good. Please ensure all the tests are running smoothly for it.
For benchmark_aq.py
you can add the bias parameter as the last arg in init and set it to False by default. In addition to this, ToySingleLinearModel
is used in running .github/workflows/run_microbenchmarks.yml
. It uses the create_model_and_input_data, please ensure that method is running smoothly, and is updated as per the new toy models.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Sorry for opening the PR without checking it. And I will move into your suggestion; thanks for your leading.
@namgyu-youn Please feel free to divide this into multiple PRs if it's too many changes. |
Integrate commonly used single/multi-linear toy models and refactor them across the codebase (src/test/benchmark/tutorial). - fix: pytorch#2078 Test Plan: CI
@namgyu-youn There are some merge conflicts in the branch. Please rebase it onto main. If needed, I can help with that. |
@jainapurva Could you take a look at this PR? It passed CI after resolving the merge conflict. |
self.linear3 = torch.nn.Linear(k, 64, bias=has_bias) | ||
|
||
def example_inputs( | ||
self, batch_size=1, sequence_length=10, dtype=torch.float32, device="cpu" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
nit: should we move dtype
and device
to __init__
as well to be consistent?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
If there is a plan to expand the toy model like backward, we can consider moving them to __init__
. But I am fine to keep this because it has slightly more brevity, and there is no plan to expand it.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
what I meant is that, linear in init should have dtype and device as well, and it doesn't make sense to define linear modules in one device/dtype but get example_inputs from another dtype/device? so might be easier just to define these in init and not worry about them in example_inputs?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Oh I missed it. Updating init is much better, thanks.
|
||
|
||
class ToyMultiLinearModel(torch.nn.Module): | ||
def __init__(self, m=512, n=256, k=128, has_bias=True): |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I feel m
, n
, k
should be required actually (also probably change m
, n
, k
naming a bit, since it's easily confused with the shapes of linear itself)
also do we need 3 linears? can this be 2 linears and rename to TwoLinearModel
to make it clearer?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
That means the user should input m
, n
, k
whenever ToyMultiLinearModel
is called, right? In my thought, (m, n, k)
can be renamed to (input_dim, hidden_dim, output_dim)
. Let me know if there is better one.
Also in the old version, there were two (test_awq.py
and test_smoothquant.py
) scripts related to performance (error range; AWQ and SmoothQuant). But since they are quiet far away from the real benchmark, I am fine to go with 2-linears for more brevity. ToyTwoLinearModel
sounds good to me.
docs/source/serialization.rst
Outdated
x = self.linear1(x) | ||
x = self.linear2(x) | ||
return x | ||
from torchao.testing.model_architectures import ToyTwoLinearModel |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
can you revert the changes for this? I think it's better to have this tutorial self contained
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes keeping them for tutorial sounds good to me, I will revert it.
docs/source/quick_start.rst
Outdated
@@ -29,19 +29,9 @@ First, let's set up our toy model: | |||
|
|||
import copy | |||
import torch | |||
from torchao.testing.model_architectures import ToyTwoLinearModel |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
also this
"""Single linear for m * k * n problem size""" | ||
|
||
def __init__( | ||
self, m=64, n=32, k=64, has_bias=False, dtype=torch.float, device="cuda" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
default dtype should probably be torch.bfloat16 I feel
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
this is still not updated
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Oh I missed it, thanks for the reminder.
|
||
|
||
class ToyTwoLinearModel(torch.nn.Module): | ||
def __init__(self, input_dim, hidden_dim, output_dim, has_bias=False): |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
dtype and device?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
oh I missed it, thanks.
@@ -179,7 +220,7 @@ def create_model_and_input_data( | |||
m, k, n (int): dimensions of the model and input data | |||
""" | |||
if model_type == "linear": | |||
model = ToyLinearModel(k, n, high_precision_dtype).to(device) | |||
model = ToyTwoLinearModel(k, n, high_precision_dtype).to(device) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
arg seems to be wrong here?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I misunderstood its workflow. This if-else and test_model_architecture.py
should be updated using the following:
model, input_data = create_model_and_input_data(
"linear", 10, 64, 32, device=device
)
Therefore, we should not use toy model here.
@@ -284,7 +265,7 @@ def test_static_quant(target_dtype: torch.dtype, mapping_type: MappingType): | |||
weight_obs = AffineQuantizedMinMaxObserver( | |||
mapping_type, | |||
target_dtype, | |||
granularity=PerAxis(axis=0), | |||
granularity=PerTensor(), |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
why is this changed?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
It was my misundertstooding while fixing obersver's input shape. I will revert it with related workflows.
@@ -113,7 +102,7 @@ def test_fp8_linear_variants( | |||
input_tensor = torch.randn(*M, K, dtype=dtype, device="cuda") | |||
|
|||
# Create a linear layer with bfloat16 dtype | |||
model = ToyLinearModel(K, N).eval().to(dtype).to("cuda") | |||
model = ToyTwoLinearModel(K, 64, N).eval().to(dtype).to("cuda") |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
K, N, K?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Unlike ToyLinearModel (old), ToyTwoLinearModel uses input_dim (K), hidden_dim (64), output_dim (N); following is the same case.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
for old model, it's in_features = K, out_features = N
mapping to ToyTwoLInearModel should be input_dim = K, hidden_dim = N, output_dim = K, right?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes this is right, thanks for correcting these. I will fix all these case.
@@ -222,7 +211,7 @@ def test_kernel_preference_numerical_equivalence(self, granularity, sizes): | |||
dtype = torch.bfloat16 | |||
input_tensor = torch.randn(*M, K, dtype=dtype, device="cuda") | |||
# Create a linear layer with bfloat16 dtype | |||
model = ToyLinearModel(K, N).eval().to(dtype).to("cuda") | |||
model = ToyTwoLinearModel(K, 64, N).eval().to(dtype).to("cuda") |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
same here?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes all dimensions in ToyTwoLinearModel
should be updated, thanks.
please run the changed tests locally as well |
docs/source/serialization.rst
Outdated
@@ -46,7 +46,7 @@ Here is the serialization and deserialization flow:: | |||
state_dict = torch.load(f) | |||
|
|||
with torch.device("meta"): | |||
m_loaded = ToyLinearModel(1024, 1024, 1024).eval().to(dtype) | |||
m_loaded = ToyTwoLinearModel(1024, 1024, 1024).eval().to(dtype) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
this has to be reverted as well?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes it was reverted at 994b507.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Sorry I misunderstood it. What you mean is revert its name also, right? We can keep ToyLinearModel
for the tutorials.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
yeah, otherwise the tutorial code won't run
scripts/quick_start.py
Outdated
|
||
|
||
model = ToyLinearModel(1024, 1024, 1024).eval().to(torch.bfloat16).to("cuda") | ||
model = ToyTwoLinearModel(1024, 1024, 1024).eval().to(torch.bfloat16).to("cuda") |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
this should be reverted as well I think, since it seems like to be a copy of the quick_start.rst
output_dim, | ||
has_bias=False, | ||
dtype=torch.bfloat16, | ||
device="cpu", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
default device should also be cuda here I think
if device is None: | ||
device = self.device | ||
|
||
if sequence_length is not None: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
is this used? this can be done in callsite as well right?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes reverting sequence_length
and removing this conditioner is much better.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
sorry I missed AWQ. It is used at test/prototype/test_awq.py
, so we have to keep sequence_length
for AWQ.
) | ||
|
||
def example_inputs( | ||
self, batch_size=1, sequence_length=None, dtype=None, device=None |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
also not sure if we need dtype
and device
arg here
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes we only need init for ags
@@ -122,7 +109,7 @@ def test_fp8_linear_variants( | |||
} | |||
|
|||
# Create a linear layer with bfloat16 dtype | |||
model = ToyLinearModel(K, N).eval().to(dtype).to("cuda") | |||
model = ToyTwoLinearModel(K, K // 2, N).eval().to(dtype).to("cuda") |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
original seems to be (K, N, K)? according to L54-55 in original file
same for the many of the changes in this file, I think we can match the original
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes it is (K, N, K), not (M, N, K); thanks for correcting these.
# TODO: Refactor torchao and tests to use these models | ||
class ToyLinearModel(torch.nn.Module): | ||
def __init__(self, k=64, n=32, dtype=torch.bfloat16): | ||
class ToySingleLinearModel(torch.nn.Module): |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I also feel it's important for ToySingleLinearModel and ToyTwoLinearModel to have consistent APIs, right now they are not:
- batch_size is passed in example_input() function for
ToyTwoLinearModel
but notToySingleLinearModel
- dtype, device arg list mismatch for init and example_inputs
- default values for dtype, device should also match
- what example_inputs returns also mismatch
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Sound good to me. But because single-linear model is chained with benchmarks/benchmark_aq.py
, it would be a little bit harder. In other words, updating dtype, device, or whatever in single-linear model might break CI, require to update docs, etc. Lcukily, ToySingleLinearModel
used same dtype, device as ToyTwoLinearModel
, so we can do it without considering CI.
Personally, defining ToyLinearModel(layer=1,2, ..., n)
seems more general. How about moving to ToyLinearModel(layer=n)
after this PR? But this is low-priority I guess.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
we can extend to layer n in the future if needed, keep 2 for now for simplicity would be better I think
test/prototype/test_awq.py
Outdated
@@ -108,7 +85,7 @@ def test_awq_functionality(self): | |||
|
|||
loss_awq = (ref_out - awq_out).pow(2).mean().item() | |||
loss_base = (ref_out - baseline_out).pow(2).mean().item() | |||
assert loss_awq < loss_base | |||
assert loss_awq < loss_base * 1.1 |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Since this is edge case (toy model architecture is quiet different), error range is adjusted for passing CI. We can try only checking loss_awq is generated (no matter error range), as discussed in #2728 (comment) for more brevity
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I'm not sure we can do that, even the model changed, the loss should still be smaller I think, since that's waht awq is optimizing for
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Maybe higher error cames from lower (3->2) layers. Because AWQ uses weight distribution in this implementation, 2-layers might not adequate to compute distribution, making AWQ hard to learn.
if device is None: | ||
device = self.device | ||
|
||
if sequence_length is not None: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
sorry I missed AWQ. It is used at test/prototype/test_awq.py
, so we have to keep sequence_length
for AWQ.
@@ -222,7 +211,7 @@ def test_kernel_preference_numerical_equivalence(self, granularity, sizes): | |||
dtype = torch.bfloat16 | |||
input_tensor = torch.randn(*M, K, dtype=dtype, device="cuda") | |||
# Create a linear layer with bfloat16 dtype | |||
model = ToyLinearModel(K, N).eval().to(dtype).to("cuda") | |||
model = ToyTwoLinearModel(K, 64, N).eval().to(dtype).to("cuda") |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes all dimensions in ToyTwoLinearModel
should be updated, thanks.
) | ||
|
||
def example_inputs( | ||
self, batch_size=1, sequence_length=None, dtype=None, device=None |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes we only need init for ags
if sequence_length is not None: | ||
return [ | ||
torch.randn( | ||
1, self.linear1.in_features, dtype=self.dtype, device=self.device |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
this is different from the previous code in AWQ I think, sequence_length is not used at all here
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I feel we can just copy paste this to awq, instead of complicating the implementation of ToyTwoLinearModel
here
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Ok I am fine to revert it for this edge-case.
Summary:
Integrate commonly used single/multi-linear toy models and refactor them across the codebase (src/test/benchmark/tutorial).
Test Plan: CI