Conversation

namgyu-youn
Contributor

@namgyu-youn namgyu-youn commented Aug 11, 2025

Summary:
Integrate commonly used single/multi-linear toy models and refactor them across the codebase (src/test/benchmark/tutorial).

Test Plan: CI

Integrates common used toy model and refactor across TorchAO (ao/test/benchmark/tutorial)
- fix: pytorch#2078

Test Plan: CI

pytorch-bot bot commented Aug 11, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2729

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Aug 11, 2025
@jainapurva jainapurva self-requested a review August 11, 2025 16:24
@jainapurva
Contributor

@namgyu-youn thanks for taking up this effort

@jainapurva jainapurva added the topic: not user facing Use this tag if you don't want this PR to show up in release notes label Aug 11, 2025
self.linear1 = torch.nn.Linear(k, n, bias=False).to(dtype)
self.linear1 = torch.nn.Linear(m, n, bias=False)
self.linear2 = torch.nn.Linear(n, k, bias=False)
self.linear3 = torch.nn.Linear(k, 1, bias=False)
Contributor

Please create a separate model for two linear layers. This model for single linear layer is used in benchmarking run on CI.

Contributor Author

@namgyu-youn namgyu-youn Aug 11, 2025

@jainapurva I'd prefer to define ToySingleLinearModel and ToyMultiLinearModel for a future update, as you mentioned, but how about reverting benchmark_aq.py?

Unit tests (e.g., test_quant_api.py, test_awq.py) use single and multiple layers in a mixed manner, and switching them to only multiple layers would be a behavior change. If this makes sense, benchmark_aq.py would be the only case using a single layer. Let me know which option aligns better.

Contributor

ToySingleLinearModel and ToyMultiLinearModel sound good. Please ensure all the tests run smoothly with them.
For benchmark_aq.py, you can add the bias parameter as the last arg in __init__ and set it to False by default. In addition, ToySingleLinearModel is used when running .github/workflows/run_microbenchmarks.yml, which goes through create_model_and_input_data; please ensure that method runs smoothly and is updated for the new toy models.
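If it helps, here is a minimal sketch of the split being discussed (class names come from this thread; the defaults, layer sizes, and the `.to(dtype)` placement are my assumptions, not the PR's final code):

```python
import torch


class ToySingleLinearModel(torch.nn.Module):
    """Hypothetical sketch: one linear layer, kept for benchmark_aq.py and
    the CI microbenchmarks; bias is the last arg and defaults to False."""

    def __init__(self, k, n, dtype=torch.bfloat16, bias=False):
        super().__init__()
        self.linear1 = torch.nn.Linear(k, n, bias=bias).to(dtype)

    def forward(self, x):
        return self.linear1(x)


class ToyMultiLinearModel(torch.nn.Module):
    """Hypothetical sketch: chained linears for the mixed-layer unit tests."""

    def __init__(self, m, n, k, bias=False):
        super().__init__()
        self.linear1 = torch.nn.Linear(m, n, bias=bias)
        self.linear2 = torch.nn.Linear(n, k, bias=bias)
        self.linear3 = torch.nn.Linear(k, 1, bias=bias)

    def forward(self, x):
        return self.linear3(self.linear2(self.linear1(x)))
```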

Contributor Author

Sorry for opening the PR without checking it. I'll follow your suggestion; thanks for the guidance.

@namgyu-youn namgyu-youn marked this pull request as draft August 11, 2025 22:23
@jainapurva
Contributor

@namgyu-youn Please feel free to divide this into multiple PRs if it's too many changes.

@namgyu-youn namgyu-youn marked this pull request as ready for review August 12, 2025 01:04
@namgyu-youn namgyu-youn requested a review from jainapurva August 12, 2025 01:04
Integrate commonly used single/multi-linear toy models and refactor them across the codebase (src/test/benchmark/tutorial).

- fix: pytorch#2078

Test Plan: CI
@jainapurva
Contributor

@namgyu-youn There are some merge conflicts in the branch. Please rebase it onto main. If needed, I can help with that.

@namgyu-youn namgyu-youn marked this pull request as draft August 16, 2025 16:08
@namgyu-youn namgyu-youn marked this pull request as ready for review August 17, 2025 05:52
@namgyu-youn
Contributor Author

@jainapurva Could you take a look at this PR? It passed CI after resolving the merge conflict.

self.linear3 = torch.nn.Linear(k, 64, bias=has_bias)

def example_inputs(
self, batch_size=1, sequence_length=10, dtype=torch.float32, device="cpu"
Contributor

nit: should we move dtype and device to __init__ as well to be consistent?

Contributor Author

If there is a plan to expand the toy model (e.g., to support backward), we could consider moving them to __init__. But I'm fine keeping this as-is, since it's slightly more concise and there is no plan to expand it.

Contributor

What I meant is that the linears in __init__ should take dtype and device as well; it doesn't make sense to define the linear modules with one device/dtype but get example_inputs with another. So it might be easier to just define these in __init__ and not worry about them in example_inputs.

Contributor Author

Oh I missed it. Updating init is much better, thanks.
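For reference, a rough sketch of the __init__-owned dtype/device pattern being discussed (signature, defaults, and the tuple return of example_inputs are assumptions for illustration, not the merged code):

```python
import torch


class ToyTwoLinearModel(torch.nn.Module):
    """Hypothetical sketch: dtype/device live in __init__, so example_inputs
    always matches the modules and callers never repeat them."""

    def __init__(self, input_dim, hidden_dim, output_dim, has_bias=False,
                 dtype=torch.bfloat16, device="cpu"):
        super().__init__()
        self.dtype = dtype
        self.device = device
        self.linear1 = torch.nn.Linear(input_dim, hidden_dim, bias=has_bias,
                                       dtype=dtype, device=device)
        self.linear2 = torch.nn.Linear(hidden_dim, output_dim, bias=has_bias,
                                       dtype=dtype, device=device)

    def example_inputs(self, batch_size=1):
        # Inputs are generated with the model's own dtype/device.
        return (torch.randn(batch_size, self.linear1.in_features,
                            dtype=self.dtype, device=self.device),)

    def forward(self, x):
        return self.linear2(self.linear1(x))
```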



class ToyMultiLinearModel(torch.nn.Module):
def __init__(self, m=512, n=256, k=128, has_bias=True):
Contributor

@jerryzh168 jerryzh168 Aug 21, 2025

I feel m, n, k should actually be required (and we should probably change the m, n, k naming a bit, since it's easily confused with the shape parameters of the linear itself).

Also, do we need 3 linears? Could this be 2 linears, renamed to TwoLinearModel to make it clearer?

Contributor Author

@namgyu-youn namgyu-youn Aug 22, 2025

That means the user would have to pass m, n, k whenever ToyMultiLinearModel is instantiated, right? In my view, (m, n, k) could be renamed to (input_dim, hidden_dim, output_dim). Let me know if there is a better option.

Also, in the old version there were two scripts (test_awq.py and test_smoothquant.py) that checked performance (error range) for AWQ and SmoothQuant. But since they are quite far from a real benchmark, I am fine going with 2 linears for brevity. ToyTwoLinearModel sounds good to me.

@namgyu-youn namgyu-youn requested a review from jerryzh168 August 22, 2025 10:27
x = self.linear1(x)
x = self.linear2(x)
return x
from torchao.testing.model_architectures import ToyTwoLinearModel
Contributor

can you revert the changes for this? I think it's better to have this tutorial self contained

Contributor Author

Yes, keeping them for the tutorial sounds good to me; I will revert it.

@@ -29,19 +29,9 @@ First, let's set up our toy model:

import copy
import torch
from torchao.testing.model_architectures import ToyTwoLinearModel
Contributor

also this

"""Single linear for m * k * n problem size"""

def __init__(
self, m=64, n=32, k=64, has_bias=False, dtype=torch.float, device="cuda"
Contributor

default dtype should probably be torch.bfloat16 I feel

Contributor

this is still not updated

Contributor Author

Oh I missed it, thanks for the reminder.



class ToyTwoLinearModel(torch.nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim, has_bias=False):
Contributor

dtype and device?

Contributor Author

oh I missed it, thanks.

@@ -179,7 +220,7 @@ def create_model_and_input_data(
m, k, n (int): dimensions of the model and input data
"""
if model_type == "linear":
model = ToyLinearModel(k, n, high_precision_dtype).to(device)
model = ToyTwoLinearModel(k, n, high_precision_dtype).to(device)
Contributor

arg seems to be wrong here?

Contributor Author

@namgyu-youn namgyu-youn Aug 26, 2025

I misunderstood its workflow. This if-else and test_model_architecture.py should be updated using the following:

model, input_data = create_model_and_input_data(
    "linear", 10, 64, 32, device=device
)

Therefore, we should not use the toy model here.

@@ -284,7 +265,7 @@ def test_static_quant(target_dtype: torch.dtype, mapping_type: MappingType):
weight_obs = AffineQuantizedMinMaxObserver(
mapping_type,
target_dtype,
granularity=PerAxis(axis=0),
granularity=PerTensor(),
Contributor

why is this changed?

Contributor Author

It was my misunderstanding while fixing the observer's input shape. I will revert it along with the related workflows.

@@ -113,7 +102,7 @@ def test_fp8_linear_variants(
input_tensor = torch.randn(*M, K, dtype=dtype, device="cuda")

# Create a linear layer with bfloat16 dtype
model = ToyLinearModel(K, N).eval().to(dtype).to("cuda")
model = ToyTwoLinearModel(K, 64, N).eval().to(dtype).to("cuda")
Contributor

K, N, K?

Contributor Author

Unlike the old ToyLinearModel, ToyTwoLinearModel uses input_dim (K), hidden_dim (64), output_dim (N); the following is the same case.

Contributor

for the old model, it's in_features = K, out_features = N
mapping to ToyTwoLinearModel should be input_dim = K, hidden_dim = N, output_dim = K, right?

Contributor Author

Yes, this is right, thanks for correcting these. I will fix all these cases.
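To make the mapping concrete, here is a small standalone shape check (hypothetical dimensions; plain torch.nn.Linear instead of the toy classes):

```python
import torch

K, N = 16, 8
# Old ToyLinearModel's layer: in_features=K, out_features=N.
old = torch.nn.Linear(K, N, bias=False)

# Mapping to (input_dim, hidden_dim, output_dim) = (K, N, K):
# the first linear reproduces the old K -> N shape exactly,
# rather than inserting an arbitrary hidden size like 64.
lin1 = torch.nn.Linear(K, N, bias=False)
lin2 = torch.nn.Linear(N, K, bias=False)

x = torch.randn(2, K)
assert old(x).shape == lin1(x).shape == (2, N)
```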

@@ -222,7 +211,7 @@ def test_kernel_preference_numerical_equivalence(self, granularity, sizes):
dtype = torch.bfloat16
input_tensor = torch.randn(*M, K, dtype=dtype, device="cuda")
# Create a linear layer with bfloat16 dtype
model = ToyLinearModel(K, N).eval().to(dtype).to("cuda")
model = ToyTwoLinearModel(K, 64, N).eval().to(dtype).to("cuda")
Contributor

same here?

Contributor Author

Yes all dimensions in ToyTwoLinearModel should be updated, thanks.

@jerryzh168
Contributor

please run the changed tests locally as well

@namgyu-youn namgyu-youn requested a review from jerryzh168 August 26, 2025 15:42
@@ -46,7 +46,7 @@ Here is the serialization and deserialization flow::
state_dict = torch.load(f)

with torch.device("meta"):
m_loaded = ToyLinearModel(1024, 1024, 1024).eval().to(dtype)
m_loaded = ToyTwoLinearModel(1024, 1024, 1024).eval().to(dtype)
Contributor

this has to be reverted as well?

Contributor Author

Yes it was reverted at 994b507.

Contributor Author

@namgyu-youn namgyu-youn Aug 29, 2025

Sorry, I misunderstood it. What you mean is to revert its name as well, right? We can keep ToyLinearModel for the tutorials.

Contributor

yeah, otherwise the tutorial code won't run



model = ToyLinearModel(1024, 1024, 1024).eval().to(torch.bfloat16).to("cuda")
model = ToyTwoLinearModel(1024, 1024, 1024).eval().to(torch.bfloat16).to("cuda")
Contributor

this should be reverted as well I think, since it seems like to be a copy of the quick_start.rst

output_dim,
has_bias=False,
dtype=torch.bfloat16,
device="cpu",
Contributor

default device should also be cuda here I think

if device is None:
device = self.device

if sequence_length is not None:
Contributor

is this used? this can be done in callsite as well right?

Contributor Author

Yes, reverting sequence_length and removing this conditional is much better.

Contributor Author

@namgyu-youn namgyu-youn Aug 29, 2025

Sorry, I missed AWQ. It is used in test/prototype/test_awq.py, so we have to keep sequence_length for AWQ.
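A minimal sketch of the optional-sequence_length behavior being kept for AWQ (written as a standalone helper with assumed defaults, not the actual model method):

```python
import torch


def example_inputs(in_features, batch_size=1, sequence_length=None,
                   dtype=torch.float32, device="cpu"):
    """Hypothetical helper: AWQ calibration in test/prototype/test_awq.py
    feeds 3-D (batch, seq, features) activations, so sequence_length stays
    optional; 2-D inputs are produced when it is None."""
    if sequence_length is not None:
        shape = (batch_size, sequence_length, in_features)
    else:
        shape = (batch_size, in_features)
    return (torch.randn(*shape, dtype=dtype, device=device),)
```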

)

def example_inputs(
self, batch_size=1, sequence_length=None, dtype=None, device=None
Contributor

also not sure if we need dtype and device arg here

Contributor Author

Yes, we only need the args in __init__.

@@ -122,7 +109,7 @@ def test_fp8_linear_variants(
}

# Create a linear layer with bfloat16 dtype
model = ToyLinearModel(K, N).eval().to(dtype).to("cuda")
model = ToyTwoLinearModel(K, K // 2, N).eval().to(dtype).to("cuda")
Contributor

@jerryzh168 jerryzh168 Aug 28, 2025

original seems to be (K, N, K)? according to L54-55 in the original file

same for many of the changes in this file; I think we can match the original

Contributor Author

Yes it is (K, N, K), not (M, N, K); thanks for correcting these.

# TODO: Refactor torchao and tests to use these models
class ToyLinearModel(torch.nn.Module):
def __init__(self, k=64, n=32, dtype=torch.bfloat16):
class ToySingleLinearModel(torch.nn.Module):
Contributor

@jerryzh168 jerryzh168 Aug 28, 2025

I also feel it's important for ToySingleLinearModel and ToyTwoLinearModel to have consistent APIs, right now they are not:

  1. batch_size is passed in example_input() function for ToyTwoLinearModel but not ToySingleLinearModel
  2. dtype, device arg list mismatch for init and example_inputs
  3. default values for dtype, device should also match
  4. what example_inputs returns also mismatch

Contributor Author

@namgyu-youn namgyu-youn Aug 29, 2025

Sounds good to me. But because the single-linear model is tied to benchmarks/benchmark_aq.py, it would be a little harder: updating dtype, device, or anything else in the single-linear model might break CI, require doc updates, etc. Luckily, ToySingleLinearModel uses the same dtype and device as ToyTwoLinearModel, so we can do it without worrying about CI.

Personally, defining ToyLinearModel(layer=1, 2, ..., n) seems more general. How about moving to ToyLinearModel(layer=n) after this PR? But this is low priority, I guess.

Contributor

we can extend to n layers in the future if needed; keeping 2 for now for simplicity would be better, I think

@@ -108,7 +85,7 @@ def test_awq_functionality(self):

loss_awq = (ref_out - awq_out).pow(2).mean().item()
loss_base = (ref_out - baseline_out).pow(2).mean().item()
assert loss_awq < loss_base
assert loss_awq < loss_base * 1.1
Contributor Author

Since this is an edge case (the toy model architecture is quite different), the error range was adjusted to pass CI. For more brevity, we could instead only check that loss_awq is produced (regardless of error range), as discussed in #2728 (comment).

Contributor

I'm not sure we can do that; even with the model changed, the loss should still be smaller, I think, since that's what AWQ is optimizing for.

Contributor Author

Maybe the higher error comes from having fewer layers (3 -> 2). Because this AWQ implementation uses the weight distribution, 2 layers might not be adequate to compute the distribution, making it hard for AWQ to learn.
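For illustration, the relaxed assertion from the diff could be wrapped like this (hypothetical helper; the 10% margin mirrors the `loss_base * 1.1` change above):

```python
def check_awq_loss(loss_awq, loss_base, rel_tol=0.1):
    """Assert AWQ loss is no worse than the baseline plus a relative margin.

    With a 2-layer toy model the AWQ loss may land within noise of the
    baseline, so a strict loss_awq < loss_base check can flake on CI.
    """
    assert loss_awq < loss_base * (1.0 + rel_tol), (
        f"AWQ loss {loss_awq} exceeds baseline {loss_base} "
        f"by more than {rel_tol:.0%}"
    )
```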

@namgyu-youn namgyu-youn requested a review from jerryzh168 August 29, 2025 13:37
if sequence_length is not None:
return [
torch.randn(
1, self.linear1.in_features, dtype=self.dtype, device=self.device
Contributor

this is different from the previous code in AWQ, I think; sequence_length is not used at all here

Contributor

I feel we can just copy-paste this into awq, instead of complicating the implementation of ToyTwoLinearModel here

Contributor Author

OK, I am fine with reverting it for this edge case.
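A sketch of the callsite alternative suggested above (hypothetical helper name; it would live in test/prototype/test_awq.py rather than in the toy model):

```python
import torch


def awq_calibration_input(model_in_features, batch_size=1, sequence_length=10,
                          dtype=torch.float32, device="cpu"):
    """Hypothetical test-local helper: builds the 3-D (batch, seq, features)
    AWQ calibration input directly in the test, keeping sequence_length out
    of ToyTwoLinearModel.example_inputs."""
    return torch.randn(batch_size, sequence_length, model_in_features,
                       dtype=dtype, device=device)
```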

Successfully merging this pull request may close these issues.

Refactor torchao and tests to use model architectures from torchao.testing.model_architectures
3 participants