Conversation

@apullin
Contributor

@apullin apullin commented Jan 9, 2026

Summary:

Problem

When using `nn.LayerNorm` in models that go through the ARM backend's quantization flow, the `DecomposeLayerNormPass` fails with:

    ValueError: DecomposeLayerNormPass: too many values to unpack (expected 2)

This happens because `torch.ops.aten.layer_norm.default` has 6 arguments:

    layer_norm(input, normalized_shape, weight, bias, eps, cudnn_enable)

`DecomposeLayerNormPass`, however, only handled up to 5 arguments (the `native_layer_norm` signature).

The error occurs during `transform_for_annotation_pipeline` in the ARM quantizer, which runs before the edge transformation, while the op is still `aten.layer_norm.default`.
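For context, the failing input can be reproduced with nothing but `torch.export` (an illustrative sketch, not code from this PR; the module and shapes are made up):

```
import torch

# Illustrative repro sketch: exporting a module containing nn.LayerNorm
# yields aten.layer_norm.default nodes at this (pre-edge) stage, carrying
# the six-argument schema shown above.
class WithLayerNorm(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.ln = torch.nn.LayerNorm(8)

    def forward(self, x):
        return self.ln(x)

ep = torch.export.export(WithLayerNorm(), (torch.randn(2, 8),))
for node in ep.graph.nodes:
    if node.target == torch.ops.aten.layer_norm.default:
        # Expected to show six args, the last being cudnn_enable.
        print(node.target, node.args)
```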

Solution

Add a `case 6:` arm to the `match len(args)` block in `DecomposeLayerNormPass.call()` to handle the 6th argument (`cudnn_enable`). This argument is ignored during decomposition, since it only gates a cuDNN fast path on GPU and does not affect the decomposed computation.
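In sketch form (a simplified rendering of the arg unpacking, not the verbatim ExecuTorch source; the helper name is hypothetical and the decomposition body is omitted):

```
def _unpack_layer_norm_args(args):
    """Simplified sketch of the arg handling in DecomposeLayerNormPass.call();
    the real pass also handles shorter signatures and then builds the
    decomposed subgraph, omitted here."""
    match len(args):
        case 5:
            # native_layer_norm(input, normalized_shape, weight, bias, eps)
            x, normalized_shape, weight, bias, eps = args
        case 6:
            # layer_norm(input, normalized_shape, weight, bias, eps, cudnn_enable)
            # New in this PR: unpack and drop cudnn_enable; it only gates a
            # cuDNN fast path on GPU and does not change the decomposition.
            x, normalized_shape, weight, bias, eps, _cudnn_enable = args
        case _:
            raise ValueError(
                f"DecomposeLayerNormPass: unexpected number of args: {len(args)}"
            )
    return x, normalized_shape, weight, bias, eps
```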

Testing

Added a new test file `test_layernorm_modai_compat.py` (sketched after this list) that:

  1. Creates a simple Linear -> LayerNorm -> Linear model
  2. Exports it via `torch.export`
  3. Runs it through `transform_for_annotation_pipeline` (the exact path that was failing)
  4. Verifies that LayerNorm is decomposed correctly through the full TOSA pipelines
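A sketch of the test's shape (the model follows steps 1-2 above; dims are made up, and the exact import path and invocation of `transform_for_annotation_pipeline` are left as comments rather than guessed):

```
import torch

# Step 1: Linear -> LayerNorm -> Linear.
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(16, 8)
        self.ln = torch.nn.LayerNorm(8)
        self.fc2 = torch.nn.Linear(8, 4)

    def forward(self, x):
        return self.fc2(self.ln(self.fc1(x)))

# Step 2: export, which leaves aten.layer_norm.default in the graph.
ep = torch.export.export(Model(), (torch.randn(2, 16),))

# Steps 3-4 (not shown as code): the test feeds the exported graph through
# the ARM quantizer's transform_for_annotation_pipeline (the path that was
# failing) and then asserts that no aten.layer_norm.default nodes survive,
# e.g. that node.target != torch.ops.aten.layer_norm.default holds for
# every node in the transformed graph.
```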

@apullin apullin requested a review from digantdesai as a code owner January 9, 2026 17:44
@pytorch-bot

pytorch-bot bot commented Jan 9, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16516

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Cancelled Job, 1 Unrelated Failure

As of commit 49be1d5 with merge base 10f72fc:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jan 9, 2026
@meta-codesync

meta-codesync bot commented Jan 9, 2026

@apullin has exported this pull request. If you are a Meta employee, you can view the originating Diff in D90395786.

@github-actions

github-actions bot commented Jan 9, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

apullin pushed a commit to apullin/executorch that referenced this pull request Jan 9, 2026
Summary:

## Problem

When using `nn.LayerNorm` in models that go through the ARM backend's quantization flow, the `DecomposeLayerNormPass` fails with:

```
ValueError: DecomposeLayerNormPass: too many values to unpack (expected 2)
```

This happens because `torch.ops.aten.layer_norm.default` has **6 arguments**:
```
layer_norm(input, normalized_shape, weight, bias, eps, cudnn_enable)
```

But `DecomposeLayerNormPass` only handled up to 5 arguments (for `native_layer_norm`).

The error occurs during `transform_for_annotation_pipeline` in the ARM quantizer, which runs before edge transformation when the op is still `aten.layer_norm.default`.

## Solution

Add `case 6:` to the `match len(args)` block in `DecomposeLayerNormPass.call()` to handle the 6th argument (`cudnn_enable`). This argument is simply ignored during decomposition since it's only relevant for cuDNN GPU optimization.
---
> Generated by [Confucius Code Assist (CCA)](https://www.internalfb.com/wiki/Confucius/Analect/Shared_Analects/Confucius_Code_Assist_(CCA)/)
[Confucius Session](https://www.internalfb.com/confucius?host=92481.od.fbinfra.net&port=8086&tab=Chat&session_id=eace3d92-ed78-11f0-b67c-c7843469b0d5&entry_name=Code+Assist), [Trace](https://www.internalfb.com/confucius?session_id=eace3d92-ed78-11f0-b67c-c7843469b0d5&tab=Trace)

Reviewed By: JacobSzwejbka

Differential Revision: D90395786
@meta-codesync meta-codesync bot merged commit 0d78c23 into pytorch:main Jan 10, 2026
143 of 146 checks passed