# Release Notes

## New in Release 2.18.0

Post-training Quantization:

- Breaking changes:
  - ...
- General:
  - ...
- Features:
  - Introduced the `group_size_fallback_mode` advanced weight compression parameter, which specifies how to handle nodes that do not support the default group size value. It defaults to `GroupSizeFallbackMode.IGNORE`, which skips nodes that cannot be compressed with the given group size (see the first sketch after these lists).
  - Added support for external quantizers in the `quantize_pt2e` API, including [XNNPACKQuantizer](https://docs.pytorch.org/executorch/stable/backends-xnnpack.html#quantization) and [CoreMLQuantizer](https://docs.pytorch.org/executorch/stable/backends-coreml.html#quantization) (see the second sketch after these lists).
- Fixes:
  - ...
- Improvements:
  - Support for weight compression of models with Rotary Positional Embedding blocks.
  - Support for weight compression of models with stateful self-attention blocks.

- Deprecations/Removals:
  - ...
- Tutorials:
  - ...
- Known issues:
  - ...
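
A minimal usage sketch for the new fallback mode follows. This is illustrative only: the import location of `GroupSizeFallbackMode` is an assumption, and `model.xml` is a placeholder path.

```python
# Illustrative sketch: import locations are assumed and may differ by version.
import nncf
import openvino as ov
from nncf.quantization.advanced_parameters import (
    AdvancedCompressionParameters,
    GroupSizeFallbackMode,
)

model = ov.Core().read_model("model.xml")  # placeholder model path

compressed_model = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT4_SYM,
    group_size=128,
    advanced_parameters=AdvancedCompressionParameters(
        # IGNORE (the default) skips nodes that cannot be compressed
        # with the given group size.
        group_size_fallback_mode=GroupSizeFallbackMode.IGNORE,
    ),
)
```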
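
A second sketch shows `quantize_pt2e` with an external quantizer. Again illustrative only: the `quantize_pt2e` import location and signature, and the `XNNPACKQuantizer` import path (which has moved between torch and executorch releases), are assumptions; `TinyModel` is a stand-in module.

```python
# Illustrative sketch: import paths and the quantize_pt2e signature are
# assumptions and may differ by NNCF/torch/executorch version.
import torch
import nncf
from nncf.experimental.torch.fx import quantize_pt2e
from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)


class TinyModel(torch.nn.Module):
    """Stand-in model used only to make the sketch self-contained."""

    def __init__(self) -> None:
        super().__init__()
        self.fc = torch.nn.Linear(8, 4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x)


model = TinyModel().eval()
example_input = torch.randn(1, 8)

# Capture the model into the PT2 export representation.
captured = torch.export.export(model, (example_input,)).module()

# Configure the external (non-NNCF) quantizer.
quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config())

# NNCF inserts and calibrates quantizers according to the external
# quantizer's annotations.
quantized = quantize_pt2e(
    captured,
    quantizer,
    calibration_dataset=nncf.Dataset([example_input]),
)
```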

Compression-aware training:

- Breaking changes:
  - ...
- General:
  - ...
- Features:
  - ...
- Fixes:
  - ...
- Improvements:
  - ...
- Deprecations/Removals:
  - ...
- Tutorials:
  - ...
- Known issues:
  - ...

Deprecations/Removals:

- ...

Requirements:

- ...

## New in Release 2.17.0

Post-training Quantization: