Commit e4f8a6c

refactor: replace "compression" with "quantization" across codebase
The library quantizes values (reduces resolution via right-shift); it does not compress data, and the caller chooses the storage type. Replaced all "compress/decompress" wording with "quantize/restore" in NatSpec, README, AGENTS.md, copilot instructions, and tests.
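The distinction the commit draws can be sketched with a small Python model of the shift arithmetic (illustrative only: the library itself is Solidity, and the function names below are hypothetical):

```python
# Hypothetical model of shift-based quantization, NOT the library's Solidity code.
# Quantizing discards the low `discarded_bits` of a value; nothing is compressed --
# the caller simply stores the narrower result in a smaller integer type.

def quantize(value: int, discarded_bits: int) -> int:
    """Floor-quantize by right-shifting away the low bits."""
    return value >> discarded_bits

def restore(encoded: int, discarded_bits: int) -> int:
    """Scale back up; the discarded bits come back as zeros (lower bound)."""
    return encoded << discarded_bits

v = 0xDEADBEEF_CAFEBABE
q = quantize(v, 32)   # resolution reduced: low 32 bits dropped
r = restore(q, 32)    # lower bound of the step containing the original value
assert r <= v < r + (1 << 32)
```

Because the shift is a floor operation, a restored value is always the lower bound of its step, which is why the commit's wording is "quantize/restore" rather than "compress/decompress".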
Parent: 8c1054e

File tree

5 files changed: +11 -11 lines

.github/copilot-instructions.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -2,7 +2,7 @@
 
 ## Project Overview
 
-`uint-quantization-lib` is a pure-function Solidity library for shift-based `uint256` compression.
+`uint-quantization-lib` is a pure-function Solidity library for shift-based `uint256` quantization.
 The core implementation is `UintQuantizationLib` in `src/UintQuantizationLib.sol`, built around
 `Quant` (a `uint16` value type that packs `discardedBitWidth` and `encodedBitWidth`).
```
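The `uint16` packing that `Quant` uses (bits 0-7 for `discardedBitWidth`, bits 8-15 for `encodedBitWidth`, per the NatSpec later in this commit) can be modelled in Python. This is a hypothetical sketch of the bit layout, not the library's code, and it omits the invariants that the real `create` enforces:

```python
# Hypothetical model of the `Quant` uint16 layout from the NatSpec:
#   bits 0-7  -> discardedBitWidth
#   bits 8-15 -> encodedBitWidth

def create(discarded_bit_width: int, encoded_bit_width: int) -> int:
    """Pack the two widths into a single 16-bit scheme value."""
    assert 0 <= discarded_bit_width <= 0xFF and 0 < encoded_bit_width <= 0xFF
    return (encoded_bit_width << 8) | discarded_bit_width

def discarded_bit_width(q: int) -> int:
    return q & 0xFF        # low byte

def encoded_bit_width(q: int) -> int:
    return q >> 8          # high byte

scheme = create(32, 24)
assert discarded_bit_width(scheme) == 32
assert encoded_bit_width(scheme) == 24
```

Bundling both widths into one value is what lets the Solidity library define the scheme once (e.g. as an `immutable`) and invoke all methods on it.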

AGENTS.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -4,7 +4,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 
 ## Project Overview
 
-`uint-quantization-lib` is a pure-function Solidity library for shift-based `uint256` lossy compression. The core mechanism is floor quantization via right-shifting. A `Quant` value type packs `(discardedBitWidth, encodedBitWidth)` into a single `uint16`, making the compression scheme explicit and reusable. The recommended pattern is `immutable` + `create(discardedBitWidth, encodedBitWidth)`.
+`uint-quantization-lib` is a pure-function Solidity library for shift-based `uint256` quantization. The core mechanism is floor quantization via right-shifting. A `Quant` value type packs `(discardedBitWidth, encodedBitWidth)` into a single `uint16`, making the quantization scheme explicit and reusable. The recommended pattern is `immutable` + `create(discardedBitWidth, encodedBitWidth)`.
 
 ## Commands
```

README.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -5,7 +5,7 @@
 
 On-chain values routinely carry more resolution than the protocol needs, but storage charges for every bit you store, not every bit you use. Unnecessary resolution widens structs, fills extra slots, and costs 20,000 gas per cold write. You do not have to pay for resolution you do not use.
 
-This library quantizes `uint256` values via right-shift compression, packing more fields per storage slot and cutting gas on every write.
+This library quantizes `uint256` values via right-shift, packing more fields per storage slot and cutting gas on every write.
 
 **Quick start:**
 
@@ -14,8 +14,8 @@ import {Quant, UintQuantizationLib} from "uint-quantization-lib/src/UintQuantiza
 
 Quant private immutable SCHEME = UintQuantizationLib.create(32, 24);
 
-uint24 stored = uint24(SCHEME.encode(largeValue)); // compress
-uint256 restored = SCHEME.decode(stored); // decompress
+uint24 stored = uint24(SCHEME.encode(largeValue)); // quantize
+uint256 restored = SCHEME.decode(stored); // restore
 ```
 
 ## Installation
@@ -50,7 +50,7 @@ The `Quant` value type is a `uint16` with the following bit layout:
 | `q.encodedBitWidth()` | Bit-width of the encoded value (set at creation). |
 | `q.encode(value)` | Compresses `value` by discarding the low bits (floor). Reverts with `Overflow` if `value > max(q)`. |
 | `q.encode(value, true)` | Same as `encode(value)`, but also reverts with `NotAligned` if `value` is not step-aligned. |
-| `q.decode(encoded)` | Decompresses `encoded` back to the original scale. Discarded bits are restored as zeros (lower bound). |
+| `q.decode(encoded)` | Restores `encoded` back to the original scale. Discarded bits are restored as zeros (lower bound). |
 | `q.decodeMax(encoded)` | Like `decode`, but fills discarded bits with ones (upper bound within the step). |
 | `q.isValid()` | True if `q` satisfies the invariants enforced by `create`. Use to validate hand-wrapped `Quant` values. |
 | `q.fits(value)` | True if `value` fits within the scheme's representable range. |
@@ -81,7 +81,7 @@ contract StakingVault {
 
     mapping(address => uint96) internal stakes;
 
-    /// Floor-encodes msg.value and stores the compressed amount.
+    /// Floor-encodes msg.value and stores the quantized amount.
     function stake() external payable {
         require(SCHEME.fits(msg.value), "amount exceeds scheme max");
         stakes[msg.sender] = uint96(SCHEME.encode(msg.value));
@@ -154,7 +154,7 @@ Showcase contracts under `src/showcase/` use `UintQuantizationLib` and compare:
 
 - Real-life example (production-style ETH staking):
   raw path uses realistic packed fields by default (`uint128 amount`, `uint64` timestamps, `bool active`)
-  in `RawETHStakingShowcase`, while the quantized path further compresses stake amount into `uint96`
+  in `RawETHStakingShowcase`, while the quantized path further reduces stake amount into `uint96`
   in `QuantizedETHStakingShowcase`.
- Extreme example (upper-bound packing showcase):
  raw path stores 12 full-width `uint256` values (`RawExtremePackingShowcase`),
````
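The lower-bound/upper-bound semantics of `decode` and `decodeMax` in the README's method table can be modelled in Python for the quick-start `create(32, 24)` scheme. This is a hypothetical sketch; the function names mirror the README, and `OverflowError` stands in for the Solidity `Overflow` revert:

```python
# Hypothetical model of encode/decode/decodeMax for the create(32, 24) scheme:
# 32 low bits discarded, 24-bit encoded width.
DISCARDED, ENCODED = 32, 24
MAX = ((1 << ENCODED) - 1) << DISCARDED   # largest representable value

def encode(value: int) -> int:
    if value > MAX:
        raise OverflowError("value exceeds scheme max")  # models the Overflow revert
    return value >> DISCARDED                            # floor quantization

def decode(encoded: int) -> int:
    return encoded << DISCARDED                          # discarded bits restored as zeros

def decode_max(encoded: int) -> int:
    return (encoded << DISCARDED) | ((1 << DISCARDED) - 1)  # discarded bits filled with ones

v = 10**15                     # e.g. 0.001 ETH in wei; fits in the 56-bit range
e = encode(v)
assert decode(e) <= v <= decode_max(e)                 # value lies within its step
assert decode_max(e) - decode(e) == (1 << DISCARDED) - 1
```

Together `decode` and `decodeMax` bracket the original value: every value in a step quantizes to the same encoding, and the pair recovers that step's exact bounds.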

src/UintQuantizationLib.sol

Lines changed: 2 additions & 2 deletions
```diff
@@ -5,10 +5,10 @@ pragma solidity ^0.8.25;
  * @title UintQuantizationLib
  * @author [0xferit](https://github.com/0xferit)
  * @custom:security-contact ferit@cryptolab.net
- * @notice Pure-function library for shift-based uint256 compression using a bundled config type.
+ * @notice Pure-function library for shift-based uint256 quantization using a bundled config type.
 *
 * The `Quant` value type packs a `(discardedBitWidth, encodedBitWidth)` scheme into a single `uint16`,
- * allowing callers to define the compression config once and invoke methods on it.
+ * allowing callers to define the quantization scheme once and invoke methods on it.
 * Type layout (uint16):
 *   bits 0-7 → discardedBitWidth (LSBs discarded during encoding)
 *   bits 8-15 → encodedBitWidth (bit-width of the encoded value)
```

test/UintQuantizationLib.t.sol

Lines changed: 1 addition & 1 deletion
```diff
@@ -209,7 +209,7 @@ contract UintQuantizationLibSmokeTest is Test {
     }
 
     // -------------------------------------------------------------------------
-    // Boundary: discardedBitWidth == 0 (identity / no compression)
+    // Boundary: discardedBitWidth == 0 (identity / no quantization)
     // -------------------------------------------------------------------------
 
     function test_discardedBitWidth_zero_identity() public view {
```
