
Commit 7a2d7c8

dequant scales not support

1 parent: 9dfb31c

File tree: 1 file changed (+7, −0)

src/compressed_tensors/compressors/quantized_compressors/fp4_quantized.py

Lines changed: 7 additions & 0 deletions

```diff
@@ -149,6 +149,13 @@ def compress_scale(
         scale_exp = 127 + torch.floor(torch.log2(scale)).to(torch.int32) - 2
         return scale_exp.to(quantization_args.scale_dtype)
 
+    def decompress_weight(
+        self,
+        compressed_data: Dict[str, Tensor],
+        quantization_args: Optional[QuantizationArgs] = None,
+    ) -> torch.Tensor:
+        raise NotImplementedError("MXFP4 Decompression is currently not supported")
+
 
 @torch.compile(fullgraph=True, dynamic=True)
 def pack_fp4_to_uint8(x: torch.Tensor) -> torch.Tensor:
```
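The new `decompress_weight` stub only raises `NotImplementedError`, which matches the commit message: dequantizing the scales is not yet supported. For context, the `compress_scale` code shown in the diff stores each scale as a biased power-of-two exponent (bias 127, shifted by −2, in the style of MX-format E8M0 scales). A minimal sketch of how that encoding could be inverted is below; `decompress_scale_sketch` is a hypothetical helper, not part of the library, and `torch.uint8` is an assumed stand-in for `quantization_args.scale_dtype`:

```python
import torch

def compress_scale_sketch(scale: torch.Tensor) -> torch.Tensor:
    # Keep only the power-of-two exponent of the scale, biased by 127
    # and shifted by -2, as in the compress_scale code in the diff.
    # torch.uint8 is an assumption standing in for scale_dtype.
    exp = 127 + torch.floor(torch.log2(scale)).to(torch.int32) - 2
    return exp.to(torch.uint8)

def decompress_scale_sketch(scale_exp: torch.Tensor) -> torch.Tensor:
    # Hypothetical inverse: undo the bias and shift, then exponentiate.
    # scale = 2 ** (exp - 127 + 2)
    return torch.pow(2.0, (scale_exp.to(torch.int32) - 127 + 2).to(torch.float32))
```

Note that `torch.floor(torch.log2(scale))` discards the mantissa, so the round trip is exact only when the input scales are already powers of two; for other values the reconstructed scale is the nearest power of two at or below the original.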
