Commit 1dca638

Remove calls to contiguous in the implementation of Float8Tensor (#2747)
Summary: We typically should not be calling contiguous in op implementations, since this does not align with the semantics of the op (e.g. transpose).

Test Plan: python test/quantization/quantize_/workflows/float8/test_float8_tensor.py

Reviewers:

Subscribers:

Tasks:

Tags:
1 parent cd7975e commit 1dca638

1 file changed (+3, -3 lines)
torchao/quantization/quantize_/workflows/float8/float8_tensor.py

Lines changed: 3 additions & 3 deletions
@@ -270,7 +270,7 @@ def _(func, types, args, kwargs):
 
     out_shape = get_out_shape(input_tensor.shape, weight_tensor.shape)
     xq = input_tensor.qdata.reshape(-1, input_tensor.qdata.shape[-1])
-    wq = weight_tensor.qdata.contiguous()
+    wq = weight_tensor.qdata
     x_scale = input_tensor.scale
     w_scale = weight_tensor.scale
     if _is_rowwise_scaled(weight_tensor):
@@ -510,8 +510,8 @@ def _(func, types, args, kwargs):
 @implements(aten.transpose.int)
 def _(func, types, args, kwargs):
     self, dim0, dim1 = args
-    qdata = self.qdata.transpose(dim0, dim1).contiguous()
-    scale = self.scale.transpose(dim0, dim1).contiguous()
+    qdata = self.qdata.transpose(dim0, dim1)
+    scale = self.scale.transpose(dim0, dim1)
     block_size = self.block_size.copy()
 
     block_size[dim0], block_size[dim1] = block_size[dim1], block_size[dim0]
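For context, a minimal standalone PyTorch sketch (not part of the Float8Tensor code) of why a contiguous() call inside a transpose implementation changes the op's semantics: aten.transpose is expected to return a view with swapped strides over the same storage, while contiguous() materializes a repacked copy.

# Minimal sketch, plain PyTorch: transpose should produce a view with
# swapped strides; adding .contiguous() instead produces a new, repacked copy.
import torch

x = torch.randn(4, 8)                       # strides (8, 1)

t_view = x.transpose(0, 1)                  # view: shape (8, 4)
print(t_view.stride())                      # (1, 8) -- strides swapped, no copy
print(t_view.data_ptr() == x.data_ptr())    # True  -- same underlying storage

t_copy = x.transpose(0, 1).contiguous()     # copy: shape (8, 4)
print(t_copy.stride())                      # (4, 1) -- memory was repacked
print(t_copy.data_ptr() == x.data_ptr())    # False -- storage was copied

Dropping the contiguous() calls keeps qdata and scale as views with the strides a caller of aten.transpose would expect.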
