Commit 5b66aa6

[DOCS] fix formatting for dot_scaled (#5361)
Before / After: screenshots of the rendered dot_scaled documentation, before and after the formatting fix (attached images).
1 parent b4440e1 commit 5b66aa6
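The fix only inserts blank lines, but that is exactly what reStructuredText requires: a new paragraph or a ":param:" field list must be separated from the preceding text by a blank line, otherwise docutils folds everything into one paragraph. A minimal sketch (not part of this commit) that reproduces the rule with docutils directly:

```python
# Minimal sketch (not part of this commit): parse a docstring fragment with and
# without the blank line that the fix adds, and compare the document trees.
from docutils.core import publish_string

before = (
    "Returns the matrix product of two blocks in microscaling format.\n"
    ":param lhs: The first tensor to be multiplied.\n"
)
after = (
    "Returns the matrix product of two blocks in microscaling format.\n"
    "\n"
    ":param lhs: The first tensor to be multiplied.\n"
)

# "before" parses as a single <paragraph> (the :param: line is just more text);
# "after" parses as a <paragraph> followed by a <field_list>, which Sphinx can
# then render as a proper parameter table.
print(publish_string(before, writer_name="pseudoxml",
                     settings_overrides={"output_encoding": "unicode"}))
print(publish_string(after, writer_name="pseudoxml",
                     settings_overrides={"output_encoding": "unicode"}))
```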

File tree

1 file changed: +2 −0 lines changed


python/triton/language/core.py

Lines changed: 2 additions & 0 deletions
@@ -1647,8 +1647,10 @@ def dot(input, other, acc=None, input_precision=None, allow_tf32=None, max_num_i
 def dot_scaled(lhs, lhs_scale, lhs_format, rhs, rhs_scale, rhs_format, acc=None, out_dtype=float32, _builder=None):
     """
     Returns the matrix product of two blocks in microscaling format.
+
     lhs and rhs use microscaling formats described here:
     https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf
+
     :param lhs: The first tensor to be multiplied.
     :type lhs: 2D tensor representing fp4, fp8 or bf16 elements. Fp4 elements are packed into uint8 inputs with the first element in lower bits. Fp8 are stored as uint8 or the corresponding fp8 type.
     :param lhs_scale: Scale factor for lhs tensor.
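For context, a minimal usage sketch of the API this docstring documents (not part of this commit). It assumes the format strings "e4m3" and "bf16", that a bf16 operand is passed with None as its scale, and that scales are e8m0 values stored as uint8 with one scale per 32 elements along K, following the docstring and the MX spec linked above; check these details against the Triton version and hardware in use.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def scaled_matmul_kernel(a_ptr, a_scale_ptr, b_ptr, out_ptr,
                         BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
                         BLOCK_K: tl.constexpr):
    offs_m = tl.arange(0, BLOCK_M)
    offs_n = tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)
    # A: fp8 (e4m3) values stored as uint8, row-major [BLOCK_M, BLOCK_K].
    a = tl.load(a_ptr + offs_m[:, None] * BLOCK_K + offs_k[None, :])
    # One e8m0 scale (stored as uint8) per 32 K-elements of each row of A.
    offs_s = tl.arange(0, BLOCK_K // 32)
    a_scale = tl.load(a_scale_ptr + offs_m[:, None] * (BLOCK_K // 32) + offs_s[None, :])
    # B: plain bf16, row-major [BLOCK_K, BLOCK_N], used without a scale.
    b = tl.load(b_ptr + offs_k[:, None] * BLOCK_N + offs_n[None, :])
    out = tl.dot_scaled(a, a_scale, "e4m3", b, None, "bf16")  # fp32 output by default
    tl.store(out_ptr + offs_m[:, None] * BLOCK_N + offs_n[None, :], out)


M, N, K = 128, 128, 128
a = torch.randint(0, 127, (M, K), dtype=torch.uint8, device="cuda")        # finite e4m3 bit patterns
a_scale = torch.full((M, K // 32), 127, dtype=torch.uint8, device="cuda")  # e8m0 value 127 == 2**0
b = torch.randn((K, N), dtype=torch.bfloat16, device="cuda")
out = torch.empty((M, N), dtype=torch.float32, device="cuda")
scaled_matmul_kernel[(1,)](a, a_scale, b, out, BLOCK_M=M, BLOCK_N=N, BLOCK_K=K)
```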
