Commit f0621ea
Author: yarden-sony
Message: cleanup
Parent: f332807

File tree

1 file changed: 0 additions, 1 deletion


model_compression_toolkit/core/pytorch/graph_substitutions/substitutions/scaled_dot_product_attention.py

Lines changed: 0 additions & 1 deletion
@@ -198,7 +198,6 @@ def substitute(self, graph: Graph, attention_node: FunctionalNode) -> Graph:
         :param attention_node: the node to replace
         :return: A graph after the substitution
         """
-        print("In scale_dot_product_attention substitution@@@@@@@@")
         input_nodes = self._get_attention_input_nodes(graph, attention_node)
         q_node, k_node, v_node = input_nodes["q"], input_nodes["k"], input_nodes["v"]
         transpose_k_node = self._get_transpose_k_node(attention_node.name, k_node)
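For context on what this substitution is building: the diff shows the pass fetching the q/k/v input nodes and inserting a transpose of K, i.e. it decomposes scaled dot-product attention into primitive ops (transpose, matmul, scaling, softmax). The sketch below is a minimal NumPy illustration of that computation, not the toolkit's actual node-level implementation; the function name and the (seq_len, d) shape convention are assumptions for illustration only.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Sketch of the math the substitution decomposes into graph nodes.

    Assumed shapes: q, k are (seq_len, d_k); v is (seq_len, d_v).
    """
    d_k = q.shape[-1]
    # K is transposed first (cf. the transpose_k_node created in the diff).
    scores = (q @ k.T) / np.sqrt(d_k)  # (seq_len, seq_len)
    # Numerically stable row-wise softmax over the attention scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of the value rows.
    return weights @ v  # (seq_len, d_v)
```

Expressing attention as these primitives is what lets a quantization toolkit assign quantization parameters to each intermediate op rather than treating the fused attention call as a black box.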

0 commit comments
