At the end of Lecture 6, an example of translating the FX graph into high-level operators is given. The code is as follows:

```python
def map_nn_relu_op(bb, node_map, node, nn_mod):
    A = node_map[node.args[0]]
    return bb.emit(relax.op.relu(A))

def map_nn_linear_op(bb, node_map, node, nn_mod):
    x = node_map[node.args[0]]
    w = map_param(nn_mod.weight)
    if nn_mod.bias is not None:
        b = map_param(nn_mod.bias)
    y = bb.emit(relax.op.dense(x, w))
    return bb.emit(relax.op.add(y, b))

MLPModuleHighLevel = from_fx(
    fx.symbolic_trace(mlp_model),
    input_shapes=[(1, 784)],
    call_function_map={
    },
    call_module_map={
        torch.nn.Linear: map_nn_linear_op,
        torch.nn.ReLU: map_nn_relu_op,
    },
)
```
Calling `MLPModuleHighLevel.show()` gives:

```python
@tvm.script.ir_module
class Module:
    @R.function
    def main(x: Tensor((1, 784), "float32")) -> Tensor(None, "float32", ndim = 2):
        # block 0
        with R.dataflow():
            lv: Tensor((1, 128), "float32") = relax.nn.dense(x, meta[relay.Constant][0])
            lv1: Tensor((1, 128), "float32") = relax.add(lv, meta[relay.Constant][1])
            lv2: Tensor((1, 128), "float32") = relax.nn.relu(lv1)
            lv3: Tensor((1, 10), "float32") = relax.nn.dense(lv2, meta[relay.Constant][2])
            lv4: Tensor((1, 10), "float32") = relax.add(lv3, meta[relay.Constant][3])
            gv: Tensor((1, 10), "float32") = lv4
            R.output(gv)
        return gv
```
Answered by Hzfengsy (Aug 26, 2022):
An IRModule that contains high-level ops cannot be executed directly, because no corresponding computation implementations have been provided. It needs to be lowered to `call_tir` before it can be executed.
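As a rough illustration, here is a minimal sketch of that lowering, in the spirit of the pass shown in the lecture: each high-level op is rewritten into a `call_tir` through `BlockBuilder.call_te` backed by a TOPI compute definition. `LowerToTensorIRPass` below stands for the course-defined mutator pass (not a built-in TVM API) that walks the module and dispatches on the operator name via `op_map`; treat the exact names as assumptions from the lecture notebook.

```python
from tvm import relax, topi

# Each mapper emits a call_tir node whose PrimFunc is generated from a TOPI compute.
def map_dense(bb, call):
    x, w = call.args
    return bb.call_te(topi.nn.dense, x, w)

def map_add(bb, call):
    a, b = call.args
    return bb.call_te(topi.add, a, b)

def map_relu(bb, call):
    return bb.call_te(topi.nn.relu, call.args[0])

# Dispatch table keyed by the high-level operator name.
op_map = {
    "relax.nn.dense": map_dense,
    "relax.add": map_add,
    "relax.nn.relu": map_relu,
}

# Hypothetical usage, following the course material: LowerToTensorIRPass is
# defined in the lecture notebook, not in TVM itself.
MLPModuleTIR = LowerToTensorIRPass()(MLPModuleHighLevel)
```

After this lowering, every operator call is backed by a generated TensorIR function, so the resulting module can be built and run with the Relax virtual machine.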