-
Under the current relay + te framework, the rough flow for converting a model is: front model -> relay -> te -> tir. If that understanding is correct, here is my question. Consider this kind of TIR script:

```python
@tvm.script.ir_module
class MyModule:
    @T.prim_func
    def main(a: T.handle, b: T.handle):
        # We exchange data between functions by handles, which are similar to pointers.
        T.func_attr({"global_symbol": "main", "tir.noalias": True})
        # Create buffers from the handles.
        A = T.match_buffer(a, (8,), dtype="float32")
        B = T.match_buffer(b, (8,), dtype="float32")
        for i in range(8):
            # A block is an abstraction for computation.
            with T.block("B"):
                # Define a spatial block iterator and bind it to value i.
                vi = T.axis.spatial(8, i)
                B[vi] = A[vi] + 1.0
```

Or this kind:

```python
@T.prim_func
def relu0(X: T.Buffer[(1, 128), "float32"],
          Y: T.Buffer[(1, 128), "float32"]):
    # function attr dict
    T.func_attr({"global_symbol": "relu0", "tir.noalias": True})
    for i, j in T.grid(1, 128):
        with T.block("Y"):
            vi, vj = T.axis.remap("SS", [i, j])
            Y[vi, vj] = T.max(X[vi, vj], T.float32(0))
```

In both of these styles, the shapes passed into the tir.prim_func are fixed, for example the (8,) in the first snippet. When writing an operator, what I would like is to pass in a tensor class and query the shape from it inside the operator, to get an effect like this:

```python
@tvm.script.ir_module
class MyModule:
    @T.prim_func
    def main(a: T.tensor, b: T.tensor):
        # We exchange data between functions by handles, which are similar to pointers.
        T.func_attr({"global_symbol": "main", "tir.noalias": True})
        # Create buffers from the handles.
        A = T.match_buffer(a, a.shape, dtype=a.dtype)
        B = T.match_buffer(b, b.shape, dtype=b.dtype)
        for i in range(a.shape[0]):
            # A block is an abstraction for computation.
            with T.block("B"):
                # Define a spatial block iterator and bind it to value i.
                vi = T.axis.spatial(b.shape[0], i)
                B[vi] = A[vi] + 1.0
```

Is my understanding correct that TensorIR is meant to replace te for writing operators? If so, could you provide an example of writing an operator with TensorIR?
-
Thanks for the question. First, one point to note: our pipeline is not quite the flow you described. At the moment we do not yet have a way to express dynamic shapes like this; for a more flexible Script syntax, you can follow RFC apache/tvm-rfcs#79.
-
@DzAvril If the number of dimensions is known, you can define a variable inside the TIR script:

```python
@tvm.script.ir_module
class MyModule:
    @T.prim_func
    def main(a: T.handle, b: T.handle):
        # We exchange data between functions by handles, which are similar to pointers.
        T.func_attr({"global_symbol": "main", "tir.noalias": True})
        # Declare a symbolic variable for the buffer length.
        n = T.var("int32")
        # Create buffers from the handles; their extent is the symbolic n.
        A = T.match_buffer(a, (n,), dtype="float32")
        B = T.match_buffer(b, (n,), dtype="float32")
        for i in range(n):
            # A block is an abstraction for computation.
            with T.block("B"):
                # Define a spatial block iterator and bind it to value i.
                vi = T.axis.spatial(n, i)
                B[vi] = A[vi] + 1.0
```
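A minimal usage sketch (assuming a recent TVM build with the TVMScript frontend; the 16-element arrays are just an illustrative choice) showing that `n` is bound from the runtime buffer shape when the built function is called:

```python
import numpy as np
import tvm

# Build the IRModule above for CPU; the symbolic n is checked and bound
# by match_buffer against the shapes of the arrays passed at runtime.
rt_mod = tvm.build(MyModule, target="llvm")
a_nd = tvm.nd.array(np.arange(16, dtype="float32"))
b_nd = tvm.nd.array(np.empty(16, dtype="float32"))
rt_mod["main"](a_nd, b_nd)
np.testing.assert_allclose(b_nd.numpy(), a_nd.numpy() + 1.0)
```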
-
Our goal is not to remove te entirely. Relax itself will also support building TensorIR quickly by interacting with te directly, while te schedules will be replaced by TensorIR transformations. The core of the course is to reinforce the concept of the IRModule. We also encourage using te or other approaches for meta-programming (you can view them as generic ways of generating an IRModule), and we will keep working on the related integration. The target IRModule produced by these approaches will itself contain TensorIR and relax, and you can apply further transformations to it.
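As an illustration, here is a minimal sketch (assuming a recent TVM with `te.create_prim_func`, the pattern used in the course; the add-one computation is just an example) of using te as a meta-programming front end that generates a TensorIR PrimFunc, which can then be transformed with `tir.Schedule`:

```python
import tvm
from tvm import te

# Describe the computation with te; n is a symbolic length.
n = te.var("n")
A = te.placeholder((n,), dtype="float32", name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")

# Generate a TensorIR PrimFunc from the te description, wrap it in an
# IRModule, and apply TensorIR schedule transformations to it.
mod = tvm.IRModule({"main": te.create_prim_func([A, B])})
sch = tvm.tir.Schedule(mod)
print(sch.mod.script())
```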