Description
The paper and project look super interesting to me, but there are several questions that confused me and I listed those questions below.
Questions
- In the `example/spmm` folder, the Python code evaluates the kernel for unweighted SpMM, which is used in GCN (the corresponding DGL kernel is `dgl.ops.copy_u_sum(g, x)`). Is there any code to test weighted SpMM, which is used in the GAT case? For example, DGL provides weighted SpMM as `update_all(fn.u_mul_e('ft', 'a', 'm'), fn.sum('m', 'o'))`. Does SparseTIR provide a similar kernel, and how can we compare the performance of the two kernels?
- Is there any code in this repo that can run GAT end-to-end directly?
- For GCN, the paper says it was integrated into a framework for end-to-end training. Could you provide more information about this framework? For example, which framework is used, DGL or PyG?
- The paper says that format decomposition is applied to SpMM only. Could we apply it to SDDMM as well and evaluate the resulting kernel running time?
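To make the first question concrete, here is a small sketch of the semantic difference between the two kernels, using `scipy.sparse` as a stand-in rather than SparseTIR or DGL: unweighted SpMM (GCN-style, what `dgl.ops.copy_u_sum` computes) sums neighbor features, while weighted SpMM (GAT-style, what `fn.u_mul_e` + `fn.sum` computes) scales each neighbor's features by a per-edge weight before summing. The graph sizes and random weights below are illustrative only.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, sparse adjacency in CSR form.
A = sp.random(4, 4, density=0.5, format="csr", random_state=0)
A.data[:] = 1.0                      # binary adjacency: unweighted graph
X = rng.standard_normal((4, 8))      # node feature matrix

# Unweighted SpMM (GCN-style aggregation):
# each node sums the features of its neighbors.
out_unweighted = A @ X

# Weighted SpMM (GAT-style aggregation): each edge carries a scalar
# weight (e.g. an attention score) multiplied in before the sum.
W = A.copy()
W.data = rng.random(W.nnz)           # per-edge weights on the same sparsity pattern
out_weighted = W @ X

print(out_unweighted.shape, out_weighted.shape)  # (4, 8) (4, 8)
```

Both calls have the same sparsity pattern and output shape; only the per-edge scaling differs, which is why a weighted-SpMM benchmark would be the natural comparison point for the GAT case.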
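For reference on the last question, a minimal sketch of what an SDDMM kernel computes (again in plain `scipy.sparse`/NumPy, not SparseTIR): given dense matrices A and B and a sparse mask S, SDDMM evaluates the dense product A @ B only at the nonzero positions of S. The shapes below are arbitrary.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
m, k, n = 5, 3, 4

# Sparse mask S: SDDMM only produces outputs at S's nonzero positions.
S = sp.random(m, n, density=0.4, format="csr", random_state=0)
A = rng.standard_normal((m, k))
B = rng.standard_normal((k, n))

# Reference: sample the dense product A @ B at the mask's nonzeros.
mask = S.copy()
mask.data[:] = 1.0
C = mask.multiply(A @ B)   # sparse result, same pattern as S
```

In GAT, an SDDMM-like kernel is what produces the per-edge attention logits, so whether format decomposition helps SDDMM the way it helps SpMM seems like a natural follow-up.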
Looking forward to your response. Thank you.