[Bug] Can SparseTIR run GAT end-to-end directly? #100

@Ed-gong

Description

The paper and project look super interesting to me, but a few points confused me. I have listed my questions below.

Questions

  1. In the example/spmm folder, the Python code evaluates the kernel for unweighted SpMM, which is used in GCN (the corresponding DGL kernel is `dgl.ops.copy_u_sum(g, x)`). Is there any code to test weighted SpMM, which is used in the GAT case? For example, DGL provides weighted SpMM as `update_all(fn.u_mul_e('ft', 'a', 'm'), fn.sum('m', 'o'))`. Does SparseTIR provide a similar kernel, and how can we compare the performance of the two?

  2. Is there any code in this repo that can run GAT end-to-end directly?

  3. For GCN, the paper says it was integrated into a framework for end-to-end training. Could you provide more information about this framework, such as which framework is used, DGL or PyG?

  4. The paper says that format decomposition is applied to SpMM only. Could it also be applied to SDDMM, and the resulting kernel running time evaluated?
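To make question 1 concrete: the two kernels differ only in whether each edge carries a scalar weight. A minimal NumPy/SciPy sketch of the semantics (toy graph and feature values made up for illustration; this is neither DGL nor SparseTIR code):

```python
import numpy as np
import scipy.sparse as sp

# Toy graph: 3 nodes, edges 0->1, 0->2, 1->2 (row = destination, col = source).
rows = np.array([1, 2, 2])
cols = np.array([0, 0, 1])
x = np.arange(6, dtype=np.float64).reshape(3, 2)  # node features, shape (3, 2)

# Unweighted SpMM (GCN case, like dgl.ops.copy_u_sum):
# every adjacency entry is 1, so each destination sums its source features.
A = sp.csr_matrix((np.ones(3), (rows, cols)), shape=(3, 3))
y_unweighted = A @ x

# Weighted SpMM (GAT case, like u_mul_e followed by sum):
# each edge first scales the source feature by a scalar edge weight,
# e.g. an attention score, before the destination sums.
w = np.array([0.5, 2.0, 3.0])  # per-edge weights
Aw = sp.csr_matrix((w, (rows, cols)), shape=(3, 3))
y_weighted = Aw @ x
```

In other words, weighted SpMM is the same sparse-times-dense product, just with the edge weights stored as the nonzero values of the adjacency matrix instead of ones.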

Looking forward to your response. Thank you.
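For context on question 4: SDDMM (sampled dense-dense matrix multiplication) computes a dense product only at the nonzero positions of a sparse mask. A minimal NumPy sketch of the semantics (toy values, not SparseTIR code; a real kernel would compute only the masked entries rather than the full dense product):

```python
import numpy as np

# Dense operands: 3 nodes with 2-dimensional features each.
U = np.array([[1., 0.], [0., 2.], [1., 1.]])
V = np.array([[1., 1.], [2., 0.], [0., 3.]])

# Sparsity mask: only entries (0,1), (2,0), (2,2) are kept.
mask = np.array([[0., 1., 0.],
                 [0., 0., 0.],
                 [1., 0., 1.]])

# SDDMM: dense product U @ V.T, sampled by the sparse mask.
S = mask * (U @ V.T)
```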
