
Enable CUDA #85

@angry-crab

Description


This is a follow-up issue to Enable OpenCL Backend for TVM.

We may want to bring up a CUDA backend for TVM for two reasons:

  1. Future benchmarking tasks. I was trying to compare the performance of TensorRT and TVM deployments. According to the paper Ansor: Generating High-Performance Tensor Programs for Deep Learning, the evaluation indicates that optimized TVM code beats TensorRT for some models. It would make more sense to benchmark with the CUDA backend.
  2. General performance reasons. According to a performance comparison survey, OpenCL is 13% ~ 67% slower than CUDA.
    Reference

The reason this was not done in the previous issue, Enable OpenCL Backend for TVM, is that I was not able to compile the Lidar CenterPoint models due to some errors, and I did not have time to look into the details. However, to proceed with development, I believe it is necessary to enable CUDA.

Regarding the comments from Ambroise, it is true that the CUDA libraries need to be handled beforehand. I think we can try to detect the existence of the CUDA components and patch tvm_vendor accordingly.
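As a rough sketch of what that detection step could look like (the function name, the `nvcc` lookup, and the `/usr/local/cuda` default prefix are illustrative assumptions, not the actual tvm_vendor patch):

```python
import shutil
from pathlib import Path


def cuda_available() -> bool:
    """Heuristically detect a usable CUDA installation.

    Checks whether the nvcc compiler is on PATH, and falls back to
    the conventional CUDA toolkit install prefix on Linux. A build
    script could use this to decide whether to enable the CUDA
    backend or fall back to OpenCL/CPU.
    """
    # nvcc on PATH is the strongest signal that the toolkit is installed.
    if shutil.which("nvcc") is not None:
        return True
    # Fall back to the default install location used by NVIDIA's packages.
    return Path("/usr/local/cuda").exists()
```

In practice the check would live in the tvm_vendor build configuration (e.g. a CMake-level probe) rather than in Python, but the logic would be the same: probe for the toolkit first, then enable the CUDA backend only when it is present.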
