
How can I use gen_cubins.py in the XQA directory to compile a cubin for SM101? #9517

@july2n

Description


System Info

Hi, hope someone can help me.

I'm trying to compile a cubin for SM101 using gen_cubins.py in the XQA directory, but it seems that none of the architecture-specific __CUDA_ARCH__ branches in the source code cover SM101.

In mha.cu:

#if __CUDA_ARCH__ == 860 || __CUDA_ARCH__ == 890 || __CUDA_ARCH__ == 1200
constexpr uint32_t preferedKHeadPartBytes = 64;
__constant__ constexpr uint32_t cacheVTileSeqLen = 32;
#elif __CUDA_ARCH__ == 800 || __CUDA_ARCH__ == 870 || __CUDA_ARCH__ == 900
constexpr uint32_t preferedKHeadPartBytes = 128;
__constant__ constexpr uint32_t cacheVTileSeqLen = 64;

And in utils.cuh:

#ifdef __CUDA_ARCH__
#if __CUDA_ARCH__ == 860 || __CUDA_ARCH__ == 890 || __CUDA_ARCH__ == 1200
constexpr uint32_t kMAX_SMEM_SIZE = (99u << 10);
#elif __CUDA_ARCH__ == 800 || __CUDA_ARCH__ == 870
constexpr uint32_t kMAX_SMEM_SIZE = (163u << 10);
#elif __CUDA_ARCH__ == 900
constexpr uint32_t kMAX_SMEM_SIZE = (227u << 10);
#endif

Since SM101 (i.e. __CUDA_ARCH__ == 1010) is not covered by any of these branches, compilation fails.
Could anyone advise what values should be used for SM101, or how to properly add support for it?
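
For context, the kind of change I imagine is roughly the following. To be clear, the __CUDA_ARCH__ == 1010 branches and the numbers in them are placeholders I made up by copying the existing SM90 branch, not verified values; the correct numbers for SM101 are exactly what I'm asking about:

// Hypothetical sketch only: the 1010 branches below are guesses, with values copied
// from the SM90 branch. The real kMAX_SMEM_SIZE depends on SM101's per-SM shared
// memory limit, which I haven't been able to confirm.

// mha.cu
#elif __CUDA_ARCH__ == 1010
constexpr uint32_t preferedKHeadPartBytes = 128;       // copied from SM90, unverified
__constant__ constexpr uint32_t cacheVTileSeqLen = 64; // copied from SM90, unverified

// utils.cuh
#elif __CUDA_ARCH__ == 1010
constexpr uint32_t kMAX_SMEM_SIZE = (227u << 10);      // copied from SM90 (227 KiB), unverified

If it helps, I can also call cudaDeviceGetAttribute with cudaDevAttrMaxSharedMemoryPerBlockOptin on the SM101 board I have and report the actual opt-in shared memory limit.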

Thanks!


Labels

Customized kernels<NV>, question
