flashinfer currently contains the source code of fmha_v2, but only a few kernels are supported. Many more kernels exist only as cubins in TensorRT-LLM (see https://github.com/NVIDIA/TensorRT-LLM/tree/main/cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention/cubin); we would like to have them in flashinfer as well.