
Commit 20874dd

Author: Zhiyuan Li
[AUTOTUNER] Fix: Pass do_bench parameter to Autotuner in `autotun… (#5992)
<!--- The core Triton team is a small number of people, and we receive many PRs (thank you!). To help us review your code more quickly, **if you are a new contributor (less than 3 PRs merged) we ask that you complete the following tasks and include the filled-out checklist in your PR description.** Complete the following tasks before sending your PR, and replace `[ ]` with `[x]` to indicate you have done them. -->

# New contributor declaration

- [x] I am not making a trivial change, such as fixing a typo in a comment.
- [x] I have written a PR description following these [rules](https://cbea.ms/git-commit/#why-not-how).
- [x] I have run `pre-commit run --from-ref origin/main --to-ref HEAD`.
- Select one of the following.
  - [ ] I have added tests.
    - `/test` for `lit` tests
    - `/unittest` for C++ tests
    - `/python/test` for end-to-end tests
  - [x] This PR does not need a test because `Previous PR has introduced a test`.
- Select one of the following.
  - [x] I have not added any `lit` tests.
  - [ ] The `lit` tests I have added follow these [best practices](https://mlir.llvm.org/getting_started/TestingGuide/#filecheck-best-practices), including the "tests should be minimal" section. (Usually running Python code and using the instructions it generates is not minimal.)

### Description

Related PR: triton-lang/triton#4496

In the `autotune` decorator, the `do_bench` parameter was omitted when constructing the `Autotuner`, so a user-supplied `do_bench` was silently dropped and the default was always used. This PR fixes the issue by forwarding the `do_bench` parameter correctly, which allows `do_bench` to be used in place of the deprecated `use_cuda_graph` parameter.
Parent commit: 4d2434b

File tree

1 file changed: +1 −1


python/triton/runtime/autotuner.py (1 addition, 1 deletion)

```diff
@@ -361,7 +361,7 @@ def kernel(x_ptr, x_size, BLOCK_SIZE: tl.constexpr):
     def decorator(fn):
         return Autotuner(fn, fn.arg_names, configs, key, reset_to_zero, restore_value, pre_hook=pre_hook,
                          post_hook=post_hook, prune_configs_by=prune_configs_by, warmup=warmup, rep=rep,
-                         use_cuda_graph=use_cuda_graph)
+                         use_cuda_graph=use_cuda_graph, do_bench=do_bench)

     return decorator
```
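The bug pattern fixed here is a decorator factory that forgets to forward one of its keyword arguments to the wrapped object's constructor. The following is a minimal sketch of that pattern in isolation; `FakeAutotuner` and `make_autotune` are hypothetical stand-ins for Triton's `Autotuner` and `autotune`, reduced to the two relevant keyword arguments:

```python
# Hypothetical stand-in for triton.runtime.autotuner.Autotuner,
# reduced to the kwargs relevant to this fix.
class FakeAutotuner:
    def __init__(self, fn, use_cuda_graph=False, do_bench=None):
        self.fn = fn
        self.use_cuda_graph = use_cuda_graph
        # Before the fix, a user-supplied do_bench never reached this
        # point, so the default (None) was always used.
        self.do_bench = do_bench


# Hypothetical stand-in for the autotune decorator factory.
def make_autotune(use_cuda_graph=False, do_bench=None):
    def decorator(fn):
        # The fix: forward do_bench alongside the other kwargs
        # instead of silently dropping it.
        return FakeAutotuner(fn, use_cuda_graph=use_cuda_graph,
                             do_bench=do_bench)
    return decorator


@make_autotune(do_bench=lambda kernel_call: 0.0)
def kernel():
    pass

# With the forwarding in place, the custom benchmark hook survives.
assert kernel.do_bench is not None
```

Without the `do_bench=do_bench` forwarding in `decorator`, the final assertion would fail: the custom hook passed to the factory would be lost, which mirrors the one-line change in the diff above.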