Commit ed9876b

bugfix: fix unittest test_fp8_quantize (#1599)
## 📌 Description

The test_fp8_quantize unittest was broken by #1446 because the special case of CPU input was not handled. This PR fixes the issue.

## 🔍 Related Issues

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [x] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [x] Tests have been added or updated as needed.
- [x] All tests are passing (`unittest`, etc.).

## Reviewer Notes
1 parent 1a85c43 commit ed9876b

File tree

1 file changed: +2 −0 lines changed


flashinfer/utils.py

Lines changed: 2 additions & 0 deletions

@@ -513,6 +513,8 @@ def set_log_level(lvl_str: str) -> None:


 def device_support_pdl(device: torch.device) -> bool:
+    if device.type != "cuda":
+        return False
     major, _ = get_compute_capability(device)
     return major >= 9
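The fixed logic can be sketched without a GPU: `device_support_pdl` now short-circuits to `False` for any non-CUDA device instead of querying the compute capability. In the sketch below, `FakeDevice` and `fake_compute_capability` are stand-ins for illustration; the real function takes a `torch.device` and uses flashinfer's `get_compute_capability`.

```python
from dataclasses import dataclass


@dataclass
class FakeDevice:
    """Stand-in for torch.device, for illustration only."""
    type: str          # e.g. "cuda" or "cpu"
    major: int = 0     # simulated compute-capability major version


def fake_compute_capability(device):
    # Stand-in for get_compute_capability; only meaningful for CUDA devices.
    if device.type != "cuda":
        raise RuntimeError("compute capability is undefined for non-CUDA devices")
    return device.major, 0


def device_support_pdl(device) -> bool:
    # The guard added in this PR: CPU (or any non-CUDA) input returns False
    # before we ever ask for a compute capability.
    if device.type != "cuda":
        return False
    major, _ = fake_compute_capability(device)
    return major >= 9  # PDL requires compute capability 9.x or newer
```

Without the guard, a CPU device would reach `fake_compute_capability` and raise, which mirrors why the original unittest broke on CPU input.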

0 commit comments
