@@ -6,7 +6,7 @@
logger = init_logger(__name__)


-def test_mark_mubltimodal_obj():
+def test_mark_multimodal_obj():
Contributor


Severity: medium

This test allocates tensors on CUDA, so it depends on a CUDA-enabled GPU. While that is expected for a test targeting a Triton kernel, it is best practice to mark such tests explicitly so that test runners like pytest can automatically skip them in environments without a GPU, rather than failing the entire test suite.

Consider adding a pytest.mark.skipif decorator to make the test suite more robust across different environments.

Example:

import pytest
import torch

@pytest.mark.skipif(not torch.cuda.is_available(), reason="This test requires a CUDA-enabled GPU")
def test_mark_multimodal_obj():
    # ... test implementation

Applying this pattern to all GPU-dependent tests would be a valuable improvement for the project's CI/CD pipeline and for developers working on non-GPU machines.

obj_start_ids = torch.tensor([1, 4, 100], device="cuda", dtype=torch.int64)
obj_token_lens = torch.tensor([1, 3, 2], device="cuda", dtype=torch.int64)
input_ids = torch.tensor([1, 7, 9, 333], device="cuda", dtype=torch.int64)