
Fix internal tests after recent changes #2726

Merged
merged 1 commit on Aug 11, 2025
6 changes: 5 additions & 1 deletion test/integration/test_loading_deprecated_checkpoint.py
@@ -14,7 +14,7 @@
 )
 from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
 
-from torchao.utils import is_sm_at_least_89
+from torchao.utils import is_fbcode, is_sm_at_least_89
 
 _MODEL_NAME_AND_VERSIONS = [
     ("torchao-testing/opt-125m-float8dq-row-v1-0.13-dev", 1),
@@ -23,6 +23,10 @@
 
 @unittest.skipIf(not torch.cuda.is_available(), "Need CUDA available")
 @unittest.skipIf(not is_sm_at_least_89(), "Nedd sm89+")
+@unittest.skipIf(
+    is_fbcode(),
+    "Skipping the test in fbcode for now, not sure how to download from transformers",
+)
 class TestLoadingDeprecatedCheckpoint(TestCase):
     @common_utils.parametrize("model_name_and_version", _MODEL_NAME_AND_VERSIONS)
     def test_load_model_and_run(self, model_name_and_version):
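The fix above relies on stacking `unittest.skipIf` decorators so a test is skipped whenever any environment precondition fails. The following is a minimal, self-contained sketch of that pattern; `running_internally()` is a hypothetical stand-in for `torchao.utils.is_fbcode` (the real function inspects the build environment, which is not reproduced here).

```python
import unittest


def running_internally() -> bool:
    # Hypothetical stand-in for torchao.utils.is_fbcode: report whether we
    # are in an internal build environment where checkpoint downloads fail.
    return False


class TestCheckpointLoading(unittest.TestCase):
    # skipIf conditions are evaluated at class-definition time; the first
    # decorator whose condition is True marks the test as skipped, and its
    # message string is recorded as the skip reason in the test report.
    @unittest.skipIf(running_internally(), "cannot download checkpoints internally")
    def test_load(self):
        # Placeholder for the real checkpoint-loading assertions.
        self.assertTrue(True)


if __name__ == "__main__":
    unittest.main()
```

Because the conditions are checked when the class body executes, guards like `is_fbcode()` must be cheap and import-safe; this is why the PR imports the helper at module level rather than calling it inside the test.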