Commit 6f75c3e

Bump runner memory for llama3_2 torchtune test_model
1 parent 54899fe commit 6f75c3e

File tree

1 file changed (+2, -2 lines)

.ci/scripts/gather_test_models.py

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@
     "resnet50": "linux.12xlarge",
     "llava": "linux.12xlarge",
     "llama3_2_vision_encoder": "linux.12xlarge",
-    "llama3_2_text_decoder": "linux.12xlarge",
+    "llama3_2_text_decoder": "linux.24xlarge",
     # This one causes timeout on smaller runner, the root cause is unclear (T161064121)
     "dl3": "linux.12xlarge",
     "emformer_join": "linux.12xlarge",
@@ -88,7 +88,7 @@ def model_should_run_on_event(model: str, event: str) -> bool:
     We put higher priority and fast models to pull request and rest to push.
     """
     if event == "pull_request":
-        return model in ["mv3", "vit"]
+        return model in ["mv3", "vit", "llama_3_2_text_decoder"]
     elif event == "push":
         # These are super slow. Only run it periodically
         return model not in ["dl3", "edsr", "emformer_predict"]
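
For context, below is a minimal, self-contained sketch (not the actual contents of .ci/scripts/gather_test_models.py) of how a custom-runner mapping and the event filter touched by this commit could fit together when emitting a CI test matrix. Only the dictionary entries and the model_should_run_on_event body shown in the diff above come from the file; DEFAULT_RUNNER, gather_matrix, and the model list in the usage example are hypothetical.

# Minimal sketch, assuming the structure visible in the diff above.
# CUSTOM_RUNNERS entries and model_should_run_on_event mirror the diff;
# DEFAULT_RUNNER and gather_matrix are hypothetical illustrations.
from typing import Dict, List

# Models that need a bigger runner than the default; this commit bumps
# llama3_2_text_decoder from linux.12xlarge to linux.24xlarge.
CUSTOM_RUNNERS: Dict[str, str] = {
    "resnet50": "linux.12xlarge",
    "llava": "linux.12xlarge",
    "llama3_2_vision_encoder": "linux.12xlarge",
    "llama3_2_text_decoder": "linux.24xlarge",
}

DEFAULT_RUNNER = "linux.2xlarge"  # hypothetical default runner label


def model_should_run_on_event(model: str, event: str) -> bool:
    """Fast, high-priority models run on pull_request; the rest run on push."""
    if event == "pull_request":
        return model in ["mv3", "vit", "llama_3_2_text_decoder"]
    elif event == "push":
        # Super slow models are excluded here and only run periodically.
        return model not in ["dl3", "edsr", "emformer_predict"]
    return False


def gather_matrix(models: List[str], event: str) -> List[Dict[str, str]]:
    # Hypothetical helper: pick a runner per model and keep only the models
    # that should run for this CI event.
    return [
        {"model": m, "runner": CUSTOM_RUNNERS.get(m, DEFAULT_RUNNER)}
        for m in models
        if model_should_run_on_event(m, event)
    ]


if __name__ == "__main__":
    # Example: on a pull request, only the fast allow-listed models are kept.
    print(gather_matrix(["vit", "llama3_2_text_decoder", "dl3"], "pull_request"))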
