Commit 90aa7b8

Use correct model (#1810)
SUMMARY: This test was skipped and, when run, failed with a "can't copy out of meta tensor" error. The cause is that the [model](https://huggingface.co/Xenova/llama2.c-stories110M/discussions/4) is missing its lm_head and embed_tokens layers, so it could only be loaded onto the meta device. Switching to our own model avoids the issue. TEST PLAN: Tested locally; passed. Signed-off-by: shanjiaz <[email protected]> Co-authored-by: Brian Dellabetta <[email protected]>
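For background on the error mentioned above: a tensor on PyTorch's "meta" device carries shape and dtype metadata but no actual data, so any attempt to copy it to a real device fails. A minimal sketch (not from the commit itself, just an illustration of the failure mode):

```python
import torch

# A meta tensor has shape/dtype but no backing storage.
t = torch.empty(3, device="meta")
print(t.is_meta)  # True

# Copying it to a real device raises the error the test was hitting,
# e.g. "Cannot copy out of meta tensor; no data!"
try:
    t.to("cpu")
except (NotImplementedError, RuntimeError) as e:
    print("copy failed:", e)
```

When `from_pretrained` cannot find weights for some layers, those parameters can end up on the meta device, and any later operation that needs their data fails in exactly this way.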
1 parent 21de943 commit 90aa7b8

File tree: 1 file changed (+1, -1 lines)

tests/llmcompressor/recipe/test_recipe_parsing.py

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ def setup_model_and_config(tmp_path):
     Loads a test model and returns common arguments used in oneshot runs.
     """
     model = AutoModelForCausalLM.from_pretrained(
-        "Xenova/llama2.c-stories110M",
+        "nm-testing/llama2.c-stories15M",
         torch_dtype="auto",
     )
