
Commit 6a875f9

Fix Olmo trunk test and skip XNNPack trunk tests on mac (#13528)
Unblock trunk to advance `viable/strict`. There are intermittent segmentation faults on all of the XNNPack Optimum mac tests. The failure is described in more detail [here](#13530), where the segmentation fault happens consistently; interestingly enough, that report predates the pin bump, while here it is only happening intermittently.
1 parent 66aaf9d commit 6a875f9

2 files changed: +9 −9 lines changed

.ci/scripts/test_huggingface_optimum_model.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -369,7 +369,7 @@ def test_vit(model_id, model_dir, recipe, *, quantize=False, run_only=False):
     ), # fails to lower for CoreML
     "smollm2-135m": ("HuggingFaceTB/SmolLM2-135M", test_text_generation),
     "smollm3-3b": ("HuggingFaceTB/SmolLM3-3B", test_text_generation),
-    "olmo": ("allenai/OLMo-1B-hf", test_text_generation),
+    "olmo-1b": ("allenai/OLMo-1B-hf", test_text_generation),
 }
 
 _mask_fill_mapping = {
```
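Since the CI selects which test to run by this short key, the rename from `olmo` to `olmo-1b` matters anywhere the key is passed in. Below is a minimal sketch of how a mapping like `_model_mapping` is typically consumed; the `run` dispatcher and its signature are assumptions for illustration, not the script's actual code.

```python
# Hypothetical dispatcher over a model mapping like the one in
# .ci/scripts/test_huggingface_optimum_model.py. Each entry maps a short
# model name to (Hugging Face repo id, test function).

def test_text_generation(model_id, model_dir, recipe, *, quantize=False, run_only=False):
    """Stand-in for the real test; exercises text generation for the model."""

_model_mapping = {
    "smollm3-3b": ("HuggingFaceTB/SmolLM3-3B", test_text_generation),
    "olmo-1b": ("allenai/OLMo-1B-hf", test_text_generation),  # renamed key
}

def run(model_name: str, model_dir: str, recipe: str, quantize: bool = False) -> None:
    # An unknown key (e.g. the old "olmo" name) raises KeyError here, so the
    # key in this mapping and the name the CI passes in must stay in sync.
    model_id, test_fn = _model_mapping[model_name]
    test_fn(model_id, model_dir, recipe, quantize=quantize)

run("olmo-1b", "/tmp/olmo", "xnnpack", quantize=True)
```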

.github/workflows/trunk.yml

Lines changed: 8 additions & 8 deletions
```diff
@@ -836,14 +836,14 @@ jobs:
     strategy:
       matrix:
         config: [
-          # XNNPack.
-          llama3.2-1b|xnnpack|--quantize,
-          qwen3-0.6b|xnnpack|--quantize,
-          qwen3-1.7b|xnnpack|--quantize,
-          gemma3-1b|xnnpack|--quantize,
-          phi4-mini|xnnpack|--quantize,
-          smollm2-135m|xnnpack|--quantize,
-          smollm3-3b|xnnpack|--quantize,
+          # # XNNPack. (Skipping for now due to intermittent segmentation faults, see https://github.com/huggingface/optimum-executorch/issues/122.)
+          # llama3.2-1b|xnnpack|--quantize,
+          # qwen3-0.6b|xnnpack|--quantize,
+          # qwen3-1.7b|xnnpack|--quantize,
+          # gemma3-1b|xnnpack|--quantize,
+          # phi4-mini|xnnpack|--quantize,
+          # smollm2-135m|xnnpack|--quantize,
+          # smollm3-3b|xnnpack|--quantize,
           # CoreML.
           llama3.2-1b|coreml_fp32_gpu|--quantize,
           qwen3-0.6b|coreml_fp32_gpu|--quantize,
```
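Each matrix entry packs three fields into one string, delimited by `|`: the short model name, the export recipe, and extra CLI flags. The workflow splits these in its steps; the sketch below just illustrates the convention with a hypothetical Python parser (the field names and `parse_entry` are assumptions, not the workflow's actual code).

```python
# Hypothetical parser for matrix entries of the form
# "model|recipe|flags", e.g. "llama3.2-1b|coreml_fp32_gpu|--quantize".
from typing import NamedTuple

class MatrixEntry(NamedTuple):
    model: str   # short model name, e.g. "qwen3-0.6b"
    recipe: str  # export recipe/backend, e.g. "xnnpack" or "coreml_fp32_gpu"
    flags: str   # extra CLI flags, e.g. "--quantize"

def parse_entry(entry: str) -> MatrixEntry:
    # Split on the first two "|" so flags may themselves contain "|".
    model, recipe, flags = entry.split("|", 2)
    return MatrixEntry(model, recipe, flags)

print(parse_entry("llama3.2-1b|coreml_fp32_gpu|--quantize"))
# -> MatrixEntry(model='llama3.2-1b', recipe='coreml_fp32_gpu', flags='--quantize')
```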
