This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Commit ba4169d

Wrap unused lm_eval in try-catch: Unblock dist_run (#1228)
* Fix Evaluate to a version
* clean comments
* Adding MPS Support for LLama3.2 11B Multimodal; Bump torchtune 9.28.24
* Lock onto a pre BC breaking version of datasets
* Bump lm_eval import before torchtune
* Minimize changes
* Minimize changes
* Test local import
* Just Try Catch
1 parent 8278aa2 commit ba4169d

File tree: 1 file changed (+8, -3)

torchchat/model.py

Lines changed: 8 additions & 3 deletions
@@ -30,9 +30,14 @@
     SequenceParallel,
 )
 from torch.nn import functional as F
-# TODO: remove this after we figure out where in torchtune an `evaluate` module
-# is being imported, which is being confused with huggingface's `evaluate``.
-import lm_eval  # noqa
+
+try:
+    # TODO: remove this after we figure out where in torchtune an `evaluate` module
+    # is being imported, which is being confused with huggingface's `evaluate``.
+    import lm_eval  # noqa
+except Exception:
+    pass
+
 from torchtune.models.clip import clip_vision_encoder
 from torchtune.models.llama3_1._component_builders import llama3_1 as llama3_1_builder
 from torchtune.models.llama3_2_vision._component_builders import (
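For context, the change applies the common guarded-import pattern: the optional dependency is imported inside a try/except so that environments where `lm_eval` (or one of its transitive dependencies) fails to import, such as the distributed run this commit unblocks, can still load `torchchat/model.py`. Below is a minimal standalone sketch of that pattern; the `HAS_LM_EVAL` flag and the warning message are illustrative additions, not part of this commit, which simply swallows the failure with `pass`.

```python
import warnings

# Guarded optional import: use lm_eval if it is importable, but never let a
# missing or broken dependency stop the rest of the module from loading.
try:
    import lm_eval  # noqa: F401
    HAS_LM_EVAL = True
except Exception:
    # Any import-time failure (missing package, broken transitive dep) is tolerated.
    HAS_LM_EVAL = False
    warnings.warn("lm_eval is unavailable; evaluation features will be disabled")

if __name__ == "__main__":
    print(f"lm_eval importable: {HAS_LM_EVAL}")
```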

0 commit comments