Commit ef323dc

fix(util): remove extra init_chat_model call
We don't need init_chat_model when we instantiate the HuggingFaceEndpoint explicitly: either construct the endpoint directly, or let init_chat_model create it for us, but not both. https://docs.langchain.com/oss/python/integrations/llms/huggingface_endpoint
1 parent: eb75e97

File tree

1 file changed: +6 −3 lines

  • utils_pkg/neuroml_ai_utils

utils_pkg/neuroml_ai_utils/llm.py

Lines changed: 6 additions & 3 deletions
@@ -143,7 +143,7 @@ def setup_llm(model_name_full, logger):
         hf_token = os.environ.get("HF_TOKEN", None)
         assert hf_token
 
-        llm = HuggingFaceEndpoint(
+        model_var = HuggingFaceEndpoint(
             repo_id=f"{model_name}",
             provider="auto",
             max_new_tokens=512,
@@ -153,17 +153,20 @@ def setup_llm(model_name_full, logger):
             huggingfacehub_api_token=hf_token,
         )
 
+        """
+
         model_var = init_chat_model(
             model_name,
             model_provider="huggingface",
             llm=llm,
             configurable_fields=("temperature"),
            backend="endpoint",
        )
+        """
        assert model_var
 
-        # state, msg = check_model_works(model_var, timeout=0)
-        # assert state
+        state, msg = check_model_works(model_var, timeout=0)
+        assert state
    else:
        if model_name_full.lower().startswith("ollama:"):
            check_ollama_model(logger, model_name_full.lower().replace("ollama:", ""))
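The commit message describes an either/or choice between the two construction paths. A minimal sketch of both, assuming langchain-huggingface is installed, HF_TOKEN is exported, and model_name (here a placeholder repo id) is set; running it requires a valid Hugging Face token:

```python
import os

from langchain_huggingface import HuggingFaceEndpoint

model_name = "HuggingFaceH4/zephyr-7b-beta"  # placeholder repo id
hf_token = os.environ.get("HF_TOKEN")
assert hf_token

# Option A (what this commit keeps): construct the endpoint explicitly
# and use it as the model directly.
model_var = HuggingFaceEndpoint(
    repo_id=model_name,
    max_new_tokens=512,
    huggingfacehub_api_token=hf_token,
)

# Option B (removed in this commit): let init_chat_model resolve the
# provider string and build the model itself -- in that case the
# HuggingFaceEndpoint must NOT also be constructed by hand:
#
#     from langchain.chat_models import init_chat_model
#     model_var = init_chat_model(model_name, model_provider="huggingface")
```

Mixing the two, as the old code did, creates the endpoint twice; the commit keeps Option A and comments out the init_chat_model call.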

0 commit comments