Description
I have found an issue when attempting to use MLXPipeline with mlx-lm.
Context
My code follows the starter code from the LangChain MLXPipeline example.
quantized_granite is an mlx-lm-converted version of "ibm-granite/granite-4.0-h-tiny".
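(For reference, an mlx-lm conversion along these lines would produce such a model. This is only a sketch; the exact arguments and output path are assumptions, not the command I can verify was used.)

from mlx_lm import convert

# Quantize and convert the Hugging Face checkpoint to an MLX model
# saved under ./quantized_granite (path and flags assumed).
convert(
    "ibm-granite/granite-4.0-h-tiny",
    mlx_path="quantized_granite",
    quantize=True,
)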
Code
from mlx_lm import load
from langchain_community.llms.mlx_pipeline import MLXPipeline
from langchain_core.prompts import PromptTemplate
model, tokenizer = load('quantized_granite')
pipe = MLXPipeline(model=model, tokenizer=tokenizer)
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
chain = prompt | pipe
while True:
    user = input("Query: ")
    if user.lower() in ['q', 'quit', 'exit']:
        break
    print(chain.invoke({"question": user}))
Error
When chain.invoke() is called, the following TypeError is raised:
TypeError: generate_step() got an unexpected keyword argument 'formatter'
Local Fix
I was able to circumvent the issue locally by commenting out the formatter argument passed inside the generate method of langchain_community/llms/mlx_pipeline.py on line 175.
I'm unsure whether this is a model-specific issue or a breaking change in mlx-lm; it looks like recent mlx-lm releases removed the deprecated formatter keyword from generate_step(), which would explain the TypeError.
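For what it's worth, a version-tolerant variant of the same workaround could look like the sketch below. This is only an assumption about how a fix might be shaped, not a proposed patch; the helper name supported_kwargs and the fallback import path are mine.

import inspect

try:
    from mlx_lm.utils import generate_step  # layout in older mlx-lm releases
except ImportError:
    from mlx_lm.generate import generate_step  # assumed location in newer releases

def supported_kwargs(**requested):
    """Drop any kwargs the installed generate_step() no longer accepts."""
    accepted = inspect.signature(generate_step).parameters
    return {k: v for k, v in requested.items() if k in accepted}

# formatter is silently dropped on mlx-lm versions that removed it,
# instead of raising the TypeError above.
kwargs = supported_kwargs(formatter=None)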