MLXPipeline 'formatter' argument (bug) #373

@CalBlanco

Description


I ran into an issue when trying to use MLXPipeline with mlx-lm.

Context

My code follows the starter code from the LangChain MLXPipeline example.

quantized_granite is an mlx-lm-converted version of "ibm-granite/granite-4.0-h-tiny".

Code

from mlx_lm import load
from langchain_community.llms.mlx_pipeline import MLXPipeline
from langchain_core.prompts import PromptTemplate

# Load the locally converted model and wrap it in an MLXPipeline
model, tokenizer = load('quantized_granite')
pipe = MLXPipeline(model=model, tokenizer=tokenizer)

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)
chain = prompt | pipe

while True:
    user = input("Query: ")
    if user.lower() in ['q', 'quit', 'exit']:
        break
    print(chain.invoke({"question": user}))

Error

When chain.invoke() runs, I get a TypeError:

TypeError: generate_step() got an unexpected keyword argument 'formatter'
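This is the standard Python failure mode when a caller passes a keyword argument the callee's signature no longer accepts. A minimal, self-contained reproduction of the same class of error (the `generate_step` below is a stand-in with a hypothetical signature, not the real mlx-lm function):

```python
# Stand-in for a function whose signature dropped the 'formatter' parameter.
def generate_step(prompt, temp=0.0):
    return prompt

try:
    # A caller built against the older API still passes 'formatter'.
    generate_step("hello", formatter=None)
except TypeError as e:
    print(e)  # generate_step() got an unexpected keyword argument 'formatter'
```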

Full stacktrace attached.

Local Fix

I was able to work around the issue locally by commenting out the formatter argument inside the generate function in langchain_community/llms/mlx_pipeline.py (line 175).

I'm unsure whether this is a model-specific issue or whether an update to mlx removed or renamed the argument.
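As a sketch of a less invasive workaround than editing the installed library source, the keyword arguments can be filtered against the callee's actual signature before the call, so that a stale kwarg like formatter is silently dropped instead of raising TypeError. Everything below (`filter_kwargs` and the stand-in `generate_step`) is illustrative only; it is not part of mlx-lm or LangChain:

```python
import inspect

def filter_kwargs(func, kwargs):
    """Return a copy of kwargs containing only keys func's signature accepts."""
    params = inspect.signature(func).parameters
    # If func takes **kwargs, everything passes through unchanged.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in params}

# Stand-in for a newer generate_step that no longer accepts 'formatter'.
def generate_step(prompt, temp=0.0):
    return f"step(prompt={prompt!r}, temp={temp})"

# 'formatter' is dropped; 'temp' survives because the signature accepts it.
safe = filter_kwargs(generate_step, {"temp": 0.7, "formatter": None})
print(generate_step("hi", **safe))
```

A guard like this in the pipeline's call site would keep it compatible with both old and new mlx-lm signatures, at the cost of silently ignoring arguments the backend no longer understands.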
