
OpenAIAssistantRunnable ainvoke calls _get_response instead of _aget_response #32398

@lucas-mooreslabai

Checked other resources

  • This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
  • I added a clear and descriptive title that summarizes this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Example Code

Taken directly from langchain/agents/openai_assistant/base.py:

    @override
    async def ainvoke(
        self,
        input: dict,
        config: Optional[RunnableConfig] = None,
        **kwargs: Any,
    ) -> OutputType:
        """Async invoke assistant.

        Args:
            input: Runnable input dict that can have:
                content: User message when starting a new run.
                thread_id: Existing thread to use.
                run_id: Existing run to use. Should only be supplied when providing
                    the tool output for a required action after an initial invocation.
                message_metadata: Metadata to associate with a new message.
                thread_metadata: Metadata to associate with new thread. Only relevant
                    when a new thread is created.
                instructions: Overrides the instructions of the assistant.
                additional_instructions: Appends additional instructions.
                model: Override Assistant model for this run.
                tools: Override Assistant tools for this run.
                parallel_tool_calls: Allow Assistant to set parallel_tool_calls
                    for this run.
                top_p: Override Assistant top_p for this run.
                temperature: Override Assistant temperature for this run.
                max_completion_tokens: Allow setting max_completion_tokens for this run.
                max_prompt_tokens: Allow setting max_prompt_tokens for this run.
                run_metadata: Metadata to associate with new run.
            config: Runnable config. Defaults to None.
            kwargs: Additional arguments.

        Return:
            If self.as_agent, will return
                Union[List[OpenAIAssistantAction], OpenAIAssistantFinish].
                Otherwise, will return OpenAI types
                Union[List[ThreadMessage], List[RequiredActionFunctionToolCall]].
        """

        config = config or {}
        callback_manager = CallbackManager.configure(
            inheritable_callbacks=config.get("callbacks"),
            inheritable_tags=config.get("tags"),
            inheritable_metadata=config.get("metadata"),
        )
        run_manager = callback_manager.on_chain_start(
            dumpd(self),
            input,
            name=config.get("run_name") or self.get_name(),
        )
        try:
            # Being run within AgentExecutor and there are tool outputs to submit.
            if self.as_agent and input.get("intermediate_steps"):
                tool_outputs = await self._aparse_intermediate_steps(
                    input["intermediate_steps"],
                )
                run = await self.async_client.beta.threads.runs.submit_tool_outputs(
                    **tool_outputs,
                )
            # Starting a new thread and a new run.
            elif "thread_id" not in input:
                thread = {
                    "messages": [
                        {
                            "role": "user",
                            "content": input["content"],
                            "metadata": input.get("message_metadata"),
                        },
                    ],
                    "metadata": input.get("thread_metadata"),
                }
                run = await self._acreate_thread_and_run(input, thread)
            # Starting a new run in an existing thread.
            elif "run_id" not in input:
                _ = await self.async_client.beta.threads.messages.create(
                    input["thread_id"],
                    content=input["content"],
                    role="user",
                    metadata=input.get("message_metadata"),
                )
                run = await self._acreate_run(input)
            # Submitting tool outputs to an existing run, outside the AgentExecutor
            # framework.
            else:
                run = await self.async_client.beta.threads.runs.submit_tool_outputs(
                    **input,
                )
            run = await self._await_for_run(run.id, run.thread_id)
        except BaseException as e:
            run_manager.on_chain_error(e)
            raise
        try:
            response = self._get_response(run)
        except BaseException as e:
            run_manager.on_chain_error(e, metadata=run.dict())
            raise
        else:
            run_manager.on_chain_end(response)
            return response

The final step of ainvoke calls the synchronous _get_response, so the run result is fetched through blocking client calls from inside the async path. It should instead call _aget_response, which is defined in the same file but unused:

    async def _aget_response(self, run: Any) -> Any:
        # TODO: Pagination

        if run.status == "completed":
            import openai

            major_version = int(openai.version.VERSION.split(".")[0])
            minor_version = int(openai.version.VERSION.split(".")[1])
            version_gte_1_14 = (major_version > 1) or (
                major_version == 1 and minor_version >= 14  # noqa: PLR2004
            )

            messages = await self.async_client.beta.threads.messages.list(
                run.thread_id,
                order="asc",
            )
            new_messages = [msg for msg in messages if msg.run_id == run.id]
            if not self.as_agent:
                return new_messages
            answer: Any = [
                msg_content for msg in new_messages for msg_content in msg.content
            ]
            if all(
                (
                    isinstance(content, openai.types.beta.threads.TextContentBlock)
                    if version_gte_1_14
                    else isinstance(
                        content,
                        openai.types.beta.threads.MessageContentText,  # type: ignore[attr-defined]
                    )
                )
                for content in answer
            ):
                answer = "\n".join(content.text.value for content in answer)
            return OpenAIAssistantFinish(
                return_values={
                    "output": answer,
                    "thread_id": run.thread_id,
                    "run_id": run.id,
                },
                log="",
                run_id=run.id,
                thread_id=run.thread_id,
            )
        if run.status == "requires_action":
            if not self.as_agent:
                return run.required_action.submit_tool_outputs.tool_calls
            actions = []
            for tool_call in run.required_action.submit_tool_outputs.tool_calls:
                function = tool_call.function
                try:
                    args = json.loads(function.arguments, strict=False)
                except JSONDecodeError as e:
                    msg = (
                        f"Received invalid JSON function arguments: "
                        f"{function.arguments} for function {function.name}"
                    )
                    raise ValueError(msg) from e
                if len(args) == 1 and "__arg1" in args:
                    args = args["__arg1"]
                actions.append(
                    OpenAIAssistantAction(
                        tool=function.name,
                        tool_input=args,
                        tool_call_id=tool_call.id,
                        log="",
                        run_id=run.id,
                        thread_id=run.thread_id,
                    ),
                )
            return actions
        run_info = json.dumps(run.dict(), indent=2)
        msg = f"Unexpected run status: {run.status}. Full run info:\n\n{run_info}"
        raise ValueError(msg)
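
A minimal sketch of the fix, assuming nothing else in ainvoke needs to change: the final try block would await the async helper so the messages are fetched via async_client rather than the blocking sync client.

        try:
            # Await the async helper so the response is retrieved with
            # self.async_client instead of the synchronous client.
            response = await self._aget_response(run)
        except BaseException as e:
            run_manager.on_chain_error(e, metadata=run.dict())
            raise
        else:
            run_manager.on_chain_end(response)
            return response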

Error Message and Stack Trace (if applicable)

No response

Description

When used asynchronously, OpenAIAssistantRunnable.ainvoke retrieves the completed run's response through the synchronous _get_response helper instead of the async _aget_response defined in the same module.
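
For context, a minimal sketch of how the affected path is reached; the assistant_id is a placeholder and an OPENAI_API_KEY is assumed to be set in the environment. Any ainvoke call ends up in the code quoted above, where the final response is still retrieved via the synchronous _get_response.

    import asyncio

    from langchain.agents.openai_assistant import OpenAIAssistantRunnable


    async def main() -> None:
        # Placeholder assistant id; requires OPENAI_API_KEY in the environment.
        assistant = OpenAIAssistantRunnable(assistant_id="asst_123", as_agent=False)
        # ainvoke drives the async code path, but the run result at the end is
        # fetched with the synchronous _get_response helper, not _aget_response.
        output = await assistant.ainvoke({"content": "What is 2 + 2?"})
        print(output)


    asyncio.run(main())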

System Info

System Information

OS: Linux
OS Version: #24~24.04.3-Ubuntu SMP PREEMPT_DYNAMIC Mon Jul 7 16:39:17 UTC 2
Python Version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0]

Package Information

langchain_core: 0.3.72
langchain: 0.3.27
langsmith: 0.4.4
langchain_openai: 0.3.27
langchain_tavily: 0.2.6
langchain_text_splitters: 0.3.9
langgraph_api: 0.2.92
langgraph_cli: 0.3.3
langgraph_license: Installed. No version info available.
langgraph_runtime: Installed. No version info available.
langgraph_runtime_inmem: 0.4.0
langgraph_sdk: 0.1.72

Optional packages not installed

langserve

Other Dependencies

aiohttp: 3.12.13
async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
blockbuster<2.0.0,>=1.5.24: Installed. No version info available.
click>=8.1.7: Installed. No version info available.
cloudpickle>=3.0.0: Installed. No version info available.
cryptography<45.0,>=42.0.0: Installed. No version info available.
httpx: 0.28.1
httpx>=0.25.0: Installed. No version info available.
httpx>=0.25.2: Installed. No version info available.
jsonpatch<2.0,>=1.33: Installed. No version info available.
jsonschema-rs<0.30,>=0.20.0: Installed. No version info available.
langchain-anthropic;: Installed. No version info available.
langchain-aws;: Installed. No version info available.
langchain-azure-ai;: Installed. No version info available.
langchain-cohere;: Installed. No version info available.
langchain-community;: Installed. No version info available.
langchain-core<1.0.0,>=0.3.66: Installed. No version info available.
langchain-core<1.0.0,>=0.3.72: Installed. No version info available.
langchain-core>=0.3.64: Installed. No version info available.
langchain-deepseek;: Installed. No version info available.
langchain-fireworks;: Installed. No version info available.
langchain-google-genai;: Installed. No version info available.
langchain-google-vertexai;: Installed. No version info available.
langchain-groq;: Installed. No version info available.
langchain-huggingface;: Installed. No version info available.
langchain-mistralai;: Installed. No version info available.
langchain-ollama;: Installed. No version info available.
langchain-openai;: Installed. No version info available.
langchain-perplexity;: Installed. No version info available.
langchain-text-splitters<1.0.0,>=0.3.9: Installed. No version info available.
langchain-together;: Installed. No version info available.
langchain-xai;: Installed. No version info available.
langgraph-api>=0.1.20;: Installed. No version info available.
langgraph-checkpoint>=2.0.23: Installed. No version info available.
langgraph-checkpoint>=2.0.25: Installed. No version info available.
langgraph-runtime-inmem<0.5,>=0.4.0: Installed. No version info available.
langgraph-runtime-inmem>=0.0.8;: Installed. No version info available.
langgraph-sdk>=0.1.0;: Installed. No version info available.
langgraph-sdk>=0.1.71: Installed. No version info available.
langgraph>=0.2: Installed. No version info available.
langgraph>=0.3.27: Installed. No version info available.
langsmith-pyo3: Installed. No version info available.
langsmith>=0.1.17: Installed. No version info available.
langsmith>=0.3.45: Installed. No version info available.
openai-agents: Installed. No version info available.
openai<2.0.0,>=1.86.0: Installed. No version info available.
opentelemetry-api: Installed. No version info available.
opentelemetry-exporter-otlp-proto-http: Installed. No version info available.
opentelemetry-sdk: Installed. No version info available.
orjson: 3.10.18
orjson>=3.10.1: Installed. No version info available.
orjson>=3.9.7: Installed. No version info available.
packaging: 24.2
packaging>=23.2: Installed. No version info available.
pydantic: 2.11.7
pydantic<3.0.0,>=2.7.4: Installed. No version info available.
pydantic>=2.7.4: Installed. No version info available.
pyjwt>=2.9.0: Installed. No version info available.
pytest: 8.4.0
python-dotenv>=0.8.0;: Installed. No version info available.
PyYAML>=5.3: Installed. No version info available.
requests: 2.32.4
requests-toolbelt: 1.0.0
requests<3,>=2: Installed. No version info available.
rich: Installed. No version info available.
SQLAlchemy<3,>=1.4: Installed. No version info available.
sse-starlette<2.2.0,>=2.1.0: Installed. No version info available.
sse-starlette>=2: Installed. No version info available.
starlette>=0.37: Installed. No version info available.
starlette>=0.38.6: Installed. No version info available.
structlog<26,>=24.1.0: Installed. No version info available.
structlog>23: Installed. No version info available.
tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
tenacity>=8.0.0: Installed. No version info available.
tiktoken<1,>=0.7: Installed. No version info available.
truststore>=0.1: Installed. No version info available.
typing-extensions>=4.7: Installed. No version info available.
uvicorn>=0.26.0: Installed. No version info available.
watchfiles>=0.13: Installed. No version info available.
zstandard: 0.23.0
