
RunnableRetry.batch/abatch can return corrupted outputs when some items succeed on retry and others still fail #35475

@yangbaechu

Description


Checked other resources

  • This is a bug, not a usage question.
  • I added a clear and descriptive title that summarizes this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • This is not related to the langchain-community package.
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Package (Required)

  • langchain
  • langchain-openai
  • langchain-anthropic
  • langchain-classic
  • langchain-core
  • langchain-model-profiles
  • langchain-tests
  • langchain-text-splitters
  • langchain-chroma
  • langchain-deepseek
  • langchain-exa
  • langchain-fireworks
  • langchain-groq
  • langchain-huggingface
  • langchain-mistralai
  • langchain-nomic
  • langchain-ollama
  • langchain-openrouter
  • langchain-perplexity
  • langchain-qdrant
  • langchain-xai
  • Other / not sure / general

Related Issues / PRs

No response

Reproduction Steps / Example Code (Python)

from langchain_core.runnables import RunnableLambda


failed_once = False


def process_item(name: str) -> str:
    global failed_once

    if name == "ok":
        return "ok-result"
    if name == "retry_then_ok":
        if not failed_once:
            failed_once = True
            raise ValueError("transient failure")
        return "retry-result"
    raise ValueError("unrecoverable input")


runnable = RunnableLambda(process_item).with_retry(
    stop_after_attempt=2,
    retry_if_exception_type=(ValueError,),
    wait_exponential_jitter=False,
)

result = runnable.batch(
    ["ok", "retry_then_ok", "always_fail"],
    return_exceptions=True,
)

# Expected: outputs align positionally with inputs, so the third item remains an exception
print(result)
assert isinstance(result[2], Exception)
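
For contrast, the expected mapping can be sketched without LangChain at all. The following is a hypothetical, stdlib-only model (not LangChain's implementation) of what `with_retry` plus `batch(return_exceptions=True)` should do: retry only the failing indices, and write every result back to its original slot.

```python
def batch_with_retry(fn, inputs, attempts=2):
    """Sketch of index-preserving batch-with-retry semantics (not LangChain code)."""
    results = [None] * len(inputs)
    pending = list(range(len(inputs)))       # indices that still need a (re)run
    for _ in range(attempts):
        still_failing = []
        for i in pending:
            try:
                results[i] = fn(inputs[i])   # success lands in its own slot
            except Exception as exc:
                results[i] = exc             # failure stays in its own slot
                still_failing.append(i)
        pending = still_failing              # only failed indices are retried
        if not pending:
            break
    return results


flaky_done = False

def demo(name: str) -> str:
    global flaky_done
    if name == "ok":
        return "ok-result"
    if name == "retry_then_ok":
        if not flaky_done:
            flaky_done = True
            raise ValueError("transient")
        return "retry-result"
    raise ValueError("permanent")

out = batch_with_retry(demo, ["ok", "retry_then_ok", "always_fail"])
# The third slot stays a ValueError; the first two hold their own results.
```

Under these semantics, no slot can ever be overwritten by another input's result, because each retry writes back through the original index `i`.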

Error Message and Stack Trace (if applicable)

Description

  • I'm using RunnableLambda(...).with_retry(...).batch(...) with return_exceptions=True.
  • I expect an input that still fails after all retry attempts to remain an exception in the matching output position.
  • Instead, if one item succeeds on retry while another still fails, the failing item's output slot can be overwritten with the successful item's result.
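
Since the title also mentions abatch, the analogous index-preserving semantics on the async path can be sketched with the stdlib alone (again a hypothetical model, not LangChain's implementation), using `asyncio.gather(..., return_exceptions=True)` to capture per-item failures in order:

```python
import asyncio

async def abatch_with_retry(fn, inputs, attempts=2):
    """Async sketch: retry only failing indices, keep results position-aligned."""
    results = [None] * len(inputs)
    pending = list(range(len(inputs)))
    for _ in range(attempts):
        # Run the still-pending inputs concurrently; gather preserves order,
        # so outs[k] corresponds to pending[k].
        outs = await asyncio.gather(
            *(fn(inputs[i]) for i in pending), return_exceptions=True
        )
        still_failing = []
        for i, out in zip(pending, outs):
            results[i] = out                 # write back to the ORIGINAL index
            if isinstance(out, BaseException):
                still_failing.append(i)
        pending = still_failing
        if not pending:
            break
    return results


attempt_count = {"retry_then_ok": 0}

async def aprocess(name: str) -> str:
    if name == "ok":
        return "ok-result"
    if name == "retry_then_ok":
        attempt_count[name] += 1
        if attempt_count[name] == 1:
            raise ValueError("transient")
        return "retry-result"
    raise ValueError("permanent")

out = asyncio.run(
    abatch_with_retry(aprocess, ["ok", "retry_then_ok", "always_fail"])
)
```

Mapping retry results through the original index, as above, is what keeps the still-failing item an exception in its own position.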

System Info

System Information

OS: Linux
OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 5 18:30:46 UTC 2025
Python Version: 3.10.12 (main, Jan 26 2026, 14:55:28) [GCC 11.4.0]

Package Information

langchain_core: 1.2.16
langchain: 1.2.10
langsmith: 0.7.9
langchain_openai: 1.1.10
langgraph_sdk: 0.3.9

Optional packages not installed

deepagents
deepagents-cli

Other Dependencies

httpx: 0.28.1
jsonpatch: 1.33
langgraph: 1.0.10
openai: 2.24.0
orjson: 3.11.7
packaging: 26.0
pydantic: 2.12.5
pyyaml: 6.0.3
requests: 2.32.5
requests-toolbelt: 1.0.0
tenacity: 9.1.4
tiktoken: 0.12.0
typing-extensions: 4.15.0
uuid-utils: 0.14.1
xxhash: 3.6.0
zstandard: 0.25.0


    Labels

    bug (Related to a bug, vulnerability, unexpected error with an existing feature), core (`langchain-core` package issues & PRs), external
