---
title: Migrating from LangGraph/LangChain to Haystack
id: migrating-from-langgraphlangchain-to-haystack
slug: /migrating-from-langgraphlangchain-to-haystack
description: Whether you're planning to migrate to Haystack or just comparing LangChain/LangGraph and Haystack to choose the proper framework for your AI application, this guide will help you map common patterns between frameworks.
---


# Migrating from LangGraph/LangChain to Haystack

Whether you're planning to migrate to Haystack or just comparing LangChain/LangGraph and Haystack to choose the proper framework for your AI application, this guide will help you map common patterns between frameworks.

In this guide, you'll learn how to translate core LangGraph concepts, like nodes, edges, and state, into Haystack components, pipelines, and agents. The goal is to preserve your existing logic while leveraging Haystack's flexible, modular ecosystem.

It's most accurate to think of Haystack as covering both LangChain and LangGraph territory: Haystack provides the building blocks for everything from simple sequential flows to fully agentic workflows with custom logic.

## Why you might explore or migrate to Haystack

You might consider Haystack if you want to build your AI applications on a stable, actively maintained foundation with an intuitive developer experience.

- **Unified orchestration framework.** Haystack supports both deterministic pipelines and adaptive agentic flows, letting you combine them with the right level of autonomy in a single system.
- **High-quality codebase and design.** Haystack is engineered for clarity and reliability, with well-tested components, predictable APIs, and a modular architecture that simply works.
- **Ease of customization.** Extend core components, add your own logic, or integrate custom tools with minimal friction.
- **Reduced cognitive overhead.** Haystack extends familiar ideas rather than introducing new abstractions, helping you stay focused on applying concepts, not learning them.
- **Comprehensive documentation and learning resources.** Every concept, from components and pipelines to agents and tools, is supported by detailed and well-maintained docs, tutorials, and educational content.
- **Frequent release cycles.** New features, improvements, and bug fixes ship regularly, so the framework evolves quickly while maintaining backward compatibility.
- **Scalable from prototype to production.** Start small and expand easily: the same code you use for a proof of concept can power enterprise-grade deployments through the whole Haystack ecosystem.

## Concept mapping: LangGraph/LangChain → Haystack

Here's a table of key concepts and their approximate equivalents between the two frameworks. Use this when auditing your LangGraph/LangChain architecture and planning the migration.

| LangGraph/LangChain concept | Haystack equivalent | Notes |
| --- | --- | --- |
| Node | Component | A unit of logic in both frameworks. In Haystack, a Component can run standalone, in a pipeline, or as a tool with an agent. You can create custom components or use built-in ones like Generators and Retrievers. |
| Edge / routing logic | Connection / Branching / Looping | Pipelines connect component inputs and outputs with type-checked links. They support branching, routing, and loops for flexible flow control. |
| Graph / Workflow (nodes + edges) | Pipeline or Agent | LangGraph explicitly defines graphs; Haystack achieves similar orchestration through pipelines, or Agents when adaptive logic is needed. |
| Subgraphs | SuperComponent | A SuperComponent wraps a full pipeline and exposes it as a single reusable component. |
| Models / LLMs | ChatGenerator components | Haystack's ChatGenerators unify access to open and proprietary models, with support for streaming, structured outputs, and multimodal data. |
| Agent creation (`create_agent`, multi-agent from LangChain) | Agent component | Haystack provides a simple, pipeline-based Agent abstraction that handles reasoning, tool use, and multi-step execution. |
| Tool (LangChain) | Tool / PipelineTool / ComponentTool / MCPTool | Haystack exposes Python functions, pipelines, components, external APIs, and MCP servers as agent tools. |
| Multi-agent collaboration (LangChain) | Multi-agent system | Using ComponentTool, agents can use other agents as tools, enabling multi-agent architectures within one framework. |
| Model Context Protocol (`load_mcp_tools`, `MultiServerMCPClient`) | MCPTool, MCPToolset, StdioServerInfo, StreamableHttpServerInfo | Haystack provides various MCP primitives for connecting to multiple MCP servers and organizing MCP toolsets. |
| Memory (state, short-term, long-term) | Memory (Agent State, short-term, long-term) | Agent State provides a structured way to share data between tools and store intermediate results during agent execution. For short-term memory, Haystack offers a ChatMessage Store to persist chat history. More memory options are coming soon. |
| Time travel (checkpoints) | Breakpoints (Breakpoint, AgentBreakpoint, ToolBreakpoint, Snapshot) | Breakpoints let you pause, inspect, modify, and resume a pipeline, agent, or tool for debugging or iterative development. |
| Human-in-the-loop (interrupts / commands) | Human-in-the-loop (ConfirmationStrategy / ConfirmationPolicy) | Haystack uses confirmation strategies to pause or block execution and gather user feedback. |

## Ecosystem and Tooling Mapping: LangChain → Haystack

At deepset, we're building the tools to make LLMs truly usable in production, open source and beyond.

- **Haystack, AI orchestration framework** → Open source AI framework for building production-ready, AI-powered agents and applications, on your own or with community support.
- **Haystack Enterprise Starter** → Private and secure engineering support, advanced pipeline templates, deployment guides, and early-access features for teams needing more support and guidance.
- **Haystack Enterprise Platform** → An enterprise-ready platform for teams running Gen AI apps in production, with security, governance, and scalability built in, plus a free version.

Here's how the products in the two ecosystems map to each other:

| LangChain ecosystem | Haystack ecosystem | Notes |
| --- | --- | --- |
| LangChain, LangGraph, Deep Agents | Haystack | Core AI orchestration framework for components, pipelines, and agents. Supports deterministic workflows and agentic execution with explicit, modular building blocks. |
| LangSmith (observability) | Haystack Enterprise Platform | Integrated tooling for building, debugging, and iterating. Assemble agents and pipelines visually with the Builder, which includes component validation, testing, and debugging. The Prompt Explorer is used to iterate on and evaluate models and prompts. Built-in chat interfaces enable fast SME and stakeholder feedback. A collaborative building environment for engineers and business. |
| LangSmith (deployment) | Hayhooks; Haystack Enterprise Starter (deployment guides + advanced best-practice templates); Haystack Enterprise Platform (1-click deployment, on-prem/VPC options) | Multiple deployment paths: lightweight API exposure via Hayhooks, structured enterprise deployment patterns through Haystack Enterprise Starter, and fully managed or self-hosted deployment through the Haystack Enterprise Platform. |

## Code Comparison

### Agentic Flows with Haystack vs LangGraph

Here's an example of a graph-based agent with access to a list of tools, comparing the LangGraph and Haystack APIs.

#### Step 1: Define tools

Both frameworks use a @tool decorator to expose Python functions as tools the LLM can call. The function signature and docstring define the tool's interface, which the LLM uses to understand when and how to invoke each tool.
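
As an aside, the interface both decorators extract can be sketched with the standard library alone. This is an illustration only — `tool_interface` is a hypothetical helper, and real frameworks derive a richer JSON schema from the same ingredients:

```python
import inspect


def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`."""
    return a * b


def tool_interface(fn):
    # Roughly what a @tool decorator reads off the function itself:
    # its name, its docstring, and its typed parameters.
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            name: param.annotation.__name__
            for name, param in sig.parameters.items()
        },
    }


print(tool_interface(multiply))
# {'name': 'multiply', 'description': 'Multiply `a` and `b`.', 'parameters': {'a': 'int', 'b': 'int'}}
```

This is why descriptive docstrings and type hints matter: they are the only information the LLM gets about when and how to call the tool.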

**Haystack**

```python
# pip install haystack-ai anthropic-haystack

from haystack.tools import tool


# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a * b


@tool
def add(a: int, b: int) -> int:
    """Add `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a + b


@tool
def divide(a: int, b: int) -> float:
    """Divide `a` by `b`.

    Args:
        a: First int
        b: Second int
    """
    return a / b
```

**LangGraph**

```python
# pip install langchain-anthropic langgraph langchain

from langchain.tools import tool


# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a * b


@tool
def add(a: int, b: int) -> int:
    """Add `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a + b


@tool
def divide(a: int, b: int) -> float:
    """Divide `a` by `b`.

    Args:
        a: First int
        b: Second int
    """
    return a / b
```

#### Step 2: Initialize the LLM with tools

Both frameworks connect tools to the LLM, but with different APIs. In Haystack, tools are passed directly to the ChatGenerator component during initialization. In LangGraph, you first initialize the model, then bind tools using .bind_tools() to create a tool-enabled LLM instance.

**Haystack**

```python
from haystack_integrations.components.generators.anthropic import AnthropicChatGenerator

# Augment the LLM with tools
tools = [add, multiply, divide]
model = AnthropicChatGenerator(
    model="claude-sonnet-4-5-20250929",
    generation_kwargs={"temperature": 0},
    tools=tools,
)
```

**LangGraph**

```python
from langchain.chat_models import init_chat_model

# Augment the LLM with tools
model = init_chat_model(
    "claude-sonnet-4-5-20250929",
    temperature=0,
)
tools = [add, multiply, divide]
tools_by_name = {tool.name: tool for tool in tools}
llm_with_tools = model.bind_tools(tools)
```

#### Step 3: Set up message handling and LLM invocation

This is where the frameworks diverge more significantly. In Haystack, you'll use a custom component (MessageCollector) to accumulate conversation history across the agentic loop. LangGraph instead defines a node function (llm_call) that operates on MessagesState, a built-in state container that automatically manages message history.

**Haystack**

```python
from typing import Any, Dict, List

from haystack import component
from haystack.core.component.types import Variadic
from haystack.dataclasses import ChatMessage


# Custom component to temporarily store the messages
@component
class MessageCollector:
    def __init__(self):
        self._messages = []

    @component.output_types(messages=List[ChatMessage])
    def run(self, messages: Variadic[List[ChatMessage]]) -> Dict[str, Any]:
        self._messages.extend([msg for inner in messages for msg in inner])
        return {"messages": self._messages}

    def clear(self):
        self._messages = []


message_collector = MessageCollector()
```

**LangGraph**

```python
from typing import Literal

from langgraph.graph import MessagesState
from langchain.messages import SystemMessage, ToolMessage


# Nodes
def llm_call(state: MessagesState):
    # LLM decides whether to call a tool or not
    return {
        "messages": [
            llm_with_tools.invoke(
                [
                    SystemMessage(
                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                    )
                ]
                + state["messages"]
            )
        ]
    }
```

#### Step 4: Execute tool calls

When the LLM decides to use a tool, it must be invoked and its result returned. Haystack provides a built-in ToolInvoker component that handles this automatically. LangGraph requires you to define a custom node function that iterates over tool calls, invokes each tool, and wraps the results in ToolMessage objects.

**Haystack**

```python
from haystack.components.tools import ToolInvoker

# Tool invoker component to execute a tool call
tool_invoker = ToolInvoker(tools=tools)
```

**LangGraph**

```python
def tool_node(state: dict):
    # Performs the tool call
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}
```

#### Step 5: Implement conditional routing logic

After the LLM responds, we need to decide whether to continue the loop (if tools were called) or finish (if the LLM provided a final answer). Haystack uses a ConditionalRouter component with declarative route conditions written in Jinja2 templates. LangGraph uses a conditional edge function (should_continue) that inspects the state and returns the next node or END.

**Haystack**

```python
from haystack.components.routers import ConditionalRouter

# Route to the tool invoker or to the end user,
# based on whether the LLM made a tool call
routes = [
    {
        "condition": "{{replies[0].tool_calls | length > 0}}",
        "output": "{{replies}}",
        "output_name": "there_are_tool_calls",
        "output_type": List[ChatMessage],
    },
    {
        "condition": "{{replies[0].tool_calls | length == 0}}",
        "output": "{{replies}}",
        "output_name": "final_replies",
        "output_type": List[ChatMessage],
    },
]
router = ConditionalRouter(routes, unsafe=True)
```

**LangGraph**

```python
from langgraph.graph import END


# Conditional edge function: route to the tool node or end,
# based on whether the LLM made a tool call
def should_continue(state: MessagesState) -> Literal["tool_node", END]:
    # Decide whether to continue the loop or stop
    messages = state["messages"]
    last_message = messages[-1]

    # If the LLM made a tool call, perform an action
    if last_message.tool_calls:
        return "tool_node"

    # Otherwise, we stop (reply to the user)
    return END
```

#### Step 6: Assemble the workflow

This is where you wire together all the components or nodes. Haystack uses a Pipeline where you explicitly add components and connect their inputs and outputs, creating a directed graph with loops. LangGraph uses a StateGraph where you add nodes and edges, then compile the graph into an executable agent. Both approaches achieve the same agentic loop, but with different levels of explicitness.

**Haystack**

```python
from haystack import Pipeline

# Build the pipeline
agent_pipe = Pipeline()

# Add components
agent_pipe.add_component("message_collector", message_collector)
agent_pipe.add_component("llm", model)
agent_pipe.add_component("router", router)
agent_pipe.add_component("tool_invoker", tool_invoker)

# Add connections
agent_pipe.connect("message_collector", "llm.messages")
agent_pipe.connect("llm.replies", "router")
# If there are tool calls, send them to the ToolInvoker
agent_pipe.connect("router.there_are_tool_calls", "tool_invoker")
agent_pipe.connect("router.there_are_tool_calls", "message_collector")
agent_pipe.connect("tool_invoker.tool_messages", "message_collector")
```

**LangGraph**

```python
from langgraph.graph import StateGraph, START

# Build the workflow
agent_builder = StateGraph(MessagesState)

# Add nodes
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("tool_node", tool_node)

# Add edges to connect nodes
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges(
    "llm_call",
    should_continue,
    ["tool_node", END],
)
agent_builder.add_edge("tool_node", "llm_call")

# Compile the agent
agent = agent_builder.compile()
```

#### Step 7: Run the agent

Finally, we execute the agent with a user message. Haystack calls .run() on the pipeline with initial messages, while LangGraph calls .invoke() on the compiled agent. Both return the conversation history.

**Haystack**

```python
# Run the pipeline
messages = [
    ChatMessage.from_system(text="You are a helpful assistant tasked with performing arithmetic on a set of inputs."),
    ChatMessage.from_user(text="Add 3 and 4."),
]
result = agent_pipe.run({"messages": messages})
result
```

**LangGraph**

```python
from langchain.messages import HumanMessage

# Invoke
messages = [HumanMessage(content="Add 3 and 4.")]
messages = agent.invoke({"messages": messages})
for m in messages["messages"]:
    m.pretty_print()
```

### Creating Agents

The Agentic Flows walkthrough above showed how to assemble an agent loop manually from pipeline primitives. Haystack also provides a high-level Agent class that wraps the full loop (LLM calls, tool invocation, and iteration) into a single component. LangChain offers an equivalent shortcut through `create_agent` (the successor to `create_react_agent` from `langgraph.prebuilt`). Both produce a ReAct-style agent that handles tool calling and multi-step reasoning automatically.

**Haystack**

```python
# pip install haystack-ai anthropic-haystack

from haystack.components.agents import Agent
from haystack_integrations.components.generators.anthropic import AnthropicChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`."""
    return a * b


@tool
def add(a: int, b: int) -> int:
    """Add `a` and `b`."""
    return a + b


# Create an agent - the agentic loop is handled automatically
agent = Agent(
    chat_generator=AnthropicChatGenerator(
        model="claude-sonnet-4-5-20250929",
        generation_kwargs={"temperature": 0},
    ),
    tools=[multiply, add],
    system_prompt="You are a helpful assistant that performs arithmetic.",
)

result = agent.run(messages=[
    ChatMessage.from_user("What is 3 multiplied by 7, then add 5?")
])
print(result["messages"][-1].text)
```

**LangChain**

```python
# pip install langchain-anthropic langgraph

from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage, SystemMessage


@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`."""
    return a * b


@tool
def add(a: int, b: int) -> int:
    """Add `a` and `b`."""
    return a + b


# Create an agent - the agentic loop is handled automatically
model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    temperature=0,
)
agent = create_agent(
    model,
    tools=[multiply, add],
    system_prompt=SystemMessage(
        content="You are a helpful assistant that performs arithmetic."
    ),
)

result = agent.invoke({
    "messages": [HumanMessage(content="What is 3 multiplied by 7, then add 5?")]
})
print(result["messages"][-1].content)
```

### Connecting to Document Stores

Document stores are the foundation of retrieval-augmented generation (RAG). In Haystack, document stores integrate natively with pipeline components like Retrievers and Prompt Builders via explicit typed connections. LangChain centers retrieval on its vector store abstraction, composed using LCEL (LangChain Expression Language).

Both frameworks offer in-memory stores for prototyping and a wide range of production backends (Elasticsearch, Qdrant, Weaviate, Pinecone, and more) via integrations.

#### Step 1: Create a document store and add documents

**Haystack**

```python
# pip install haystack-ai sentence-transformers

from haystack import Document
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.embedders import SentenceTransformersDocumentEmbedder

# Embed and write documents to the document store
document_store = InMemoryDocumentStore()

doc_embedder = SentenceTransformersDocumentEmbedder(
    model="sentence-transformers/all-MiniLM-L6-v2"
)
# Load the embedding model before running the component standalone
doc_embedder.warm_up()

docs = [
    Document(content="Paris is the capital of France."),
    Document(content="Berlin is the capital of Germany."),
    Document(content="Tokyo is the capital of Japan."),
]
docs_with_embeddings = doc_embedder.run(docs)["documents"]
document_store.write_documents(docs_with_embeddings)
```

**LangChain**

```python
# pip install langchain-community langchain-huggingface sentence-transformers

from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.vectorstores import InMemoryVectorStore
from langchain_core.documents import Document

# Embed and add documents to the vector store
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
vectorstore = InMemoryVectorStore(embedding=embeddings)
vectorstore.add_documents([
    Document(page_content="Paris is the capital of France."),
    Document(page_content="Berlin is the capital of Germany."),
    Document(page_content="Tokyo is the capital of Japan."),
])
```

#### Step 2: Build a RAG pipeline

**Haystack**

```python
from haystack import Pipeline
from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack.components.builders import ChatPromptBuilder
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicChatGenerator

# ChatPromptBuilder expects a List[ChatMessage] as the template
template = [ChatMessage.from_user("""
Given the following documents, answer the question.
{% for doc in documents %}{{ doc.content }}{% endfor %}
Question: {{ question }}
""")]

rag_pipeline = Pipeline()
rag_pipeline.add_component(
    "text_embedder",
    SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2"),
)
rag_pipeline.add_component(
    "retriever", InMemoryEmbeddingRetriever(document_store=document_store)
)
rag_pipeline.add_component("prompt_builder", ChatPromptBuilder(template=template))
rag_pipeline.add_component("llm", AnthropicChatGenerator(model="claude-sonnet-4-5-20250929"))

rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder.prompt", "llm.messages")

result = rag_pipeline.run({
    "text_embedder": {"text": "What is the capital of France?"},
    "prompt_builder": {"question": "What is the capital of France?"},
})
print(result["llm"]["replies"][0].text)
```

**LangChain**

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough


def format_docs(docs):
    return "\n".join(doc.page_content for doc in docs)


retriever = vectorstore.as_retriever()
model = ChatAnthropic(model="claude-sonnet-4-5-20250929")

template = """
Given the following documents, answer the question.
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

result = rag_chain.invoke("What is the capital of France?")
print(result)
```

### Using MCP Tools

Both frameworks support the Model Context Protocol (MCP), letting agents connect to external tools and services exposed by MCP servers. Haystack provides MCPTool and MCPToolset through the mcp-haystack integration package, which plug directly into the Agent component. LangChain's MCP support relies on the separate langchain-mcp-adapters package and requires an async workflow throughout.

**Haystack**

```python
# pip install haystack-ai mcp-haystack anthropic-haystack

from haystack_integrations.tools.mcp import MCPToolset, StdioServerInfo
from haystack.components.agents import Agent
from haystack_integrations.components.generators.anthropic import AnthropicChatGenerator
from haystack.dataclasses import ChatMessage

# Connect to an MCP server - tools are auto-discovered
toolset = MCPToolset(
    server_info=StdioServerInfo(
        command="uvx",
        args=["mcp-server-fetch"],
    )
)

agent = Agent(
    chat_generator=AnthropicChatGenerator(model="claude-sonnet-4-5-20250929"),
    tools=toolset,
    system_prompt="You are a helpful assistant that can fetch web content.",
)

result = agent.run(messages=[
    ChatMessage.from_user("Fetch the content from https://haystack.deepset.ai")
])
print(result["messages"][-1].text)
```

**LangChain**

```python
# pip install langchain-mcp-adapters langgraph langchain-anthropic

import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage

model = ChatAnthropic(model="claude-sonnet-4-5-20250929")


async def run():
    client = MultiServerMCPClient(
        {
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"],
                "transport": "stdio",
            }
        }
    )
    tools = await client.get_tools()
    agent = create_agent(
        model,
        tools,
        system_prompt=SystemMessage(
            content="You are a helpful assistant that can fetch web content."
        ),
    )
    result = await agent.ainvoke(
        {
            "messages": [
                HumanMessage(content="Fetch the content from https://haystack.deepset.ai")
            ]
        }
    )
    print(result["messages"][-1].content)


asyncio.run(run())
```

## Hear from Haystack Users

See how teams across industries use Haystack to power their production AI systems, from RAG applications to agentic workflows.

> "Haystack allows its users a production ready, easy to use framework that covers just about all of your needs, and allows you to write integrations easily for those it doesn't." - Josh Longenecker, GenAI Specialist at AWS

> "Haystack's design philosophy significantly accelerates development and improves the robustness of AI applications, especially when heading towards production. The emphasis on explicit, modular components truly pays off in the long run." - Rima Hajou, Data & AI Technical Lead at Accenture

### Featured Stories

## Start Building with Haystack

👉 Thinking about migrating or evaluating Haystack? Jump right in with the Haystack Get Started guide or contact our team; we'd love to support you.