A template project demonstrating how to build a sequential agent workflow using LangGraph and LangChain.
This project implements a multi-step agent workflow using LangGraph's `StateGraph`. The agent processes input through three distinct stages:

- Initial processing (`initial_step`)
- LLM-based processing (`model_call`)
- Final output generation (`final_step`)
- `_agent_graph.py`: Main implementation file containing the agent workflow
- `README.md`: This documentation file
The project uses TypedDict classes for strict type checking across different stages:
- `InputState`: Handles initial input parameters
  - `agent_input_a`: First input parameter
  - `agent_input_b`: Second input parameter
- `OverallState`: Manages intermediate state during processing
  - `temp_value_a`: Temporary string value
  - `temp_value_b`: List of temporary string values
- `OutputState`: Defines the final output format
  - `agent_output_value`: Final processed result
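The state classes above can be declared with plain `typing.TypedDict`. The field names match this README; the concrete types of the input and output fields are assumptions (`str` is used here) except where the text specifies them.

```python
from typing import List, TypedDict


class InputState(TypedDict):
    agent_input_a: str  # First input parameter (type assumed)
    agent_input_b: str  # Second input parameter (type assumed)


class OverallState(TypedDict):
    temp_value_a: str        # Temporary string value
    temp_value_b: List[str]  # List of temporary string values


class OutputState(TypedDict):
    agent_output_value: str  # Final processed result (type assumed)
```

Because `TypedDict` classes are plain dicts at runtime, they add static type checking without any serialization overhead when LangGraph passes state between nodes.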
- Initial Step: Performs preliminary processing on input data
- Model Call: Integrates with GPT-4 for advanced processing
  - Uses a custom prompt from LangChain Hub
  - Implements structured output parsing using Pydantic
- Final Step: Transforms processed data into the required output format
- LangChain
- LangGraph
- OpenAI
- Pydantic
`main.py` demonstrates the use of the LangGraph Python SDK to call the agent hosted in LangGraph Cloud.