Replies: 4 comments 3 replies
-
Hi!
-
And here we go. Admittedly, the whole construct is a pain....
-
@wuodar Are you calling workflow.compile on every invocation of the Lambda?
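The point behind that question: module-level code in a Lambda runs once per container (cold start), so a graph compiled at import time is reused across warm invocations, while compiling inside the handler repeats the work on every call. A minimal sketch of the pattern, with hypothetical names (build_workflow stands in for your StateGraph construction code; it is not a LangGraph API):

```python
# Hypothetical sketch of the compile-once Lambda pattern.
COMPILE_COUNT = 0

def build_workflow():
    """Placeholder for assembling and compiling the graph."""
    global COMPILE_COUNT
    COMPILE_COUNT += 1                        # track how often we "compile"
    return lambda payload: {"echo": payload}  # stand-in for the compiled graph

# Module scope runs once per container, so the compiled graph
# is shared by all warm invocations that hit this container.
GRAPH = build_workflow()

def handler(event, context):
    # Each invocation reuses GRAPH instead of rebuilding it.
    return GRAPH(event)

# Simulate three warm invocations of the same container.
for i in range(3):
    handler({"n": i}, None)
print(COMPILE_COUNT)  # the graph was built only once
```

Moving the compile out of the handler does not change correctness, only cold-start versus per-request cost.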
-
Hey @wuodar, can we talk about your experience using LangGraph on Lambda? Were there any issues in terms of scalability, latency, and availability?
-
Hello,
I'm developing an assistant using LangChain + LangGraph, deployed on AWS Lambda, with communication handled via the Amazon API Gateway WebSocket API.
I want to incorporate human-in-the-loop functionality, but first I need to implement a checkpointer for my chosen database, which looks like a significant amount of work. Before committing to that, I'd like to explore whether LangGraph interrupts could work in my environment at all.
The main challenge I see is that AWS Lambda is serverless, meaning that if I interrupt the graph to send a message to the user via WebSocket, the Lambda instance handling the execution will terminate before the user responds. When the user replies, their response will be received by a new Lambda instance.
Given this, is it possible to implement LangGraph interrupts in such an environment? If so, could you provide an example or guidance on how to make it work?
The example in the documentation works if everything happens within a single instance, but on AWS Lambda, after sending a message to the user, the instance will be terminated. Then, when the user responds, their input will be processed by a new Lambda instance.
Can LangGraph handle this scenario? If so, how should I structure the workflow to properly resume execution?
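For context, the shape of the answer is that the pausing instance must persist its state to a durable store keyed by something both invocations share (e.g. the WebSocket connection or thread id), so the fresh instance can reload it and continue. A minimal sketch of that pattern, with everything hypothetical: CHECKPOINTS stands in for a durable store such as DynamoDB, and the two functions illustrate the idea, not LangGraph's actual interrupt API.

```python
# Hypothetical sketch: resuming a paused workflow across two separate
# Lambda invocations by persisting state in a durable store.
CHECKPOINTS = {}  # stand-in for DynamoDB or another external store

def run_until_interrupt(thread_id, question):
    """Invocation 1: run up to the human-in-the-loop pause, persist state."""
    state = {"question": question, "step": "awaiting_human"}
    CHECKPOINTS[thread_id] = state      # survives this instance's teardown
    return {"send_to_user": question}   # pushed to the user over the WebSocket

def resume(thread_id, human_reply):
    """Invocation 2 (a fresh instance): reload the checkpoint and continue."""
    state = CHECKPOINTS[thread_id]      # load state saved by the old instance
    state["human_reply"] = human_reply
    state["step"] = "done"
    return state

# First invocation: graph pauses; the Lambda instance may now terminate.
run_until_interrupt("conn-123", "Approve this action?")
# Later invocation on a new instance: pick up where the old one left off.
result = resume("conn-123", "yes")
```

In a real deployment the key would be the API Gateway connection id (or a session id carried in the message), and the store must be external to the Lambda process, since nothing in memory survives between invocations.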
Thanks in advance for your insights!