I want to use the "interrupt_before" logic for a LangGraph agent in a production environment, where user input decides whether a tool is executed or not. For this purpose I am using a checkpointer and thread_ids to get the state and resume the graph in case approval is given. My question: if I keep using checkpointers, will my application eventually break because of memory usage, since the snapshots keep accumulating in a dictionary? Also, referring to this documentation: https://langchain-ai.github.io/langgraph/reference/checkpoints/#checkpointer-implementations — is there any way to destroy threads so that memory gets cleared in case this becomes a problem in the future?
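
For reference, a minimal sketch of the flow described above (the node names, state shape, and the `user_approved` helper are placeholders for illustration, not taken from the question):

```python
from typing import Annotated, TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


def agent(state: State):
    # ... call the model, possibly producing a tool call ...
    return {"messages": []}


def tools(state: State):
    # ... execute the approved tool call ...
    return {"messages": []}


builder = StateGraph(State)
builder.add_node("agent", agent)
builder.add_node("tools", tools)
builder.add_edge(START, "agent")
builder.add_edge("agent", "tools")
builder.add_edge("tools", END)

# Pause the graph before the "tools" node so a human can approve the tool call.
graph = builder.compile(checkpointer=MemorySaver(), interrupt_before=["tools"])

config = {"configurable": {"thread_id": "user-123"}}
graph.invoke({"messages": [("user", "delete my account")]}, config)

# Inspect the paused state, then resume only if the user approves.
snapshot = graph.get_state(config)
if user_approved(snapshot):  # hypothetical approval check
    graph.invoke(None, config)  # passing None resumes from the last checkpoint
```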
Replies: 1 comment 1 reply
Checkpointers backed by a production-grade database (such as Postgres, Redis, etc.) are designed for production and are used at scale today. The line you are referring to is in the section on the "MemorySaver", an in-memory checkpointer implementation used primarily for proving out solutions locally, in tests, and in the docs (since it requires no additional setup). It offers no real "persistence", since it is just a Python dictionary.
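
As a rough sketch of what the swap to a database-backed checkpointer looks like, assuming the `langgraph-checkpoint-postgres` package and a placeholder connection string (and reusing the `builder` from the example above):

```python
from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = "postgresql://user:pass@localhost:5432/langgraph"  # placeholder

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # create the checkpoint tables on first run
    graph = builder.compile(checkpointer=checkpointer, interrupt_before=["tools"])
    # ... serve requests, keyed by {"configurable": {"thread_id": ...}} ...
```

With this setup the snapshots live in the database rather than in your application's process memory, so memory growth in the app itself is not the concern it is with MemorySaver. Old threads can then be pruned with ordinary database maintenance (e.g. deleting the rows for a given thread_id on whatever retention schedule suits you).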