Langgraph Studio token streaming doesn't work after resume, langgraphjs Gemini #1518
Replies: 1 comment
Fixed it. In case anyone else lands here: it seems you can't reassign the original config object at all. Any `{ ...config, ... }` spread appears to break the callbacks, specifically after a resume. I was able to spread the config prior to the first call, but doing it on subsequent calls when resuming breaks token streaming. I've refactored my code to pass the original config object coming from Studio through unmodified.
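A plausible mechanism (an assumption, not confirmed in this thread) is that spreading produces a brand-new object, so anything keyed on the identity of the original config is lost. A minimal sketch, with a stand-in `RunnableConfig` shape rather than the real LangChain type:

```typescript
// Stand-in for the config shape Studio passes in (hypothetical fields).
type RunnableConfig = {
  configurable?: Record<string, unknown>;
  callbacks?: unknown[];
};

const original: RunnableConfig = {
  configurable: { thread_id: "t1" },
  callbacks: [],
};

// Anti-pattern: a spread creates a *new* object, so any internal
// bookkeeping that compares configs by identity will no longer match.
const spread: RunnableConfig = {
  ...original,
  configurable: { ...original.configurable },
};
console.log(spread === original); // false — a different object

// Safer: pass the Studio-provided config through unchanged.
const passedThrough = original;
console.log(passedThrough === original); // true — same object
```

This only demonstrates the identity change; whether langgraphjs actually keys callback routing on config identity is an inference from the behaviour described above.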
Hi,
I have a streaming setup with token streaming using Gemini. It works absolutely fine until I interrupt and resume; after a resume it no longer streams tokens. I have debugged the LLM calls and can see that they are indeed using the runManager callbacks in streamResponseChunks, and I have dug all the way into the run manager calls and can see them getting all the way through to posting the events. It doesn't look like that side has any issues.
As I say, streaming works fine prior to the resume. I am storing the compiled graph instance in a map inside a class, retrieving it upon resume using the thread id, and calling stream with the resume payload and the config passed in by Studio. I was adding some extra fields to the config for our own executions and thought that might be breaking it, but I've completely removed that and am passing the exact config object straight from the Studio stream request, with no joy.
I put together the standalone test below just to confirm that the basic flow definitely works, and it does: it still streams tokens after the interrupt. But I can't for the life of me work out what's breaking it in my actual code, apart from this retrieval of the compiled graph from a cache map. It feels like the Gemini adapter and the callbacks are all working, but for some reason they may be posting the tokens to the wrong (old) run. Is that possible?
I can't post my actual code as it's part of a complex system, but I'll try to isolate it further and see if I can replicate the full process with the stored graph in a cache map.
Just wondering if anyone has seen anything similar and knows of any gotchas in this area?
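For reference, a minimal sketch of the caching pattern described above, with stand-in names and a stubbed graph type (the real code would hold a compiled langgraphjs graph; `GraphRegistry` and `CompiledGraph` here are illustrative, not actual library types):

```typescript
// Stub for the compiled graph's surface we care about here.
type CompiledGraph = {
  stream: (input: unknown, config: unknown) => unknown;
};

// Compiled graphs cached in a Map keyed by thread id and reused on resume.
class GraphRegistry {
  private graphs = new Map<string, CompiledGraph>();

  register(threadId: string, graph: CompiledGraph): void {
    this.graphs.set(threadId, graph);
  }

  // On resume: look up the cached graph and call stream with the resume
  // payload and the Studio-provided config, passed through unmodified.
  resume(threadId: string, resumePayload: unknown, studioConfig: unknown): unknown {
    const graph = this.graphs.get(threadId);
    if (!graph) {
      throw new Error(`No cached graph for thread ${threadId}`);
    }
    return graph.stream(resumePayload, studioConfig);
  }
}
```

The point of the sketch is that the registry itself is identity-neutral: it hands back the same graph instance and forwards the config object untouched.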
Cheers
Working test code: