According to this document: https://js.langchain.com/docs/how_to/output_parser_json/#streaming, getting fully formed JSON to stream seems dead simple; however, when I follow it, I only get `on_parser_start` and `on_parser_end` events, which contain the fully completed JSON. There are no events in between with partial values for any of the properties in my JSON.

As far as I can tell I'm doing things right. Take a look at my code:
Base Agent
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { BaseChatModel } from "@langchain/core/language_models/chat_models";
import { AIMessage, BaseMessage, ToolMessage } from "@langchain/core/messages";
import { GraphState } from "./models/graph-state.mts";
import { JsonOutputParser } from "@langchain/core/output_parsers";
import { DynamicStructuredTool, StructuredToolInterface } from "@langchain/core/tools";

export type MessageFilter = (messages: BaseMessage[]) => BaseMessage[];
// Note: appending `| any` here would collapse the whole union to `any`,
// so only the concrete members are listed.
export type LangChainTool = DynamicStructuredTool | StructuredToolInterface;

export class BaseStreamAgent<T extends Record<string, any> = Record<string, any>> {
  protected llmModel: BaseChatModel;
  protected llmModelWithTools?: any; // RunnableBinding type from bindTools
  private systemPrompt: ChatPromptTemplate;
  private messageFilter?: MessageFilter;
  private parser: JsonOutputParser<T>;
  protected tools?: LangChainTool[];
  protected agentName: string; // assigned by subclasses

  constructor(
    systemPrompt: ChatPromptTemplate,
    llmModel: BaseChatModel,
    messageFilter?: MessageFilter,
    tools?: LangChainTool[]
  ) {
    this.systemPrompt = systemPrompt;
    this.llmModel = llmModel;
    this.messageFilter = messageFilter;
    this.tools = tools;

    // Create the parser internally
    this.parser = new JsonOutputParser<T>();

    // If tools are provided, create a separate model with tools bound
    if (tools && tools.length > 0) {
      const modelWithBindTools = llmModel as any;
      if (modelWithBindTools.bindTools) {
        this.llmModelWithTools = modelWithBindTools.bindTools(tools);
      }
    }
  }

  public async invoke(state: typeof GraphState.State): Promise<typeof GraphState.State> {
    const invokeState = this.messageFilter
      ? { ...state, messages: this.messageFilter(state.messages) }
      : state;

    state.callStack.push(this.agentName);

    // Check if we should use the model with tools or the regular model
    const shouldUseToolModel = this.llmModelWithTools &&
      !(state.messages[state.messages.length - 1] instanceof ToolMessage);

    if (shouldUseToolModel) {
      // First call - use the model with tools, don't parse JSON
      const chain = this.systemPrompt.pipe(this.llmModelWithTools);
      const response = await chain.invoke(invokeState);

      // Add the AI message with tool calls to state
      state.messages.push(response as AIMessage);
    } else {
      // Either no tools or returning from tool execution - parse JSON
      const chain = this.systemPrompt
        .pipe(this.llmModel)
        .pipe(this.parser.withConfig({
          runName: "json_parser"
        }));
      const result = await chain.invoke(invokeState);

      state.messages.push(new AIMessage({ content: JSON.stringify(result) }));

      // Call the handler with the parsed result
      this.handleInvocationResult(result, state);
    }

    // Always return the state (whether it was mutated or not)
    return state;
  }

  protected handleInvocationResult(result: T, state: typeof GraphState.State): void {
    // Default implementation - subclasses should override.
    // Subclasses can mutate the state object directly since it's passed by reference.
  }

  // Generic static create method
  static async createAgent<TAgent extends BaseStreamAgent<any>>(
    AgentClass: new (
      promptTemplate: ChatPromptTemplate,
      llmModel: BaseChatModel,
      messageFilter?: MessageFilter,
      tools?: LangChainTool[]
    ) => TAgent,
    promptFactory: () => Promise<ChatPromptTemplate>,
    modelFactory: () => BaseChatModel,
    messageFilter?: MessageFilter,
    tools?: LangChainTool[]
  ): Promise<TAgent> {
    const promptTemplate = await promptFactory();
    const llmModel = modelFactory();
    return new AgentClass(promptTemplate, llmModel, messageFilter, tools);
  }

  static shouldCallToolsNode(state: typeof GraphState.State): boolean {
    const lastMessage = state.messages[state.messages.length - 1];
    const aiMessage = lastMessage as AIMessage;
    return !!(aiMessage.tool_calls && aiMessage.tool_calls.length > 0);
  }
}
```
The bulk of the logic is in this class. As you can see, I've:

- instantiated a `JsonOutputParser`
- piped the parser in the else block of the `invoke` method
- built my prompt exactly how the example calls for, by calling `partial` on it:
```typescript
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

export const createRagPromptTemplate = async (): Promise<ChatPromptTemplate> => {
  return ChatPromptTemplate.fromMessages([
    [
      "system",
      `instructions blah blah blah
{format_instructions}`
    ],
    new MessagesPlaceholder("messages")
  ]).partial({
    format_instructions: `RESPONSE SCHEMA:
RESPOND WITH A VALID JSON OBJECT CONTAINING THE FOLLOWING PROPERTIES
blah blah blah
`
  });
};
```
And in my graph class I call `streamEvents`.
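What I expected to see, partial objects building up as chunks arrive, can be simulated without an LLM. The sketch below is a toy stand-in for the kind of partial-JSON repair a streaming parser does internally; it is not LangChain code:

```typescript
// Toy illustration of how a streaming JSON parser can emit partial objects:
// close any unterminated string, drop a dangling comma/colon, close
// unbalanced braces/brackets, then try to parse the repaired prefix.
function repairPartialJson(prefix: string): unknown {
  let repaired = prefix;

  // Close an unterminated string literal (odd number of unescaped quotes).
  const quoteCount = (repaired.match(/(?<!\\)"/g) ?? []).length;
  if (quoteCount % 2 === 1) repaired += '"';

  // Strip a trailing comma or colon that would be invalid before closing.
  repaired = repaired.replace(/[,:]\s*$/, "");

  // Close unbalanced braces/brackets in reverse order of opening,
  // ignoring any that appear inside string literals.
  const stack: string[] = [];
  let inString = false;
  for (let i = 0; i < repaired.length; i++) {
    const ch = repaired[i];
    if (ch === '"' && repaired[i - 1] !== "\\") inString = !inString;
    if (inString) continue;
    if (ch === "{") stack.push("}");
    else if (ch === "[") stack.push("]");
    else if (ch === "}" || ch === "]") stack.pop();
  }
  repaired += stack.reverse().join("");

  try {
    return JSON.parse(repaired);
  } catch {
    return undefined; // this prefix isn't repairable yet
  }
}

// Feed the same JSON in growing prefixes, as a streaming model would.
const full = '{"title": "Streaming", "tags": ["a", "b"]}';
const partials: unknown[] = [];
for (let end = 1; end <= full.length; end++) {
  const obj = repairPartialJson(full.slice(0, end));
  if (obj !== undefined) partials.push(obj);
}
console.log(JSON.stringify(partials[partials.length - 1]));
```

Running this yields a sequence of progressively more complete objects, which is roughly what I expected `on_parser_stream` events to carry.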
I'm super confused about how to get this to work. The only differences between the example document and my implementation are:

- `fromMessages` instead of `fromTemplate`
- `streamEvents` instead of `stream`

So I'm assuming perhaps this isn't supported with one of these differences. Has anyone gotten this to work with `streamEvents`?
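One theory I'm entertaining: if the chain is run with `invoke(...)` instead of being streamed, the parser only ever receives the complete output, so there is nothing to emit between the start and end events. A toy analogy in plain TypeScript (not LangChain internals):

```typescript
type Event = { event: string; data?: unknown };

// A fake model that streams its JSON answer in two chunks.
function* fakeModelStream(): Generator<string> {
  yield '{"answer": "4';
  yield '2"}';
}

// Streaming path: the parser sees every chunk, so an observer
// (like streamEvents) gets intermediate on_parser_stream events.
function runWithStream(): Event[] {
  const events: Event[] = [{ event: "on_parser_start" }];
  let buffer = "";
  for (const chunk of fakeModelStream()) {
    buffer += chunk;
    events.push({ event: "on_parser_stream", data: buffer });
  }
  events.push({ event: "on_parser_end", data: JSON.parse(buffer) });
  return events;
}

// invoke() path: the stream is drained internally and the parser is
// handed one complete string, so nothing observable happens in between.
function runWithInvoke(): Event[] {
  const events: Event[] = [{ event: "on_parser_start" }];
  let buffer = "";
  for (const chunk of fakeModelStream()) buffer += chunk;
  events.push({ event: "on_parser_end", data: JSON.parse(buffer) });
  return events;
}

const streamedEvents = runWithStream();
const invokedEvents = runWithInvoke();
console.log(streamedEvents.map((e) => e.event).join(","));
console.log(invokedEvents.map((e) => e.event).join(","));
```

If that's really what is happening, the fix would presumably be to stream the chain inside the node rather than `invoke` it, but I haven't confirmed that.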