-
Example Code

```typescript
import { AIMessage, HumanMessage, isAIMessageChunk } from "@langchain/core/messages";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { END, MessagesAnnotation, START, StateGraph } from "@langchain/langgraph";

const State = MessagesAnnotation;

export const generateResponseNode = async (state: typeof State.State) => {
  const llm = new ChatOpenAI({
    model: "gpt-4.1-mini",
    temperature: 0.5,
    streaming: true,
  });
  const conversationPrompt = ChatPromptTemplate.fromMessages([
    [
      "system",
      `You are an AI in a chat platform.
Respond to the conversation in a natural, engaging way.
Keep responses short and social, not long essays.
Reference recent messages when appropriate.`,
    ],
    ["human", "Recent messages: {recentMessages}"],
  ]);
  const message = state.messages[state.messages.length - 1];
  const formattedPrompt = await conversationPrompt.invoke({
    currentMessage: message.content,
    recentMessages: state.messages
      .map(
        (message) => `[${message.additional_kwargs?.name}]: ${message.content}`
      )
      .join("\n"),
  });
  const response = await llm.invoke(formattedPrompt);
  return {
    messages: [new AIMessage(response.content as string)],
  };
};

const graph = new StateGraph(State)
  .addNode("generate_response", generateResponseNode)
  .addEdge(START, "generate_response")
  .addEdge("generate_response", END)
  .compile();

export async function* langraphToWordsStreamTransducer(stream) {
  for await (const [message, metadata] of stream) {
    const node = metadata?.langgraph_node;
    if (
      isAIMessageChunk(message) && // the issue I mentioned is here - it also returns true for an `AIMessage` instance, not only for `AIMessageChunk`
      node === "generate_response" &&
      message.content
    ) {
      yield message.content;
    }
  }
}

// streamMode "messages" makes the stream yield [message, metadata] tuples
const streamResult = graph.stream(
  { messages: [new HumanMessage("Hello there")] },
  { streamMode: "messages" }
);
for await (const word of langraphToWordsStreamTransducer(await streamResult)) {
  if (word) {
    console.log(word);
  }
}
```

Description

I'm using LangGraph. I've created a transducer that converts chunks to words, and I tried to yield results only when the item is a chunk (to prevent duplicated yielding after the last state update with the final message). My question is: is this a bug?

```typescript
// currently
export function isAIMessage(x: BaseMessage): x is AIMessage {
  return x._getType() === "ai";
}
export function isAIMessageChunk(x: BaseMessageChunk): x is AIMessageChunk {
  return x._getType() === "ai";
}

// potentially
export function isAIMessage(x: BaseMessage): x is AIMessage {
  return x instanceof BaseMessage && x._getType() === "ai";
}
export function isAIMessageChunk(x: BaseMessageChunk): x is AIMessageChunk {
  return x instanceof BaseMessageChunk && x._getType() === "ai";
}
```

System Info
Replies: 2 comments 1 reply
-
Hi @abogacki! I'm Dosu and I'm helping the langchainjs team. This isn't a bug, but rather a limitation of the current implementation. Both `isAIMessage` and `isAIMessageChunk` check only the `_getType()` string, which is `"ai"` for both classes. If you need to distinguish at runtime, you could extend the check to use `instanceof`. To reply, just mention @dosu.
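The suggested `instanceof` workaround can be sketched end to end without the library. The `Fake*` classes and `fakeStream` below are invented stand-ins (assumptions, not LangChain APIs) mimicking a `streamMode: "messages"` stream that emits chunks followed by the final full message:

```typescript
// Invented stand-ins mimicking the message hierarchy (assumption).
class FakeAIMessage {
  constructor(public content: string) {}
}
class FakeAIMessageChunk extends FakeAIMessage {}

type StreamItem = [FakeAIMessage, { langgraph_node?: string }];

// Mimics a "messages" stream: streamed chunks first, then the final
// full message produced by the node's state update.
async function* fakeStream(): AsyncGenerator<StreamItem> {
  yield [new FakeAIMessageChunk("Hel"), { langgraph_node: "generate_response" }];
  yield [new FakeAIMessageChunk("lo"), { langgraph_node: "generate_response" }];
  yield [new FakeAIMessage("Hello"), { langgraph_node: "generate_response" }];
}

// Transducer with the instanceof workaround: the runtime class check
// filters out the trailing full message, so nothing is yielded twice.
async function* toWords(stream: AsyncGenerator<StreamItem>) {
  for await (const [message, metadata] of stream) {
    if (
      message instanceof FakeAIMessageChunk &&
      metadata?.langgraph_node === "generate_response" &&
      message.content
    ) {
      yield message.content;
    }
  }
}

async function main(): Promise<string> {
  let out = "";
  for await (const word of toWords(fakeStream())) out += word;
  return out;
}

main().then((s) => console.log(s)); // "Hello", not "HelloHello"
```

The same shape of check (`message instanceof AIMessageChunk` with the real class imported from `@langchain/core/messages`) drops the duplicated final message in the original transducer.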
-
@dosu shouldn't this be handled within the library itself?
Great question. The current implementation relies on TypeScript's static typing and the `_getType()` method, which returns `"ai"` for both `AIMessage` and `AIMessageChunk`. This means the provided type guards (`isAIMessage`, `isAIMessageChunk`) can't distinguish between the two at runtime, which can be confusing in streaming scenarios like yours. This isn't a bug, but it is a limitation of the current design: there's no runtime check for the actual class, just the type string, so both functions can return true for either class (see source). It would be reasonable for the library to improve these type guards to use `instanceof` checks, making them more robust for runtime differentiation. This would help p…