
Conversation

@ThomasVitale (Contributor)

Output format instructions should not be included until the very last advisor runs; otherwise, there is a risk of a templating failure if more than one advisor tries to render the prompt template. This change guarantees that the output format instructions are always included right before the chat model is called, without the risk of earlier advisors interfering with them.


Signed-off-by: Thomas Vitale <[email protected]>
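The change described above can be sketched roughly as follows. This is a hypothetical illustration with assumed names (`buildFinalPrompt`, the advisor lambdas), not the actual `DefaultChatClient` code: since each advisor may re-render the prompt template, the format instructions are appended only after the last advisor has run, immediately before the (simulated) model call.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of the fix, not the real Spring AI implementation:
// advisors may each render the prompt template, so the output format
// instructions are appended only after the last advisor has run.
public class DeferredFormatSketch {

    // Each "advisor" is modeled as a simple prompt transformation.
    static String buildFinalPrompt(String userPrompt,
                                   List<UnaryOperator<String>> advisors,
                                   String formatInstructions) {
        String prompt = userPrompt;
        for (UnaryOperator<String> advisor : advisors) {
            prompt = advisor.apply(prompt); // may re-render the template
        }
        // Only now, right before the model call, is the format section
        // added, so no earlier advisor can interfere with it.
        return prompt + "\n" + formatInstructions;
    }

    public static void main(String[] args) {
        List<UnaryOperator<String>> advisors = List.of(
                p -> "[retrieved context] " + p, // e.g. a RAG advisor
                p -> p + " [guardrails ok]"      // e.g. a safety advisor
        );
        System.out.println(buildFinalPrompt(
                "Tell me about advisors",
                advisors,
                "Respond in JSON following the schema."));
    }
}
```

If the format instructions were instead baked into `userPrompt` up front, any advisor that re-rendered the template could corrupt or drop them, which is exactly the failure mode this PR avoids.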
@Deprecated // Only for backward compatibility until the next release.
CHAT_MODEL("spring.ai.chat.client.model"),
@Deprecated // Only for backward compatibility until the next release.
OUTPUT_FORMAT("spring.ai.chat.client.output.format"),
@ThomasVitale (Contributor, author)

Using the advisor context for this is not a great approach, but we can re-evaluate it later when considering the new structured output support.
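For context, the deprecated keys above point at the advisor-context mechanism: a shared string-keyed map that steps in the chain can use to pass values along. A minimal sketch of the idea (assumed usage in plain Java; not the actual Spring AI internals):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a shared advisor context (assumed usage; not the
// actual Spring AI internals). The deprecated keys shown in the diff
// are plain string keys into a map like this one.
public class AdvisorContextSketch {

    static final String OUTPUT_FORMAT = "spring.ai.chat.client.output.format";

    public static void main(String[] args) {
        Map<String, Object> advisorContext = new HashMap<>();
        // An earlier step stashes the requested output format...
        advisorContext.put(OUTPUT_FORMAT, "Respond in JSON.");
        // ...and a later step reads it back before calling the model.
        String format = (String) advisorContext.get(OUTPUT_FORMAT);
        System.out.println(format);
    }
}
```

Passing values through an untyped shared map works, but it is stringly typed and invisible in method signatures, which is likely why the comment calls it "not a great approach".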

}

@Test
void qaOutputConverter() {
@ThomasVitale (Contributor, author) · May 1, 2025

This test case fails if the change in DefaultChatClient is removed. Unfortunately, we previously had no integration test combining the output converter with advisors.

@markpollack (Member)

Added a simple test with streaming, though we may already have one elsewhere.

Merged in 90cab21.

Thanks!
