Hello, @Rov7! I'm here to help you with your technical questions and bug fixes. Let me know how I can assist you today! To pass structured data, like a dictionary, as examples to an LLM in LangChain while retaining a primary system message for context, you can use a `MessagesPlaceholder` in your `ChatPromptTemplate`: keep the system message first, and inject the examples into the placeholder as a sequence of human/AI/tool messages.
Here's a code snippet demonstrating this:

```python
import uuid
from typing import Dict, List

from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    HumanMessage,
    ToolMessage,
)
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


# The original snippet used `Search` without defining it; this schema is an
# assumed definition, following the LangChain query-analysis tutorial.
class Search(BaseModel):
    """Search over a database of tutorial videos."""

    query: str = Field(..., description="Primary similarity search query.")
    sub_queries: List[str] = Field(
        default_factory=list,
        description="Distinct sub-questions contained in the original question.",
    )


# Define the system message
system = """You are an expert at converting user questions into database queries. \
You have access to a database of tutorial videos about a software library for building LLM-powered applications. \
Given a question, return a list of database queries optimized to retrieve the most relevant results.
If there are acronyms or words you are not familiar with, do not try to rephrase them."""

# Define examples
examples = [
    {
        "input": "What's chat langchain, is it a langchain template?",
        "tool_calls": [
            Search(
                query="What is chat langchain and is it a langchain template?",
                sub_queries=["What is chat langchain", "What is a langchain template"],
            )
        ],
    },
    # Add more examples as needed
]


# Convert each example into a human message, an AI message carrying the tool
# calls, and one tool message per call, so the examples read as a chat history.
def tool_example_to_messages(example: Dict) -> List[BaseMessage]:
    messages: List[BaseMessage] = [HumanMessage(content=example["input"])]
    openai_tool_calls = []
    for tool_call in example["tool_calls"]:
        openai_tool_calls.append(
            {
                "id": str(uuid.uuid4()),
                "type": "function",
                "function": {
                    "name": tool_call.__class__.__name__,
                    "arguments": tool_call.json(),
                },
            }
        )
    messages.append(
        AIMessage(content="", additional_kwargs={"tool_calls": openai_tool_calls})
    )
    tool_outputs = example.get("tool_outputs") or [
        "You have correctly called this tool."
    ] * len(openai_tool_calls)
    for output, tool_call in zip(tool_outputs, openai_tool_calls):
        messages.append(ToolMessage(content=output, tool_call_id=tool_call["id"]))
    return messages


example_msgs = [msg for ex in examples for msg in tool_example_to_messages(ex)]

# Create the prompt: system message first, examples in an optional placeholder
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        MessagesPlaceholder("examples", optional=True),
        ("human", "{question}"),
    ]
)

# Initialize the LLM and bind the structured output schema
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
structured_llm = llm.with_structured_output(Search)

# Create the query analyzer with examples
query_analyzer_with_examples = (
    {"question": RunnablePassthrough()}
    | prompt.partial(examples=example_msgs)
    | structured_llm
)

# Invoke the query analyzer
query_analyzer_with_examples.invoke(
    "what's the difference between web voyager and reflection agents? do both use langgraph?"
)
```

This approach allows you to retain the primary system message while including structured examples in the prompt [1].
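Since the traceback in your question comes from the Gemini integration, which accepts exactly one `SystemMessage` and requires it to be first in the history, note that the pattern above already satisfies that constraint: the examples enter the history as ordinary chat turns, not as extra system messages. If your examples live in a plain dictionary like `destinatarios_dict` rather than tool calls, you can turn each entry into a human/AI pair. A minimal sketch, assuming `langchain-google-genai` is installed; the dictionary contents and the model name here are illustrative stand-ins:

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

# Illustrative stand-in for your real destinatarios_dict
destinatarios_dict = {
    "Who should receive the weekly report?": "finance-team@example.com",
    "Who should receive incident alerts?": "oncall@example.com",
}

# Each dictionary entry becomes a human/AI example pair; no extra SystemMessage
pair_msgs = []
for question, answer in destinatarios_dict.items():
    pair_msgs.append(HumanMessage(content=question))
    pair_msgs.append(AIMessage(content=answer))

# Same prompt as above: one system message first, examples in the placeholder
gemini_llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0)
query_analyzer_gemini = (
    {"question": RunnablePassthrough()}
    | prompt.partial(examples=pair_msgs)
    | gemini_llm
)
```

Tool-call-style example messages (as in the snippet above) may need adapting to Gemini's tool format, so plain human/AI pairs are the safer starting point.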
### Description:
I'm working on a project using LangChain, and I need to pass examples to the LLM in the form of a dictionary, such as destinatarios_dict. However, I can only pass one system message ahead of the `MessagesPlaceholder`, and there doesn't seem to be support for multiple system messages or additional example data.
### Context:
- **System message:** I'm using a system message to set up initial instructions and context.
- **Examples:** I want to pass a dictionary (e.g., destinatarios) as structured data to serve as examples of the expected inputs/outputs for the model.
- **Problem:** Only a single system message is accepted at the start of the history, which prevents me from including both the structured data and additional system instructions at the same time. When I try to include the dictionary as a second system message, it causes conflicts or isn't processed as intended. I'm basing my code on this tutorial; a simplified sketch of what I'm attempting is below.
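A simplified sketch of the prompt shape I'm using (the instruction strings and the `destinatarios` contents are placeholders for my real ones):

```python
from langchain_core.prompts import ChatPromptTemplate

# Two SystemMessages: the second is not first in the history, so the Gemini
# integration raises the ValueError below when the chain runs.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Main instructions for the assistant..."),
        ("system", "Examples of recipients: {destinatarios}"),
        ("human", "{question}"),
    ]
)
```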
### Error:
```python
    281 prev_ai_message = None
    282 if i != 0:
--> 283     raise ValueError("SystemMessage should be the first in the history.")
    284 if system_instruction is not None:
    285     raise ValueError(
    286         "Detected more than one SystemMessage in the list of messages."
    287         "Gemini APIs support the insertion of only one SystemMessage."
    288     )

ValueError: SystemMessage should be the first in the history.
```
### Goal:
I would like to know how I can pass structured data (like the destinatarios dictionary) as examples to the LLM, while also retaining my primary system message for setting up the context. Is there a recommended approach for passing multiple system messages or combining the system message with structured examples within LangChain?
Any guidance or suggestions on how to structure these messages effectively would be greatly appreciated!
### System Info:
- langchain-core 0.2.4
- langgraph 0.2.22
- Python 3.12.4