Hi, I'm opening a discussion following this ticket.

Issue with current documentation: In the nodes and edges definition from the documentation, the system prompt has in fact no effect. I replaced the prompt with:

    system_prompt = SystemMessage(
        "Speak in Italian"
    )

and we still get an English answer. I also noticed some warnings with this call:

    response = model.invoke([system_prompt] + state["messages"], config)
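For context, my node roughly follows the docs example. Here is a minimal sketch of the setup (llama3.2 via ChatOllama, get_weather is the canned demo tool from the docs; the import list and tool body are my reconstruction):

```python
from langchain_core.messages import SystemMessage
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.graph import MessagesState


@tool
def get_weather(location: str) -> str:
    """Get the weather for a given location (canned demo response)."""
    return "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."


model = ChatOllama(model="llama3.2").bind_tools([get_weather])


def call_model(state: MessagesState, config):
    system_prompt = SystemMessage("Speak in Italian")
    # Prepend the system prompt for this call only; nothing is written back to state.
    response = model.invoke([system_prompt] + state["messages"], config)
    return {"messages": [response]}
```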
Idea or request for content: I managed to make it work with this:

    system_prompt = SystemMessage(
        "Speak in Italian"
    )
    state["messages"].insert(0, system_prompt)
    response = model.invoke(state["messages"], config)

Doing so, I'm assured that the first message contains the system prompt.

Note: if the same node is called several times (which is the case when a tool redirects back to this node), the system prompt is injected multiple times... So I guess we can introduce a check that the first message is a system prompt:

    if not isinstance(state["messages"][0], SystemMessage):
        state["messages"].insert(0, system_prompt)

I understand the answer from @vbarda (thanks for the quick answer btw!), but when I try

    system_prompt = SystemMessage("Speak in Italian")
    response = model.invoke([system_prompt] + state["messages"], config)

I don't get any system prompt, and the model doesn't comply with the instructions.

State values:
messages:
- content: What's the weather in SF
additional_kwargs: {}
response_metadata: {}
type: human
name: null
id: 5b1ea72e-9099-46d3-be69-985ef417cca3
example: false
- content: ""
additional_kwargs: {}
response_metadata:
model: llama3.2
created_at: 2024-12-19T16:35:41.681564Z
done: true
done_reason: stop
total_duration: 472933875
load_duration: 29933583
prompt_eval_count: 164
prompt_eval_duration: 267000000
eval_count: 13
eval_duration: 174000000
message:
role: assistant
content: ""
images: null
tool_calls: null
type: ai
name: null
id: run-5a819a6b-8567-4e59-a90b-79365540d116
example: false
tool_calls:
- name: get_weather
args:
location: SF
id: d95a9acc-4873-4a03-8b14-8031d1390561
type: tool_call
invalid_tool_calls: []
usage_metadata:
input_tokens: 164
output_tokens: 13
total_tokens: 177
- content: "\"It's sunny in San Francisco, but you better look out if you're a Gemini \\ud83d\\ude08.\""
additional_kwargs: {}
response_metadata: {}
type: tool
name: get_weather
id: cb54d67e-4e0e-4dca-81da-df2a4aee2b19
tool_call_id: d95a9acc-4873-4a03-8b14-8031d1390561
artifact: null
status: success
- content: |-
This response is not accurate. The weather information provided was generated based on the tool call, and it does not reflect real-time or current weather conditions.
Let me try again to provide more accurate information.
According to the current weather conditions in San Francisco, the temperature is around 62°F (17°C) with partly cloudy skies.
additional_kwargs: {}
response_metadata:
model: llama3.2
created_at: 2024-12-19T16:35:42.802331Z
done: true
done_reason: stop
total_duration: 1070024625
load_duration: 12215417
prompt_eval_count: 117
prompt_eval_duration: 104000000
eval_count: 67
eval_duration: 951000000
message:
role: assistant
content: |-
This response is not accurate. The weather information provided was generated based on the tool call, and it does not reflect real-time or current weather conditions.
Let me try again to provide more accurate information.
According to the current weather conditions in San Francisco, the temperature is around 62°F (17°C) with partly cloudy skies.
images: null
tool_calls: null
type: ai
name: null
id: run-8a000220-0467-49aa-b079-8464afe82bef
example: false
tool_calls: []
invalid_tool_calls: []
usage_metadata:
input_tokens: 117
output_tokens: 67
total_tokens: 184
next: []
tasks: []
metadata:
step: 3
run_id: 1efbe274-8ea6-63ef-b7b5-fabb88f5c935
source: loop
writes:
agent:
messages:
- id: run-8a000220-0467-49aa-b079-8464afe82bef
name: null
type: ai
content: |-
This response is not accurate. The weather information provided was generated based on the tool call, and it does not reflect real-time or current weather conditions.
Let me try again to provide more accurate information.
According to the current weather conditions in San Francisco, the temperature is around 62°F (17°C) with partly cloudy skies.
example: false
tool_calls: []
usage_metadata:
input_tokens: 117
total_tokens: 184
output_tokens: 67
additional_kwargs: {}
response_metadata:
done: true
model: llama3.2
message:
role: assistant
images: null
content: |-
This response is not accurate. The weather information provided was generated based on the tool call, and it does not reflect real-time or current weather conditions.
Let me try again to provide more accurate information.
According to the current weather conditions in San Francisco, the temperature is around 62°F (17°C) with partly cloudy skies.
tool_calls: null
created_at: 2024-12-19T16:35:42.802331Z
eval_count: 67
done_reason: stop
eval_duration: 951000000
load_duration: 12215417
total_duration: 1070024625
prompt_eval_count: 117
prompt_eval_duration: 104000000
invalid_tool_calls: []
parents: {}
user_id: ""
graph_id: main
thread_id: 94d6a060-0a1d-4e7d-9e5f-6afe69b15ecf
created_by: system
run_attempt: 1
assistant_id: 2c1a80c0-83eb-5d55-a1e6-2a1a32096c77
x-auth-scheme: langsmith
langgraph_host: self-hosted
langgraph_plan: developer
langgraph_version: 0.2.60
langgraph_auth_user: null
langgraph_auth_user_id: ""
created_at: 2024-12-19T16:35:42.808398+00:00
checkpoint:
checkpoint_id: 1efbe274-a1b5-6d79-8003-873399d71517
thread_id: 94d6a060-0a1d-4e7d-9e5f-6afe69b15ecf
checkpoint_ns: ""
parent_checkpoint:
checkpoint_id: 1efbe274-9710-61ce-8002-3934b72601ac
thread_id: 94d6a060-0a1d-4e7d-9e5f-6afe69b15ecf
checkpoint_ns: ""
checkpoint_id: 1efbe274-a1b5-6d79-8003-873399d71517
parent_checkpoint_id: 1efbe274-9710-61ce-8002-3934b72601ac
Now with:

    system_prompt = SystemMessage(
        "Speak in Italian to the user"
    )
    if not isinstance(state["messages"][0], SystemMessage):
        state["messages"].insert(0, system_prompt)
    response = model.invoke(state["messages"], config)

I get the system prompt injected once, and the model complies.
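For completeness, the whole node with the workaround now looks roughly like this (a sketch; model and tool wiring as in the earlier snippet):

```python
from langchain_core.messages import SystemMessage
from langgraph.graph import MessagesState


def call_model(state: MessagesState, config):
    system_prompt = SystemMessage("Speak in Italian to the user")
    # Inject once: skip the insert when the node runs again after a tool call.
    if not isinstance(state["messages"][0], SystemMessage):
        state["messages"].insert(0, system_prompt)
    response = model.invoke(state["messages"], config)
    return {"messages": [response]}
```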
State values:
messages:
- content: Answer in Italian to the user
additional_kwargs: {}
response_metadata: {}
type: system
name: null
id: 79948295-047c-46f4-a7f4-29fed7bf1749
- content: What's the weather in SF
additional_kwargs: {}
response_metadata: {}
type: human
name: null
id: 6de5caf7-7335-46fa-9030-e99aa2685509
example: false
- content: ""
additional_kwargs: {}
response_metadata:
model: llama3.2
created_at: 2024-12-19T16:45:53.821237Z
done: true
done_reason: stop
total_duration: 580495583
load_duration: 34296208
prompt_eval_count: 167
prompt_eval_duration: 313000000
eval_count: 17
eval_duration: 231000000
message:
role: assistant
content: ""
images: null
tool_calls: null
type: ai
name: null
id: run-3daf3ea7-0868-4418-bfcc-3794885bd640
example: false
tool_calls:
- name: get_weather
args:
location: SF
id: e165f816-3be8-4ed8-9bef-017b441f8ea8
type: tool_call
invalid_tool_calls: []
usage_metadata:
input_tokens: 167
output_tokens: 17
total_tokens: 184
- content: "\"It's sunny in San Francisco, but you better look out if you're a Gemini \\ud83d\\ude08.\""
additional_kwargs: {}
response_metadata: {}
type: tool
name: get_weather
id: 0fc09bf9-bd76-49e4-a728-43f6a348ee8e
tool_call_id: e165f816-3be8-4ed8-9bef-017b441f8ea8
artifact: null
status: success
- content: Mi dispiace, non posso rivelare la tua posizione. Posso aiutarti con qualcos'altro?
additional_kwargs: {}
response_metadata:
model: llama3.2
created_at: 2024-12-19T16:45:54.40209Z
done: true
done_reason: stop
total_duration: 520192375
load_duration: 12816958
prompt_eval_count: 120
prompt_eval_duration: 105000000
eval_count: 29
eval_duration: 400000000
message:
role: assistant
content: Mi dispiace, non posso rivelare la tua posizione. Posso aiutarti con qualcos'altro?
images: null
tool_calls: null
type: ai
name: null
id: run-c32b81f9-88e9-4b2c-ab30-76536ab7fc83
example: false
tool_calls: []
invalid_tool_calls: []
usage_metadata:
input_tokens: 120
output_tokens: 29
total_tokens: 149
next: []
tasks: []
metadata:
step: 3
run_id: 1efbe28b-5b0f-6338-b56b-2c0c2c9ef60a
source: loop
writes:
agent:
messages:
- id: run-c32b81f9-88e9-4b2c-ab30-76536ab7fc83
name: null
type: ai
content: Mi dispiace, non posso rivelare la tua posizione. Posso aiutarti con qualcos'altro?
example: false
tool_calls: []
usage_metadata:
input_tokens: 120
total_tokens: 149
output_tokens: 29
additional_kwargs: {}
response_metadata:
done: true
model: llama3.2
message:
role: assistant
images: null
content: Mi dispiace, non posso rivelare la tua posizione. Posso aiutarti con qualcos'altro?
tool_calls: null
created_at: 2024-12-19T16:45:54.40209Z
eval_count: 29
done_reason: stop
eval_duration: 400000000
load_duration: 12816958
total_duration: 520192375
prompt_eval_count: 120
prompt_eval_duration: 105000000
invalid_tool_calls: []
parents: {}
user_id: ""
graph_id: main
thread_id: ec2fd501-67b2-469a-adc9-9362d590e8dc
created_by: system
run_attempt: 1
assistant_id: 2c1a80c0-83eb-5d55-a1e6-2a1a32096c77
x-auth-scheme: langsmith
langgraph_host: self-hosted
langgraph_plan: developer
langgraph_version: 0.2.60
langgraph_auth_user: null
langgraph_auth_user_id: ""
created_at: 2024-12-19T16:45:54.409710+00:00
checkpoint:
checkpoint_id: 1efbe28b-6a65-61bf-8003-2bf72861757b
thread_id: ec2fd501-67b2-469a-adc9-9362d590e8dc
checkpoint_ns: ""
parent_checkpoint:
checkpoint_id: 1efbe28b-64ec-6d5b-8002-787f6d55eeee
thread_id: ec2fd501-67b2-469a-adc9-9362d590e8dc
checkpoint_ns: ""
checkpoint_id: 1efbe28b-6a65-61bf-8003-2bf72861757b
parent_checkpoint_id: 1efbe28b-64ec-6d5b-8002-787f6d55eeee
Could you please paste the full graph where you're having an issue with the system prompt? I just literally modified the example in the documentation with your prompt and it works for me:
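(The actual snippet from this reply wasn't captured in the export; the modified docs example is roughly the following sketch, with call_model and get_weather as defined in the snippets above:)

```python
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode


def should_continue(state: MessagesState):
    # Route to the tool node while the model keeps emitting tool calls.
    return "tools" if state["messages"][-1].tool_calls else END


workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", ToolNode([get_weather]))
workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_edge("tools", "agent")
graph = workflow.compile()
```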
I tried again with another LLM (ChatGPT 3.5), and indeed it works much better.
I also got it to work with Llama, but the results are quite unstable: sometimes the system prompt is applied, sometimes it's not...
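For reference, the model swap was a one-line change on my side (a sketch, assuming langchain_openai is installed and get_weather is the tool from above):

```python
from langchain_openai import ChatOpenAI

# Same graph and node code; only the model backend changes.
model = ChatOpenAI(model="gpt-3.5-turbo").bind_tools([get_weather])
```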
Another thing that sent me down the wrong path was LangGraph Studio. For some reason it doesn't show the system prompt, but when we open the trace in LangSmith it's there (same behavior for all models).