How do we make the prompt explicit rather than implicit? #61
Replies: 3 comments 1 reply
-
Thanks for the feedback! Could you share a code example to explain your concern (what's hidden from developers)?
-
I think I have the same problem. I really want to be able to iterate on the prompt (with variables) inside our prompt UI, get it right for a step, and then use it in the agent. Since the ADK changes the prompts (by adding the description, etc.), it's hard to get the right prompt for a single agent. Now that 9e473e0 is submitted, my previous attempt at solving this is even worse: I was creating a system prompt with context variables for the bits of information I wanted from previous agents in a SequentialAgent flow. Now the context also arrives from the turn leading into this agent, so it is duplicated, and the LLMs perform worse with it passed separately than with the template I had specified directly. Would an appropriate solution be to override the Agent class and override …
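A minimal sketch of the setup being described (a SequentialAgent whose second step pulls a previous step's output into a hand-written instruction template) might look like the following. The agent names, state keys, and model are illustrative, and the sketch assumes ADK's `{state_key}` instruction templating and `output_key` state passing behave as documented:

```python
from google.adk.agents import LlmAgent, SequentialAgent

# Step 1: gather context and write it into session state under "research_notes".
# (output_key-based state passing is assumed; all names are illustrative.)
research_agent = LlmAgent(
    name="ResearchAgent",
    model="gemini-1.5-pro",
    instruction="Collect the key facts relevant to the user's request.",
    output_key="research_notes",
)

# Step 2: a hand-tuned instruction template that pulls the previous step's output
# through a {research_notes} placeholder instead of relying on the incoming turn.
writer_agent = LlmAgent(
    name="WriterAgent",
    model="gemini-1.5-pro",
    instruction=(
        "You are a writer. Use only the notes below:\n"
        "{research_notes}\n"
        "Produce a short answer for the user."
    ),
)

pipeline = SequentialAgent(
    name="Pipeline",
    sub_agents=[research_agent, writer_agent],
)
```

The duplication described above would then come from the same notes also arriving through the conversation turn that leads into the second agent.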
-
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: The recommended way to gain explicit control over the final prompt is to use a `before_model_callback`.

Hello @rcleveng and @GuillermoGaete,

Thank you for raising this important point. The desire for explicit control over the final prompt sent to the LLM is a very valid and common requirement for achieving optimal performance, especially for complex routing or few-shot prompting strategies.

To address your question, @rcleveng: while you could override the Agent class, a much cleaner and more targeted solution is to use the ADK's callback mechanism, specifically the `before_model_callback`. The callback receives the fully constructed request just before it is sent to the model, so you can inspect it and, if needed, completely override it.

Here is a conceptual example of how you can use it to take full control of the prompt:

```python
from google.adk.agents import LlmAgent
from google.adk.callbacks import CallbackContext
from google.adk.llm_requests import LlmRequest
from google.genai.types import Content, Part


def override_prompt_callback(
    callback_context: CallbackContext, llm_request: LlmRequest
) -> None:
    """A callback to inspect and completely override the prompt before sending it to the LLM."""
    # The original, fully-formed prompt is in llm_request.prompt.
    # You can log it to see what the agent constructed:
    print(f"Original ADK-generated prompt: {llm_request.prompt}")

    # Your custom, fine-tuned prompt template.
    # This could be a complex, few-shot prompt that you've iterated on.
    your_custom_prompt = f"""
    This is my custom prompt structure.
    I am ignoring the agent's default behavior.
    The user's original message was: {callback_context.event.content.parts[0].text}
    Now, route the user to 'Billing' or 'Support'.
    """

    # Override the prompt in the request object.
    # The framework will use this modified request for the LLM call.
    llm_request.prompt = your_custom_prompt


# --- When defining your agent ---
root_agent = LlmAgent(
    name="HelpDeskCoordinator",
    model="gemini-1.5-pro",
    instruction="This instruction will be part of the prompt that you can override.",
    sub_agents=[...],
    # Register your callback function here
    before_model_callback=override_prompt_callback,
)

# Now, when you run this agent, override_prompt_callback will be executed,
# and the LLM will receive your_custom_prompt instead of the default one.
```

By using this callback, you can let the ADK handle the boilerplate of agent orchestration while precisely controlling the crucial LLM interaction yourself. This gives you the explicit control you're looking for and makes it much easier to integrate prompts you've developed and tested externally.

[1] https://google.github.io/adk-docs/tutorials/agent-team/
-
Hello! How are you doing?
As an early adopter of LangChain at the time, I'd like to share a concern.
Today, I spent some time playing around with the ADK, and I really like some of the abstractions introduced.
My concern, however, is that the prompting currently happens behind the scenes, especially with the LLMAgent, which behaves like a black box. It's well known that the prompt and the way it's structured are crucial for achieving good results.
As a user, I would feel much more comfortable with an approach that lets us define the prompt explicitly instead of delegating it entirely to the ADK. The ADK should help me develop agents, but it shouldn't hide details as important as prompting.
This is simply my opinion, and I hope we continue developing these kinds of tools that enable us to build amazing products!