Description
This issue summarizes the documentation updates needed based on the changes in the ADK Python release from v1.21.0 to v1.22.0.
Compare URL: google/adk-python@v1.21.0...v1.22.0
1. Create new documentation for the experimental Pub/Sub toolset.
Doc file: docs/tools/pubsub.md
Current state:
Documentation for the Pub/Sub toolset does not exist.
Proposed Change:
Create a new page that explains how to use the `PubSubToolset`. The page should cover:
- How to enable the experimental feature.
- How to instantiate the `PubSubToolset`.
- How to configure credentials using `PubSubCredentialsConfig`.
- How to use the `publish_message`, `pull_messages`, and `acknowledge_messages` tools, with examples.
Reasoning:
A new experimental Pub/Sub toolset has been added in v1.22.0. This feature needs to be documented to allow users to discover and use it.
Reference: src/google/adk/tools/pubsub/pubsub_toolset.py
2. Create new documentation for database session migration.
Doc file: docs/sessions/migration.md
Current state:
Documentation for database session migration does not exist.
Proposed Change:
Create a new page that explains the database schema versions (v0 and v1), the deprecation of the v0 schema, and how to use the `adk migrate session` command to migrate a database from v0 to v1.
Reasoning:
A new database migration feature has been added in v1.22.0. This is a critical change for developers using DatabaseSessionService and needs to be documented.
Reference: src/google/adk/sessions/migration/migration_runner.py
3. Update DatabaseSessionService documentation to include the close() method and a note about schema versions.
Doc file: docs/sessions/session.md
Current state:
DatabaseSessionService
<div class="language-support-tag">
<span class="lst-supported">Supported in ADK</span><span class="lst-python">Python v0.1.0</span><span class="lst-go">Go v0.1.0</span>
</div>
* **How it works:** Connects to a relational database (e.g., PostgreSQL,
MySQL, SQLite) to store session data persistently in tables.
* **Persistence:** Yes. Data survives application restarts.
* **Requires:** A configured database.
* **Best for:** Applications needing reliable, persistent storage that you
manage yourself.
```py
from google.adk.sessions import DatabaseSessionService
# Example using a local SQLite file:
# Note: The implementation requires an async database driver.
# For SQLite, use 'sqlite+aiosqlite' instead of 'sqlite' to ensure async compatibility.
db_url = "sqlite+aiosqlite:///./my_agent_data.db"
session_service = DatabaseSessionService(db_url=db_url)
```
<div class="admonition warning">
<p class="admonition-title">Async Driver Requirement</p>
<p><code>DatabaseSessionService</code> requires an async database driver. When using SQLite, you must use <code>sqlite+aiosqlite</code> instead of <code>sqlite</code> in your connection string. For other databases (PostgreSQL, MySQL), ensure you're using an async-compatible driver (e.g., <code>asyncpg</code> for PostgreSQL, <code>aiomysql</code> for MySQL).</p>
</div>
Proposed Change:
DatabaseSessionService
<div class="language-support-tag">
<span class="lst-supported">Supported in ADK</span><span class="lst-python">Python v0.1.0</span><span class="lst-go">Go v0.1.0</span>
</div>
* **How it works:** Connects to a relational database (e.g., PostgreSQL,
MySQL, SQLite) to store session data persistently in tables.
* **Persistence:** Yes. Data survives application restarts.
* **Requires:** A configured database.
* **Best for:** Applications needing reliable, persistent storage that you
manage yourself.
<div class="admonition warning">
<p class="admonition-title">Schema Version and Migration</p>
<p>As of v1.22.0, the database schema used by <code>DatabaseSessionService</code> has been updated to version 1, which uses JSON for serialization. Older versions used a pickle-based format (v0), which is now deprecated. The service can still read from v0 databases, but will log a warning. It is highly recommended to migrate your database to the latest schema using the <code>adk migrate session</code> command. For more details, see the <a href="migration.md">Database Migration Guide</a>.</p>
</div>
```py
from google.adk.sessions import DatabaseSessionService
# Example using a local SQLite file:
# Note: The implementation requires an async database driver.
# For SQLite, use 'sqlite+aiosqlite' instead of 'sqlite' to ensure async compatibility.
db_url = "sqlite+aiosqlite:///./my_agent_data.db"
session_service = DatabaseSessionService(db_url=db_url)
# It's recommended to close the session service when your application shuts down
# to release database connections gracefully.
await session_service.close()
```
<div class="admonition warning">
<p class="admonition-title">Async Driver Requirement</p>
<p><code>DatabaseSessionService</code> requires an async database driver. When using SQLite, you must use <code>sqlite+aiosqlite</code> instead of <code>sqlite</code> in your connection string. For other databases (PostgreSQL, MySQL), ensure you're using an async-compatible driver (e.g., <code>asyncpg</code> for PostgreSQL, <code>aiomysql</code> for MySQL).</p>
</div>
Reasoning:
The DatabaseSessionService has a new close() method that should be documented. The deprecation of the old schema is also a critical piece of information for users of this service.
Reference: src/google/adk/sessions/database_session_service.py
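The close-on-shutdown pattern in the proposed snippet can also be shown as a self-contained, runnable sketch. `FakeSessionService` below is a stand-in for `DatabaseSessionService` (which is deliberately not imported), so only the try/finally plus `await close()` shape carries over:

```python
import asyncio


class FakeSessionService:
    """Stand-in for DatabaseSessionService; only the close() contract matters here."""

    def __init__(self, db_url: str):
        self.db_url = db_url
        self.closed = False

    async def close(self) -> None:
        # A real service would dispose of its connection pool here.
        self.closed = True


async def main() -> None:
    service = FakeSessionService("sqlite+aiosqlite:///./my_agent_data.db")
    try:
        pass  # ... create sessions, run agents ...
    finally:
        # Always close on shutdown so database connections are released gracefully.
        await service.close()
    assert service.closed


asyncio.run(main())
```

The try/finally guarantees `close()` runs even if the work in the body raises, which is the behavior the documented recommendation is after.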
4. Add documentation for the new per_turn_user_simulator_quality_v1 evaluation metric.
Doc file: docs/evaluate/criteria.md
Current state:
per_turn_user_simulator_quality_v1
This criterion evaluates whether a user simulator is faithful to a conversation
scenario.
Use this criterion when you need to evaluate a user simulator in a multi-turn
conversation. It is designed to assess whether the simulator follows the
conversation plan and responds appropriately to the agent's actions.
This criterion determines whether a user simulator follows a defined
conversation scenario.
It uses a large language model (LLM) to check each of the user simulator's
responses against the conversation history and the overall conversation plan.
For the first turn, this criterion checks whether the user simulator's response
matches the `starting_prompt` defined in the conversation scenario.
For subsequent turns, the LLM-as-judge checks for:
- Adherence to the conversation plan: Did the user simulator's response deviate from the plan?
- Appropriate stop condition: Did the conversation end when it was supposed to?
For best results, use the stop signal in `LlmBackedUserSimulator`.
"per_turn_user_simulator_quality_v1": {
"threshold": 0.8
}Score interpretation
This metric computes the percentage of conversation turns in which the user
simulator's response was judged to be valid according to the conversation
scenario. A score of 1.0 indicates that the simulator behaved as expected in
all turns, while a lower score indicates that the simulator deviated in many
turns. Higher values are better.
Proposed Change:
per_turn_user_simulator_quality_v1
This criterion evaluates whether a user simulator is faithful to a conversation
scenario.
Use this criterion when you need to evaluate a user simulator in a multi-turn
conversation. It is designed to assess whether the simulator follows the
conversation plan and responds appropriately to the agent's actions.
This criterion determines whether a user simulator follows a defined
conversation scenario.
It uses a large language model (LLM) to check each of the user simulator's
responses against the conversation history and the overall conversation plan.
For the first turn, this criterion checks whether the user simulator's response
matches the `starting_prompt` defined in the conversation scenario.
For subsequent turns, the LLM-as-judge checks for:
- Adherence to the conversation plan: Did the user simulator's response deviate from the plan?
- Appropriate stop condition: Did the conversation end when it was supposed to?
- Natural conversation: Does the user simulator's response sound like a human?
- Responsiveness: Does the user simulator's response answer the agent's questions?
- Correctness: Does the user simulator correct the agent's mistakes?
For best results, use the stop signal in `LlmBackedUserSimulator`.
You can also provide custom instructions to the user simulator by setting the
`custom_instructions` field in the `LlmBackedUserSimulatorConfig`. This allows
you to have more control over the user simulator's behavior. The custom
instructions must contain the following placeholders: `{stop_signal}`,
`{conversation_plan}`, and `{conversation_history}`.
"per_turn_user_simulator_quality_v1": {
"threshold": 0.8
}Score interpretation
This metric computes the percentage of conversation turns in which the user
simulator's response was judged to be valid according to the conversation
scenario. A score of 1.0 indicates that the simulator behaved as expected in
all turns, while a lower score indicates that the simulator deviated in many
turns. Higher values are better.
Reasoning:
A new prebuilt metric, per_turn_user_simulator_quality_v1, has been added for evaluating user simulators. This metric needs to be documented so that users can understand how to use it and what the results mean. Additionally, the LlmBackedUserSimulator now supports custom instructions, which should also be documented.
Reference: src/google/adk/evaluation/simulation/per_turn_user_simulator_quality_v1.py
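The score interpretation above is just the fraction of turns judged valid. Assuming the judge yields one boolean verdict per turn, the computation reduces to the following sketch (not the ADK implementation):

```python
def per_turn_score(turn_verdicts: list[bool]) -> float:
    """Fraction of turns where the simulator's response was judged valid."""
    if not turn_verdicts:
        raise ValueError("need at least one judged turn")
    return sum(turn_verdicts) / len(turn_verdicts)


# Four of five turns judged valid -> 0.8, exactly at the example threshold.
print(per_turn_score([True, True, False, True, True]))  # -> 0.8
```

With the example `"threshold": 0.8`, a conversation where the simulator deviates in more than one turn out of five would fail the criterion.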
5. Document the custom_instructions field in LlmBackedUserSimulatorConfig.
Doc file: docs/evaluate/user-sim.md
Current state:
User simulator configuration
You can override the default user simulator configuration to change the model,
internal model behavior, and the maximum number of user-agent interactions.
The EvalConfig below shows the default user simulator configuration:
```json
{
  "criteria": {
    # same as before
  },
  "user_simulator_config": {
    "model": "gemini-2.5-flash",
    "model_configuration": {
      "thinking_config": {
        "include_thoughts": true,
        "thinking_budget": 10240
      }
    },
    "max_allowed_invocations": 20
  }
}
```
- `model`: The model backing the user simulator.
- `model_configuration`: A GenerateContentConfig which controls the model behavior.
- `max_allowed_invocations`: The maximum number of user-agent interactions allowed before the conversation is forcefully terminated. This should be set to be greater than the longest reasonable user-agent interaction in your `EvalSet`.
Proposed Change:
User simulator configuration
You can override the default user simulator configuration to change the model,
internal model behavior, and the maximum number of user-agent interactions.
The EvalConfig below shows the default user simulator configuration:
```json
{
  "criteria": {
    # same as before
  },
  "user_simulator_config": {
    "model": "gemini-2.5-flash",
    "model_configuration": {
      "thinking_config": {
        "include_thoughts": true,
        "thinking_budget": 10240
      }
    },
    "max_allowed_invocations": 20,
    "custom_instructions": null
  }
}
```
- `model`: The model backing the user simulator.
- `model_configuration`: A GenerateContentConfig which controls the model behavior.
- `max_allowed_invocations`: The maximum number of user-agent interactions allowed before the conversation is forcefully terminated. This should be set to be greater than the longest reasonable user-agent interaction in your `EvalSet`.
- `custom_instructions`: Custom instructions for the `LlmBackedUserSimulator`. The instructions must contain the following formatting placeholders:
    - `{stop_signal}`: text to be generated when the user simulator decides that the conversation is over.
    - `{conversation_plan}`: the overall plan for the conversation that the user simulator must follow.
    - `{conversation_history}`: the conversation between the user and the agent so far.
Reasoning:
The LlmBackedUserSimulator now supports custom instructions, which gives users more control over the simulator's behavior. This new feature needs to be documented.
Reference: src/google/adk/evaluation/simulation/llm_backed_user_simulator.py
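Since the custom instructions are an ordinary format string, the placeholder contract can be checked and exercised with the stdlib alone. The template text below is illustrative, not the ADK default prompt:

```python
from string import Formatter

REQUIRED = {"stop_signal", "conversation_plan", "conversation_history"}

custom_instructions = (
    "You are simulating an impatient customer.\n"
    "Follow this plan: {conversation_plan}\n"
    "Conversation so far: {conversation_history}\n"
    "When the plan is complete, reply with exactly: {stop_signal}\n"
)

# Verify all required placeholders are present before handing the template over.
found = {name for _, name, _, _ in Formatter().parse(custom_instructions) if name}
missing = REQUIRED - found
assert not missing, f"template is missing placeholders: {missing}"

# Filling the template the way the simulator presumably does on each turn:
prompt = custom_instructions.format(
    stop_signal="</end_conversation>",
    conversation_plan="Ask about a refund, then escalate once.",
    conversation_history="user: Hi\nagent: Hello! How can I help?",
)
print(prompt)
```

A pre-flight check like this catches a missing `{stop_signal}` at configuration time instead of mid-evaluation.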
6. Update Cloud Run deployment documentation for new flags.
Doc file: docs/deploy/cloud-run.md
Current state:
The `adk deploy cloud_run` command documentation does not include the `--use_local_storage` flag and does not mention regex support for `--allow_origins`.
Proposed Change:
Update the description of the `adk deploy cloud_run` command to include the `--use_local_storage` flag and to mention that `--use_local_storage=false` is the default for Cloud Run deployments. Also, update the description of the `--allow_origins` flag to mention regex support.
Reasoning:
The adk deploy cloud_run command has new options for controlling local storage and CORS that need to be documented.
Reference: src/google/adk/cli/cli_deploy.py
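What regex support for allowed origins means in practice can be illustrated with the stdlib `re` module. The pattern below is an example for the docs, not a value taken from the ADK source:

```python
import re

# Anchored pattern allowing example.com and any of its subdomains over HTTPS.
ALLOWED_ORIGIN = re.compile(r"https://([a-z0-9-]+\.)*example\.com")


def origin_allowed(origin: str) -> bool:
    """Return True if the request's Origin header matches the allow-list regex."""
    return ALLOWED_ORIGIN.fullmatch(origin) is not None


print(origin_allowed("https://app.example.com"))  # True
print(origin_allowed("https://evil.com"))         # False
```

A single anchored regex like this covers every subdomain, which is exactly the case a plain string allow-list cannot express.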
7. Update GKE deployment documentation for new flags.
Doc file: docs/deploy/gke.md
Current state:
The `adk deploy gke` command documentation does not include the `--use_local_storage` flag and does not mention regex support for `--allow_origins`.
Proposed Change:
Update the description of the `adk deploy gke` command to include the `--use_local_storage` flag and to mention that `--use_local_storage=false` is the default for GKE deployments. Also, update the description of the `--allow_origins` flag to mention regex support.
Reasoning:
The adk deploy gke command has new options for controlling local storage and CORS that need to be documented.
Reference: src/google/adk/cli/cli_deploy.py
8. Update quickstart documentation for new flags.
Doc file: docs/get-started/quickstart.md
Current state:
The quickstart page describes the `adk run` and `adk web` commands but doesn't mention the `--use_local_storage` or `--allow_origins` flags.
Proposed Change:
Add a note about the `--use_local_storage` and `--allow_origins` flags to the `adk run` and `adk web` sections. For `--use_local_storage`, explain that it's `true` by default and that it can be disabled with `--no_use_local_storage`. For `--allow_origins`, add a brief note about its purpose and the new regex support.
Reasoning:
The adk run and adk web commands have new options for controlling local storage and CORS that need to be documented.
Reference: src/google/adk/cli/cli_tools_click.py
9. Document the LlmAgent.set_default_model() method.
Doc file: docs/agents/llm-agents.md
Current state:
The `LlmAgent` documentation does not include the `set_default_model()` method.
Proposed Change:
Add a new section to the page that explains how to use the `LlmAgent.set_default_model()` method to override the default model for all `LlmAgent` instances.
Reasoning:
A new method, LlmAgent.set_default_model(), has been added to allow users to override the default model for all LlmAgent instances. This needs to be documented.
Reference: src/google/adk/agents/llm_agent.py
10. Update StreamingMode documentation.
Doc file: docs/runtime/runconfig.md
Current state:
The existing documentation for `StreamingMode` is minimal.
Proposed Change:
Replace the existing documentation for `StreamingMode` with the new, more detailed explanations from the `run_config.py` docstrings.
Reasoning:
The docstrings for StreamingMode in run_config.py have been significantly improved and provide much more detail than the current documentation. The documentation should be updated to reflect these improvements.
Reference: src/google/adk/agents/run_config.py
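To make the updated section concrete, a streaming-mode enum can be sketched with the stdlib. The member names and values below are assumptions for illustration, not copied from `run_config.py`:

```python
from enum import Enum


class StreamingMode(Enum):
    """Sketch of a streaming-mode enum; names/values here are assumptions."""

    NONE = "none"  # one complete, buffered response per turn
    SSE = "sse"    # server-sent events: tokens streamed one-way to the client
    BIDI = "bidi"  # bidirectional streaming, e.g. live audio conversations


def describe(mode: StreamingMode) -> str:
    return {
        StreamingMode.NONE: "buffered responses",
        StreamingMode.SSE: "one-way token streaming",
        StreamingMode.BIDI: "two-way live streaming",
    }[mode]


print(describe(StreamingMode.SSE))  # -> one-way token streaming
```

A short per-member description like `describe()` produces is the kind of detail the improved docstrings can feed directly into the docs table.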
11. Review and update LiteLLM documentation.
Doc file: docs/agents/models.md
Current state:
The documentation describes how to use LiteLLM but may not reflect the latest improvements.
Proposed Change:
Review the existing documentation for LiteLLM and ensure that it is up-to-date with the latest changes. This includes mentioning the improved handling of tool results, multipart content, and MIME types.
Reasoning:
The lite_llm.py file has been significantly updated to improve the LiteLLM integration. The documentation should be reviewed to ensure it reflects these improvements and provides the best possible guidance to users.
Reference: src/google/adk/models/lite_llm.py