Conversation


@holtskinner holtskinner commented Jul 14, 2025

@holtskinner holtskinner requested a review from a team as a code owner July 14, 2025 15:58
@holtskinner holtskinner merged commit 0e93e43 into main Jul 14, 2025
2 checks passed
@holtskinner holtskinner deleted the a2a-eval branch July 14, 2025 15:59

@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @holtskinner, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request adds a comprehensive Jupyter Notebook that serves as a practical guide for evaluating A2A and ADK agents within the Google Cloud Vertex AI ecosystem. It provides a concrete example of an agent, integrates it into the A2A framework, and demonstrates how to leverage Vertex AI's evaluation capabilities to measure the agent's performance and quality.

Highlights

  • New Evaluation Notebook: Introduced a new Jupyter Notebook (notebooks/a2a_evaluation.ipynb) dedicated to demonstrating end-to-end evaluation of A2A/ADK agents using Vertex AI Evaluation services.
  • Reimbursement Agent Implementation: The notebook includes a complete, in-memory implementation of a sample 'Reimbursement Agent' built using the google.adk framework, showcasing its capabilities for handling reimbursement requests with tool usage.
  • A2A Integration and Execution: An AgentExecutor is implemented to integrate the ReimbursementAgent with the a2a framework, allowing it to process messages, manage tasks, and handle structured form inputs/outputs.
  • Vertex AI Evaluation Setup: The notebook configures and utilizes Vertex AI Evaluation, defining helper functions to parse agent outcomes and setting up an EvalTask to assess agent responses based on metrics like 'safety' and 'coherence'.
  • Evaluation Data and Visualization: A sample evaluation dataset is provided, and functions are included to display summary metrics and row-wise results from the Vertex AI evaluation, facilitating quick analysis of agent performance.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. The supported commands are summarized below.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment

Code Review

The pull request introduces a new notebook demonstrating end-to-end evaluation of an A2A/ADK agent using Vertex AI. Several critical issues were identified, including missing await calls and a hardcoded session ID, along with other high- and medium-severity issues related to correctness, robustness, and maintainability.

                    )
                    continue
                else:
                    updater.update_status(

critical

Missing await for updater.update_status(...). The coroutine is never executed, so the status is never updated.

                    await updater.update_status(
                        TaskState.failed,
                        new_agent_text_message(
                            'Reaching an unexpected state',
                            task.contextId,
                            task.id,
                        ),
                        final=True,
                    )
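The failure mode behind these findings can be reproduced in isolation: calling an async method without await only creates a coroutine object; its body never runs. A minimal sketch with a toy stand-in for the task updater (ToyUpdater is illustrative, not the real a2a TaskUpdater API):

```python
import asyncio


class ToyUpdater:
    """Illustrative stand-in for a status updater; not the real a2a class."""

    def __init__(self) -> None:
        self.statuses: list[str] = []

    async def update_status(self, state: str) -> None:
        # This body only executes once the coroutine is awaited.
        self.statuses.append(state)


async def main() -> list[str]:
    updater = ToyUpdater()
    updater.update_status("working")  # BUG: coroutine created but never run
    await updater.update_status("failed")  # runs as expected
    return updater.statuses


print(asyncio.run(main()))  # ['failed'] -- the un-awaited update is lost
```

Python also emits a "coroutine ... was never awaited" RuntimeWarning for the buggy call, which is a useful signal when auditing code like this notebook's executor.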

                "parts": [{"kind": "text", "text": query}],  # good one
                "messageId": message_id_send,
                "kind": "message",
                "contextId": "colab-session-xyz",

critical

Using a hardcoded contextId for all evaluation runs is a critical issue. Generate a unique contextId for each call to a2a_parsed_outcome.

                    "contextId": f"colab-session-{get_id()}",
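The suggestion assumes a get_id() helper; any per-call unique value works. A minimal sketch using the standard library's uuid module (the helper name is hypothetical):

```python
import uuid


def new_context_id() -> str:
    # Hypothetical helper; uuid4 yields a fresh ID for each evaluation run.
    return f"colab-session-{uuid.uuid4().hex}"


message = {
    "kind": "message",
    "contextId": new_context_id(),  # unique per call, unlike a hardcoded value
}
```

With a unique contextId per call, each evaluation run gets its own conversation context instead of all runs sharing (and polluting) one session.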

        # not have current task, create a new one and use it.
        if not task:
            task = new_task(context.message)
            event_queue.enqueue_event(task)

critical

Missing await for event_queue.enqueue_event(task). The coroutine is never executed, so the task is never enqueued.

            await event_queue.enqueue_event(task)

            is_task_complete = item["is_task_complete"]
            artifacts = None
            if not is_task_complete:
                updater.update_status(

critical

Missing await for updater.update_status(...). The coroutine is never executed, so the status is never updated.

                await updater.update_status(
                    TaskState.working,
                    new_agent_text_message(
                        item['updates'], task.contextId, task.id
                    ),
                )

                    and "result" in item["content"]["response"]
                ):
                    data = json.loads(item["content"]["response"]["result"])
                    updater.update_status(

critical

Missing await for updater.update_status(...). The coroutine is never executed, so the status is never updated.

                    await updater.update_status(
                        TaskState.input_required,
                        new_agent_parts_message(
                            [Part(root=DataPart(data=data))],
                            task.contextId,
                            task.id,
                        ),
                        final=True,
                    )

    """Parse ADK event output into a structured dictionary format, with the predicted trajectory dumped as a JSON string."""

    final_response = ""
    predicted_trajectory_list = []

high

predicted_trajectory_list is initialized but never populated, resulting in an empty predicted_trajectory field. Extract tool calls from the rpc_response_send_msg.
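One way to populate the trajectory is to walk the response's event parts and collect every function call. A sketch under assumed field names ('parts', 'function_call', 'name', 'args' are guesses at the payload shape, and the sample tool name is invented, not the confirmed schema):

```python
import json


def extract_predicted_trajectory(events: list[dict]) -> str:
    """Collect tool calls from event parts into a JSON string.

    Field names here are assumptions about the A2A/ADK payload
    shape, not the confirmed schema.
    """
    trajectory = [
        {
            "tool_name": part["function_call"]["name"],
            "tool_input": part["function_call"]["args"],
        }
        for event in events
        for part in event.get("parts", [])
        if part.get("function_call")
    ]
    return json.dumps(trajectory)


# Hypothetical events: one tool call, one plain-text reply.
sample_events = [
    {"parts": [{"function_call": {"name": "create_request_form",
                                  "args": {"amount": "20"}}}]},
    {"parts": [{"text": "Done."}]},
]
print(extract_predicted_trajectory(sample_events))
```

Dumping the list with json.dumps matches the docstring's promise that the predicted trajectory is a JSON string, which is also what the display helper later parses with json.loads.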

        if any(selected_metric in k for selected_metric in metrics)
    }

    min_val = min(summary_metrics.values())

medium

Calling min() or max() on an empty sequence will raise a ValueError. Add a check to handle this case gracefully.

    if not summary_metrics.values():
        print("No summary metrics to plot.")
        return

    min_val = min(summary_metrics.values())
    max_val = max(summary_metrics.values())

    markdown = "### AI Response\n"
    markdown += f"{output['response']}\n\n"

    if output["predicted_trajectory"]:

medium

This line modifies the input dictionary output directly. Load the JSON into a new variable to avoid modifying the original data structure.

    if output["predicted_trajectory"]:
        predicted_trajectory = json.loads(output["predicted_trajectory"])
        markdown += "### Function Calls\n"
        for call in predicted_trajectory:
            markdown += f"- **Function**: `{call['tool_name']}`\n"
            markdown += "  - **Arguments**:\n"
            for key, value in call["tool_input"].items():
                markdown += f"    - `{key}`: `{value}`\n"

    form_request: dict[str, Any],
    tool_context: ToolContext,
    instructions: str | None = None,
) -> dict[str, Any]:

medium

The function return_form is type-hinted to return a dict[str, Any], but it actually returns a string from json.dumps(form_dict) on line 353. This mismatch can be misleading. Update the return type hint to str.

) -> str:

                elif (
                    event.content
                    and event.content.parts
                    and any([True for p in event.content.parts if p.function_response])

medium

The condition any([True for p in event.content.parts if p.function_response]) is verbose. Simplify using a generator expression.

elif (
                    event.content
                    and event.content.parts
                    and any(p.function_response for p in event.content.parts)
                ):
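Besides being shorter, the generator form short-circuits at the first truthy part instead of materializing a full list first. A self-contained check (Part here is a stub with only the attribute the check needs, not the real a2a type):

```python
class Part:
    """Stub exposing only function_response; not the real a2a Part class."""

    def __init__(self, function_response=None):
        self.function_response = function_response


parts = [Part(), Part(function_response={"status": "ok"})]

# Generator expression: any() stops at the first truthy value,
# and no temporary list of booleans is built.
has_function_response = any(p.function_response for p in parts)
print(has_function_response)  # True
```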
