
Conversation

@chenyushuo (Collaborator)

Description

As the title says.

Checklist

Please check the following items before the code is ready to be reviewed.

  • Code has passed all tests
  • Docstrings have been added/updated in Google Style
  • Documentation has been updated
  • Code is ready for review

@gemini-code-assist (Contributor)

Summary of Changes

Hello @chenyushuo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request fixes a bug in the plugin_loader module that caused incorrect path resolution when loading plugin modules. The fix ensures plugin files are correctly identified and loaded regardless of how their paths are provided. The associated test suite has also been parameterized so that plugin loading is validated with both absolute and relative path formats, guarding against regressions and improving the reliability of the plugin system.

Highlights

  • Bug Fix in Plugin Loading: Corrected an issue in plugin_loader.py where plugin modules failed to load because an incorrect path was constructed when calling load_from_file. The fix ensures the full, correct file path is used (an illustrative sketch of this kind of fix follows this list).
  • Enhanced Test Coverage: Refactored the existing plugin loading tests (test_load_plugins_local, test_load_plugins_remote, test_passing_custom_class) to use the parameterized library so they run against multiple configurations.
  • Robust Path Handling in Tests: The updated tests now explicitly verify plugin loading with both absolute and relative plugin directory paths, improving the robustness of the plugin_loader.
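
For context, here is a minimal sketch of the kind of path fix described in the first highlight. It is illustrative only: it assumes the loader resolves plugin files with importlib, and the function and parameter names below are placeholders rather than the repository's actual load_from_file API.

```python
import importlib.util
import os


def load_plugin_module(plugin_dir: str, file_name: str):
    """Load one plugin file, accepting a relative or absolute plugin_dir (illustrative sketch)."""
    # Join the directory and file name before loading; passing only file_name
    # would resolve against the current working directory and miss plugins
    # that live elsewhere, which is the class of bug this PR fixes.
    full_path = os.path.abspath(os.path.join(plugin_dir, file_name))
    module_name = os.path.splitext(os.path.basename(full_path))[0]
    spec = importlib.util.spec_from_file_location(module_name, full_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```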

@chenyushuo (Collaborator, Author)

/unittest-diff

@gemini-code-assist (Contributor) left a comment

Code Review

This pull request fixes a bug in the plugin loader related to handling file paths, ensuring that both relative and absolute paths for plugin directories are correctly processed. The accompanying tests have been parameterized to validate this fix for different path formats. My review focuses on improving the robustness and maintainability of these new tests. I've suggested using context managers to handle environment variable changes to prevent test leakage, and also pointed out an opportunity to reduce code duplication in the test parameterization.
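
For readers following along, here is a minimal sketch of the test pattern the review refers to, assuming the parameterized library and unittest; the directory values and environment variable name are placeholders, not the repository's actual test fixtures.

```python
import os
import unittest
from unittest import mock

from parameterized import parameterized


class TestPluginLoaderSketch(unittest.TestCase):
    # Exercise both path formats the PR's tests cover; these directories are
    # illustrative, not the repository's real layout.
    @parameterized.expand([
        ("/workspace/tests/utils/plugins",),  # absolute path
        ("tests/utils/plugins",),             # relative path
    ])
    def test_load_plugins_local(self, plugin_dir):
        # mock.patch.dict restores os.environ when the block exits, so an
        # environment change in one parameterized case cannot leak into the
        # next case, which is the leakage concern the review raises.
        with mock.patch.dict(os.environ, {"PLUGIN_DIR": plugin_dir}):
            resolved = os.path.abspath(os.environ["PLUGIN_DIR"])
            self.assertTrue(os.path.isabs(resolved))
```

Using mock.patch.dict as a context manager keeps the cleanup automatic even when an assertion fails, which is why the review prefers it over setting and manually restoring os.environ.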

@github-actions

Summary

Tests 📝: 21 | Passed ✅: 20 | Failed ❌: 1 | Skipped ⏭️: 0 | Other ❓: 0 | Flaky 🍂: 0 | Duration ⏱️: 46ms

Failed Tests

❌ tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_1_tests_utils_plugins: The test failed in the call phase due to an assertion error

Tests

Test Name Duration
tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_not_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_empty_ground_truth 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_empty_solution_string 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_multiple_boxed_answers_in_solution 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_not_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_not_boxed 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_raw_and_ground_truth_boxed_equivalent 1ms
tests/utils/eval_utils_test.py::TestMathEvalUtils::test_extract_answer 1ms
tests/utils/eval_utils_test.py::TestMathEvalUtils::test_verify_math_answer 1ms
tests/utils/eval_utils_test.py::TestEvalUtils::test_is_equiv 1ms
tests/utils/log_test.py::LogTest::test_actor_log 6ms
tests/utils/log_test.py::LogTest::test_group_by_node 4ms
tests/utils/log_test.py::LogTest::test_no_actor_log 1ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_0__workspace_tests_utils_plugins 1ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_1_tests_utils_plugins 1ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_0__workspace_tests_utils_plugins 9ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_1_tests_utils_plugins 9ms
tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_0__workspace_tests_utils_plugins 5ms
tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_1_tests_utils_plugins 4ms

Github Test Reporter by CTRF 💚

@chenyushuo (Collaborator, Author)

/unittest-diff

@github-actions

Summary

Tests 📝: 21 | Passed ✅: 19 | Failed ❌: 2 | Skipped ⏭️: 0 | Other ❓: 0 | Flaky 🍂: 0 | Duration ⏱️: 45.9s

Failed Tests

❌ tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_0__workspace_tests_utils_plugins: The test failed in the call phase due to an exception
❌ tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_1_tests_utils_plugins: The test failed in the call phase due to an exception

Tests

Test Name Duration
tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_equivalent 53ms
tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_not_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_empty_ground_truth 2ms
tests/utils/eval_utils_test.py::TestComputeScore::test_empty_solution_string 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_multiple_boxed_answers_in_solution 2ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_not_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_not_boxed 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_raw_and_ground_truth_boxed_equivalent 1ms
tests/utils/eval_utils_test.py::TestMathEvalUtils::test_extract_answer 4ms
tests/utils/eval_utils_test.py::TestMathEvalUtils::test_verify_math_answer 294ms
tests/utils/eval_utils_test.py::TestEvalUtils::test_is_equiv 5ms
tests/utils/log_test.py::LogTest::test_actor_log 6.8s
tests/utils/log_test.py::LogTest::test_group_by_node 4.6s
tests/utils/log_test.py::LogTest::test_no_actor_log 904ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_0__workspace_tests_utils_plugins 92ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_1_tests_utils_plugins 89ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_0__workspace_tests_utils_plugins 9.4s
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_1_tests_utils_plugins 9.3s
tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_0__workspace_tests_utils_plugins 4.8s
tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_1_tests_utils_plugins 4.7s

Github Test Reporter by CTRF 💚

@chenyushuo (Collaborator, Author)

/unittest-diff

@github-actions

Summary

Tests 📝: 21 | Passed ✅: 21 | Failed ❌: 0 | Skipped ⏭️: 0 | Other ❓: 0 | Flaky 🍂: 0 | Duration ⏱️: 46.5s

Tests

Test Name Duration
tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_equivalent 53ms
tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_not_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_empty_ground_truth 2ms
tests/utils/eval_utils_test.py::TestComputeScore::test_empty_solution_string 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_multiple_boxed_answers_in_solution 2ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_not_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_not_boxed 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_raw_and_ground_truth_boxed_equivalent 1ms
tests/utils/eval_utils_test.py::TestMathEvalUtils::test_extract_answer 4ms
tests/utils/eval_utils_test.py::TestMathEvalUtils::test_verify_math_answer 263ms
tests/utils/eval_utils_test.py::TestEvalUtils::test_is_equiv 5ms
tests/utils/log_test.py::LogTest::test_actor_log 6.7s
tests/utils/log_test.py::LogTest::test_group_by_node 4.7s
tests/utils/log_test.py::LogTest::test_no_actor_log 904ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_0__workspace_tests_utils_plugins 93ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_1_tests_utils_plugins 89ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_0__workspace_tests_utils_plugins 9.4s
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_1_tests_utils_plugins 9.5s
tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_0__workspace_tests_utils_plugins 5.3s
tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_1_tests_utils_plugins 4.9s

Github Test Reporter by CTRF 💚

@pan-x-c merged commit 6ff5195 into agentscope-ai:main on Oct 30, 2025
1 check passed
