
Conversation

@helloyongyang
Collaborator

No description provided.

@helloyongyang helloyongyang merged commit aa80886 into main Jul 25, 2025
2 checks passed
@helloyongyang helloyongyang deleted the devv branch July 25, 2025 04:06

@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @helloyongyang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the evaluation capabilities of the CustomGenerateJustInfer class by adding explicit support for multi-turn conversation assessment. It refactors the evaluation workflow to incorporate these new metrics and cleans up the codebase by removing unnecessary dependencies, resulting in a more focused and configurable evaluation component.

Highlights

  • Multi-turn Evaluation Logic: I've introduced a new eval_answer method within the CustomGenerateJustInfer class. This method evaluates multi-turn conversational responses by calculating the accuracy of the first turn (T1V) and the conditional accuracy of the second turn given the first (T1V_T2V). It includes helper functions to pair turns and check individual turn accuracy; a sketch of how these pieces could fit together follows this list.
  • Refactoring and Dependency Management: The existing eval method has been refactored to delegate the core answer evaluation to the newly created eval_answer method. Additionally, several unused imports (glob, human_eval.data, human_eval.evaluation, tqdm, eval_base) have been removed, streamlining the class's dependencies and improving code clarity.
  • Configurable Output Path: The saving of evaluation results in the eval method now uses self.config.save.save_path, making the output file location configurable rather than hardcoded to custom_samples_ans.json.
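
To make the new flow concrete, here is a minimal sketch of how the pairing and counting described above could fit together. It is reconstructed from the snippets quoted in the review below, not copied from the merged code; the record fields ('image', 'gt', 'answer'), the check_turn helper, and the loguru logger are assumptions.

    from loguru import logger  # assumed; the project may use a different logger

    def eval_answer(data):
        # data is assumed to be a flat list in which consecutive records
        # form one two-turn conversation sample (even length; see review below)
        T1V = 0      # pairs whose first turn is answered correctly
        T1V_T2V = 0  # pairs whose first and second turns are both correct

        def create_pairs(lst):
            return [(lst[i], lst[i + 1]) for i in range(0, len(lst), 2)]

        def check_turn(item, turn):
            # a turn counts as correct when the ground truth string appears
            # (case-insensitively) in the generated answer for that turn
            return item['gt'][turn].lower() in item['answer'][turn].lower()

        for pair1, pair2 in create_pairs(data):
            assert pair1['image'] == pair2['image']
            if check_turn(pair1, 0):
                T1V += 1
                if check_turn(pair2, 1):
                    T1V_T2V += 1

        logger.info(f'CustomGenerateJustInfer T1V: {T1V}, T1V_T2V: {T1V_T2V}')
        logger.info(f'CustomGenerateJustInfer Possibility: '
                    f'{T1V_T2V / T1V if T1V > 0 else 0.0}')
        return T1V, T1V_T2V

Note that the guarded division here already folds in the reviewer's ZeroDivisionError fix below; the merged code divides unconditionally.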


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces logic to evaluate multi-turn conversations by processing pairs of conversation turns. The changes include a new eval_answer method that calculates metrics based on the correctness of single and dual-turn interactions. There is a critical logical bug in the accuracy calculation for the second turn, a high-severity issue where a ZeroDivisionError could occur, and several medium-severity suggestions to improve code robustness, readability, and maintainability.

    assert pair2['question'][0] == pair1['question'][1]

    logger.info(f'CustomGenerateJustInfer T1V: {T1V}, T1V_T2V: {T1V_T2V}')
    logger.info(f'CustomGenerateJustInfer Possibility: {T1V_T2V / T1V}')


high

This division will raise a ZeroDivisionError if T1V is 0. Handle this case to prevent the script from crashing.

        logger.info(f'CustomGenerateJustInfer Possibility: {T1V_T2V / T1V if T1V > 0 else 0.0}')

Comment on lines +35 to +36
    T1V = 0
    T1V_T2V = 0


medium

The variable names T1V and T1V_T2V are not descriptive and reduce code readability. Consider using more descriptive names that clearly state their purpose, such as turn1_valid_count and both_turns_valid_count.

    T1V_T2V = 0

    def create_pairs(lst):
        return [(lst[i], lst[i + 1]) for i in range(0, len(lst), 2)]


medium

The list comprehension will raise an IndexError if lst has an odd number of elements. Validate the input to prevent unexpected crashes. Consider adding a check at the beginning of the function.

Suggested change:

    assert len(lst) % 2 == 0, "Input data for pairing must have an even number of elements."
    return [(lst[i], lst[i + 1]) for i in range(0, len(lst), 2)]
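
If an explicit exception is preferred over the assert (see the later comment on assert semantics), the same guard could be written as below; this is a sketch against the quoted helper, not the merged code:

    def create_pairs(lst):
        # fail with a clear error on odd-length input instead of letting
        # the comprehension raise an opaque IndexError on the last element
        if len(lst) % 2 != 0:
            raise ValueError(f'expected an even number of records, got {len(lst)}')
        return [(lst[i], lst[i + 1]) for i in range(0, len(lst), 2)]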

Comment on lines +42 to +44
    if gt[turn].lower() in answer[turn].lower():
        return True
    return False


medium

The conditional block that returns a boolean can be simplified to a single return statement for conciseness.

            return gt[turn].lower() in answer[turn].lower()

    pair_data = create_pairs(data)

    for idx, item in enumerate(pair_data):
        assert item[0]['image'] == item[1]['image']


medium

Using assert for data validation can be risky, as assertions can be disabled with the -O flag in Python, which is common in production environments. For validating input data, it's generally safer to raise a ValueError or log an error and continue, to ensure the check is always performed.
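
For example, the quoted assert could become an explicit check that still runs under python -O; validate_pairs is a hypothetical helper name, and the 'image' field follows the snippet above:

    def validate_pairs(pair_data):
        # explicit validation that cannot be stripped by the -O flag
        for idx, item in enumerate(pair_data):
            if item[0]['image'] != item[1]['image']:
                raise ValueError(f'pair {idx}: records reference different images')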
