
fix(next): update Next.js and related dependencies to v15.5.9, improve Gemini model selection and rotation logic #19

Merged
hoangsonww merged 1 commit into master from fix/fix-next on Dec 27, 2025

Conversation

@hoangsonww
Owner

This pull request introduces significant improvements to Gemini AI model selection and error handling in lib/chatWithCollabifyAI.ts, as well as a minor dependency update and a new IDE configuration file. The main enhancement is the dynamic fetching, filtering, and rotation of eligible Gemini models, improving reliability and flexibility in AI chat interactions.

Gemini AI Model Selection & Error Handling:

  • Added logic to dynamically fetch available Gemini models from the API, filter for eligible models, deduplicate, and rotate through them for each request, with a fallback to a default model if fetching fails. [1] [2]
  • Updated the chat function to attempt requests using all eligible models in rotation, throwing the last encountered error if all fail, which improves robustness in case of model-specific issues. [1] [2]
  • Removed hardcoded use of the gemini-1.5-flash model, replacing it with dynamic model selection logic.
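The flow described above can be sketched roughly as follows. The FALLBACK_GEMINI_MODELS, dedupeModels, and modelRotationIndex names appear in the review excerpts below; the ListModels endpoint shape and the getEligibleModels / rotateModels helpers are assumptions for illustration, not the merged implementation.

```typescript
// Sketch of the fetch → filter → dedupe → rotate flow, with a fallback
// list used when the API request fails.
const FALLBACK_GEMINI_MODELS = ["gemini-2.5-flash"];

let modelRotationIndex = 0;

// Order-preserving deduplication.
const dedupeModels = (models: string[]): string[] => [...new Set(models)];

// Rotate the list so each successive request starts from a different model.
function rotateModels(models: string[]): string[] {
  const start = modelRotationIndex % models.length;
  modelRotationIndex = (modelRotationIndex + 1) % models.length;
  return [...models.slice(start), ...models.slice(0, start)];
}

// Fetch available models and keep those that support generateContent (the
// PR also treats a missing method list as eligible, per the review below).
async function getEligibleModels(apiKey: string): Promise<string[]> {
  try {
    const res = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models?key=${apiKey}`,
    );
    if (!res.ok) throw new Error(`ListModels failed: ${res.status}`);
    const data = (await res.json()) as {
      models?: { name: string; supportedGenerationMethods?: string[] }[];
    };
    const eligible = (data.models ?? [])
      .filter((m) => {
        const methods = m.supportedGenerationMethods ?? [];
        return methods.length === 0 || methods.includes("generateContent");
      })
      .map((m) => m.name.replace(/^models\//, ""));
    const unique = dedupeModels(eligible);
    return rotateModels(unique.length > 0 ? unique : FALLBACK_GEMINI_MODELS);
  } catch {
    return rotateModels(FALLBACK_GEMINI_MODELS);
  }
}
```

Each call then sees the eligible models starting from a different offset, so load and failures are spread across models rather than always hitting the first one.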

Dependency Update:

  • Updated the next dependency in package.json from version 15.2.4 to ^15.5.9 for improved compatibility and bug fixes.
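For reference, the updated package.json entry would look roughly like this (a minimal excerpt; the caret range accepts any 15.x release at or above 15.5.9):

```json
{
  "dependencies": {
    "next": "^15.5.9"
  }
}
```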

IDE Configuration:

  • Added .idea/copilot.data.migration.ask2agent.xml to store migration status for the Ask2Agent feature in the IDE.

@hoangsonww hoangsonww self-assigned this Dec 27, 2025
@hoangsonww hoangsonww added labels on Dec 27, 2025: bug (Something isn't working), documentation (Improvements or additions to documentation), duplicate (This issue or pull request already exists), enhancement (New feature or request), good first issue (Good for newcomers)
@vercel

vercel Bot commented Dec 27, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: collabify-project-manager | Deployment: Ready | Review: Preview, Comment | Updated (UTC): Dec 27, 2025 3:55am

@hoangsonww hoangsonww merged commit 9583449 into master Dec 27, 2025
3 checks passed
@gemini-code-assist

Summary of Changes

Hello @hoangsonww, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the Gemini AI integration by introducing dynamic model selection and robust error handling, moving away from a hardcoded model. It also includes an important update to the Next.js framework and its associated dependencies, ensuring the application remains current and stable. A minor IDE configuration file was also added.

Highlights

  • Dynamic Gemini Model Selection: Implemented logic to dynamically fetch, filter, deduplicate, and rotate eligible Gemini AI models, enhancing reliability and flexibility in AI chat interactions.
  • Robust AI Chat Error Handling: The chat function now attempts requests with multiple eligible models in rotation, improving resilience against model-specific failures and ensuring a fallback mechanism.
  • Next.js and Dependency Updates: Upgraded the next dependency to ^15.5.9 along with related @next/swc-* and sharp packages, ensuring compatibility and leveraging the latest features and bug fixes.
  • IDE Configuration: Added a new .idea/copilot.data.migration.ask2agent.xml file for IDE-specific migration status.
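The retry behavior in the second highlight can be sketched as the loop below: try each eligible model in rotation order and rethrow only the last error if every model fails. The sendToModel callback stands in for the real Gemini SDK call and is an assumption for illustration, not the PR's exact code.

```typescript
// A model-specific failure falls through to the next model; only when the
// whole rotation is exhausted does the caller see an error.
type SendFn = (modelName: string, prompt: string) => Promise<string>;

async function chatWithRotation(
  models: string[],
  prompt: string,
  sendToModel: SendFn,
): Promise<string> {
  let lastError: unknown;
  for (const modelName of models) {
    try {
      // First model that responds wins; later models are never called.
      return await sendToModel(modelName, prompt);
    } catch (error) {
      // Remember the failure and move on to the next model in rotation.
      lastError = error;
    }
  }
  // Every model failed: surface the last error encountered.
  throw lastError ?? new Error("No eligible Gemini models available");
}
```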


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces significant improvements to the Gemini AI integration by adding dynamic model fetching, caching, and rotation, which enhances the resilience and flexibility of the chat feature. The dependency updates, including Next.js, are also noted. My review focuses on the new model selection logic in lib/chatWithCollabifyAI.ts, where I've identified a few areas for improvement regarding correctness, maintainability, and robustness. Specifically, I've pointed out a potential typo in a fallback model name, suggested code simplifications, recommended better error logging for debuggability, and highlighted a potential race condition and a risky assumption in the model eligibility logic. Overall, these are great enhancements to the application.

}>;
};

const FALLBACK_GEMINI_MODELS = ["gemini-2.5-flash"];

Severity: high

The model name gemini-2.5-flash appears to be a typo. The current flash model from Google is gemini-1.5-flash. Using an incorrect model name will cause the fallback mechanism to fail. Please verify and correct the model name.

Suggested change
const FALLBACK_GEMINI_MODELS = ["gemini-2.5-flash"];
const FALLBACK_GEMINI_MODELS = ["gemini-1.5-flash"];

}

const methods = model.supportedGenerationMethods ?? [];
return methods.length === 0 || methods.includes("generateContent");

Severity: high

The condition methods.length === 0 optimistically assumes that if the supportedGenerationMethods array is empty or missing, the model is eligible for generateContent. This could be a risky assumption and might lead to using models that don't support content generation. It would be safer to explicitly check for the generateContent method. If the API documentation guarantees that an empty array implies support, please add a comment to clarify this behavior.

Suggested change
return methods.length === 0 || methods.includes("generateContent");
return methods.includes("generateContent");

Comment on lines +41 to +51
const dedupeModels = (models: string[]): string[] => {
const seen = new Set<string>();
const unique: string[] = [];
for (const model of models) {
if (!seen.has(model)) {
seen.add(model);
unique.push(model);
}
}
return unique;
};

Severity: medium

The dedupeModels function can be made more concise and idiomatic by using a Set to automatically handle uniqueness. This improves readability and reduces the amount of code.

const dedupeModels = (models: string[]): string[] => [...new Set(models)];

Comment on lines +57 to +58
const startIndex = modelRotationIndex % models.length;
modelRotationIndex = (modelRotationIndex + 1) % models.length;

Severity: medium

There is a potential race condition here with modelRotationIndex. Since it's a shared mutable global variable, concurrent requests to getRotatedModels could read the same modelRotationIndex value before it's updated. This would result in them receiving the same model sequence, leading to imperfect rotation under high load. While this might not be critical for the current application, it's an important consideration for concurrent environments.

Comment on lines +165 to +167
} catch {
modelNames = FALLBACK_GEMINI_MODELS;
}

Severity: medium

The catch block is currently empty, which means any errors from fetchGeminiModels will be silently ignored, and the system will fall back to the default models without any indication of a problem. This can make debugging difficult. It's a good practice to log the error to provide visibility into failures.

} catch (error) {
    console.error("Failed to fetch Gemini models, using fallback:", error);
    modelNames = FALLBACK_GEMINI_MODELS;
  }

Comment on lines +192 to +194
} catch (error) {
lastError = error;
}

Severity: medium

In the model rotation loop, you're catching errors but only storing the last one. If multiple models fail, the reasons for the earlier failures are lost, which can make debugging more difficult. It would be beneficial to log each error as it occurs to get a complete picture of what went wrong during the rotation.

    } catch (error) {
      console.warn(`Model ${modelName} failed:`, error);
      lastError = error;
    }

@github-project-automation github-project-automation Bot moved this from Backlog to Done in Collabify Project Board Dec 28, 2025
@hoangsonww hoangsonww moved this from Done to Backlog in Collabify Project Board Dec 28, 2025
@hoangsonww hoangsonww moved this from Backlog to Ready in Collabify Project Board Feb 7, 2026