
fix(setup): preserve model selection on provider re-run #679

Open

Protocol-zero-0 wants to merge 2 commits into nearai:main from Protocol-zero-0:fix/600-onboarding-model-retention

Conversation

@Protocol-zero-0

Summary

  • preserve selected_model when the setup wizard is re-run with the same LLM backend
  • clear the stored model only when the user actually switches to a different backend
  • add regression tests and sync the onboarding spec in src/setup/README.md

Problem

Re-running onboarding remembered most prior settings, but the model name was lost and had to be entered again.

Root cause

Provider setup helpers reset selected_model unconditionally, even when the backend did not change.

Fix

Introduce a shared helper that updates the backend while only clearing selected_model on real backend changes.
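
The fix can be sketched roughly like this. The helper name comes from the PR description; the surrounding `Wizard` and `Settings` shapes and field types are hypothetical stand-ins, not the actual nearai codebase:

```rust
// Hedged sketch of the helper described in this PR. The `Settings` and
// `Wizard` types below are assumptions for illustration; only the method
// name `set_llm_backend_preserving_model` and the fields `llm_backend` /
// `selected_model` are taken from the PR description.
#[derive(Default)]
pub struct Settings {
    pub llm_backend: String,
    pub selected_model: Option<String>,
}

pub struct Wizard {
    pub settings: Settings,
}

impl Wizard {
    /// Update the LLM backend, clearing the stored model only when the
    /// backend actually changes; a same-backend re-run keeps the model.
    pub fn set_llm_backend_preserving_model(&mut self, backend: &str) {
        if self.settings.llm_backend != backend {
            self.settings.selected_model = None;
        }
        self.settings.llm_backend = backend.to_string();
    }
}
```

Routing every provider setup path through one helper like this makes the clear-on-switch rule uniform instead of each path resetting `selected_model` on its own.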

Tests

  • cargo test --lib test_set_llm_backend
  • cargo fmt --check
  • cargo clippy --all --benches --tests --examples --all-features -- -D warnings

Scope / Non-goals

  • does not change model selection UX beyond preserving the existing model on same-backend re-runs
  • does not change provider-specific model fetching behavior

Made with Cursor

Keep the previously selected model when onboarding is re-run with the same LLM backend so users are not forced to re-enter it. Add regression tests for same-backend retention and backend-switch reset behavior, and document the setup rule in the onboarding spec.

Made-with: Cursor
Copilot AI review requested due to automatic review settings March 7, 2026 20:17
@github-actions github-actions bot added size: M 50-199 changed lines scope: setup Onboarding / setup scope: docs Documentation risk: high Safety, secrets, auth, or critical infrastructure contributor: new First-time contributor and removed size: M 50-199 changed lines labels Mar 7, 2026
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the user experience of the setup wizard by intelligently managing the selected LLM model. Previously, re-running the setup process would unconditionally clear the chosen model, forcing users to re-enter it even if they were using the same backend. The changes introduce a new helper function that ensures the model selection is preserved when the backend is not changed, only clearing it when a different backend is explicitly chosen. This streamlines the setup process and reduces redundant user input.

Highlights

  • Model Selection Preservation: Implemented a mechanism to preserve the selected_model during the setup wizard's re-run if the LLM backend remains unchanged.
  • Conditional Model Clearing: Ensured that the selected_model is only cleared when a user explicitly switches to a different LLM backend, preventing unnecessary re-entry.
  • New Helper Function: Introduced a new shared helper function, set_llm_backend_preserving_model, to encapsulate the logic for updating the backend and conditionally clearing the model.
  • Regression Tests: Added dedicated regression tests to validate the new model preservation and clearing behavior under different scenarios.
  • Documentation Update: Updated the src/setup/README.md documentation to reflect the revised model selection handling during the onboarding process.
Changelog
  • src/setup/README.md
    • Updated description for OpenAI-compatible presets to clarify selected_model clearing conditions.
    • Added a point under API-key providers regarding selected_model preservation when the backend is unchanged.
  • src/setup/wizard.rs
    • Added a new private method set_llm_backend_preserving_model to handle LLM backend assignment and conditional model clearing.
    • Replaced direct assignments to self.settings.llm_backend and subsequent selected_model clearing with calls to set_llm_backend_preserving_model in setup_no_wizard_provider, setup_nearai, setup_api_key_provider, setup_openai_compatible_preset, and setup_openai_compatible_provider.
    • Added two new unit tests: test_set_llm_backend_preserves_model_when_backend_unchanged and test_set_llm_backend_clears_model_when_backend_changes.
Activity
  • Implemented a new helper function to manage LLM backend and model selection.
  • Refactored existing setup functions to utilize the new helper.
  • Added comprehensive unit tests to cover the new logic.
  • Updated relevant documentation to reflect the changes in behavior.
  • No external review comments or discussions have occurred yet.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request fixes a bug where the selected model was unconditionally cleared when re-running the setup wizard, even if the LLM provider remained the same. The fix introduces a new helper function, set_llm_backend_preserving_model, which conditionally clears the selected_model only when the backend provider actually changes. This new function is consistently applied across all provider setup helpers, which not only fixes the bug but also makes the behavior uniform. The pull request includes regression tests covering the new logic and updates the README.md to document the corrected behavior. The changes are correct and improve the user experience of the setup wizard.


Copilot AI left a comment


Pull request overview

This PR fixes a setup-wizard regression where re-running onboarding with the same LLM backend unnecessarily cleared the previously selected model, forcing users to re-enter it.

Changes:

  • Introduce a shared helper to set llm_backend while only clearing selected_model when the backend actually changes.
  • Update provider setup paths to use the new helper consistently.
  • Add regression tests for the “preserve vs clear model” behavior and update the onboarding spec.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

Files reviewed:

  • src/setup/wizard.rs: Adds set_llm_backend_preserving_model() and routes provider setup flows through it; adds regression tests.
  • src/setup/README.md: Updates onboarding documentation to reflect the new “preserve model unless backend changes” behavior.


Comment on lines +182 to +186
@@ -183,13 +183,17 @@ Switching between them overwrites the same credential slot.
 - Pre-configured OpenAI-compatible preset with base URL `https://openrouter.ai/api/v1`
 - Delegates to `setup_api_key_provider()` with a display name override ("OpenRouter")
 - Sets `llm_backend = "openai_compatible"` and `openai_compatible_base_url` automatically
-- Clears `selected_model` so Step 4 prompts for a model name (manual text input, no API-based model fetching)
+- Clears `selected_model` only when switching from a different backend, so

Copilot AI Mar 7, 2026


This section references setup_openrouter, but there doesn’t appear to be any such function in the codebase (provider setup is dispatched via run_provider_setup() and registry SetupHints). Consider updating the spec to describe the actual implementation entrypoint(s) so future changes don’t chase a nonexistent helper.

Clarify that OpenRouter setup is dispatched through run_provider_setup and setup_api_key_provider rather than a standalone helper. This keeps the setup spec aligned with the actual implementation referenced in PR review feedback.

Made-with: Cursor
@github-actions github-actions bot added the size: M 50-199 changed lines label Mar 7, 2026
@Protocol-zero-0
Copy link
Author

Addressed the review feedback in commit 810db70.

I updated src/setup/README.md to reference the actual OpenRouter setup path: run_provider_setup() dispatches to setup_api_key_provider(), rather than implying a standalone setup_openrouter helper.
