fix(setup): preserve model selection on provider re-run #679

Protocol-zero-0 wants to merge 2 commits into nearai:main from
Conversation
Keep the previously selected model when onboarding is re-run with the same LLM backend so users are not forced to re-enter it. Add regression tests for same-backend retention and backend-switch reset behavior, and document the setup rule in the onboarding spec. Made-with: Cursor
Summary of Changes (Gemini Code Assist)

This pull request enhances the user experience of the setup wizard by intelligently managing the selected LLM model. Previously, re-running the setup process would unconditionally clear the chosen model, forcing users to re-enter it even if they were using the same backend. The changes introduce a new helper function that preserves the model selection when the backend is unchanged, clearing it only when a different backend is explicitly chosen. This streamlines the setup process and reduces redundant user input.
Activity
Code Review
This pull request fixes a bug where the selected model was unconditionally cleared when re-running the setup wizard, even if the LLM provider remained the same. The fix introduces a new helper function, set_llm_backend_preserving_model, which conditionally clears the selected_model only when the backend provider actually changes. This new function is consistently applied across all provider setup helpers, which not only fixes the bug but also makes the behavior uniform. The pull request includes regression tests covering the new logic and updates the README.md to document the corrected behavior. The changes are correct and improve the user experience of the setup wizard.
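The helper the review describes might look like the following minimal sketch. The config struct, its field names, and the backend/model strings here are assumptions for illustration, not the actual types in `src/setup/wizard.rs`:

```rust
// Hypothetical stand-in for the wizard's setup state; the real struct
// in src/setup/wizard.rs likely has more fields.
struct SetupConfig {
    llm_backend: String,
    selected_model: Option<String>,
}

impl SetupConfig {
    /// Set the backend, clearing `selected_model` only when the
    /// backend actually changes (the behavior this PR introduces).
    fn set_llm_backend_preserving_model(&mut self, backend: &str) {
        if self.llm_backend != backend {
            self.selected_model = None;
        }
        self.llm_backend = backend.to_string();
    }
}

fn main() {
    let mut cfg = SetupConfig {
        llm_backend: "openai_compatible".to_string(),
        selected_model: Some("gpt-4o".to_string()),
    };

    // Re-running setup with the same backend keeps the model.
    cfg.set_llm_backend_preserving_model("openai_compatible");
    assert_eq!(cfg.selected_model.as_deref(), Some("gpt-4o"));

    // Switching to a different backend clears it.
    cfg.set_llm_backend_preserving_model("anthropic");
    assert_eq!(cfg.selected_model, None);
}
```

The design point is that the conditional lives in one shared method, so every provider setup path that routes through it gets uniform preserve-vs-clear behavior for free.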
Pull request overview
This PR fixes a setup-wizard regression where re-running onboarding with the same LLM backend unnecessarily cleared the previously selected model, forcing users to re-enter it.
Changes:
- Introduce a shared helper to set `llm_backend` while only clearing `selected_model` when the backend actually changes.
- Update provider setup paths to use the new helper consistently.
- Add regression tests for the “preserve vs clear model” behavior and update the onboarding spec.
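The two behaviors the regression tests cover (same-backend retention, backend-switch reset) can be sketched as follows. The config type, helper, and the backend/model names are illustrative stand-ins redeclared here for self-containment, not the repo's actual test names or API:

```rust
// Minimal stand-ins so the tests below compile on their own;
// the real wizard state lives in src/setup/wizard.rs.
struct SetupConfig {
    llm_backend: String,
    selected_model: Option<String>,
}

impl SetupConfig {
    fn set_llm_backend_preserving_model(&mut self, backend: &str) {
        if self.llm_backend != backend {
            self.selected_model = None;
        }
        self.llm_backend = backend.to_string();
    }
}

fn cfg(backend: &str, model: Option<&str>) -> SetupConfig {
    SetupConfig {
        llm_backend: backend.to_string(),
        selected_model: model.map(String::from),
    }
}

#[test]
fn same_backend_retains_model() {
    // Re-running setup with the same backend must keep the model.
    let mut c = cfg("ollama", Some("llama3"));
    c.set_llm_backend_preserving_model("ollama");
    assert_eq!(c.selected_model.as_deref(), Some("llama3"));
}

#[test]
fn backend_switch_clears_model() {
    // Choosing a different backend must reset the model selection.
    let mut c = cfg("ollama", Some("llama3"));
    c.set_llm_backend_preserving_model("openai_compatible");
    assert_eq!(c.selected_model, None);
}
```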
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `src/setup/wizard.rs` | Adds `set_llm_backend_preserving_model()` and routes provider setup flows through it; adds regression tests. |
| `src/setup/README.md` | Updates onboarding documentation to reflect the new "preserve model unless backend changes" behavior. |
src/setup/README.md (Outdated)
@@ -183,13 +183,17 @@ Switching between them overwrites the same credential slot.
- Pre-configured OpenAI-compatible preset with base URL `https://openrouter.ai/api/v1`
- Delegates to `setup_api_key_provider()` with a display name override ("OpenRouter")
- Sets `llm_backend = "openai_compatible"` and `openai_compatible_base_url` automatically
- Clears `selected_model` so Step 4 prompts for a model name (manual text input, no API-based model fetching)
- Clears `selected_model` only when switching from a different backend, so
This section references `setup_openrouter`, but there doesn't appear to be any such function in the codebase (provider setup is dispatched via `run_provider_setup()` and registry `SetupHints`). Consider updating the spec to describe the actual implementation entrypoint(s) so future changes don't chase a nonexistent helper.
Clarify that OpenRouter setup is dispatched through run_provider_setup and setup_api_key_provider rather than a standalone helper. This keeps the setup spec aligned with the actual implementation referenced in PR review feedback. Made-with: Cursor
Addressed the review feedback in a follow-up commit; updated the spec accordingly.
Summary

- Preserve `selected_model` when the setup wizard is re-run with the same LLM backend
- Document the behavior in `src/setup/README.md`

Problem
Re-running onboarding remembered most prior settings, but the model name was lost and had to be entered again.
Root cause
Provider setup helpers reset `selected_model` unconditionally, even when the backend did not change.

Fix
Introduce a shared helper that updates the backend while only clearing `selected_model` on real backend changes.

Tests

- `cargo test --lib test_set_llm_backend`
- `cargo fmt --check`
- `cargo clippy --all --benches --tests --examples --all-features -- -D warnings`

Scope / Non-goals
Made with Cursor