Fix router constructor and generally how params are set #517
```diff
@@ -0,0 +1,6 @@
+# Core framework dependency
+mcp-agent @ file://../../../  # Link to the local mcp-agent project root
+
+# Additional dependencies specific to this example
+anthropic
+openai
```
```diff
@@ -42,7 +42,7 @@ Before running the agent, ensure you have your Gemini Developer API or Vertex AI
 - `vertexai`: Boolean flag to enable VertexAI integration (default: false)
 - `project`: Google Cloud project ID (required if using VertexAI)
 - `location`: Google Cloud location (required if using VertexAI)
-- `default_model`: Defaults to "gemini-2.0-flash" but can be customized in your config
+- `default_model`: Defaults to "gemini-2.5-flash" but can be customized in your config
```
🧩 Analysis chain

Document the new default_model more explicitly and add it to the sample config.

Good call on switching to "gemini-2.5-flash". Please also reflect this in the YAML example so users discover/override it easily. Add to the snippet under the google section:

```yaml
google:
  api_key: "your-google-api-key"
  vertexai: false
  # Include these if using VertexAI
  # project: "your-google-cloud-project"
  # location: "us-central1"
  # Optional override; falls back to gemini-2.5-flash if unset
  default_model: "gemini-2.5-flash"
```

Run this quick check to ensure no stale references remain to the old default:

🏁 Script executed:

```bash
#!/bin/bash
# Find any remaining mentions of gemini-2.0-flash or mixed 2-0 variants.
rg -nP --glob '!**/dist/**' '(gemini[-_.]?2[.-]0[-_.]?flash)'
```

Length of output: 148

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "Searching repository for legacy 'gemini-2.0-flash' variants (case-insensitive)..."
rg -n --hidden -uu -S '(?i)gemini[-_.]?2[.-]0[-_.]?flash' || true
echo
echo "Searching repository for new 'gemini-2.5-flash' mentions..."
rg -n --hidden -uu -S 'gemini[-_.]?2[.-]5[-_.]?flash' || true
echo
echo "Searching repository for 'default_model' occurrences..."
rg -n --hidden -uu -S 'default_model' || true
echo
echo "Printing the referenced README snippet if it exists:"
if [ -f examples/model_providers/mcp_basic_google_agent/README.md ]; then
  echo "---- README: examples/model_providers/mcp_basic_google_agent/README.md (lines 1-200) ----"
  sed -n '1,200p' examples/model_providers/mcp_basic_google_agent/README.md
else
  echo "README not found at examples/model_providers/mcp_basic_google_agent/README.md"
fi
```

Length of output: 27511

Document and unify Google default_model = "gemini-2.5-flash"

The README already states gemini-2.5-flash — add an explicit default_model to the google sample config and replace remaining gemini-2.0-flash occurrences. Files to update:

Add to the README google snippet:

```yaml
google:
  api_key: "your-google-api-key"
  vertexai: false
  # Include these if using VertexAI
  # project: "your-google-cloud-project"
  # location: "us-central1"
  # Optional override; falls back to gemini-2.5-flash if unset
  default_model: "gemini-2.5-flash"
```
```diff
 You can provide these in one of the following ways:
```
```diff
@@ -123,7 +123,7 @@ async def create_router_llm(
     functions: List[Callable] | None = None,
     routing_instruction: str | None = None,
     name: str | None = None,
-    provider: SupportedRoutingProviders = "openai",
+    provider: SupportedLLMProviders = "openai",
     model: str | ModelPreferences | None = None,
     request_params: RequestParams | None = None,
     context: Context | None = None,
```
```diff
@@ -185,8 +185,22 @@ async def create_router_llm(
             **kwargs,
         )
     else:
-        raise ValueError(
-            f"Unsupported routing provider: {provider}. Currently supported providers are: ['openai', 'anthropic']. To request support, please create an issue at https://github.com/lastmile-ai/mcp-agent/issues"
-        )
+        factory = _llm_factory(
+            provider=provider,
+            model=model,
+            request_params=request_params,
+            context=context,
+        )
+
+        return await LLMRouter.create(
+            name=name,
+            llm_factory=factory,
+            server_names=server_names,
+            agents=normalized_agents,
+            functions=functions,
+            routing_instruction=routing_instruction,
+            context=context,
+            **kwargs,
+        )
```
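For orientation, here is a hedged usage sketch of what the new else branch enables. The import path, the "google" provider value, and the server names are assumptions drawn from this diff rather than names verified against the repository:

```python
import asyncio

# Assumed import path and provider literal; based on this diff only.
from mcp_agent.workflows.factory import create_router_llm


async def main() -> None:
    # With the old branch, any provider other than "openai"/"anthropic" raised
    # ValueError; the new branch builds an LLM factory and delegates to LLMRouter.
    router = await create_router_llm(
        name="docs-router",
        server_names=["filesystem", "fetch"],
        provider="google",
        model="gemini-2.5-flash",
    )
    print(type(router).__name__)


asyncio.run(main())
```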
Comment on lines +188 to 204:

Router path: llm_factory currently drops the classifier instruction (behavioral regression).

LLMRouter calls llm_factory with instruction/context. Your _llm_factory returns a lambda that doesn't accept these kwargs, triggering the TypeError fallback and creating the classifier LLM without the routing instruction. That will degrade routing quality and non-deterministically change behavior. Apply the fix in my comment on lines 1037-1041.
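To make the regression concrete, here is a minimal sketch of the call pattern the comment describes: a router that first tries to pass instruction/context to the factory and falls back to a bare call on TypeError. This is illustrative code modeled on the review comment, not the actual LLMRouter implementation:

```python
from typing import Callable


def call_factory(llm_factory: Callable, agent, routing_instruction: str, context):
    """Mimics the fallback the reviewer describes: try the rich call first,
    then fall back to a bare call if the factory doesn't accept the kwargs."""
    try:
        # Preferred path: the factory receives the routing instruction.
        return llm_factory(agent=agent, instruction=routing_instruction, context=context)
    except TypeError:
        # Fallback path: the instruction is silently dropped.
        return llm_factory(agent)


# A factory written as `lambda agent: ...` (as in this PR) only ever hits the
# fallback branch, so the classifier LLM is built without the routing instruction.
bare_factory = lambda agent: f"LLM(agent={agent}, instruction=None)"
rich_factory = lambda agent, instruction=None, context=None, **_: (
    f"LLM(agent={agent}, instruction={instruction!r})"
)

print(call_factory(bare_factory, "router-agent", "Route to the best server.", None))
print(call_factory(rich_factory, "router-agent", "Route to the best server.", None))
```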
```diff
@@ -974,9 +988,20 @@ def _llm_factory(
     request_params: RequestParams | None = None,
     context: Context | None = None,
 ) -> Callable[[Agent], AugmentedLLM]:
+    # Allow model to come from an explicit string, request_params.model,
+    # or request_params.modelPreferences (to run selection) in that order.
+    # Compute the chosen model by precedence:
+    # 1) explicit model_name from _select_provider_and_model (includes ModelPreferences)
+    # 2) provider default from provider_cls.get_provider_config(context)
+    # 3) provider hardcoded fallback
+    model_selector_input = (
+        model
+        or getattr(request_params, "model", None)
+        or getattr(request_params, "modelPreferences", None)
+    )
     prov, model_name = _select_provider_and_model(
         provider=provider,
-        model=model or getattr(request_params, "model", None),
+        model=model_selector_input,
         context=context,
     )
     provider_cls = _get_provider_class(prov)
```
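As a standalone illustration of that or-chain precedence (explicit model argument, then request_params.model, then request_params.modelPreferences), here is a sketch using a simplified stand-in for RequestParams; the class and model names are placeholders, not the library's types:

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class FakeRequestParams:
    # Simplified stand-in for RequestParams; only the fields used in selection.
    model: Optional[str] = None
    modelPreferences: Any = None


def resolve_selector_input(model: Optional[str], request_params: Optional[FakeRequestParams]) -> Any:
    # Same or-chain as in the diff; getattr keeps request_params=None safe.
    return (
        model
        or getattr(request_params, "model", None)
        or getattr(request_params, "modelPreferences", None)
    )


# Placeholder model names, not recommendations.
assert resolve_selector_input("model-a", FakeRequestParams(model="model-b")) == "model-a"
assert resolve_selector_input(None, FakeRequestParams(model="model-b")) == "model-b"
assert resolve_selector_input(None, FakeRequestParams(modelPreferences={"speedPriority": 0.9})) == {"speedPriority": 0.9}
assert resolve_selector_input(None, None) is None
```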
```diff
@@ -990,9 +1015,28 @@ def _default_params() -> RequestParams | None:
             return RequestParams(modelPreferences=model)
         return None

+    # Merge provider-selected or configured default model into RequestParams if missing.
+    effective_params: RequestParams | None = request_params
+    if effective_params is not None:
+        chosen_model: str | None = model_name
+
+        if not chosen_model:
+            cfg_obj = None
+            try:
+                cfg_obj = provider_cls.get_provider_config(context)
+            except Exception:
+                cfg_obj = None
+            if cfg_obj is not None:
+                chosen_model = getattr(cfg_obj, "default_model", None)
+
+        # If the user did not specify a model in RequestParams, but provided other
+        # overrides (maxTokens, temperature, etc.), fill in the model only.
+        if getattr(effective_params, "model", None) is None and chosen_model:
+            effective_params.model = chosen_model
+
     return lambda agent: provider_cls(
         agent=agent,
-        default_request_params=request_params or _default_params(),
+        default_request_params=effective_params or _default_params(),
         context=context,
     )
```
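The practical effect of this hunk, sketched with stand-in types (FakeRequestParams and FakeProviderConfig are illustrative, not the library's classes): a caller who overrides only maxTokens or temperature still gets the provider's configured default_model filled in.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FakeRequestParams:
    model: Optional[str] = None
    maxTokens: Optional[int] = None


@dataclass
class FakeProviderConfig:
    default_model: Optional[str] = "gemini-2.5-flash"


def merge_default_model(
    params: Optional[FakeRequestParams],
    chosen_model: Optional[str],
    cfg: Optional[FakeProviderConfig],
) -> Optional[FakeRequestParams]:
    # Mirrors the diff: prefer the explicitly selected model, fall back to the
    # provider config's default_model, and only fill the field if it is unset.
    if params is None:
        return params
    if not chosen_model and cfg is not None:
        chosen_model = cfg.default_model
    if params.model is None and chosen_model:
        params.model = chosen_model
    return params


params = FakeRequestParams(maxTokens=2048)  # caller overrides maxTokens but picks no model
merge_default_model(params, chosen_model=None, cfg=FakeProviderConfig())
print(params)  # FakeRequestParams(model='gemini-2.5-flash', maxTokens=2048)
```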
Comment on lines 1037 to 1041:

Pass instruction/name through llm_factory and avoid the TypeError fallback.

Make the factory accept optional kwargs and forward instruction/name to provider_cls:

```diff
-    return lambda agent: provider_cls(
-        agent=agent,
-        default_request_params=effective_params or _default_params(),
-        context=context,
-    )
+    def _factory(agent, instruction=None, name=None, **_):
+        return provider_cls(
+            agent=agent,
+            instruction=instruction,
+            name=name,
+            default_request_params=effective_params or _default_params(),
+            context=context,
+        )
+    return _factory
```
Replace the file:// relative URL with an editable local path.

`mcp-agent @ file://../../../` is a non-portable and likely invalid file URL (PEP 508 expects an absolute file URL). Use an editable local path for consistency across pip/uv.

Apply:
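The committable suggestion itself is not shown above; one plausible shape of the change, assuming a pip/uv-compatible editable install is acceptable for this example, would be:

```diff
-mcp-agent @ file://../../../  # Link to the local mcp-agent project root
+-e ../../../  # Editable install of the local mcp-agent checkout
```

An absolute `file://` URL would also satisfy PEP 508, but it cannot be committed portably, which is why an editable relative path is the suggested direction here.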