Fix router constructor and generally how params are set #517
Conversation
Walkthrough
Adds provider-default integration and provider-config accessors to LLM workflow components, broadens router LLM provider support, updates the Google default model and benchmarks naming, and introduces example requirements and config changes for Google in the examples.
Sequence Diagram(s)
sequenceDiagram
autonumber
participant Caller
participant Factory as workflows.factory.create_router_llm
participant LLMFactory as _llm_factory
participant Router as LLMRouter / OpenAI/Anthropic Router
participant Ctx as Context.config
Caller->>Factory: create_router_llm(provider, agents, funcs, req_params, context)
alt provider is "openai" or "anthropic"
Factory->>Router: create specialized provider router
Router-->>Caller: provider-specific router
else other providers (azure/google/bedrock/ollama)
Factory->>LLMFactory: build llm_factory with params & context
LLMFactory->>Ctx: call AugmentedLLM.get_provider_config(context)
Ctx-->>LLMFactory: provider config (may include default_model)
LLMFactory->>LLMFactory: merge provider default into RequestParams
LLMFactory-->>Factory: llm_factory (with effective params)
Factory->>Router: LLMRouter.create(llm_factory, agents, funcs, routing_instruction, context)
Router-->>Caller: generic LLMRouter instance
end
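For orientation, a minimal usage sketch of the generic-router path shown in the diagram above. This is a hypothetical sketch: the keyword argument names (provider, agents, functions, request_params, context) are taken from the diagram and the quoted code later in this review, not from the factory's documented signature.

# Hypothetical usage of create_router_llm for a non-OpenAI/Anthropic provider.
# Verify argument names against the actual factory signature before relying on this.
from mcp_agent.workflows.factory import create_router_llm
from mcp_agent.workflows.llm.augmented_llm import RequestParams

async def build_google_router(agents, functions, context):
    # Leaving RequestParams.model unset lets the provider's default_model
    # (e.g. google.default_model from config) be merged in by the factory.
    return await create_router_llm(
        provider="google",
        agents=agents,
        functions=functions,
        request_params=RequestParams(),
        context=context,
    )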
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (5 passed)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1)
66-76: Guard against missing OpenAI config and use the accessor.
The current call path assumes context.config.openai is always set; if it is absent, structured completions will crash.
Apply:
-        structured_response = await self.executor.execute(
+        provider_config = type(self).get_provider_config(self.context)
+        if provider_config is None:
+            raise ValueError("Ollama requires OpenAI-compatible config (base_url/api_key). Set Settings.openai.*")
+        structured_response = await self.executor.execute(
             OllamaCompletionTasks.request_structured_completion_task,
             RequestStructuredCompletionRequest(
-                config=self.context.config.openai,
+                config=provider_config,
                 response_model=response_model if not serialized_response_model else None,
                 serialized_response_model=serialized_response_model,
                 response_str=response,
                 model=model,
             ),
         )
src/mcp_agent/workflows/llm/augmented_llm_openai.py (1)
1186-1196: Fix dict access for image_url in content conversion.
c["image_url"] is a dict; calling .startswith on it will throw. Use the nested url string.
Apply:
-                elif (
-                    c["type"] == "image_url"
-                ):  # isinstance(c, ChatCompletionContentPartImageParam):
-                    if c["image_url"].startswith("data:"):
-                        mime_type, base64_data = image_url_to_mime_and_base64(
-                            c["image_url"]
-                        )
+                elif c["type"] == "image_url":
+                    url = c["image_url"]["url"]
+                    if url.startswith("data:"):
+                        mime_type, base64_data = image_url_to_mime_and_base64(url)
                         mcp_content.append(
                             ImageContent(type="image", data=base64_data, mimeType=mime_type)
                         )
🧹 Nitpick comments (8)
examples/basic/agent_factory/requirements.txt (1)
5-6: Pin/constrain anthropic and openai in examples/basic/agent_factory/requirements.txt (lines 5-6).
Align with existing example pins — e.g. examples/usecases/mcp_supabase_migration_agent pins openai==1.51.0 and anthropic==0.34.2; examples/usecases/mcp_github_to_slack_agent uses anthropic>=0.48.0 — or point this example at a shared constraints file.
src/mcp_agent/data/artificial_analysis_llm_benchmarks.json (1)
8378-8401: Normalize Google provider/description labels (underscores, parentheses, newline).
Inconsistencies will break grouping/filters and can leak awkward UI text:
- "Google (AI_Studio)" vs "Google (AI Studio)"
- "Google Vertex" vs "Google (Vertex)"
- Description sometimes lacks parentheses for Vertex
- One description has a newline and extra spaces
Apply these minimal edits:
@@
-      "description": "Gemini 2.5 Pro (AI_Studio)",
-      "provider": "Google (AI_Studio)",
+      "description": "Gemini 2.5 Pro (AI Studio)",
+      "provider": "Google (AI Studio)",
@@
-      "description": "Gemini 2.5 Pro Vertex",
-      "provider": "Google Vertex",
+      "description": "Gemini 2.5 Pro (Vertex)",
+      "provider": "Google (Vertex)",
@@
-      "description": "Gemini 2.5 Flash (Reasoning) (AI_Studio)",
-      "provider": "Google (AI_Studio)",
+      "description": "Gemini 2.5 Flash (Reasoning) (AI Studio)",
+      "provider": "Google (AI Studio)",
@@
-      "description": "Gemini 2.5 Flash (AI_Studio)",
-      "provider": "Google (AI_Studio)",
+      "description": "Gemini 2.5 Flash (AI Studio)",
+      "provider": "Google (AI Studio)",
@@
-      "description": "Gemini 2.5 Flash-Lite (AI Studio)",
+      "description": "Gemini 2.5 Flash-Lite (AI Studio)",
@@
-      "description": "Gemini 2.5 Flash-Lite (Reasoning) (AI
-        Studio)",
+      "description": "Gemini 2.5 Flash-Lite (Reasoning) (AI Studio)",
Also applies to: 9028-9051, 9053-9076, 9678-9701, 9703-9726, 12378-12401, 12403-12426
src/mcp_agent/workflows/llm/augmented_llm_google.py (1)
219-222: Update docstring to reflect Gemini 2.5.
It still references 2.0.
Apply:
-        The default implementation uses gemini-2.0-flash as the LLM
+        The default implementation uses gemini-2.5-flash as the LLM
src/mcp_agent/workflows/llm/augmented_llm.py (1)
357-362: Abstract provider-config hook: solid addition.
Enables router/defaults unification. Consider annotating the return type as Any | None for clarity.
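As a point of reference, a minimal sketch of the accessor pattern this hook enables. The OpenAI override shown here is illustrative, pieced together from the diff excerpts in this review rather than copied from the repository.

from typing import Any, Optional

class AugmentedLLM:
    @classmethod
    def get_provider_config(cls, context: Optional["Context"]) -> Optional[Any]:
        """Return the provider-specific settings object from the app context, or None."""
        return None

class OpenAIAugmentedLLM(AugmentedLLM):
    @classmethod
    def get_provider_config(cls, context: Optional["Context"]) -> Optional[Any]:
        # Illustrative override: read the provider section off the app config if present.
        if context is not None and getattr(context, "config", None) is not None:
            return getattr(context.config, "openai", None)
        return None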
src/mcp_agent/workflows/factory.py (4)
1018-1036: Also seed provider default_model when request_params is None.
Right now provider defaults are merged only if request_params is provided. Seed RequestParams from provider config when request_params is None for consistent defaults.
-    effective_params: RequestParams | None = request_params
-    if effective_params is not None:
+    effective_params: RequestParams | None = request_params
+    if effective_params is not None:
         ...
-        if getattr(effective_params, "model", None) is None and chosen_model:
+        if getattr(effective_params, "model", None) is None and chosen_model:
             effective_params.model = chosen_model
+    else:
+        # No request_params passed; seed from selector/provider defaults if available
+        chosen_model: str | None = model_name
+        if not chosen_model:
+            cfg_obj = None
+            try:
+                cfg_obj = provider_cls.get_provider_config(context)
+            except Exception:
+                cfg_obj = None
+            if cfg_obj is not None:
+                chosen_model = getattr(cfg_obj, "default_model", None)
+        if chosen_model:
+            effective_params = RequestParams(model=chosen_model)
+            if isinstance(model, ModelPreferences):
+                effective_params.modelPreferences = model
991-1001: Model precedence comment vs. implementation.
Your comment mentions "3) provider hardcoded fallback," but no explicit hardcoded fallback is implemented here (you defer to provider defaults). Either implement the fallback or update the comment to avoid confusion.
Also applies to: 1004-1004
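If the fallback route is taken, one illustrative way to make the stated precedence explicit. This is a sketch only; the helper name and the hardcoded fallback value are assumptions, not code from this PR.

def resolve_model(request_params, provider_config, hardcoded_fallback="gemini-2.5-flash"):
    # 1) explicit request param, 2) provider default_model, 3) provider hardcoded fallback
    if request_params is not None and getattr(request_params, "model", None):
        return request_params.model
    if provider_config is not None and getattr(provider_config, "default_model", None):
        return provider_config.default_model
    return hardcoded_fallback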
126-126: Broadened provider enum looks good; fix docstrings to say "LLM router," not "embedding router."
Minor text correction in parameter docs.
-        provider: The provider to use for the embedding router.
-        model: The model to use for the embedding router.
+        provider: The provider to use for the LLM router.
+        model: The model to use for the LLM router.
Also applies to: 141-147
152-159: Type hint mismatch and isinstance union at runtime.
normalized_agents is typed List[Agent] but can contain AugmentedLLM. Also prefer a tuple in isinstance for runtime safety.
-    normalized_agents: List[Agent] = []
+    normalized_agents: List[Agent | AugmentedLLM] = []
     ...
-        elif isinstance(a, Agent | AugmentedLLM):
+        elif isinstance(a, (Agent, AugmentedLLM)):
             normalized_agents.append(a)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
examples/basic/agent_factory/requirements.txt (1 hunks)
examples/model_providers/mcp_basic_google_agent/README.md (1 hunks)
src/mcp_agent/config.py (1 hunks)
src/mcp_agent/data/artificial_analysis_llm_benchmarks.json (9 hunks)
src/mcp_agent/workflows/factory.py (4 hunks)
src/mcp_agent/workflows/llm/augmented_llm.py (1 hunks)
src/mcp_agent/workflows/llm/augmented_llm_anthropic.py (1 hunks)
src/mcp_agent/workflows/llm/augmented_llm_azure.py (1 hunks)
src/mcp_agent/workflows/llm/augmented_llm_bedrock.py (1 hunks)
src/mcp_agent/workflows/llm/augmented_llm_google.py (3 hunks)
src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1 hunks)
src/mcp_agent/workflows/llm/augmented_llm_openai.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (8)
src/mcp_agent/workflows/llm/augmented_llm_anthropic.py (6)
- src/mcp_agent/workflows/llm/augmented_llm.py (1): get_provider_config (360-361)
- src/mcp_agent/workflows/llm/augmented_llm_azure.py (1): get_provider_config (132-133)
- src/mcp_agent/workflows/llm/augmented_llm_bedrock.py (1): get_provider_config (95-96)
- src/mcp_agent/workflows/llm/augmented_llm_google.py (1): get_provider_config (242-243)
- src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1): get_provider_config (37-39)
- src/mcp_agent/workflows/llm/augmented_llm_openai.py (1): get_provider_config (138-139)
src/mcp_agent/workflows/llm/augmented_llm_azure.py (7)
- src/mcp_agent/workflows/llm/augmented_llm.py (1): get_provider_config (360-361)
- src/mcp_agent/workflows/llm/augmented_llm_anthropic.py (1): get_provider_config (153-154)
- src/mcp_agent/workflows/llm/augmented_llm_bedrock.py (1): get_provider_config (95-96)
- src/mcp_agent/workflows/llm/augmented_llm_google.py (1): get_provider_config (242-243)
- src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1): get_provider_config (37-39)
- src/mcp_agent/workflows/llm/augmented_llm_openai.py (1): get_provider_config (138-139)
- src/mcp_agent/app.py (1): context (160-165)
src/mcp_agent/workflows/llm/augmented_llm_ollama.py (2)
- src/mcp_agent/workflows/llm/augmented_llm.py (1): get_provider_config (360-361)
- src/mcp_agent/workflows/llm/augmented_llm_openai.py (1): get_provider_config (138-139)
src/mcp_agent/workflows/llm/augmented_llm_bedrock.py (7)
- src/mcp_agent/workflows/llm/augmented_llm.py (1): get_provider_config (360-361)
- src/mcp_agent/workflows/llm/augmented_llm_anthropic.py (1): get_provider_config (153-154)
- src/mcp_agent/workflows/llm/augmented_llm_azure.py (1): get_provider_config (132-133)
- src/mcp_agent/workflows/llm/augmented_llm_google.py (1): get_provider_config (242-243)
- src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1): get_provider_config (37-39)
- src/mcp_agent/workflows/llm/augmented_llm_openai.py (1): get_provider_config (138-139)
- src/mcp_agent/app.py (1): context (160-165)
src/mcp_agent/workflows/llm/augmented_llm_openai.py (7)
- src/mcp_agent/workflows/llm/augmented_llm.py (1): get_provider_config (360-361)
- src/mcp_agent/workflows/llm/augmented_llm_anthropic.py (1): get_provider_config (153-154)
- src/mcp_agent/workflows/llm/augmented_llm_azure.py (1): get_provider_config (132-133)
- src/mcp_agent/workflows/llm/augmented_llm_bedrock.py (1): get_provider_config (95-96)
- src/mcp_agent/workflows/llm/augmented_llm_google.py (1): get_provider_config (242-243)
- src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1): get_provider_config (37-39)
- src/mcp_agent/app.py (1): context (160-165)
src/mcp_agent/workflows/llm/augmented_llm_google.py (6)
- src/mcp_agent/workflows/llm/augmented_llm.py (2): get_provider_config (360-361), select_model (363-407)
- src/mcp_agent/workflows/llm/augmented_llm_anthropic.py (1): get_provider_config (153-154)
- src/mcp_agent/workflows/llm/augmented_llm_azure.py (1): get_provider_config (132-133)
- src/mcp_agent/workflows/llm/augmented_llm_bedrock.py (1): get_provider_config (95-96)
- src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1): get_provider_config (37-39)
- src/mcp_agent/workflows/llm/augmented_llm_openai.py (1): get_provider_config (138-139)
src/mcp_agent/workflows/llm/augmented_llm.py (7)
- src/mcp_agent/workflows/llm/augmented_llm_anthropic.py (1): get_provider_config (153-154)
- src/mcp_agent/workflows/llm/augmented_llm_azure.py (1): get_provider_config (132-133)
- src/mcp_agent/workflows/llm/augmented_llm_bedrock.py (1): get_provider_config (95-96)
- src/mcp_agent/workflows/llm/augmented_llm_google.py (1): get_provider_config (242-243)
- src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1): get_provider_config (37-39)
- src/mcp_agent/workflows/llm/augmented_llm_openai.py (1): get_provider_config (138-139)
- src/mcp_agent/app.py (1): context (160-165)
src/mcp_agent/workflows/factory.py (8)
- src/mcp_agent/workflows/router/router_llm.py (2): LLMRouter (96-570), create (156-180)
- src/mcp_agent/workflows/llm/augmented_llm.py (2): RequestParams (126-196), get_provider_config (360-361)
- src/mcp_agent/workflows/llm/augmented_llm_anthropic.py (1): get_provider_config (153-154)
- src/mcp_agent/workflows/llm/augmented_llm_azure.py (1): get_provider_config (132-133)
- src/mcp_agent/workflows/llm/augmented_llm_bedrock.py (1): get_provider_config (95-96)
- src/mcp_agent/workflows/llm/augmented_llm_google.py (1): get_provider_config (242-243)
- src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1): get_provider_config (37-39)
- src/mcp_agent/workflows/llm/augmented_llm_openai.py (1): get_provider_config (138-139)
🔇 Additional comments (13)
src/mcp_agent/data/artificial_analysis_llm_benchmarks.json (4)
8353-8376: Rename to gemini-2.5: LGTM.
The model name normalization to dot notation aligns with the rest of the PR.
13152-13152: EOF newline: LGTM.
Nice touch for POSIX tooling compatibility.
8353-8401: No stale "gemini-2-5" references remain.
Search confirms zero occurrences of the hyphen-based pattern; all current gemini-2.5-* identifiers are expected.
8353-8401: Confirm downstream uniqueness semantics for "name".
Core lookups are provider-qualified (composite key): load_default_models() reads the embedded JSON, gen_llm_benchmarks merges by (provider.lower(), name.lower()), and TokenCounter builds _model_lookup keyed by (provider.lower(), name.lower()) — see src/mcp_agent/workflows/llm/llm_selector.py (load_default_models), scripts/gen_llm_benchmarks.py (merged key), and src/mcp_agent/tracing/token_counter.py (_model_lookup / _build_provider_lookup).
However, find_model_info() performs name-only fuzzy matching when no provider is supplied, and llm_selector emits telemetry using model.name (span.set_attribute f"model.{model.name}..."), so duplicated names across providers (e.g., gemini-2.5-pro for AI Studio vs Vertex) can cause ambiguous lookups or telemetry collisions.
Confirm the intended behavior: consumers must either always provide a provider when resolving models, or you must disambiguate names (or include the provider in emitted keys/attributes); otherwise accept the current fuzzy-first-match semantics.
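To make the ambiguity concrete, a small illustrative sketch of the two lookup modes (invented data and simplified logic, not the actual implementation):

from typing import Optional

# Two providers ship a model with the same name.
MODELS = {
    ("google (ai studio)", "gemini-2.5-pro"): {"source": "AI Studio"},
    ("google (vertex)", "gemini-2.5-pro"): {"source": "Vertex"},
}

def find_model_info(name: str, provider: Optional[str] = None) -> Optional[dict]:
    if provider:
        # Provider-qualified lookup: unambiguous even when names collide.
        return MODELS.get((provider.lower(), name.lower()))
    # Name-only lookup: the first match wins, so duplicate names are ambiguous.
    for (_prov, model_name), info in MODELS.items():
        if model_name == name.lower():
            return info
    return None

print(find_model_info("gemini-2.5-pro"))  # Which entry you get depends on insertion order.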
src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1)
36-40: Provider-config accessor added correctly.
Matches the protocol and the OpenAI-compatible expectation for the Ollama config.
src/mcp_agent/config.py (1)
344-350: Add Google default_model: LGTM.
Aliases cover env, nested, and flat forms. Ensure the README/env examples mention GOOGLE_DEFAULT_MODEL.
src/mcp_agent/workflows/llm/augmented_llm_anthropic.py (1)
152-155: Consistent provider-config accessor.
Alignment with the protocol looks good.
src/mcp_agent/workflows/llm/augmented_llm_bedrock.py (1)
94-97: Bedrock provider-config accessor: good.
Keeps factory/routing uniform.
src/mcp_agent/workflows/llm/augmented_llm_openai.py (1)
137-140: OpenAI provider-config accessor added cleanly.
No concerns.
src/mcp_agent/workflows/llm/augmented_llm_google.py (3)
241-244: Provider-config accessor implemented consistently.
No issues.
257-258: Structured fallback aligns with new default.
Good defensive default in the absence of a selection.
60-61: Approve: default model bumped to gemini-2.5-flash — verification required.
Matches the PR narrative and config; the ripgrep sweep returned "No files were searched". Re-run from the repository root and share the output of:
rg -n --hidden --no-ignore -S 'gemini-2.0-flash|gemini-2.5-flash'
src/mcp_agent/workflows/llm/augmented_llm_azure.py (1)
131-134: Azure provider-config accessor: looks good.
Keeps parity with other providers.
@@ -0,0 +1,6 @@
# Core framework dependency
mcp-agent @ file://../../../ # Link to the local mcp-agent project root
Replace file:// relative URL with an editable local path.
mcp-agent @ file://../../../ is a non-portable and likely invalid file URL (PEP 508 expects an absolute file URL). Use an editable local path for consistency across pip/uv.
Apply:
-mcp-agent @ file://../../../ # Link to the local mcp-agent project root
+-e ../../../ # Install local mcp-agent in editable mode
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-mcp-agent @ file://../../../ # Link to the local mcp-agent project root
+-e ../../../ # Install local mcp-agent in editable mode
🤖 Prompt for AI Agents
In examples/basic/agent_factory/requirements.txt around line 2, replace the
non-portable PEP 508 file URL "mcp-agent @ file://../../../" with an editable
local path so pip/uv can install the local project: change the line to use pip's
editable syntax pointing to the project root (for example "-e ../../../") or an
absolute path if required by CI; ensure the comment remains if desired.
- `project`: Google Cloud project ID (required if using VertexAI)
- `location`: Google Cloud location (required if using VertexAI)
- `default_model`: Defaults to "gemini-2.0-flash" but can be customized in your config
- `default_model`: Defaults to "gemini-2.5-flash" but can be customized in your config
🧩 Analysis chain
Document the new default_model more explicitly and add it to the sample config.
Good call on switching to "gemini-2.5-flash". Please also reflect this in the YAML example so users discover/override it easily.
Add to the snippet under the google section:
google:
  api_key: "your-google-api-key"
  vertexai: false
  # Include these if using VertexAI
  # project: "your-google-cloud-project"
  # location: "us-central1"
  # Optional override; falls back to gemini-2.5-flash if unset
  default_model: "gemini-2.5-flash"
Run this quick check to ensure no stale references remain to the old default:
🏁 Script executed:
#!/bin/bash
# Find any remaining mentions of gemini-2.0-flash or mixed 2-0 variants.
rg -nP --glob '!**/dist/**' '(gemini[-_.]?2[.-]0[-_.]?flash)'
Length of output: 148
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Searching repository for legacy 'gemini-2.0-flash' variants (case-insensitive)..."
rg -n --hidden -uu -S '(?i)gemini[-_.]?2[.-]0[-_.]?flash' || true
echo
echo "Searching repository for new 'gemini-2.5-flash' mentions..."
rg -n --hidden -uu -S 'gemini[-_.]?2[.-]5[-_.]?flash' || true
echo
echo "Searching repository for 'default_model' occurrences..."
rg -n --hidden -uu -S 'default_model' || true
echo
echo "Printing the referenced README snippet if it exists:"
if [ -f examples/model_providers/mcp_basic_google_agent/README.md ]; then
echo "---- README: examples/model_providers/mcp_basic_google_agent/README.md (lines 1-200) ----"
sed -n '1,200p' examples/model_providers/mcp_basic_google_agent/README.md
else
echo "README not found at examples/model_providers/mcp_basic_google_agent/README.md"
fi
Length of output: 27511
Document and unify Google default_model = "gemini-2.5-flash"
README already states gemini-2.5-flash — add an explicit default_model to the google sample config and replace remaining gemini-2.0-flash occurrences.
Files to update:
- examples/model_providers/mcp_basic_google_agent/mcp_agent.config.yaml
- examples/model_providers/mcp_basic_google_agent/mcp_agent.secrets.yaml.example
- examples/model_providers/mcp_basic_google_agent/main.py
- tests/workflows/llm/test_augmented_llm_google.py
- tests/cli/fixtures/multi_provider_config.yaml
- tests/cli/fixtures/realistic_mcp_configs/complex_integrations/mcp_agent.config.yaml
- docs/configuration.mdx
- docs/concepts/augmented-llms.mdx
(also any other rg hits for "gemini-2.0-flash")
Add to the README google snippet:
google:
  api_key: "your-google-api-key"
  vertexai: false
  # Include these if using VertexAI
  # project: "your-google-cloud-project"
  # location: "us-central1"
  # Optional override; falls back to gemini-2.5-flash if unset
  default_model: "gemini-2.5-flash"
🤖 Prompt for AI Agents
In examples/model_providers/mcp_basic_google_agent/README.md around line 45, the
README mentions gemini-2.5-flash but the sample configs and code still reference
gemini-2.0-flash; add an explicit default_model entry to the google snippet and
update all occurrences to gemini-2.5-flash across the listed files.
Specifically: add default_model: "gemini-2.5-flash" to mcp_agent.config.yaml and
mcp_agent.secrets.yaml.example, update main.py to use
config.get("default_model", "gemini-2.5-flash") as the fallback, replace any
gemini-2.0-flash strings in the tests and fixtures
(tests/workflows/llm/test_augmented_llm_google.py,
tests/cli/fixtures/multi_provider_config.yaml,
tests/cli/fixtures/realistic_mcp_configs/complex_integrations/mcp_agent.config.yaml,
docs/configuration.mdx, docs/concepts/augmented-llms.mdx and any other matches)
and run tests to ensure no failing references remain.
    factory = _llm_factory(
        provider=provider,
        model=model,
        request_params=request_params,
        context=context,
    )

    return await LLMRouter.create(
        name=name,
        llm_factory=factory,
        server_names=server_names,
        agents=normalized_agents,
        functions=functions,
        routing_instruction=routing_instruction,
        context=context,
        **kwargs,
    )
Router path: llm_factory currently drops classifier instruction (behavioral regression).
LLMRouter calls llm_factory with instruction/context. Your _llm_factory returns a lambda that doesn't accept these kwargs, triggering the TypeError fallback and creating the classifier LLM without the routing instruction. That will degrade routing quality and non-deterministically change behavior.
Apply the fix in my comment on Lines 1037-1041 to accept and forward instruction/name.
🤖 Prompt for AI Agents
In src/mcp_agent/workflows/factory.py around lines 188 to 204, the llm_factory
produced by _llm_factory currently returns a lambda that does not accept the
instruction/name kwargs, causing LLMRouter to hit the TypeError fallback and
drop the routing/classifier instruction; update the factory so the returned
callable accepts and forwards instruction and name (or simply **kwargs) into the
underlying LLM creation call (preserving existing
provider/model/request_params/context) so that any routing_instruction or name
passed by LLMRouter is propagated to the created LLM.
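A compressed sketch of the failure mode described above; the router's fallback logic is paraphrased here, not quoted from router_llm.py.

def make_factory(provider_cls, effective_params, context):
    # Current shape: the returned callable only accepts `agent`.
    return lambda agent: provider_cls(
        agent=agent, default_request_params=effective_params, context=context
    )

def create_classifier_llm(llm_factory, agent, routing_instruction, context):
    try:
        # Preferred path: hand the routing instruction to the classifier LLM.
        return llm_factory(agent=agent, instruction=routing_instruction, context=context)
    except TypeError:
        # Fallback: the instruction is silently dropped, which degrades routing quality.
        return llm_factory(agent=agent)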
     return lambda agent: provider_cls(
         agent=agent,
-        default_request_params=request_params or _default_params(),
+        default_request_params=effective_params or _default_params(),
         context=context,
     )
Pass instruction/name through llm_factory and avoid TypeError fallback.
Make the factory accept optional kwargs and forward instruction/name to provider_cls.
- return lambda agent: provider_cls(
- agent=agent,
- default_request_params=effective_params or _default_params(),
- context=context,
- )
+ def _factory(agent, instruction=None, name=None, **_):
+ return provider_cls(
+ agent=agent,
+ instruction=instruction,
+ name=name,
+ default_request_params=effective_params or _default_params(),
+ context=context,
+ )
+ return _factory
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
     return lambda agent: provider_cls(
         agent=agent,
-        default_request_params=request_params or _default_params(),
+        default_request_params=effective_params or _default_params(),
         context=context,
     )
     def _factory(agent, instruction=None, name=None, **_):
         return provider_cls(
             agent=agent,
             instruction=instruction,
             name=name,
             default_request_params=effective_params or _default_params(),
             context=context,
         )
     return _factory
🤖 Prompt for AI Agents
In src/mcp_agent/workflows/factory.py around lines 1037 to 1041, the returned
lambda currently only accepts an agent and always calls provider_cls with fixed
args, causing issues when callers pass instruction/name (which fall back to a
TypeError). Change the factory to accept optional kwargs (e.g., def
llm_factory(agent, **kwargs):) and forward those kwargs into the provider_cls
call (provider_cls(agent=agent, default_request_params=effective_params or
_default_params(), context=context, **kwargs)) so instruction/name are passed
through and the TypeError fallback is avoided.
Actionable comments posted: 0
🧹 Nitpick comments (2)
examples/basic/agent_factory/mcp_agent.config.yaml (1)
7-7: Defaulting examples to debug may leak sensitive prompts/args.
Examples often get copied to prod-like runs; consider defaulting to info and documenting how to raise verbosity.
Suggested change:
-  level: debug
+  level: info
Alternative: keep debug here but add a comment noting potential PII in logs and advise overriding via env/CLI.
src/mcp_agent/workflows/llm/augmented_llm.py (1)
357-362: Add return type and generic fallback to get_provider_config.
Annotate the base method to return BaseModel | None and implement a generic fallback that returns getattr(context.config, cls.provider) when available; keep provider-specific overrides (openai, azure, bedrock, google, anthropic, ollama) unchanged. Location: src/mcp_agent/workflows/llm/augmented_llm.py:357-362
-    def get_provider_config(cls, context: Optional["Context"]):
-        """Return the provider-specific settings object from the app context, or None."""
-        return None
+    def get_provider_config(cls, context: Optional["Context"]) -> BaseModel | None:
+        """Return the provider-specific settings object from the app context, or None."""
+        if not context or not getattr(context, "config", None):
+            return None
+        provider = getattr(cls, "provider", None)
+        if provider and hasattr(context.config, provider):
+            return getattr(context.config, provider)
+        return None
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
examples/basic/agent_factory/mcp_agent.config.yaml (2 hunks)
src/mcp_agent/workflows/llm/augmented_llm.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/mcp_agent/workflows/llm/augmented_llm.py (7)
- src/mcp_agent/workflows/llm/augmented_llm_azure.py (1): get_provider_config (132-133)
- src/mcp_agent/workflows/llm/augmented_llm_google.py (1): get_provider_config (242-243)
- src/mcp_agent/workflows/llm/augmented_llm_bedrock.py (1): get_provider_config (95-96)
- src/mcp_agent/workflows/llm/augmented_llm_ollama.py (1): get_provider_config (37-39)
- src/mcp_agent/workflows/llm/augmented_llm_anthropic.py (1): get_provider_config (153-154)
- src/mcp_agent/workflows/llm/augmented_llm_openai.py (1): get_provider_config (138-139)
- src/mcp_agent/app.py (1): context (160-165)
🔇 Additional comments (1)
examples/basic/agent_factory/mcp_agent.config.yaml (1)
22-24: Fix Google default model ID: use dot notation (gemini-2.5-pro).
The config uses gemini-2_5-pro — change it to gemini-2.5-pro to match provider model IDs and avoid resolution failures.
File: examples/basic/agent_factory/mcp_agent.config.yaml, Lines: 22-24
Apply:
-google:
-  default_model: gemini-2_5-pro
+google:
+  default_model: gemini-2.5-pro
Automated search returned "No files were searched" — other occurrences couldn't be verified. Re-run verification across the repo:
rg -n --hidden --no-ignore -S 'gemini-2_5-|gemini-2-5-|gemini-2.5-' .
Fixes #516
Summary by CodeRabbit
New Features
Documentation
Data
Chores