Integrates AG2 (community fork of AutoGen, PyPI package: ag2) as a new
framework option alongside praisonai, crewai, and autogen.
Changes:
- pyproject.toml: add [ag2] optional dependency extra (ag2>=0.11.0)
- agents_generator.py: AG2 detection via importlib.metadata + _run_ag2()
with LLMConfig dict pattern, GroupChat orchestration, Bedrock support,
ChatResult.summary extraction, and TERMINATE cleanup
- auto.py: AG2_AVAILABLE flag + validation in AutoGenerator.__init__
- .env.example: add AG2 and AWS Bedrock environment variable templates
- examples/ag2/: basic, multi-agent, and Bedrock YAML examples
- tests: 16 unit tests + 9 mock integration tests (25/25 passing)
Detection uses importlib.metadata.distribution('ag2') + LLMConfig check.
All changes are purely additive — existing code paths unaffected.
E2E tested against OpenAI gpt-4o-mini: single-agent and multi-agent
GroupChat flows both produce correct output.
…_API_BASE, patch create=True
- Add _resolve() helper to read LLM config from YAML top-level, per-role,
config_list, and env vars in priority order (fixes Bedrock YAML config ignored)
- Include OPENAI_API_BASE env var in base_url resolution chain
- Add create=True to all patch("autogen.*") calls so tests pass without ag2 installed
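A hypothetical reconstruction of such a priority-order resolver (not the PR's actual `_resolve`; the name and signature are illustrative) looks like this:

```python
import os


def resolve_setting(key, yaml_llm=None, role_llm=None, config_list=None,
                    env_var=None, default=None):
    """Resolve an LLM setting: YAML top-level -> per-role -> config_list -> env var.

    Hypothetical reconstruction of the PR's _resolve helper, for illustration only.
    """
    # Dict-shaped sources, highest priority first.
    for source in (yaml_llm or {}, role_llm or {}):
        if source.get(key):
            return source[key]
    # config_list is a list of provider entries; take the first match.
    for entry in (config_list or []):
        if entry.get(key):
            return entry[key]
    # Environment variable, then a caller-supplied default.
    if env_var and os.environ.get(env_var):
        return os.environ[env_var]
    return default
```

With a single chokepoint like this, a value set in any one layer wins over everything below it, which is what fixes the "Bedrock YAML config ignored" bug described above.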
feat(ag2): add AG2 framework backend integration
📝 Walkthrough: This PR introduces comprehensive AG2 framework support to PraisonAI. Changes include runtime AG2 availability detection, framework routing logic, and a new _run_ag2() execution path.
Sequence Diagram(s):

```mermaid
sequenceDiagram
    participant User as User/Config
    participant Gen as AgentsGenerator<br/>._run_ag2()
    participant Config as LLMConfig
    participant Agent as AssistantAgent<br/>(per role)
    participant Tools as Tool<br/>Registry
    participant Chat as GroupChat
    participant Proxy as UserProxyAgent
    participant Manager as GroupChatManager
    participant Output as Output<br/>Formatter
    User->>Gen: Call with YAML config + topic
    Gen->>Config: Construct from env/config_list
    Config-->>Gen: LLMConfig ready
    Gen->>Agent: Create AssistantAgent per role
    Agent-->>Gen: Agents instantiated
    Gen->>Tools: Register tools via decorators
    Tools-->>Gen: Tools registered per role
    Gen->>Chat: Initialize GroupChat with agents
    Chat-->>Gen: GroupChat configured
    Gen->>Proxy: Create UserProxyAgent
    Proxy-->>Gen: UserProxy ready
    Proxy->>Manager: Create GroupChatManager(chat)
    Manager-->>Proxy: Manager ready
    Proxy->>Manager: initiate_chat(initial_message)
    Manager->>Agent: Multi-turn agent interaction
    Agent->>Tools: Execute registered tools
    Tools-->>Agent: Tool results
    Agent-->>Manager: Message/TERMINATE
    Manager-->>Proxy: Chat history with TERMINATE
    Proxy-->>Gen: Return chat_result
    Gen->>Output: Format with ### AG2 Output ###
    Output-->>Gen: Formatted result string
    Gen-->>User: Return output or error
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ❌ 3 failed (1 warning, 2 inconclusive) | ✅ 2 passed
Review Summary by Qodo: Add AG2 framework backend integration with GroupChat orchestration and Bedrock support.
Description:
• Add AG2 framework backend integration as a new orchestration option
• Implement _run_ag2() method with LLMConfig, GroupChat, and Bedrock support
• Add AG2 availability detection via importlib.metadata and LLMConfig import
• Include 25 unit and integration tests with comprehensive mocking
• Provide three YAML examples (basic, multi-agent, Bedrock) and a function tools demo

Diagram:

```mermaid
flowchart LR
    A["AG2 Detection<br/>importlib.metadata"] --> B["AG2_AVAILABLE Flag"]
    B --> C["Framework Validation<br/>in __init__"]
    C --> D["_run_ag2 Method"]
    D --> E["LLMConfig Construction<br/>OpenAI/Bedrock"]
    E --> F["AssistantAgent Creation<br/>per Role"]
    F --> G["GroupChat Orchestration<br/>with max_round=12"]
    G --> H["Output Extraction<br/>& TERMINATE Cleanup"]
    H --> I["Result with AG2 Header"]
```
File Changes:
1. src/praisonai/praisonai/agents_generator.py

Code Review by Qodo
Greptile Summary: This PR adds first-class AG2 framework support (the community fork of AutoGen, PyPI package ag2).

Key changes:

Issues found:

Confidence Score: 2/5. Not safe to merge yet — the AG2 execution path will raise a TypeError at runtime due to incorrect LLMConfig instantiation, and the Bedrock example YAML will fail with a wrong model-name format. Two P1 bugs block the primary user path: (1) LLMConfig is called with a positional dict instead of keyword arguments, and (2) the Bedrock example uses a LiteLLM-style bedrock/ model prefix that AG2's native Bedrock client does not accept.
| Filename | Overview |
|---|---|
| src/praisonai/praisonai/agents_generator.py | Core change: adds AG2 detection and _run_ag2 execution path; contains two P1 bugs — LLMConfig called with a positional dict instead of kwargs, and aws_region silently dropped from the Bedrock config. |
| examples/ag2/ag2_bedrock.yaml | Bedrock example uses a LiteLLM-style bedrock/ prefix in the model name which is incompatible with AG2's native Bedrock client. |
| src/praisonai/praisonai/auto.py | Adds AG2 availability detection and framework validation in AutoGenerator.__init__; logic mirrors agents_generator.py correctly. |
| src/praisonai/pyproject.toml | Adds ag2 = ["ag2>=0.11.0", "praisonai-tools>=0.0.15"] optional dependency in both [project.optional-dependencies] and [tool.poetry.extras]; consistent and correct. |
| src/praisonai/tests/unit/test_ag2_adapter.py | Thorough unit tests for availability detection, framework validation, LLMConfig construction, agent creation, and output extraction; all use mocked AG2 so the LLMConfig(**dict) bug is not caught. |
| src/praisonai/tests/integration/ag2/test_ag2_integration.py | Integration tests cover single-agent, multi-agent GroupChat, backward compatibility, and dispatch routing — well-structured and comprehensive. |
| src/praisonai/tests/source/ag2_function_tools.py | Standalone AG2 tool-registration example; correctly uses LLMConfig with keyword args and the context-manager pattern. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User/CLI
    participant AG as AgentsGenerator
    participant Det as AG2 Detection
    participant RAG as _run_ag2
    participant LLM as LLMConfig
    participant UP as UserProxyAgent
    participant AA as AssistantAgent(s)
    participant GC as GroupChat + Manager
    User->>AG: generate_crew_and_kickoff(framework=ag2)
    AG->>Det: importlib.metadata.distribution('ag2') + LLMConfig import
    Det-->>AG: AG2_AVAILABLE = True
    AG->>RAG: _run_ag2(config, topic, tools_dict)
    RAG->>LLM: LLMConfig(llm_config_entry) — should be LLMConfig(**dict)
    LLM-->>RAG: llm_config instance
    RAG->>UP: UserProxyAgent(name=User, human_input_mode=NEVER)
    loop For each role in YAML
        RAG->>AA: AssistantAgent(name, system_message, llm_config)
        AA-->>RAG: assistant
    end
    RAG->>GC: GroupChat(agents=[user_proxy]+assistants, max_round=12)
    RAG->>GC: GroupChatManager(groupchat, llm_config)
    RAG->>UP: initiate_chat(manager, message=task_description)
    UP->>GC: orchestrate multi-agent conversation
    GC-->>RAG: chat_result
    RAG-->>AG: AG2 Output
    AG-->>User: result string
```
Reviews (1): Last reviewed commit: "feat(ag2): add AG2 framework backend int..."
```python
llm_config_entry["api_key"] = api_key
if base_url and base_url not in ("https://api.openai.com/v1", "https://api.openai.com/v1/"):
    llm_config_entry["base_url"] = base_url
llm_config = LLMConfig(llm_config_entry)
```
LLMConfig called with positional dict — likely TypeError at runtime
LLMConfig(llm_config_entry) passes a plain dict as the first positional argument. AG2's LLMConfig is a Pydantic model whose constructor accepts keyword arguments (e.g. model=, api_type=), not a positional dict. The bundled example tests/source/ag2_function_tools.py (lines 51–55) confirms the expected call style uses keyword arguments, not a positional dict.
Because the unit tests mock LLMConfig entirely, they cannot catch this mismatch. The fix is to unpack the dict:
```diff
-llm_config = LLMConfig(llm_config_entry)
+llm_config = LLMConfig(**llm_config_entry)
```
```python
if api_type == "bedrock":
    llm_config_entry = {"api_type": "bedrock", "model": model_name}
else:
    llm_config_entry = {"model": model_name}
if api_key:
    llm_config_entry["api_key"] = api_key
if base_url and base_url not in ("https://api.openai.com/v1", "https://api.openai.com/v1/"):
    llm_config_entry["base_url"] = base_url
```
aws_region from YAML is silently ignored for Bedrock
The ag2_bedrock.yaml example (and any user YAML) can specify aws_region inside the role-level llm block, but _run_ag2 never extracts it from yaml_llm or first_role_llm. As a result, the region is silently dropped and AG2 falls back to whatever boto3 picks up from the environment or ~/.aws/config.
To honour the YAML setting:
```diff
 if api_type == "bedrock":
-    llm_config_entry = {"api_type": "bedrock", "model": model_name}
+    aws_region = _resolve("aws_region", env_var="AWS_DEFAULT_REGION")
+    llm_config_entry = {"api_type": "bedrock", "model": model_name}
+    if aws_region:
+        llm_config_entry["aws_region"] = aws_region
 else:
     llm_config_entry = {"model": model_name}
 if api_key:
     llm_config_entry["api_key"] = api_key
 if base_url and base_url not in ("https://api.openai.com/v1", "https://api.openai.com/v1/"):
     llm_config_entry["base_url"] = base_url
```
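The corrected entry-building logic can be exercised as a standalone function; the signature below is an illustrative stand-in, with plain parameters replacing the PR's `_resolve` calls:

```python
def build_llm_config_entry(model_name, api_type="openai", api_key=None,
                           base_url=None, aws_region=None):
    """Build the provider config dict, honouring aws_region for Bedrock.

    Illustrative sketch of the suggested fix; names mirror the review, not the repo.
    """
    if api_type == "bedrock":
        entry = {"api_type": "bedrock", "model": model_name}
        if aws_region:
            entry["aws_region"] = aws_region  # forward the YAML/env region to AG2
    else:
        entry = {"model": model_name}
    if api_key:
        entry["api_key"] = api_key
    # Only record a base_url when it differs from the OpenAI default.
    if base_url and base_url not in ("https://api.openai.com/v1",
                                     "https://api.openai.com/v1/"):
        entry["base_url"] = base_url
    return entry
```

With this shape, a region declared in YAML reaches the Bedrock config dict instead of silently falling back to boto3's environment lookup.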
```yaml
    You have deep expertise in Amazon Bedrock, SageMaker, ECS, and Lambda,
    and you help organisations deploy AI agents at scale securely and cost-effectively.
  llm:
    model: "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"
```
LiteLLM bedrock/ prefix is incompatible with AG2's native Bedrock client
bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0 uses the LiteLLM provider-prefix convention. When _run_ag2 builds the LLMConfig with api_type="bedrock", it passes this string verbatim as the model name to AG2's own Bedrock integration (which calls the boto3 Bedrock API directly). AG2's client expects the bare model ID — the bedrock/ prefix will likely cause an UnknownModelException or similar error from the Bedrock API.
```diff
-    model: "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"
+    model: "anthropic.claude-3-5-sonnet-20241022-v2:0"
```
```python
base_url = (model_config.get("base_url")
            or yaml_llm.get("base_url")
            or os.environ.get("OPENAI_BASE_URL")
            or os.environ.get("OPENAI_API_BASE"))
```
base_url resolution skips first_role_llm
All other config fields (model, api_key, api_type) are resolved via the _resolve helper which considers yaml_llm → first_role_llm → config_list → env var. The base_url resolution bypasses first_role_llm, meaning a base_url set inside a role-level llm: block is silently ignored for URL routing. For consistency:
```diff
-base_url = (model_config.get("base_url")
-            or yaml_llm.get("base_url")
-            or os.environ.get("OPENAI_BASE_URL")
-            or os.environ.get("OPENAI_API_BASE"))
+base_url = (yaml_llm.get("base_url")
+            or first_role_llm.get("base_url")
+            or model_config.get("base_url")
+            or os.environ.get("OPENAI_BASE_URL")
+            or os.environ.get("OPENAI_API_BASE"))
```
Actionable comments posted: 4
🧹 Nitpick comments (6)
src/praisonai/praisonai/auto.py (1)
35-43: Consider narrowing the exception catch or adding a debug log. The broad `except Exception` is intentional to handle any detection failure gracefully, but silently swallowing all exceptions (including KeyboardInterrupt via BaseException subclasses — though Exception excludes those) can obscure unexpected issues during development.

♻️ Optional: Add debug logging for troubleshooting
```diff
 AG2_AVAILABLE = False
 try:
     import importlib.metadata as _importlib_metadata
     _importlib_metadata.distribution('ag2')
     from autogen import LLMConfig as _AG2LLMConfig  # noqa: F401 — AG2-exclusive class
     AG2_AVAILABLE = True
     del _AG2LLMConfig, _importlib_metadata
-except Exception:
-    pass
+except Exception as _e:
+    logging.debug("AG2 not available: %s", _e)
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/auto.py` around lines 35-43, the try/except around AG2 detection is too broad; replace the bare "except Exception" with specific exceptions (e.g., importlib.metadata.PackageNotFoundError and ImportError) when calling _importlib_metadata.distribution and importing autogen, and add a debug/process logger call to record the caught exception details for troubleshooting; update references in this block (AG2_AVAILABLE, _importlib_metadata, _AG2LLMConfig) so the behavior and cleanup (del statements) remain intact.

src/praisonai/praisonai/agents_generator.py (3)
44-52: Same pattern as auto.py - consider extracting to a shared utility. The AG2 detection logic is duplicated between auto.py and agents_generator.py. While not critical, extracting it to a shared module would improve maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/agents_generator.py` around lines 44 - 52, The AG2 detection logic (setting AG2_AVAILABLE via importlib.metadata.distribution and conditional import of autogen.LLMConfig) is duplicated; extract it into a small shared utility function (e.g., is_ag2_available() or detect_optional_dependency('ag2')) that encapsulates the try/except and returns a boolean, replace the inline block in agents_generator.py (the AG2_AVAILABLE variable and importlib.metadata usage) with a call to that utility and import the utility from the common module, and do the same for the duplicate in auto.py so both modules use the single shared detector.
545-548: Consider logging the exception before returning the error string. The broad `except Exception` is acceptable for user-facing resilience, but logging the full traceback would aid debugging.

♻️ Proposed improvement
```diff
 try:
     chat_result = user_proxy.initiate_chat(manager, message=initial_message)
 except Exception as e:
+    self.logger.exception("AG2 chat failed")
-    return f"### AG2 Error ###\n{str(e)}"
+    return f"### AG2 Error ###\n{e}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/agents_generator.py` around lines 545 - 548, The except block around user_proxy.initiate_chat should log the full exception and traceback before returning the error string; update the handler in agents_generator.py to call logger.exception(...) (or logging.exception(...) after ensuring an appropriate logger via import logging and logging.getLogger(__name__)) inside the except Exception as e block, then return the existing f"### AG2 Error ###\n{str(e)}" so the user-facing message is unchanged but full diagnostics are recorded for debugging.
506-507: Minor cleanup: rename unused loop variables and use list spread. Static analysis flags unused loop variables and suggests list-spread syntax for clarity. These are stylistic improvements.
♻️ Proposed cleanup
```diff
-for role, details, assistant in ag2_agent_entries:
+for _role, details, assistant in ag2_agent_entries:
     for tool_name in details.get("tools", []):
```

```diff
-for role, details, _ in ag2_agent_entries:
-    for task_name, task_details in details.get("tasks", {}).items():
+for _role, details, _ in ag2_agent_entries:
+    for _task_name, task_details in details.get("tasks", {}).items():
```

```diff
 groupchat = GroupChat(
-    agents=[user_proxy] + all_assistants,
+    agents=[user_proxy, *all_assistants],
     messages=[],
     max_round=12,
 )
```

Also applies to: 531-532, 538-539
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/agents_generator.py` around lines 506-507, the loops over ag2_agent_entries (e.g., "for role, details, assistant in ag2_agent_entries") use loop variables that aren't referenced; rename unused variables to start with an underscore (for example "_assistant" or "_role") to satisfy static analysis, and when iterating tools use list spread for clarity (e.g., iterate over [*details.get("tools", [])] or assign tools = [*details.get("tools", [])] then loop) in the blocks around the loops at the top-level generator functions in agents_generator.py. Ensure consistency across the three similar sites (the second and third occurrences as well).

src/praisonai/tests/source/ag2_function_tools.py (1)
18-24: Remove unused imports. GroupChat and GroupChatManager are imported but not used in this example script. Removing them would clarify the minimal imports needed for basic tool registration.

♻️ Proposed fix

```diff
 from autogen import (
     AssistantAgent,
     UserProxyAgent,
-    GroupChat,
-    GroupChatManager,
     LLMConfig,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/tests/source/ag2_function_tools.py` around lines 18-24, the import statement currently brings in GroupChat and GroupChatManager but those symbols are unused; update the import list in the top-level autogen import (which currently includes AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, LLMConfig) by removing GroupChat and GroupChatManager so only the actually used symbols (AssistantAgent, UserProxyAgent, LLMConfig) remain.

src/praisonai/tests/unit/test_ag2_adapter.py (1)
465-487: Minor inconsistency: mock_llm_config is missing context-manager setup. In _run_with_messages, mock_llm_config doesn't define __enter__/__exit__ (lines 469-470), while other tests set them up explicitly (e.g., lines 170-172, 206-208). If _run_ag2 uses LLMConfig as a context manager, this could cause test failures.

♻️ Proposed fix for consistency

```diff
 def _run_with_messages(self, messages):
     gen = self._make_gen()
     config = _make_config()
     mock_llm_config = MagicMock()
+    mock_llm_config.__enter__ = MagicMock(return_value=mock_llm_config)
+    mock_llm_config.__exit__ = MagicMock(return_value=False)
     mock_assistant = MagicMock()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/tests/unit/test_ag2_adapter.py` around lines 465 - 487, The mock LLMConfig used in _run_with_messages should behave like a context manager; update the mock_llm_config in that helper so it defines __enter__ returning mock_llm_config and __exit__ (e.g., mock_llm_config.__enter__.return_value = mock_llm_config and mock_llm_config.__exit__.return_value = None) before patching LLMConfig so _run_ag2 sees a context-manager-compatible LLMConfig mock just like other tests using LLMConfig.
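The context-manager setup recommended above is a generic unittest.mock pattern, not code from the PR; it can be exercised standalone:

```python
from unittest.mock import MagicMock

mock_llm_config = MagicMock()
# Make the mock usable in a `with` statement: __enter__ yields the mock itself.
mock_llm_config.__enter__ = MagicMock(return_value=mock_llm_config)
mock_llm_config.__exit__ = MagicMock(return_value=False)  # False: don't swallow exceptions

with mock_llm_config as cfg:
    assert cfg is mock_llm_config  # the `as` target is what __enter__ returned
```

A plain Mock would not work here: only MagicMock supports configuring magic methods like __enter__ and __exit__, which is why the tests build their stubs from MagicMock.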
📒 Files selected for processing (11):
- examples/ag2/ag2_basic.yaml
- examples/ag2/ag2_bedrock.yaml
- examples/ag2/ag2_multi_agent.yaml
- src/praisonai/.env.example
- src/praisonai/praisonai/agents_generator.py
- src/praisonai/praisonai/auto.py
- src/praisonai/pyproject.toml
- src/praisonai/tests/integration/ag2/__init__.py
- src/praisonai/tests/integration/ag2/test_ag2_integration.py
- src/praisonai/tests/source/ag2_function_tools.py
- src/praisonai/tests/unit/test_ag2_adapter.py
```yaml
  llm:
    model: "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"
    api_type: "bedrock"
    aws_region: "us-east-1"
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if aws_region is handled in _run_ag2
rg -n "aws_region" src/praisonai/praisonai/agents_generator.py
```

Repository: MervinPraison/PraisonAI
Length of output: 49
🏁 Script executed:

```shell
# Search for _run_ag2 function definition
rg -n "def _run_ag2" src/praisonai/
```

Repository: MervinPraison/PraisonAI
Length of output: 165
🏁 Script executed:

```shell
# Search for how llm config dict is being processed
rg -n "llm\[" src/praisonai/praisonai/agents_generator.py | head -20
```

Repository: MervinPraison/PraisonAI
Length of output: 49
🏁 Script executed:

```shell
# Look for LLMConfig instantiation with bedrock
rg -n "LLMConfig" src/praisonai/praisonai/agents_generator.py | head -20
```

Repository: MervinPraison/PraisonAI
Length of output: 500
🏁 Script executed:

```shell
# Search for bedrock or aws handling in the codebase
rg -n "bedrock" src/praisonai/ -i | head -20
```

Repository: MervinPraison/PraisonAI
Length of output: 2028
🏁 Script executed:

```shell
# Read the relevant section around line 474-483
sed -n '470,495p' src/praisonai/praisonai/agents_generator.py
```

Repository: MervinPraison/PraisonAI
Length of output: 1239
🏁 Script executed:

```shell
# Check what's in llm_config_entry before LLMConfig is called
sed -n '428,500p' src/praisonai/praisonai/agents_generator.py | head -80
```

Repository: MervinPraison/PraisonAI
Length of output: 3384
🏁 Script executed:

```shell
# Check the test to understand expected behavior
cat -n src/praisonai/tests/unit/test_ag2_adapter.py | sed -n '197,240p'
```

Repository: MervinPraison/PraisonAI
Length of output: 2378
🌐 Web query:
AG2 autogen LLMConfig bedrock aws_region parameter
💡 Result:
In AG2 (AutoGen), the Bedrock region is set via the aws_region key inside the Bedrock entry you pass to LLMConfig (i.e., inside config_list / the Bedrock config dict). [1][2]

```python
from autogen import LLMConfig

llm_config = LLMConfig(
    config_list={
        "api_type": "bedrock",
        "model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
        "aws_region": "us-east-1",
        "aws_access_key": "...",
        "aws_secret_key": "...",
        # optional: "aws_session_token": "...",
        # optional: "aws_profile_name": "default",
    }
)
```

AG2's Bedrock examples also show pulling it from env, e.g. os.getenv("AWS_REGION", "eu-north-1"). [2]
Sources: AG2 Amazon Bedrock guide [1], AG2 Bedrock structured output notebook [2].
🏁 Script executed:

```shell
# Check if aws_region is mentioned anywhere in the repository in context of LLMConfig
rg -n "aws_region" src/praisonai/ -A 2 -B 2
```

Repository: MervinPraison/PraisonAI
Length of output: 49
🏁 Script executed:

```shell
# Check if there are any other bedrock config examples in the repo
fd -e yaml -o -e yml | xargs grep -l "bedrock" 2>/dev/null
```

Repository: MervinPraison/PraisonAI
Length of output: 185
🏁 Script executed:

```shell
# Look for any bedrock-related documentation or comments in the code
rg -n "Bedrock" src/praisonai/praisonai/agents_generator.py -A 3 -B 3
```

Repository: MervinPraison/PraisonAI
Length of output: 407
Extract and propagate aws_region to AG2's LLMConfig for Bedrock.
The aws_region field in the YAML config is not extracted or passed to AG2's LLMConfig. AG2's Bedrock integration requires aws_region as a key in the config dict passed to LLMConfig. Without it, the integration will fall back to the AWS_DEFAULT_REGION environment variable, making the YAML configuration incomplete.
Update _run_ag2() to extract aws_region via the _resolve() function and include it in the llm_config_entry dict for Bedrock:
```python
if api_type == "bedrock":
    aws_region = _resolve("aws_region", env_var="AWS_DEFAULT_REGION", default="us-east-1")
    llm_config_entry = {"api_type": "bedrock", "model": model_name, "aws_region": aws_region}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/ag2/ag2_bedrock.yaml` around lines 25 - 28, The YAML's aws_region is
not being extracted and passed into AG2's LLMConfig for Bedrock; update
_run_ag2() to call _resolve("aws_region", env_var="AWS_DEFAULT_REGION",
default="us-east-1") when api_type == "bedrock" and add that value into the
llm_config_entry dict (e.g., llm_config_entry = {"api_type":"bedrock","model":
model_name,"aws_region": aws_region}) so the Bedrock integration receives the
region from the config instead of relying on AWS_DEFAULT_REGION.
```python
def make_tool_fn(f):
    def tool_fn(**kwargs):
        return f(**kwargs) if callable(f) else str(f)
    tool_fn.__name__ = tool_name
    return tool_fn

wrapped = make_tool_fn(func)
assistant.register_for_llm(description=f"Tool: {tool_name}")(wrapped)
user_proxy.register_for_execution()(wrapped)
```
Closure captures loop variable by reference — all tools will share the last tool_name.
The make_tool_fn closure captures tool_name from the enclosing scope. Since tool_name is reassigned each iteration, all registered tools will have __name__ set to the last tool in the loop. The fix is to pass tool_name as a default argument.
🐛 Proposed fix

```diff
-def make_tool_fn(f):
+def make_tool_fn(f, name=tool_name):
     def tool_fn(**kwargs):
         return f(**kwargs) if callable(f) else str(f)
-    tool_fn.__name__ = tool_name
+    tool_fn.__name__ = name
     return tool_fn
```

🧰 Tools
🪛 Ruff (0.15.9)
[warning] 518-518: Function definition does not bind loop variable tool_name (B023)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai/praisonai/agents_generator.py` around lines 515 - 523, The
closure make_tool_fn currently captures the loop variable tool_name by reference
causing every tool_fn to end up with the last tool's name; change make_tool_fn
to accept tool_name as a default parameter (e.g., def make_tool_fn(f,
tool_name=tool_name):) and use that local parameter when setting
tool_fn.__name__, then continue registering wrapped via
assistant.register_for_llm and user_proxy.register_for_execution so each wrapped
function retains its correct name; update references around make_tool_fn,
tool_fn, wrapped, func, assistant.register_for_llm and
user_proxy.register_for_execution accordingly.
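The late-binding pitfall flagged in this comment is general Python closure behavior and can be reproduced in a few lines (function names here are illustrative, not from the repo):

```python
def make_fns_buggy(names):
    fns = []
    for name in names:
        def fn():
            return name  # captures the variable itself, not its value at definition time
        fns.append(fn)
    return fns


def make_fns_fixed(names):
    fns = []
    for name in names:
        def fn(name=name):  # default argument freezes the current value per iteration
            return name
        fns.append(fn)
    return fns
```

Every closure from the buggy version reads the loop variable after the loop has finished, so they all see the final value; the default-argument version evaluates `name` once per iteration, which is exactly the fix proposed for `make_tool_fn`.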
```python
# Stub heavy dependencies that auto.py (develop branch) imports at module level
# so that tests can import praisonai without a full installation.
for _stub in ("instructor",):
    if _stub not in sys.modules:
        sys.modules[_stub] = MagicMock()

import importlib as _importlib
if "openai" not in sys.modules:
    try:
        _importlib.import_module("openai")
    except ImportError:
        _mock_openai = MagicMock()
        _mock_openai.__version__ = "1.0.0"
        sys.modules["openai"] = _mock_openai
```
Same missing autogen stub issue as unit tests.
The integration tests have the same problem—instructor is stubbed but autogen is not. Apply the same fix here.
🐛 Proposed fix: add autogen stub

```diff
 # Stub heavy dependencies that auto.py (develop branch) imports at module level
 # so that tests can import praisonai without a full installation.
-for _stub in ("instructor",):
+for _stub in ("instructor", "autogen"):
     if _stub not in sys.modules:
         sys.modules[_stub] = MagicMock()
+
+# Ensure autogen sub-attributes exist for patching
+if isinstance(sys.modules.get("autogen"), MagicMock):
+    _autogen_mock = sys.modules["autogen"]
+    _autogen_mock.LLMConfig = MagicMock()
+    _autogen_mock.AssistantAgent = MagicMock()
+    _autogen_mock.UserProxyAgent = MagicMock()
+    _autogen_mock.GroupChat = MagicMock()
+    _autogen_mock.GroupChatManager = MagicMock()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai/tests/integration/ag2/test_ag2_integration.py` around lines 18
- 31, The test module stubs "instructor" but not "autogen", causing import-time
failures; update the stub loop in test_ag2_integration.py to include "autogen"
(e.g., add it to the tuple used with for _stub in ("instructor",) so
sys.modules[_stub] = MagicMock() will create a MagicMock for autogen as well),
ensuring the same import-time stubbing behavior as done for "instructor" and
matching the unit-test fix.
```python
# Stub heavy dependencies that auto.py (develop branch) imports at module level
# so that tests can import praisonai without a full installation.
for _stub in ("instructor",):
    if _stub not in sys.modules:
        sys.modules[_stub] = MagicMock()

# openai is installed (required by ag2/autogen internals), but auto.py also
# imports it at module level. Ensure it's really loaded, not a mock.
import importlib as _importlib
if "openai" not in sys.modules:
    try:
        _importlib.import_module("openai")
    except ImportError:
        _mock_openai = MagicMock()
        _mock_openai.__version__ = "1.0.0"
        sys.modules["openai"] = _mock_openai
```
Missing autogen module stub causes all AG2 tests to fail.
The pipeline failures show ModuleNotFoundError: No module named 'autogen' for all tests that patch autogen.*. The instructor module is stubbed (lines 24-26), but autogen is not. When autogen isn't installed, patch("autogen.LLMConfig", create=True) fails because create=True only creates the attribute, not the parent module.
🐛 Proposed fix: stub autogen module before tests run

```diff
 # Stub heavy dependencies that auto.py (develop branch) imports at module level
 # so that tests can import praisonai without a full installation.
-for _stub in ("instructor",):
+for _stub in ("instructor", "autogen"):
     if _stub not in sys.modules:
         sys.modules[_stub] = MagicMock()
+
+# Ensure autogen sub-attributes exist for patching
+if isinstance(sys.modules.get("autogen"), MagicMock):
+    _autogen_mock = sys.modules["autogen"]
+    _autogen_mock.LLMConfig = MagicMock()
+    _autogen_mock.AssistantAgent = MagicMock()
+    _autogen_mock.UserProxyAgent = MagicMock()
+    _autogen_mock.GroupChat = MagicMock()
+    _autogen_mock.GroupChatManager = MagicMock()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai/tests/unit/test_ag2_adapter.py` around lines 22 - 38, The tests
fail because the autogen top-level module isn't stubbed, so any use of
patch("autogen.*", create=True) raises ModuleNotFoundError; update the test
setup in src/praisonai/tests/unit/test_ag2_adapter.py to pre-populate
sys.modules["autogen"] with a MagicMock (similar to the existing "instructor"
stub) before imports/patches run so that autogen and its attributes can be
created by patch(..., create=True); ensure the stub is inserted conditionally
only if "autogen" not in sys.modules and mirror the pattern used for
"instructor" to avoid masking a real installation.
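The failure mode is easy to reproduce in isolation: `patch("mod.attr", create=True)` still has to import the parent module `mod`, and only then creates the missing attribute. Pre-seeding `sys.modules` with a mock sidesteps the import. A minimal sketch, using a deliberately nonexistent module name:

```python
import sys
from unittest.mock import MagicMock, patch

MODULE = "definitely_not_installed_mod"  # hypothetical name, guaranteed absent

# Without a stub, patch(..., create=True) cannot import the parent module.
try:
    with patch(f"{MODULE}.LLMConfig", create=True):
        pass
    stub_needed = False
except ModuleNotFoundError:
    stub_needed = True

# Stub the module first; now the attribute lookup succeeds and patching works.
sys.modules.setdefault(MODULE, MagicMock())
with patch(f"{MODULE}.LLMConfig", create=True) as mock_cfg:
    patched_ok = mock_cfg is not None

print(stub_needed, patched_ok)
```

Since a `MagicMock` auto-creates attributes on access, stubbing the module alone is often enough; explicitly assigning `LLMConfig`, `AssistantAgent`, etc. (as in the proposed fix) just makes the expected surface explicit.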
```yaml
# Run:
#   praisonai --framework ag2 examples/ag2/ag2_bedrock.yaml
#
```
1. CLI rejects ag2 option 🐞 Bug ≡ Correctness
Examples instruct users to run praisonai --framework ag2 ..., but the CLI parser only allows crewai|autogen|praisonai, so AG2 cannot be used from the CLI as documented.
Agent Prompt
### Issue description
The CLI rejects `--framework ag2` because `ag2` is missing from the argparse `choices` list, even though the PR adds AG2 dispatch and examples document using `--framework ag2`.
### Issue Context
Users following `examples/ag2/*.yaml` will hit an argparse validation error before PraisonAI can run the AG2 adapter.
### Fix Focus Areas
- src/praisonai/praisonai/cli.py[512-514]
- examples/ag2/ag2_bedrock.yaml[10-12]
- src/praisonai/praisonai/agents_generator.py[328-347]
```python
# Build LLMConfig — pass a config dict; Bedrock needs no api_key
if api_type == "bedrock":
    llm_config_entry = {"api_type": "bedrock", "model": model_name}
else:
```
2. Bedrock region ignored 🐞 Bug ≡ Correctness
_run_ag2 drops the YAML aws_region setting for Bedrock, so the ag2_bedrock.yaml example’s explicit region is never applied.
Agent Prompt
### Issue description
The AG2 adapter ignores `aws_region` from YAML when configuring Bedrock, so users cannot control region via config files (contradicting the provided Bedrock example).
### Issue Context
`examples/ag2/ag2_bedrock.yaml` specifies `llm.aws_region: us-east-1`, but `_run_ag2` does not read or include it in the Bedrock `llm_config_entry`.
### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[450-477]
- examples/ag2/ag2_bedrock.yaml[25-29]
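One way the region could be threaded through, sketched standalone. Both `build_bedrock_entry` and the `aws_region` config key are assumptions about how the adapter and AG2's Bedrock client consume configuration, not confirmed API; the model ID is only illustrative:

```python
import os

def build_bedrock_entry(yaml_llm, model_name):
    """Build a Bedrock config entry, honoring YAML aws_region over env vars."""
    entry = {"api_type": "bedrock", "model": model_name}
    region = yaml_llm.get("aws_region") or os.environ.get("AWS_DEFAULT_REGION")
    if region:
        entry["aws_region"] = region
    return entry

entry = build_bedrock_entry(
    {"aws_region": "us-east-1"},
    "anthropic.claude-3-haiku-20240307-v1:0",
)
print(entry["aws_region"])  # us-east-1, taken from YAML even if the env differs
```

With this shape, `AWS_DEFAULT_REGION` remains a fallback rather than the only source, matching the example YAML's intent.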
```python
# Allow YAML top-level llm block to override config_list values
yaml_llm = config.get("llm", {}) or {}
# Also check first role's llm block as a fallback
first_role_llm = {}
for role_details in config.get("roles", {}).values():
    first_role_llm = role_details.get("llm", {}) or {}
    break

# Priority: YAML top-level llm > first role llm > config_list > env vars
def _resolve(key, env_var=None, default=None):
    return (yaml_llm.get(key) or first_role_llm.get(key)
            or model_config.get(key)
            or (os.environ.get(env_var) if env_var else None)
            or default)

api_type = _resolve("api_type", default="openai").lower()
model_name = _resolve("model", default="gpt-4o-mini")
api_key = _resolve("api_key", env_var="OPENAI_API_KEY")
# Fix #3: also check OPENAI_API_BASE for consistency with rest of codebase
base_url = (model_config.get("base_url")
            or yaml_llm.get("base_url")
            or os.environ.get("OPENAI_BASE_URL")
            or os.environ.get("OPENAI_API_BASE"))
```
3. `base_url` override wrong order 🐞 Bug ≡ Correctness
_run_ag2 documents that YAML llm overrides config_list, but base_url is resolved with config_list taking precedence, so YAML llm.base_url is silently ignored.
Agent Prompt
### Issue description
`base_url` resolution contradicts the adapter’s documented precedence. YAML `llm.base_url` should override `config_list.base_url` but currently does not.
### Issue Context
The adapter uses `_resolve()` (YAML-first) for other keys, but `base_url` uses a different ordering.
### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[458-472]
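The corrected ordering can be sketched standalone. The names mirror the quoted snippet, but `_resolve` is reimplemented here with stubbed inputs so the precedence is testable in isolation:

```python
import os

yaml_llm = {"base_url": "http://yaml.example/v1"}          # top-level llm block
first_role_llm = {}                                         # no per-role override
model_config = {"base_url": "http://config-list.example/v1"}  # config_list entry

def _resolve(key, env_var=None, default=None):
    # Priority: YAML top-level llm > first role llm > config_list > env vars
    return (yaml_llm.get(key) or first_role_llm.get(key)
            or model_config.get(key)
            or (os.environ.get(env_var) if env_var else None)
            or default)

# base_url routed through the same YAML-first chain as every other key,
# with both env var spellings as final fallbacks.
base_url = (_resolve("base_url")
            or os.environ.get("OPENAI_BASE_URL")
            or os.environ.get("OPENAI_API_BASE"))
print(base_url)  # http://yaml.example/v1 — YAML now wins over config_list
```

Routing `base_url` through `_resolve` also removes the only key that escaped the documented precedence, so one comment describes all of them.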
Pull request overview
This PR adds first-class support for running PraisonAI workflows using the AG2 framework (PyPI ag2, installed under the autogen namespace), including dependency wiring plus mocked unit/integration test coverage and runnable examples.
Changes:
- Add AG2 availability detection + `framework="ag2"` dispatch with a new `_run_ag2` execution path.
- Add an `ag2` optional dependency extra and update environment/example configs for AG2 usage.
- Add mocked unit/integration tests and example YAMLs demonstrating single-/multi-agent and Bedrock flows.
Reviewed changes
Copilot reviewed 10 out of 11 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| src/praisonai/praisonai/agents_generator.py | Adds AG2 detection, framework validation/dispatch, and _run_ag2 implementation. |
| src/praisonai/praisonai/auto.py | Adds AG2 availability detection and framework validation messaging. |
| src/praisonai/pyproject.toml | Introduces the ag2 optional extra/dependency wiring. |
| src/praisonai/.env.example | Documents env vars relevant to AG2 and Bedrock examples. |
| src/praisonai/tests/unit/test_ag2_adapter.py | New unit tests covering AG2 validation + _run_ag2 behavior (mocked). |
| src/praisonai/tests/integration/ag2/test_ag2_integration.py | New mocked integration tests for AG2 orchestration and dispatch. |
| src/praisonai/tests/integration/ag2/__init__.py | Initializes AG2 integration test package. |
| src/praisonai/tests/source/ag2_function_tools.py | Standalone example demonstrating AG2 tool registration pattern. |
| examples/ag2/ag2_basic.yaml | Basic AG2 YAML example. |
| examples/ag2/ag2_multi_agent.yaml | Multi-agent GroupChat example YAML. |
| examples/ag2/ag2_bedrock.yaml | Bedrock-focused AG2 YAML example. |
```python
AG2_AVAILABLE = False
try:
    import importlib.metadata as _importlib_metadata
    _importlib_metadata.distribution('ag2')
    from autogen import LLMConfig as _AG2LLMConfig  # noqa: F401 — AG2-exclusive class
```
Because AG2 installs under the autogen namespace, import autogen can succeed even when pyautogen is not installed. With the current flags, that can make AUTOGEN_AVAILABLE a false-positive and cause framework="autogen" to run against the AG2 backend (or vice-versa). Consider detecting AutoGen via the pyautogen distribution (importlib.metadata.distribution("pyautogen")) or another robust discriminator to avoid namespace collisions.
```python
try:
    import autogen
    AUTOGEN_AVAILABLE = True
except ImportError:
    pass
```
Same namespace-collision problem here: AG2 provides an autogen package, so AUTOGEN_AVAILABLE can become true even if pyautogen isn’t installed. Consider checking the pyautogen distribution explicitly (or otherwise distinguishing pyautogen vs ag2) so framework="autogen" and framework="ag2" validations don’t interfere with each other.
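Checking the installed *distribution* rather than the importable module name disambiguates the two, since both ag2 and pyautogen expose `autogen`. A sketch of distribution-based detection (the distribution names are the real PyPI names; the flag variables are illustrative):

```python
import importlib.metadata

def detect_backends():
    """Report which autogen-namespace distributions are actually installed."""
    found = {}
    for dist in ("ag2", "pyautogen"):
        try:
            found[dist] = importlib.metadata.version(dist)
        except importlib.metadata.PackageNotFoundError:
            found[dist] = None
    return found

flags = detect_backends()
ag2_available = flags["ag2"] is not None
autogen_available = flags["pyautogen"] is not None
print(flags)
```

Because distribution metadata is queried without importing anything, this check cannot be fooled by whichever package happens to own the `autogen` namespace at import time.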
```python
def make_tool_fn(f):
    def tool_fn(**kwargs):
        return f(**kwargs) if callable(f) else str(f)
    tool_fn.__name__ = tool_name
    return tool_fn
```
The tool wrapper created here uses a generic tool_fn(**kwargs) signature and drops the wrapped tool’s real signature/type hints. AG2’s register_for_llm typically builds the tool schema from the callable’s signature/annotations, so this wrapper can prevent the LLM from seeing required parameters. Preserve the original callable’s signature/annotations (e.g., set wrapped.__signature__ / __annotations__, or avoid wrapping when possible).
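A wrapper can keep the metadata that signature-based schema builders read. `functools.wraps` copies `__name__`, `__doc__`, and `__annotations__`, and setting `__signature__` explicitly exposes the real parameter list; whether AG2's `register_for_llm` consumes exactly these attributes is an assumption, and `search` is a made-up tool:

```python
import functools
import inspect

def search(query: str, limit: int = 5) -> str:
    """Search the web and return results."""
    return f"{limit} results for {query}"

def make_tool_fn(f):
    @functools.wraps(f)  # copies __name__, __doc__, __annotations__; sets __wrapped__
    def tool_fn(*args, **kwargs):
        return f(*args, **kwargs)
    # Expose the real parameter list to inspect.signature()-based schema builders.
    tool_fn.__signature__ = inspect.signature(f)
    return tool_fn

wrapped = make_tool_fn(search)
print(inspect.signature(wrapped))  # (query: str, limit: int = 5) -> str
print(wrapped.__name__)            # search
```

With the generic `tool_fn(**kwargs)` wrapper from the PR, `inspect.signature` would instead report `(**kwargs)`, hiding the required `query` parameter from any schema derived from it.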
```python
# Ensure src is on path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../"))
```
sys.path.insert(..., "../../") resolves to the tests/ directory, not the directory that contains the praisonai/ package. This makes the test brittle (e.g., running from repo root won’t be able to import praisonai). Consider inserting the actual project root (the parent of tests/) or relying on editable install instead of path manipulation.
```python
with patch("praisonai.agents_generator.AG2_AVAILABLE", True), \
     patch("autogen.LLMConfig", create=True, return_value=mock_llm_config) as mock_llmcfg, \
     patch("autogen.AssistantAgent", create=True, return_value=mock_assistant), \
     patch("autogen.UserProxyAgent", create=True, return_value=mock_user_proxy), \
     patch("autogen.GroupChat", create=True, return_value=mock_groupchat), \
```
These tests patch autogen.* targets, but unittest.mock.patch will raise ModuleNotFoundError if the autogen module isn’t importable (common when optional AG2/AutoGen deps aren’t installed). Add a module-level pytest.importorskip("autogen") (or a conditional sys.modules["autogen"] stub) so the unit test suite can run without optional dependencies.
```python
with patch("autogen.LLMConfig", create=True, return_value=m["llm_config"]), \
     patch("autogen.AssistantAgent", create=True, return_value=m["assistant"]), \
     patch("autogen.UserProxyAgent", create=True, return_value=m["user_proxy"]), \
     patch("autogen.GroupChat", create=True, return_value=m["groupchat"]), \
     patch("autogen.GroupChatManager", create=True, return_value=m["manager"]):
```
This test patches autogen.* without first ensuring the autogen module exists. If AG2 isn’t installed, patch("autogen....") will error instead of skipping. Guard these tests with pytest.importorskip("autogen") (and/or skip based on importlib.metadata.distribution("ag2")) before any such patch blocks run.
```yaml
# Install: pip install "praisonai[ag2]"
# Run: praisonai --framework ag2 examples/ag2/ag2_basic.yaml
#  or  praisonai run examples/ag2/ag2_basic.yaml --framework ag2
```
This example instructs users to pass --framework ag2, but the CLI currently restricts --framework choices to ["crewai", "autogen", "praisonai"] (praisonai/cli.py). Either update the CLI/UI to accept ag2, or adjust these run instructions to omit --framework and rely on framework: ag2 in the YAML.
```yaml
# Install: pip install "praisonai[ag2]"
# Run: praisonai --framework ag2 examples/ag2/ag2_multi_agent.yaml
#  or  praisonai run examples/ag2/ag2_multi_agent.yaml --framework ag2
```
This example tells users to pass --framework ag2, but the CLI currently does not list ag2 as an allowed --framework choice. Either update the CLI/UI framework choices to include ag2, or adjust these instructions to rely on framework: ag2 in the YAML (no --framework flag).
```yaml
# Run:
#   praisonai --framework ag2 examples/ag2/ag2_bedrock.yaml
#
```
The run instructions use --framework ag2, but the CLI currently restricts --framework to ["crewai", "autogen", "praisonai"]. Either extend CLI/UI choices to include ag2, or update this example to omit the flag and rely on framework: ag2 in YAML.
Code Review
This pull request introduces support for the ag2 framework, a community fork of AutoGen. This includes adding ag2 as an optional dependency, updating configuration examples for basic, multi-agent, and AWS Bedrock scenarios, and implementing a new _run_ag2 method to orchestrate agents within this framework. The _run_ag2 method handles LLM configuration, agent creation, and tool registration. New integration and unit tests have been added to ensure the functionality and backward compatibility. The review comments suggest improving error handling by catching specific exceptions instead of generic ones and refining the tool registration logic to ensure callable functions are always provided.
```python
def make_tool_fn(f):
    def tool_fn(**kwargs):
        return f(**kwargs) if callable(f) else str(f)
    tool_fn.__name__ = tool_name
```
The make_tool_fn function returns str(f) if f is not callable. This behavior is unexpected for a tool function, which should typically be callable. If f is not callable, it likely indicates a misconfiguration or an issue with the tool definition. It would be safer to raise an error or ensure f is always callable before wrapping it.
Suggested change:

```python
def make_tool_fn(f):
    if not callable(f):
        raise TypeError(f"Tool '{tool_name}' is not callable.")
    def tool_fn(**kwargs):
        return f(**kwargs)
    tool_fn.__name__ = tool_name
    return tool_fn
```
```python
except Exception:
    pass
```
Catching a generic Exception can hide specific issues and make debugging harder. It's better to catch importlib.metadata.PackageNotFoundError and ImportError explicitly, as these are the expected exceptions when a package or its components are not found.
Suggested change:

```python
except (importlib.metadata.PackageNotFoundError, ImportError):
    pass
```
```python
except Exception:
    pass
```
Catching a generic Exception can hide specific issues and make debugging harder. It's better to catch importlib.metadata.PackageNotFoundError and ImportError explicitly, as these are the expected exceptions when a package or its components are not found.
Suggested change:

```python
except (importlib.metadata.PackageNotFoundError, ImportError):
    pass
```
Fixes #1338. Autonomously resolved by PraisonAI.
Summary by CodeRabbit
Release Notes
New Features
Documentation
Tests