fix: resolve issue #1338 #1340

Closed

MervinPraison wants to merge 3 commits into main from develop

Conversation


MervinPraison (Owner) commented Apr 9, 2026

Fixes #1338. Autonomously resolved by PraisonAI.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added AG2 framework support enabling multi-agent AI orchestration workflows.
    • Integrated AWS Bedrock for AWS-native model deployment.
    • Provided example configurations for research, writing, and cloud architecture workflows.
  • Documentation

    • Enhanced environment configuration guidance for AG2 and AWS Bedrock setup.
  • Tests

    • Added comprehensive integration and unit test coverage for AG2 framework.

faridun-ag2 and others added 3 commits March 24, 2026 16:47
Integrates AG2 (community fork of AutoGen, PyPI package: ag2) as a new
framework option alongside praisonai, crewai, and autogen.

Changes:
- pyproject.toml: add [ag2] optional dependency extra (ag2>=0.11.0)
- agents_generator.py: AG2 detection via importlib.metadata + _run_ag2()
  with LLMConfig dict pattern, GroupChat orchestration, Bedrock support,
  ChatResult.summary extraction, and TERMINATE cleanup
- auto.py: AG2_AVAILABLE flag + validation in AutoGenerator.__init__
- .env.example: add AG2 and AWS Bedrock environment variable templates
- examples/ag2/: basic, multi-agent, and Bedrock YAML examples
- tests: 16 unit tests + 9 mock integration tests (25/25 passing)

Detection uses importlib.metadata.distribution('ag2') + LLMConfig check.
All changes are purely additive — existing code paths unaffected.

E2E tested against OpenAI gpt-4o-mini: single-agent and multi-agent
GroupChat flows both produce correct output.
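The detection step described above can be sketched as follows (illustrative; the actual code lives in agents_generator.py and auto.py):

```python
import importlib.metadata

AG2_AVAILABLE = False
try:
    # Require the 'ag2' distribution to be installed by package metadata,
    # not merely an importable 'autogen' namespace (pyautogen provides that too).
    importlib.metadata.distribution("ag2")
    # LLMConfig is an AG2-exclusive class, so a successful import
    # distinguishes AG2 from the original pyautogen package.
    from autogen import LLMConfig  # noqa: F401
    AG2_AVAILABLE = True
except (importlib.metadata.PackageNotFoundError, ImportError):
    AG2_AVAILABLE = False
```

The metadata check and the class import together guard against false positives from either package alone.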
…_API_BASE, patch create=True

- Add _resolve() helper to read LLM config from YAML top-level, per-role,
  config_list, and env vars in priority order (fixes Bedrock YAML config ignored)
- Include OPENAI_API_BASE env var in base_url resolution chain
- Add create=True to all patch("autogen.*") calls so tests pass without ag2 installed
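The priority chain in the first bullet can be sketched like this (make_resolver is a hypothetical wrapper; in the PR the helper is a closure inside _run_ag2):

```python
import os

def make_resolver(yaml_llm, first_role_llm, model_config):
    """Build a resolver that reads an LLM setting in priority order:
    YAML top-level llm block > first role's llm block > config_list
    entry > environment variable > default."""
    def _resolve(key, env_var=None, default=None):
        return (yaml_llm.get(key)
                or first_role_llm.get(key)
                or model_config.get(key)
                or (os.environ.get(env_var) if env_var else None)
                or default)
    return _resolve

# Example: the top-level YAML value wins over the config_list value.
resolve = make_resolver({"model": "gpt-4o"}, {}, {"model": "gpt-4o-mini"})
# resolve("model") → "gpt-4o"
```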
feat(ag2): add AG2 framework backend integration
Copilot AI review requested due to automatic review settings April 9, 2026 15:50

coderabbitai bot commented Apr 9, 2026

📝 Walkthrough

This PR introduces comprehensive AG2 framework support to PraisonAI. Changes include runtime AG2 availability detection, framework routing logic, a new _run_ag2() method for multi-agent orchestration via Autogen's GroupChat API, optional dependency configuration, three example YAML workflows, and extensive unit and integration test coverage.

Changes

Cohort / File(s) Summary
AG2 Framework Integration
src/praisonai/praisonai/auto.py, src/praisonai/praisonai/agents_generator.py
Added runtime AG2 availability detection via importlib.metadata; extended framework validation to accept "ag2" as a valid framework; implemented _run_ag2() method to construct LLMConfig, create AssistantAgent instances per role, register tools, initialize GroupChat, and format output as ### AG2 Output ###.
Optional Dependencies
src/praisonai/pyproject.toml
Added ag2>=0.11.0 as optional dependency group with praisonai-tools under both [project.optional-dependencies] and [tool.poetry.extras].
Environment Configuration
src/praisonai/.env.example
Added explanatory comments and environment variable guidance for AG2, AWS Bedrock (AWS_DEFAULT_REGION, credentials), and Chainlit configuration with trailing newline normalization.
AG2 Example Workflows
examples/ag2/ag2_basic.yaml, examples/ag2/ag2_bedrock.yaml, examples/ag2/ag2_multi_agent.yaml
Added three AG2 configuration examples: basic single-agent research workflow, AWS Bedrock-integrated cloud architecture task, and multi-agent researcher/writer collaboration pattern with detailed role/task definitions.
Unit Tests
src/praisonai/tests/unit/test_ag2_adapter.py
Added 542 lines of unit test coverage validating AG2 availability detection, framework validation, LLMConfig construction for OpenAI and Bedrock, AssistantAgent/GroupChat creation, tool registration, system message composition, and output/error formatting.
Integration Tests
src/praisonai/tests/integration/ag2/test_ag2_integration.py
Added 477 lines of integration test coverage with mocked AG2 objects, testing single/multi-agent flows, GroupChat initialization, backward compatibility with existing "autogen" framework, and AgentOps integration.
Example Tool Script
src/praisonai/tests/source/ag2_function_tools.py
Added example script demonstrating AG2 tool registration with PraisonAI, including calculator function with arithmetic operations, LLMConfig usage context manager, and UserProxyAgent termination handling.
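The calculator tool in that example script is presumably a plain function that AG2 wraps via its registration decorators; a minimal sketch (the signature and operation names are guesses, not the PR's exact code):

```python
def calculator(a: float, b: float, operation: str) -> float:
    """Perform a basic arithmetic operation; illustrative stand-in for the
    function registered as an AG2 tool in ag2_function_tools.py."""
    if operation == "add":
        return a + b
    if operation == "subtract":
        return a - b
    if operation == "multiply":
        return a * b
    if operation == "divide":
        if b == 0:
            raise ZeroDivisionError("cannot divide by zero")
        return a / b
    raise ValueError(f"unknown operation: {operation}")

# calculator(6, 7, "multiply") → 42
```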

Sequence Diagram(s)

sequenceDiagram
    participant User as User/Config
    participant Gen as AgentsGenerator<br/>._run_ag2()
    participant Config as LLMConfig
    participant Agent as AssistantAgent<br/>(per role)
    participant Tools as Tool<br/>Registry
    participant Chat as GroupChat
    participant Proxy as UserProxyAgent
    participant Manager as GroupChatManager
    participant Output as Output<br/>Formatter

    User->>Gen: Call with YAML config + topic
    Gen->>Config: Construct from env/config_list
    Config-->>Gen: LLMConfig ready
    Gen->>Agent: Create AssistantAgent per role
    Agent-->>Gen: Agents instantiated
    Gen->>Tools: Register tools via decorators
    Tools-->>Gen: Tools registered per role
    Gen->>Chat: Initialize GroupChat with agents
    Chat-->>Gen: GroupChat configured
    Gen->>Proxy: Create UserProxyAgent
    Proxy-->>Gen: UserProxy ready
    Proxy->>Manager: Create GroupChatManager(chat)
    Manager-->>Proxy: Manager ready
    Proxy->>Manager: initiate_chat(initial_message)
    Manager->>Agent: Multi-turn agent interaction
    Agent->>Tools: Execute registered tools
    Tools-->>Agent: Tool results
    Agent-->>Manager: Message/TERMINATE
    Manager-->>Proxy: Chat history with TERMINATE
    Proxy-->>Gen: Return chat_result
    Gen->>Output: Format with ### AG2 Output ###
    Output-->>Gen: Formatted result string
    Gen-->>User: Return output or error
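The final two steps of the diagram (summary extraction and output formatting) come down to string handling. A rough sketch, with the header and TERMINATE cleanup taken from the PR description and the helper name and details guessed:

```python
def format_ag2_output(chat_result):
    """Extract the summary from an AG2 ChatResult-like object and strip the
    TERMINATE sentinel that agents emit to end a GroupChat conversation."""
    summary = getattr(chat_result, "summary", None) or ""
    cleaned = summary.replace("TERMINATE", "").strip()
    return f"### AG2 Output ###\n{cleaned}"

class FakeChatResult:
    # Stand-in for autogen.ChatResult, so the sketch runs without AG2 installed.
    summary = "The report is complete. TERMINATE"
```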

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • PR #1156: Adds identical AG2 framework support—AG2 availability checks, framework routing, _run_ag2() implementation, pyproject.toml dependencies, example configurations, and unit/integration tests.

Suggested labels

Review effort 4/5

Poem

🐰 A framework called AG2 takes flight,
With agents conversing left and right,
Tools registered, chats initiated bright,
From AWS clouds to tests polished white,
PraisonAI hops forward—what a delight!

🚥 Pre-merge checks | ✅ 2 | ❌ 3

❌ Failed checks (1 warning, 2 inconclusive)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 68.52%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
  • Title check (❓ Inconclusive): the title 'fix: resolve issue #1338' is vague and does not convey the actual change, adding AG2 framework support as an optional backend. Resolution: use a more descriptive title, e.g. 'feat: add AG2 framework support as optional backend' or 'feat: integrate AG2 (AutoGen) as alternative orchestration framework'.
  • Linked Issues check (❓ Inconclusive): issue #1338 is a minimal test issue for PR triage verification with no actual coding requirements, while the PR implements comprehensive AG2 framework support with multiple test suites and examples. Resolution: clarify whether this PR truly addresses issue #1338 or should be linked to feature requirements instead.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Out of Scope Changes check (✅ Passed): all changes are cohesively related to AG2 framework integration: framework detection, LLM configuration handling, agent orchestration, environment templates, example configurations, and comprehensive unit and integration test coverage.



@qodo-code-review

Review Summary by Qodo

Add AG2 framework backend integration with GroupChat orchestration and Bedrock support

✨ Enhancement


Walkthroughs

Description
• Add AG2 framework backend integration as new orchestration option
• Implement _run_ag2() method with LLMConfig, GroupChat, and Bedrock support
• Add AG2 availability detection via importlib.metadata and LLMConfig import
• Include 25 unit and integration tests with comprehensive mocking
• Provide three YAML examples (basic, multi-agent, Bedrock) and function tools demo
Diagram
flowchart LR
  A["AG2 Detection<br/>importlib.metadata"] --> B["AG2_AVAILABLE Flag"]
  B --> C["Framework Validation<br/>in __init__"]
  C --> D["_run_ag2 Method"]
  D --> E["LLMConfig Construction<br/>OpenAI/Bedrock"]
  E --> F["AssistantAgent Creation<br/>per Role"]
  F --> G["GroupChat Orchestration<br/>with max_round=12"]
  G --> H["Output Extraction<br/>& TERMINATE Cleanup"]
  H --> I["Result with AG2 Header"]

File Changes

1. src/praisonai/praisonai/agents_generator.py ✨ Enhancement +164/-2

Add AG2 framework orchestration with LLMConfig and GroupChat

src/praisonai/praisonai/agents_generator.py


2. src/praisonai/praisonai/auto.py ✨ Enhancement +15/-0

Add AG2 availability detection and framework validation

src/praisonai/praisonai/auto.py


3. src/praisonai/tests/unit/test_ag2_adapter.py 🧪 Tests +542/-0

Add 16 unit tests for AG2 adapter and LLMConfig construction

src/praisonai/tests/unit/test_ag2_adapter.py


4. src/praisonai/tests/integration/ag2/test_ag2_integration.py 🧪 Tests +477/-0

Add 9 mock integration tests for single and multi-agent flows

src/praisonai/tests/integration/ag2/test_ag2_integration.py


5. src/praisonai/tests/source/ag2_function_tools.py 📝 Documentation +98/-0

Add AG2 function tools example with calculator demonstration

src/praisonai/tests/source/ag2_function_tools.py


6. examples/ag2/ag2_basic.yaml 📝 Documentation +30/-0

Add basic AG2 single-agent YAML example for research task

examples/ag2/ag2_basic.yaml


7. examples/ag2/ag2_multi_agent.yaml 📝 Documentation +54/-0

Add multi-agent GroupChat YAML example with researcher and writer

examples/ag2/ag2_multi_agent.yaml


8. examples/ag2/ag2_bedrock.yaml 📝 Documentation +42/-0

Add AG2 Bedrock integration example with AWS cloud architect

examples/ag2/ag2_bedrock.yaml


9. src/praisonai/.env.example 📝 Documentation +13/-3

Add AG2 and AWS Bedrock environment variable templates

src/praisonai/.env.example


10. src/praisonai/pyproject.toml ⚙️ Configuration changes +3/-0

Add ag2 optional dependency extra and package configuration

src/praisonai/pyproject.toml


11. src/praisonai/tests/integration/ag2/__init__.py Additional files +0/-0

...

src/praisonai/tests/integration/ag2/__init__.py




qodo-code-review bot commented Apr 9, 2026

Code Review by Qodo

🐞 Bugs (6)   📘 Rule violations (0)   📎 Requirement gaps (0)   🎨 UX Issues (0)
≡ Correctness (3) · ☼ Reliability (2) · ⚙ Maintainability (1)



Action required

1. CLI rejects ag2 option 🐞
Description
Examples instruct users to run praisonai --framework ag2 ..., but the CLI parser only allows
crewai|autogen|praisonai, so AG2 cannot be used from the CLI as documented.
Code

examples/ag2/ag2_bedrock.yaml[R10-12]

+# Run:
+#   praisonai --framework ag2 examples/ag2/ag2_bedrock.yaml
+#
Evidence
The new example explicitly documents --framework ag2, but the CLI argument parser does not include
ag2 in its allowed values, so the command will be rejected before reaching the new AgentsGenerator
dispatch that supports AG2.

examples/ag2/ag2_bedrock.yaml[10-12]
src/praisonai/praisonai/cli.py[512-514]
src/praisonai/praisonai/agents_generator.py[328-347]
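The fix amounts to adding ag2 to the parser's allowed values. A hypothetical sketch, assuming the flag uses argparse choices as the evidence suggests (the real definition lives in cli.py):

```python
import argparse

parser = argparse.ArgumentParser(prog="praisonai")
parser.add_argument(
    "--framework",
    # 'ag2' added alongside the existing allowed values so the CLI
    # no longer rejects the command documented in examples/ag2/.
    choices=["crewai", "autogen", "praisonai", "ag2"],
    default="praisonai",
)
args = parser.parse_args(["--framework", "ag2"])
# args.framework → "ag2"
```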

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The CLI rejects `--framework ag2` because `ag2` is missing from the argparse `choices` list, even though the PR adds AG2 dispatch and examples document using `--framework ag2`.

### Issue Context
Users following `examples/ag2/*.yaml` will hit an argparse validation error before PraisonAI can run the AG2 adapter.

### Fix Focus Areas
- src/praisonai/praisonai/cli.py[512-514]
- examples/ag2/ag2_bedrock.yaml[10-12]
- src/praisonai/praisonai/agents_generator.py[328-347]



2. Bedrock region ignored 🐞
Description
_run_ag2 drops the YAML aws_region setting for Bedrock, so the ag2_bedrock.yaml example’s
explicit region is never applied.
Code

src/praisonai/praisonai/agents_generator.py[R474-477]

+        # Build LLMConfig — pass a config dict; Bedrock needs no api_key
+        if api_type == "bedrock":
+            llm_config_entry = {"api_type": "bedrock", "model": model_name}
+        else:
Evidence
The Bedrock example sets llm.aws_region, but the bedrock branch only builds
{"api_type":"bedrock","model":...} and never resolves/passes aws_region, so the YAML config is
ignored.

examples/ag2/ag2_bedrock.yaml[25-29]
src/praisonai/praisonai/agents_generator.py[465-477]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The AG2 adapter ignores `aws_region` from YAML when configuring Bedrock, so users cannot control region via config files (contradicting the provided Bedrock example).

### Issue Context
`examples/ag2/ag2_bedrock.yaml` specifies `llm.aws_region: us-east-1`, but `_run_ag2` does not read or include it in the Bedrock `llm_config_entry`.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[450-477]
- examples/ag2/ag2_bedrock.yaml[25-29]
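A self-contained sketch of a Bedrock config entry that honours the YAML region (build_bedrock_entry is a hypothetical helper; the key names follow the PR's own snippets):

```python
import os

def build_bedrock_entry(model_name, yaml_llm):
    """Build an AG2-style llm_config entry for Bedrock, preferring the
    YAML aws_region over the AWS_DEFAULT_REGION environment variable."""
    entry = {"api_type": "bedrock", "model": model_name}
    region = yaml_llm.get("aws_region") or os.environ.get("AWS_DEFAULT_REGION")
    if region:
        entry["aws_region"] = region
    return entry

entry = build_bedrock_entry("anthropic.claude-3-5-sonnet-20241022-v2:0",
                            {"aws_region": "us-east-1"})
# entry["aws_region"] → "us-east-1"
```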



3. base_url override wrong order 🐞
Description
_run_ag2 documents that YAML llm overrides config_list, but base_url is resolved with
config_list taking precedence, so YAML llm.base_url is silently ignored.
Code

src/praisonai/praisonai/agents_generator.py[R450-472]

+        # Allow YAML top-level llm block to override config_list values
+        yaml_llm = config.get("llm", {}) or {}
+        # Also check first role's llm block as a fallback
+        first_role_llm = {}
+        for role_details in config.get("roles", {}).values():
+            first_role_llm = role_details.get("llm", {}) or {}
+            break
+
+        # Priority: YAML top-level llm > first role llm > config_list > env vars
+        def _resolve(key, env_var=None, default=None):
+            return (yaml_llm.get(key) or first_role_llm.get(key)
+                    or model_config.get(key)
+                    or (os.environ.get(env_var) if env_var else None)
+                    or default)
+
+        api_type = _resolve("api_type", default="openai").lower()
+        model_name = _resolve("model", default="gpt-4o-mini")
+        api_key = _resolve("api_key", env_var="OPENAI_API_KEY")
+        # Fix #3: also check OPENAI_API_BASE for consistency with rest of codebase
+        base_url = (model_config.get("base_url")
+                    or yaml_llm.get("base_url")
+                    or os.environ.get("OPENAI_BASE_URL")
+                    or os.environ.get("OPENAI_API_BASE"))
Evidence
The comment states YAML has highest priority, but the base_url assignment checks
model_config.get('base_url') before yaml_llm.get('base_url'), contradicting the stated
precedence and preventing YAML overrides from working when config_list includes base_url.

src/praisonai/praisonai/agents_generator.py[450-472]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`base_url` resolution contradicts the adapter’s documented precedence. YAML `llm.base_url` should override `config_list.base_url` but currently does not.

### Issue Context
The adapter uses `_resolve()` (YAML-first) for other keys, but `base_url` uses a different ordering.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[458-472]




Remediation recommended

4. Autogen availability misdetected 🐞
Description
Installing AG2 can make import autogen succeed (AG2 uses the autogen namespace), setting
AUTOGEN_AVAILABLE=True even when pyautogen isn’t installed, which can mislead framework
validation and future dispatch logic.
Code

src/praisonai/praisonai/agents_generator.py[R44-49]

+AG2_AVAILABLE = False
+try:
+    import importlib.metadata as _importlib_metadata
+    _importlib_metadata.distribution('ag2')
+    from autogen import LLMConfig as _AG2LLMConfig  # noqa: F401 — AG2-exclusive class
+    AG2_AVAILABLE = True
Evidence
The code sets AUTOGEN_AVAILABLE based on import autogen, and the new AG2 doc explicitly states
AG2 installs under the autogen namespace; meanwhile pyproject defines autogen (pyautogen) and
ag2 as distinct extras, so the boolean no longer uniquely indicates pyautogen availability.

src/praisonai/praisonai/agents_generator.py[38-52]
src/praisonai/praisonai/agents_generator.py[428-434]
src/praisonai/pyproject.toml[92-96]
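One way to keep the two availability flags independent is to check both distributions by package metadata rather than importing the shared autogen namespace. A hypothetical sketch, not the PR's code:

```python
import importlib.metadata

def detect_framework_installs():
    """Report which AutoGen-family distributions are installed, using
    package metadata so the shared 'autogen' namespace cannot cause
    one package to masquerade as the other."""
    installed = {}
    for dist_name in ("ag2", "pyautogen"):
        try:
            importlib.metadata.distribution(dist_name)
            installed[dist_name] = True
        except importlib.metadata.PackageNotFoundError:
            installed[dist_name] = False
    return installed

flags = detect_framework_installs()
```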

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`AUTOGEN_AVAILABLE` is determined via `import autogen`, but AG2 also provides the `autogen` namespace, so the flag can be true even when the AutoGen (pyautogen) extra is not installed.

### Issue Context
This can bypass intended framework-install checks and makes the availability flags misleading.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[38-52]
- src/praisonai/praisonai/auto.py[29-43]
- src/praisonai/pyproject.toml[92-96]



5. LLMConfig call style inconsistent 🐞
Description
_run_ag2 calls LLMConfig(llm_config_entry) with a positional dict, but the repo’s own AG2 tool
example constructs LLMConfig with keyword args and uses it as a context manager, creating a high
risk of TypeError or misconfiguration depending on the installed AG2 API.
Code

src/praisonai/praisonai/agents_generator.py[R474-484]

+        # Build LLMConfig — pass a config dict; Bedrock needs no api_key
+        if api_type == "bedrock":
+            llm_config_entry = {"api_type": "bedrock", "model": model_name}
+        else:
+            llm_config_entry = {"model": model_name}
+            if api_key:
+                llm_config_entry["api_key"] = api_key
+            if base_url and base_url not in ("https://api.openai.com/v1", "https://api.openai.com/v1/"):
+                llm_config_entry["base_url"] = base_url
+        llm_config = LLMConfig(llm_config_entry)
+
Evidence
The in-repo example demonstrates LLMConfig(api_type=..., model=..., api_key=...) and `with
llm_config: usage, while _run_ag2` constructs it differently; this inconsistency is a concrete
mismatch between documented usage and adapter implementation.

src/praisonai/praisonai/agents_generator.py[474-484]
src/praisonai/tests/source/ag2_function_tools.py[49-59]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The AG2 adapter constructs `LLMConfig` using a positional dict, while the repository’s own AG2 example uses keyword arguments and a context manager. This mismatch can cause runtime errors or configuration not being applied as intended.

### Issue Context
Keeping the adapter aligned with the repo’s documented/example usage reduces breakage across AG2 versions and improves maintainability.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[474-503]
- src/praisonai/tests/source/ag2_function_tools.py[49-67]




Advisory comments

6. AG2 tests add wrong sys.path 🐞
Description
New AG2 tests insert .../../../../ into sys.path, unlike other integration tests that insert
.../../../../src, making imports dependent on the current working directory and increasing CI
brittleness.
Code

src/praisonai/tests/integration/ag2/test_ag2_integration.py[16]

+sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../../"))
Evidence
The AG2 integration test prepends a different path than the existing integration tests, which can
change module resolution order and fail when pytest is executed from a different directory layout.

src/praisonai/tests/integration/ag2/test_ag2_integration.py[11-17]
src/praisonai/tests/integration/crewai/test_crewai_basic.py[13-15]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
AG2 tests modify `sys.path` differently from other integration tests, making test imports sensitive to the working directory and sys.path ordering.

### Issue Context
Other integration tests consistently add `.../../../../src`.

### Fix Focus Areas
- src/praisonai/tests/integration/ag2/test_ag2_integration.py[14-17]
- src/praisonai/tests/unit/test_ag2_adapter.py[19-21]
- src/praisonai/tests/integration/crewai/test_crewai_basic.py[13-15]





greptile-apps bot commented Apr 9, 2026

Greptile Summary

This PR adds first-class AG2 framework support (the community fork of AutoGen, PyPI package ag2) to PraisonAI, fixing issue #1338. It introduces AG2 detection via importlib.metadata, a new _run_ag2 execution path using AG2's LLMConfig + GroupChat pattern, an optional ag2 dependency in pyproject.toml, example YAML configurations, and a comprehensive suite of unit and integration tests.

Key changes:

  • agents_generator.py / auto.py: AG2 availability flag (AG2_AVAILABLE) detected via importlib.metadata.distribution('ag2') + LLMConfig import check; new _run_ag2 method using GroupChat multi-agent pattern
  • pyproject.toml: New ag2 = ["ag2>=0.11.0", "praisonai-tools>=0.0.15"] optional extra
  • examples/ag2/: Three example YAML files covering basic, multi-agent, and Bedrock configurations
  • tests/unit/test_ag2_adapter.py + tests/integration/ag2/test_ag2_integration.py: Full mocked test coverage for the new code path

Issues found:

  • LLMConfig is called with a positional dict argument (LLMConfig(llm_config_entry)) in _run_ag2, but AG2's LLMConfig constructor takes keyword arguments — as shown by the bundled ag2_function_tools.py example. This would raise a TypeError at runtime.
  • aws_region specified in the YAML llm block (e.g. ag2_bedrock.yaml) is silently dropped and never forwarded to LLMConfig.
  • The bedrock/ prefix in the example YAML model name is a LiteLLM convention; AG2's native Bedrock client expects just the bare model ID.

Confidence Score: 2/5

Not safe to merge yet — the AG2 execution path will raise a TypeError at runtime due to incorrect LLMConfig instantiation, and the Bedrock example YAML will fail with a wrong model-name format.

Two P1 bugs block the primary user path: (1) LLMConfig(dict) instead of LLMConfig(**dict) will cause a TypeError the first time any user runs --framework ag2; (2) the bundled Bedrock example uses a LiteLLM-style bedrock/ prefix that AG2's native Bedrock client does not understand. Additionally, aws_region from YAML is silently dropped. These are straightforward fixes, but until they are applied the feature is non-functional. Test coverage is otherwise solid and the architecture is well-designed.

src/praisonai/praisonai/agents_generator.py (line 483 — LLMConfig call) and examples/ag2/ag2_bedrock.yaml (line 26 — model name prefix) need attention before merging.

Vulnerabilities

No security concerns identified. API keys are sourced from environment variables and never hardcoded. The code_execution_config=False on UserProxyAgent disables arbitrary code execution in the AG2 path, which is the safe default. AWS Bedrock credentials follow the boto3 credential chain (env vars / ~/.aws/credentials / IAM role), which is standard practice.

Important Files Changed

Filename Overview
src/praisonai/praisonai/agents_generator.py Core change: adds AG2 detection and _run_ag2 execution path; contains two P1 bugs — LLMConfig called with a positional dict instead of kwargs, and aws_region silently dropped from the Bedrock config.
examples/ag2/ag2_bedrock.yaml Bedrock example uses a LiteLLM-style bedrock/ prefix in the model name which is incompatible with AG2's native Bedrock client.
src/praisonai/praisonai/auto.py Adds AG2 availability detection and framework validation in AutoGenerator.__init__; logic mirrors agents_generator.py correctly.
src/praisonai/pyproject.toml Adds ag2 = ["ag2>=0.11.0", "praisonai-tools>=0.0.15"] optional dependency in both [project.optional-dependencies] and [tool.poetry.extras]; consistent and correct.
src/praisonai/tests/unit/test_ag2_adapter.py Thorough unit tests for availability detection, framework validation, LLMConfig construction, agent creation, and output extraction; all use mocked AG2 so the LLMConfig(**dict) bug is not caught.
src/praisonai/tests/integration/ag2/test_ag2_integration.py Integration tests cover single-agent, multi-agent GroupChat, backward compatibility, and dispatch routing — well-structured and comprehensive.
src/praisonai/tests/source/ag2_function_tools.py Standalone AG2 tool-registration example; correctly uses LLMConfig with keyword args and the context-manager pattern.

Sequence Diagram

sequenceDiagram
    participant User as User/CLI
    participant AG as AgentsGenerator
    participant Det as AG2 Detection
    participant RAG as _run_ag2
    participant LLM as LLMConfig
    participant UP as UserProxyAgent
    participant AA as AssistantAgent(s)
    participant GC as GroupChat + Manager

    User->>AG: generate_crew_and_kickoff(framework=ag2)
    AG->>Det: importlib.metadata.distribution('ag2') + LLMConfig import
    Det-->>AG: AG2_AVAILABLE = True
    AG->>RAG: _run_ag2(config, topic, tools_dict)
    RAG->>LLM: LLMConfig(llm_config_entry) — should be LLMConfig(**dict)
    LLM-->>RAG: llm_config instance
    RAG->>UP: UserProxyAgent(name=User, human_input_mode=NEVER)
    loop For each role in YAML
        RAG->>AA: AssistantAgent(name, system_message, llm_config)
        AA-->>RAG: assistant
    end
    RAG->>GC: GroupChat(agents=[user_proxy]+assistants, max_round=12)
    RAG->>GC: GroupChatManager(groupchat, llm_config)
    RAG->>UP: initiate_chat(manager, message=task_description)
    UP->>GC: orchestrate multi-agent conversation
    GC-->>RAG: chat_result
    RAG-->>AG: AG2 Output
    AG-->>User: result string
Loading

Reviews (1): Last reviewed commit: "feat(ag2): add AG2 framework backend int..."

llm_config_entry["api_key"] = api_key
if base_url and base_url not in ("https://api.openai.com/v1", "https://api.openai.com/v1/"):
llm_config_entry["base_url"] = base_url
llm_config = LLMConfig(llm_config_entry)

P1 LLMConfig called with positional dict — likely TypeError at runtime

LLMConfig(llm_config_entry) passes a plain dict as the first positional argument. AG2's LLMConfig is a Pydantic model whose constructor accepts keyword arguments (e.g. model=, api_type=), not a positional dict. The bundled example tests/source/ag2_function_tools.py (lines 51–55) confirms the expected call style uses keyword arguments, not a positional dict.

Because the unit tests mock LLMConfig entirely, they cannot catch this mismatch. The fix is to unpack the dict:

Suggested change
llm_config = LLMConfig(llm_config_entry)
llm_config = LLMConfig(**llm_config_entry)

Comment on lines +475 to +482
if api_type == "bedrock":
    llm_config_entry = {"api_type": "bedrock", "model": model_name}
else:
    llm_config_entry = {"model": model_name}
    if api_key:
        llm_config_entry["api_key"] = api_key
    if base_url and base_url not in ("https://api.openai.com/v1", "https://api.openai.com/v1/"):
        llm_config_entry["base_url"] = base_url

P1 aws_region from YAML is silently ignored for Bedrock

The ag2_bedrock.yaml example (and any user YAML) can specify aws_region inside the role-level llm block, but _run_ag2 never extracts it from yaml_llm or first_role_llm. As a result, the region is silently dropped and AG2 falls back to whatever boto3 picks up from the environment or ~/.aws/config.

To honour the YAML setting:

Suggested change
if api_type == "bedrock":
    llm_config_entry = {"api_type": "bedrock", "model": model_name}
else:
    llm_config_entry = {"model": model_name}
    if api_key:
        llm_config_entry["api_key"] = api_key
    if base_url and base_url not in ("https://api.openai.com/v1", "https://api.openai.com/v1/"):
        llm_config_entry["base_url"] = base_url
if api_type == "bedrock":
    aws_region = _resolve("aws_region", env_var="AWS_DEFAULT_REGION")
    llm_config_entry = {"api_type": "bedrock", "model": model_name}
    if aws_region:
        llm_config_entry["aws_region"] = aws_region

You have deep expertise in Amazon Bedrock, SageMaker, ECS, and Lambda,
and you help organisations deploy AI agents at scale securely and cost-effectively.
llm:
  model: "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"

P1 LiteLLM bedrock/ prefix is incompatible with AG2's native Bedrock client

bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0 uses the LiteLLM provider-prefix convention. When _run_ag2 builds the LLMConfig with api_type="bedrock", it passes this string verbatim as the model name to AG2's own Bedrock integration (which calls the boto3 Bedrock API directly). AG2's client expects the bare model ID — the bedrock/ prefix will likely cause an UnknownModelException or similar error from the Bedrock API.

Suggested change
model: "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"
model: "anthropic.claude-3-5-sonnet-20241022-v2:0"

Comment on lines +469 to +472
base_url = (model_config.get("base_url")
or yaml_llm.get("base_url")
or os.environ.get("OPENAI_BASE_URL")
or os.environ.get("OPENAI_API_BASE"))

P2 base_url resolution skips first_role_llm

All other config fields (model, api_key, api_type) are resolved via the _resolve helper which considers yaml_llm → first_role_llm → config_list → env var. The base_url resolution bypasses first_role_llm, meaning a base_url set inside a role-level llm: block is silently ignored for URL routing. For consistency:

Suggested change
-base_url = (model_config.get("base_url")
-            or yaml_llm.get("base_url")
-            or os.environ.get("OPENAI_BASE_URL")
-            or os.environ.get("OPENAI_API_BASE"))
+base_url = (yaml_llm.get("base_url")
+            or first_role_llm.get("base_url")
+            or model_config.get("base_url")
+            or os.environ.get("OPENAI_BASE_URL")
+            or os.environ.get("OPENAI_API_BASE"))

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (6)
src/praisonai/praisonai/auto.py (1)

35-43: Consider narrowing the exception catch or adding a debug log.

The broad except Exception is intentional so that any detection failure is handled gracefully, but silently swallowing all exceptions can obscure unexpected issues during development. (Exception does not catch KeyboardInterrupt or other BaseException-only subclasses, so those still propagate.)

♻️ Optional: Add debug logging for troubleshooting
 AG2_AVAILABLE = False
 try:
     import importlib.metadata as _importlib_metadata
     _importlib_metadata.distribution('ag2')
     from autogen import LLMConfig as _AG2LLMConfig  # noqa: F401 — AG2-exclusive class
     AG2_AVAILABLE = True
     del _AG2LLMConfig, _importlib_metadata
-except Exception:
-    pass
+except Exception as _e:
+    logging.debug("AG2 not available: %s", _e)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai/praisonai/auto.py` around lines 35 - 43, The try/except around
AG2 detection is too broad; replace the bare "except Exception" with specific
exceptions (e.g., importlib.metadata.PackageNotFoundError and ImportError) when
calling _importlib_metadata.distribution and importing autogen, and add a
debug/process logger call to record the caught exception details for
troubleshooting; update references in this block (AG2_AVAILABLE,
_importlib_metadata, _AG2LLMConfig) so the behavior and cleanup (del statements)
remain intact.
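Put together, the narrowed detection could look like the following — a sketch assuming the same probe order as the PR (distribution check, then the AG2-exclusive LLMConfig attribute):

```python
import importlib
import importlib.metadata
import logging

logger = logging.getLogger(__name__)

def is_ag2_available() -> bool:
    """Probe for AG2: the 'ag2' distribution must be installed and the
    AG2-exclusive LLMConfig attribute must exist on the autogen module."""
    try:
        importlib.metadata.distribution("ag2")
        autogen = importlib.import_module("autogen")
        getattr(autogen, "LLMConfig")
        return True
    except (importlib.metadata.PackageNotFoundError, ImportError, AttributeError) as exc:
        # Debug-level, so normal runs stay quiet but troubleshooting is possible.
        logger.debug("AG2 not available: %s", exc)
        return False
```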
src/praisonai/praisonai/agents_generator.py (3)

44-52: Same pattern as in auto.py: consider extracting it to a shared utility.

The AG2 detection logic is duplicated between auto.py and agents_generator.py. While not critical, extracting to a shared module would improve maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai/praisonai/agents_generator.py` around lines 44 - 52, The AG2
detection logic (setting AG2_AVAILABLE via importlib.metadata.distribution and
conditional import of autogen.LLMConfig) is duplicated; extract it into a small
shared utility function (e.g., is_ag2_available() or
detect_optional_dependency('ag2')) that encapsulates the try/except and returns
a boolean, replace the inline block in agents_generator.py (the AG2_AVAILABLE
variable and importlib.metadata usage) with a call to that utility and import
the utility from the common module, and do the same for the duplicate in auto.py
so both modules use the single shared detector.
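One possible shape for that shared helper — module layout and name are illustrative, not prescribed by the PR:

```python
import importlib
import importlib.metadata

def detect_optional_dependency(dist_name, module_name, attr=None):
    """Return True when dist_name is installed, module_name imports,
    and (if given) attr exists on the imported module."""
    try:
        importlib.metadata.distribution(dist_name)
        module = importlib.import_module(module_name)
    except (importlib.metadata.PackageNotFoundError, ImportError):
        return False
    return attr is None or hasattr(module, attr)

# Both auto.py and agents_generator.py could then share one line:
AG2_AVAILABLE = detect_optional_dependency("ag2", "autogen", attr="LLMConfig")
```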

545-548: Consider logging the exception before returning the error string.

The broad except Exception is acceptable for user-facing resilience, but logging the full traceback would aid debugging.

♻️ Proposed improvement
         try:
             chat_result = user_proxy.initiate_chat(manager, message=initial_message)
         except Exception as e:
+            self.logger.exception("AG2 chat failed")
-            return f"### AG2 Error ###\n{str(e)}"
+            return f"### AG2 Error ###\n{e}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai/praisonai/agents_generator.py` around lines 545 - 548, The
except block around user_proxy.initiate_chat should log the full exception and
traceback before returning the error string; update the handler in
agents_generator.py to call logger.exception(...) (or logging.exception(...)
after ensuring an appropriate logger via import logging and
logging.getLogger(__name__)) inside the except Exception as e block, then return
the existing f"### AG2 Error ###\n{str(e)}" so the user-facing message is
unchanged but full diagnostics are recorded for debugging.

506-507: Minor cleanup: rename unused loop variables and use list spread.

Static analysis flags unused loop variables and suggests list spread syntax for clarity. These are stylistic improvements.

♻️ Proposed cleanup
-        for role, details, assistant in ag2_agent_entries:
+        for _role, details, assistant in ag2_agent_entries:
             for tool_name in details.get("tools", []):
-        for role, details, _ in ag2_agent_entries:
-            for task_name, task_details in details.get("tasks", {}).items():
+        for _role, details, _ in ag2_agent_entries:
+            for _task_name, task_details in details.get("tasks", {}).items():
         groupchat = GroupChat(
-            agents=[user_proxy] + all_assistants,
+            agents=[user_proxy, *all_assistants],
             messages=[],
             max_round=12,
         )

Also applies to: 531-532, 538-539

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai/praisonai/agents_generator.py` around lines 506 - 507, The
loops over ag2_agent_entries (e.g., "for role, details, assistant in
ag2_agent_entries") use loop variables that aren't referenced; rename unused
variables to start with an underscore (for example "_assistant" or "_role") to
satisfy static analysis, and when iterating tools use list spread for clarity
(e.g., iterate over [*details.get("tools", [])] or assign tools =
[*details.get("tools", [])] then loop) in the blocks around the loops at the
top-level generator functions in agents_generator.py (the occurrences at the for
role, details, assistant in ag2_agent_entries and the similar loops referenced
at the other occurrences). Ensure consistency across the three similar sites
(around lines with the second and third occurrences).
src/praisonai/tests/source/ag2_function_tools.py (1)

18-24: Remove unused imports.

GroupChat and GroupChatManager are imported but not used in this example script. Removing them would clarify the minimal imports needed for basic tool registration.

♻️ Proposed fix
 from autogen import (
     AssistantAgent,
     UserProxyAgent,
-    GroupChat,
-    GroupChatManager,
     LLMConfig,
 )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai/tests/source/ag2_function_tools.py` around lines 18 - 24, The
import statement currently brings in GroupChat and GroupChatManager but those
symbols are unused; update the import list in the top-level autogen import
(which currently includes AssistantAgent, UserProxyAgent, GroupChat,
GroupChatManager, LLMConfig) by removing GroupChat and GroupChatManager so only
the actually used symbols (e.g., AssistantAgent, UserProxyAgent, LLMConfig)
remain.
src/praisonai/tests/unit/test_ag2_adapter.py (1)

465-487: Minor inconsistency: mock_llm_config missing context manager setup.

In _run_with_messages, mock_llm_config doesn't define __enter__/__exit__ (lines 469-470), while other tests explicitly set them up (e.g., lines 170-172, 206-208). If _run_ag2 uses LLMConfig as a context manager, this could cause test failures.

♻️ Proposed fix for consistency
     def _run_with_messages(self, messages):
         gen = self._make_gen()
         config = _make_config()

         mock_llm_config = MagicMock()
+        mock_llm_config.__enter__ = MagicMock(return_value=mock_llm_config)
+        mock_llm_config.__exit__ = MagicMock(return_value=False)
         mock_assistant = MagicMock()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai/tests/unit/test_ag2_adapter.py` around lines 465 - 487, The
mock LLMConfig used in _run_with_messages should behave like a context manager;
update the mock_llm_config in that helper so it defines __enter__ returning
mock_llm_config and __exit__ (e.g., mock_llm_config.__enter__.return_value =
mock_llm_config and mock_llm_config.__exit__.return_value = None) before
patching LLMConfig so _run_ag2 sees a context-manager-compatible LLMConfig mock
just like other tests using LLMConfig.
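For reference, the two magic methods can be wired on a MagicMock in a standalone snippet, exactly as the other tests do:

```python
from unittest.mock import MagicMock

mock_llm_config = MagicMock()
# Return the mock itself from __enter__ so code inside "with LLMConfig(...):"
# sees the same object; __exit__ returning False lets exceptions propagate.
mock_llm_config.__enter__ = MagicMock(return_value=mock_llm_config)
mock_llm_config.__exit__ = MagicMock(return_value=False)

with mock_llm_config as cfg:
    entered = cfg is mock_llm_config
```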
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/ag2/ag2_bedrock.yaml`:
- Around line 25-28: The YAML's aws_region is not being extracted and passed
into AG2's LLMConfig for Bedrock; update _run_ag2() to call
_resolve("aws_region", env_var="AWS_DEFAULT_REGION", default="us-east-1") when
api_type == "bedrock" and add that value into the llm_config_entry dict (e.g.,
llm_config_entry = {"api_type":"bedrock","model": model_name,"aws_region":
aws_region}) so the Bedrock integration receives the region from the config
instead of relying on AWS_DEFAULT_REGION.

In `@src/praisonai/praisonai/agents_generator.py`:
- Around line 515-523: The closure make_tool_fn currently captures the loop
variable tool_name by reference causing every tool_fn to end up with the last
tool's name; change make_tool_fn to accept tool_name as a default parameter
(e.g., def make_tool_fn(f, tool_name=tool_name):) and use that local parameter
when setting tool_fn.__name__, then continue registering wrapped via
assistant.register_for_llm and user_proxy.register_for_execution so each wrapped
function retains its correct name; update references around make_tool_fn,
tool_fn, wrapped, func, assistant.register_for_llm and
user_proxy.register_for_execution accordingly.

In `@src/praisonai/tests/integration/ag2/test_ag2_integration.py`:
- Around line 18-31: The test module stubs "instructor" but not "autogen",
causing import-time failures; update the stub loop in test_ag2_integration.py to
include "autogen" (e.g., add it to the tuple used with for _stub in
("instructor",) so sys.modules[_stub] = MagicMock() will create a MagicMock for
autogen as well), ensuring the same import-time stubbing behavior as done for
"instructor" and matching the unit-test fix.

In `@src/praisonai/tests/unit/test_ag2_adapter.py`:
- Around line 22-38: The tests fail because the autogen top-level module isn't
stubbed, so any use of patch("autogen.*", create=True) raises
ModuleNotFoundError; update the test setup in
src/praisonai/tests/unit/test_ag2_adapter.py to pre-populate
sys.modules["autogen"] with a MagicMock (similar to the existing "instructor"
stub) before imports/patches run so that autogen and its attributes can be
created by patch(..., create=True); ensure the stub is inserted conditionally
only if "autogen" not in sys.modules and mirror the pattern used for
"instructor" to avoid masking a real installation.

---

Nitpick comments:
In `@src/praisonai/praisonai/agents_generator.py`:
- Around line 44-52: The AG2 detection logic (setting AG2_AVAILABLE via
importlib.metadata.distribution and conditional import of autogen.LLMConfig) is
duplicated; extract it into a small shared utility function (e.g.,
is_ag2_available() or detect_optional_dependency('ag2')) that encapsulates the
try/except and returns a boolean, replace the inline block in
agents_generator.py (the AG2_AVAILABLE variable and importlib.metadata usage)
with a call to that utility and import the utility from the common module, and
do the same for the duplicate in auto.py so both modules use the single shared
detector.
- Around line 545-548: The except block around user_proxy.initiate_chat should
log the full exception and traceback before returning the error string; update
the handler in agents_generator.py to call logger.exception(...) (or
logging.exception(...) after ensuring an appropriate logger via import logging
and logging.getLogger(__name__)) inside the except Exception as e block, then
return the existing f"### AG2 Error ###\n{str(e)}" so the user-facing message is
unchanged but full diagnostics are recorded for debugging.
- Around line 506-507: The loops over ag2_agent_entries (e.g., "for role,
details, assistant in ag2_agent_entries") use loop variables that aren't
referenced; rename unused variables to start with an underscore (for example
"_assistant" or "_role") to satisfy static analysis, and when iterating tools
use list spread for clarity (e.g., iterate over [*details.get("tools", [])] or
assign tools = [*details.get("tools", [])] then loop) in the blocks around the
loops at the top-level generator functions in agents_generator.py (the
occurrences at the for role, details, assistant in ag2_agent_entries and the
similar loops referenced at the other occurrences). Ensure consistency across
the three similar sites (around lines with the second and third occurrences).

In `@src/praisonai/praisonai/auto.py`:
- Around line 35-43: The try/except around AG2 detection is too broad; replace
the bare "except Exception" with specific exceptions (e.g.,
importlib.metadata.PackageNotFoundError and ImportError) when calling
_importlib_metadata.distribution and importing autogen, and add a debug/process
logger call to record the caught exception details for troubleshooting; update
references in this block (AG2_AVAILABLE, _importlib_metadata, _AG2LLMConfig) so
the behavior and cleanup (del statements) remain intact.

In `@src/praisonai/tests/source/ag2_function_tools.py`:
- Around line 18-24: The import statement currently brings in GroupChat and
GroupChatManager but those symbols are unused; update the import list in the
top-level autogen import (which currently includes AssistantAgent,
UserProxyAgent, GroupChat, GroupChatManager, LLMConfig) by removing GroupChat
and GroupChatManager so only the actually used symbols (e.g., AssistantAgent,
UserProxyAgent, LLMConfig) remain.

In `@src/praisonai/tests/unit/test_ag2_adapter.py`:
- Around line 465-487: The mock LLMConfig used in _run_with_messages should
behave like a context manager; update the mock_llm_config in that helper so it
defines __enter__ returning mock_llm_config and __exit__ (e.g.,
mock_llm_config.__enter__.return_value = mock_llm_config and
mock_llm_config.__exit__.return_value = None) before patching LLMConfig so
_run_ag2 sees a context-manager-compatible LLMConfig mock just like other tests
using LLMConfig.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f97e4ca6-661d-4ecb-af50-223a7c133cef

📥 Commits

Reviewing files that changed from the base of the PR and between 365f750 and a8b1948.

📒 Files selected for processing (11)
  • examples/ag2/ag2_basic.yaml
  • examples/ag2/ag2_bedrock.yaml
  • examples/ag2/ag2_multi_agent.yaml
  • src/praisonai/.env.example
  • src/praisonai/praisonai/agents_generator.py
  • src/praisonai/praisonai/auto.py
  • src/praisonai/pyproject.toml
  • src/praisonai/tests/integration/ag2/__init__.py
  • src/praisonai/tests/integration/ag2/test_ag2_integration.py
  • src/praisonai/tests/source/ag2_function_tools.py
  • src/praisonai/tests/unit/test_ag2_adapter.py

Comment on lines +25 to +28
llm:
  model: "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"
  api_type: "bedrock"
  aws_region: "us-east-1"
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if aws_region is handled in _run_ag2
rg -n "aws_region" src/praisonai/praisonai/agents_generator.py

Repository: MervinPraison/PraisonAI

Length of output: 49


🏁 Script executed:

# Search for _run_ag2 function definition
rg -n "def _run_ag2" src/praisonai/

Repository: MervinPraison/PraisonAI

Length of output: 165


🏁 Script executed:

# Search for how llm config dict is being processed
rg -n "llm\[" src/praisonai/praisonai/agents_generator.py | head -20

Repository: MervinPraison/PraisonAI

Length of output: 49


🏁 Script executed:

# Look for LLMConfig instantiation with bedrock
rg -n "LLMConfig" src/praisonai/praisonai/agents_generator.py | head -20

Repository: MervinPraison/PraisonAI

Length of output: 500


🏁 Script executed:

# Search for bedrock or aws handling in the codebase
rg -n "bedrock" src/praisonai/ -i | head -20

Repository: MervinPraison/PraisonAI

Length of output: 2028


🏁 Script executed:

# Read the relevant section around line 474-483
sed -n '470,495p' src/praisonai/praisonai/agents_generator.py

Repository: MervinPraison/PraisonAI

Length of output: 1239


🏁 Script executed:

# Check what's in llm_config_entry before LLMConfig is called
sed -n '428,500p' src/praisonai/praisonai/agents_generator.py | head -80

Repository: MervinPraison/PraisonAI

Length of output: 3384


🏁 Script executed:

# Check the test to understand expected behavior
cat -n src/praisonai/tests/unit/test_ag2_adapter.py | sed -n '197,240p'

Repository: MervinPraison/PraisonAI

Length of output: 2378


🌐 Web query:

AG2 autogen LLMConfig bedrock aws_region parameter

💡 Result:

In AG2 (AutoGen), the Bedrock region is set via the aws_region key inside the Bedrock entry you pass to LLMConfig (i.e., inside config_list / the Bedrock config dict). [1][2]

from autogen import LLMConfig

llm_config = LLMConfig(
    config_list={
        "api_type": "bedrock",
        "model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
        "aws_region": "us-east-1",
        "aws_access_key": "...",
        "aws_secret_key": "...",
        # optional: "aws_session_token": "...",
        # optional: "aws_profile_name": "default",
    }
)

AG2’s Bedrock examples also show pulling it from env, e.g. os.getenv("AWS_REGION", "eu-north-1"). [2]

Sources: AG2 Amazon Bedrock guide [1], AG2 Bedrock structured output notebook [2].


🏁 Script executed:

# Check if aws_region is mentioned anywhere in the repository in context of LLMConfig
rg -n "aws_region" src/praisonai/ -A 2 -B 2

Repository: MervinPraison/PraisonAI

Length of output: 49


🏁 Script executed:

# Check if there are any other bedrock config examples in the repo
fd -e yaml -o -e yml | xargs grep -l "bedrock" 2>/dev/null

Repository: MervinPraison/PraisonAI

Length of output: 185


🏁 Script executed:

# Look for any bedrock-related documentation or comments in the code
rg -n "Bedrock" src/praisonai/praisonai/agents_generator.py -A 3 -B 3

Repository: MervinPraison/PraisonAI

Length of output: 407


Extract and propagate aws_region to AG2's LLMConfig for Bedrock.

The aws_region field in the YAML config is not extracted or passed to AG2's LLMConfig. AG2's Bedrock integration requires aws_region as a key in the config dict passed to LLMConfig. Without it, the integration will fall back to the AWS_DEFAULT_REGION environment variable, making the YAML configuration incomplete.

Update _run_ag2() to extract aws_region via the _resolve() function and include it in the llm_config_entry dict for Bedrock:

if api_type == "bedrock":
    aws_region = _resolve("aws_region", env_var="AWS_DEFAULT_REGION", default="us-east-1")
    llm_config_entry = {"api_type": "bedrock", "model": model_name, "aws_region": aws_region}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ag2/ag2_bedrock.yaml` around lines 25 - 28, The YAML's aws_region is
not being extracted and passed into AG2's LLMConfig for Bedrock; update
_run_ag2() to call _resolve("aws_region", env_var="AWS_DEFAULT_REGION",
default="us-east-1") when api_type == "bedrock" and add that value into the
llm_config_entry dict (e.g., llm_config_entry = {"api_type":"bedrock","model":
model_name,"aws_region": aws_region}) so the Bedrock integration receives the
region from the config instead of relying on AWS_DEFAULT_REGION.

Comment on lines +515 to +523
def make_tool_fn(f):
    def tool_fn(**kwargs):
        return f(**kwargs) if callable(f) else str(f)
    tool_fn.__name__ = tool_name
    return tool_fn

wrapped = make_tool_fn(func)
assistant.register_for_llm(description=f"Tool: {tool_name}")(wrapped)
user_proxy.register_for_execution()(wrapped)
Contributor


⚠️ Potential issue | 🟠 Major

Closure captures loop variable by reference — all tools will share the last tool_name.

The make_tool_fn closure captures tool_name from the enclosing scope. Since tool_name is reassigned each iteration, all registered tools will have __name__ set to the last tool in the loop. The fix is to pass tool_name as a default argument.

🐛 Proposed fix
-                def make_tool_fn(f):
+                def make_tool_fn(f, name=tool_name):
                     def tool_fn(**kwargs):
                         return f(**kwargs) if callable(f) else str(f)
-                    tool_fn.__name__ = tool_name
+                    tool_fn.__name__ = name
                     return tool_fn
🧰 Tools
🪛 Ruff (0.15.9)

[warning] 518-518: Function definition does not bind loop variable tool_name

(B023)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai/praisonai/agents_generator.py` around lines 515 - 523, The
closure make_tool_fn currently captures the loop variable tool_name by reference
causing every tool_fn to end up with the last tool's name; change make_tool_fn
to accept tool_name as a default parameter (e.g., def make_tool_fn(f,
tool_name=tool_name):) and use that local parameter when setting
tool_fn.__name__, then continue registering wrapped via
assistant.register_for_llm and user_proxy.register_for_execution so each wrapped
function retains its correct name; update references around make_tool_fn,
tool_fn, wrapped, func, assistant.register_for_llm and
user_proxy.register_for_execution accordingly.
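The pitfall is easy to reproduce outside the adapter; this standalone snippet shows the late-binding behaviour and the default-argument fix:

```python
# Closures capture variables, not values: the loop variable is looked up
# when the inner function is called, after the loop has finished.
def build_broken():
    fns = []
    for tool_name in ("search", "summarize"):
        def tool_fn():
            return tool_name  # resolved at call time -> last loop value
        fns.append(tool_fn)
    return [f() for f in fns]

def build_fixed():
    fns = []
    for tool_name in ("search", "summarize"):
        def tool_fn(name=tool_name):  # default binds the current value
            return name
        fns.append(tool_fn)
    return [f() for f in fns]

print(build_broken())  # ['summarize', 'summarize']
print(build_fixed())   # ['search', 'summarize']
```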

Comment on lines +18 to +31
# Stub heavy dependencies that auto.py (develop branch) imports at module level
# so that tests can import praisonai without a full installation.
for _stub in ("instructor",):
    if _stub not in sys.modules:
        sys.modules[_stub] = MagicMock()

import importlib as _importlib
if "openai" not in sys.modules:
    try:
        _importlib.import_module("openai")
    except ImportError:
        _mock_openai = MagicMock()
        _mock_openai.__version__ = "1.0.0"
        sys.modules["openai"] = _mock_openai
Contributor


⚠️ Potential issue | 🔴 Critical

Same missing autogen stub issue as unit tests.

The integration tests have the same problem—instructor is stubbed but autogen is not. Apply the same fix here.

🐛 Proposed fix: add autogen stub
 # Stub heavy dependencies that auto.py (develop branch) imports at module level
 # so that tests can import praisonai without a full installation.
-for _stub in ("instructor",):
+for _stub in ("instructor", "autogen"):
     if _stub not in sys.modules:
         sys.modules[_stub] = MagicMock()
+
+# Ensure autogen sub-attributes exist for patching
+if isinstance(sys.modules.get("autogen"), MagicMock):
+    _autogen_mock = sys.modules["autogen"]
+    _autogen_mock.LLMConfig = MagicMock()
+    _autogen_mock.AssistantAgent = MagicMock()
+    _autogen_mock.UserProxyAgent = MagicMock()
+    _autogen_mock.GroupChat = MagicMock()
+    _autogen_mock.GroupChatManager = MagicMock()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai/tests/integration/ag2/test_ag2_integration.py` around lines 18
- 31, The test module stubs "instructor" but not "autogen", causing import-time
failures; update the stub loop in test_ag2_integration.py to include "autogen"
(e.g., add it to the tuple used with for _stub in ("instructor",) so
sys.modules[_stub] = MagicMock() will create a MagicMock for autogen as well),
ensuring the same import-time stubbing behavior as done for "instructor" and
matching the unit-test fix.

Comment on lines +22 to +38
# Stub heavy dependencies that auto.py (develop branch) imports at module level
# so that tests can import praisonai without a full installation.
for _stub in ("instructor",):
    if _stub not in sys.modules:
        sys.modules[_stub] = MagicMock()

# openai is installed (required by ag2/autogen internals), but auto.py also
# imports it at module level. Ensure it's really loaded, not a mock.
import importlib as _importlib
if "openai" not in sys.modules:
    try:
        _importlib.import_module("openai")
    except ImportError:
        _mock_openai = MagicMock()
        _mock_openai.__version__ = "1.0.0"
        sys.modules["openai"] = _mock_openai

Contributor


⚠️ Potential issue | 🔴 Critical

Missing autogen module stub causes all AG2 tests to fail.

The pipeline failures show ModuleNotFoundError: No module named 'autogen' for all tests that patch autogen.*. The instructor module is stubbed (lines 24-26), but autogen is not. When autogen isn't installed, patch("autogen.LLMConfig", create=True) fails because create=True only creates the attribute, not the parent module.

🐛 Proposed fix: stub autogen module before tests run
 # Stub heavy dependencies that auto.py (develop branch) imports at module level
 # so that tests can import praisonai without a full installation.
-for _stub in ("instructor",):
+for _stub in ("instructor", "autogen"):
     if _stub not in sys.modules:
         sys.modules[_stub] = MagicMock()
+
+# Ensure autogen sub-attributes exist for patching
+if isinstance(sys.modules.get("autogen"), MagicMock):
+    _autogen_mock = sys.modules["autogen"]
+    _autogen_mock.LLMConfig = MagicMock()
+    _autogen_mock.AssistantAgent = MagicMock()
+    _autogen_mock.UserProxyAgent = MagicMock()
+    _autogen_mock.GroupChat = MagicMock()
+    _autogen_mock.GroupChatManager = MagicMock()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai/tests/unit/test_ag2_adapter.py` around lines 22 - 38, The tests
fail because the autogen top-level module isn't stubbed, so any use of
patch("autogen.*", create=True) raises ModuleNotFoundError; update the test
setup in src/praisonai/tests/unit/test_ag2_adapter.py to pre-populate
sys.modules["autogen"] with a MagicMock (similar to the existing "instructor"
stub) before imports/patches run so that autogen and its attributes can be
created by patch(..., create=True); ensure the stub is inserted conditionally
only if "autogen" not in sys.modules and mirror the pattern used for
"instructor" to avoid masking a real installation.
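The mechanics behind that fix, in isolation: patch(..., create=True) can only attach attributes to a module object that already resolves, which is exactly what the sys.modules stub provides. The module name below is hypothetical, standing in for "autogen":

```python
import sys
from unittest.mock import MagicMock, patch

MODULE = "totally_fake_autogen"  # stands in for "autogen" in the real tests

# Without this stub, patch(f"{MODULE}.LLMConfig", create=True) would raise
# ModuleNotFoundError: create=True creates the attribute, not the parent module.
if MODULE not in sys.modules:
    sys.modules[MODULE] = MagicMock()

with patch(f"{MODULE}.LLMConfig", create=True) as mock_cfg:
    import totally_fake_autogen  # resolves to the MagicMock in sys.modules
    patched = totally_fake_autogen.LLMConfig is mock_cfg
```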

Comment on lines +10 to +12
# Run:
#   praisonai --framework ag2 examples/ag2/ag2_bedrock.yaml
#

Action required

1. CLI rejects ag2 option 🐞 Bug ≡ Correctness

Examples instruct users to run praisonai --framework ag2 ..., but the CLI parser only allows
crewai|autogen|praisonai, so AG2 cannot be used from the CLI as documented.
Agent Prompt
### Issue description
The CLI rejects `--framework ag2` because `ag2` is missing from the argparse `choices` list, even though the PR adds AG2 dispatch and examples document using `--framework ag2`.

### Issue Context
Users following `examples/ag2/*.yaml` will hit an argparse validation error before PraisonAI can run the AG2 adapter.

### Fix Focus Areas
- src/praisonai/praisonai/cli.py[512-514]
- examples/ag2/ag2_bedrock.yaml[10-12]
- src/praisonai/praisonai/agents_generator.py[328-347]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines +474 to +477
# Build LLMConfig — pass a config dict; Bedrock needs no api_key
if api_type == "bedrock":
    llm_config_entry = {"api_type": "bedrock", "model": model_name}
else:

Action required

2. Bedrock region ignored 🐞 Bug ≡ Correctness

_run_ag2 drops the YAML aws_region setting for Bedrock, so the ag2_bedrock.yaml example’s
explicit region is never applied.
Agent Prompt
### Issue description
The AG2 adapter ignores `aws_region` from YAML when configuring Bedrock, so users cannot control region via config files (contradicting the provided Bedrock example).

### Issue Context
`examples/ag2/ag2_bedrock.yaml` specifies `llm.aws_region: us-east-1`, but `_run_ag2` does not read or include it in the Bedrock `llm_config_entry`.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[450-477]
- examples/ag2/ag2_bedrock.yaml[25-29]


Comment on lines +450 to +472
# Allow YAML top-level llm block to override config_list values
yaml_llm = config.get("llm", {}) or {}
# Also check first role's llm block as a fallback
first_role_llm = {}
for role_details in config.get("roles", {}).values():
    first_role_llm = role_details.get("llm", {}) or {}
    break

# Priority: YAML top-level llm > first role llm > config_list > env vars
def _resolve(key, env_var=None, default=None):
    return (yaml_llm.get(key) or first_role_llm.get(key)
            or model_config.get(key)
            or (os.environ.get(env_var) if env_var else None)
            or default)

api_type = _resolve("api_type", default="openai").lower()
model_name = _resolve("model", default="gpt-4o-mini")
api_key = _resolve("api_key", env_var="OPENAI_API_KEY")
# Fix #3: also check OPENAI_API_BASE for consistency with rest of codebase
base_url = (model_config.get("base_url")
            or yaml_llm.get("base_url")
            or os.environ.get("OPENAI_BASE_URL")
            or os.environ.get("OPENAI_API_BASE"))

Action required

3. Base_url override wrong order 🐞 Bug ≡ Correctness

_run_ag2 documents that YAML llm overrides config_list, but base_url is resolved with
config_list taking precedence, so YAML llm.base_url is silently ignored.
Agent Prompt
### Issue description
`base_url` resolution contradicts the adapter’s documented precedence. YAML `llm.base_url` should override `config_list.base_url` but currently does not.

### Issue Context
The adapter uses `_resolve()` (YAML-first) for other keys, but `base_url` uses a different ordering.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[458-472]

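The YAML-first ordering the reviewers describe can be checked in isolation; this sketch mirrors the adapter's _resolve helper with illustrative config dicts (URLs and values are made up):

```python
import os

yaml_llm = {"base_url": "https://yaml.example/v1"}             # top-level llm block
first_role_llm = {}                                            # first role's llm block
model_config = {"base_url": "https://config-list.example/v1"}  # config_list entry

def _resolve(key, env_var=None, default=None):
    # Priority: YAML top-level llm > first role llm > config_list > env var
    return (yaml_llm.get(key)
            or first_role_llm.get(key)
            or model_config.get(key)
            or (os.environ.get(env_var) if env_var else None)
            or default)

# Routing base_url through the same helper gives the documented precedence:
base_url = _resolve("base_url", env_var="OPENAI_BASE_URL")
```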


Copilot AI left a comment


Pull request overview

This PR adds first-class support for running PraisonAI workflows using the AG2 framework (PyPI ag2, installed under the autogen namespace), including dependency wiring plus mocked unit/integration test coverage and runnable examples.

Changes:

  • Add AG2 availability detection + framework="ag2" dispatch with a new _run_ag2 execution path.
  • Add an ag2 optional dependency extra and update environment/example configs for AG2 usage.
  • Add mocked unit/integration tests and example YAMLs demonstrating single-/multi-agent and Bedrock flows.

Reviewed changes

Copilot reviewed 10 out of 11 changed files in this pull request and generated 9 comments.

Show a summary per file

File — Description
src/praisonai/praisonai/agents_generator.py — Adds AG2 detection, framework validation/dispatch, and the _run_ag2 implementation.
src/praisonai/praisonai/auto.py — Adds AG2 availability detection and framework validation messaging.
src/praisonai/pyproject.toml — Introduces the ag2 optional extra/dependency wiring.
src/praisonai/.env.example — Documents env vars relevant to AG2 and Bedrock examples.
src/praisonai/tests/unit/test_ag2_adapter.py — New unit tests covering AG2 validation and _run_ag2 behavior (mocked).
src/praisonai/tests/integration/ag2/test_ag2_integration.py — New mocked integration tests for AG2 orchestration and dispatch.
src/praisonai/tests/integration/ag2/__init__.py — Initializes the AG2 integration test package.
src/praisonai/tests/source/ag2_function_tools.py — Standalone example demonstrating the AG2 tool registration pattern.
examples/ag2/ag2_basic.yaml — Basic AG2 YAML example.
examples/ag2/ag2_multi_agent.yaml — Multi-agent GroupChat example YAML.
examples/ag2/ag2_bedrock.yaml — Bedrock-focused AG2 YAML example.


Comment on lines +44 to +48
AG2_AVAILABLE = False
try:
    import importlib.metadata as _importlib_metadata
    _importlib_metadata.distribution('ag2')
    from autogen import LLMConfig as _AG2LLMConfig  # noqa: F401 — AG2-exclusive class

Copilot AI Apr 9, 2026


Because AG2 installs under the autogen namespace, import autogen can succeed even when pyautogen is not installed. With the current flags, that can make AUTOGEN_AVAILABLE a false-positive and cause framework="autogen" to run against the AG2 backend (or vice-versa). Consider detecting AutoGen via the pyautogen distribution (importlib.metadata.distribution("pyautogen")) or another robust discriminator to avoid namespace collisions.
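The distribution-metadata check suggested here can be sketched as follows. `detect_autogen_flavor` is a hypothetical helper, but `importlib.metadata.distribution()` raising `PackageNotFoundError` for absent packages is standard-library behavior:

```python
import importlib.metadata

def detect_autogen_flavor():
    """Distinguish ag2 from pyautogen by distribution name, which stays
    unambiguous even though both install an `autogen` package."""
    def has_dist(name):
        try:
            importlib.metadata.distribution(name)
            return True
        except importlib.metadata.PackageNotFoundError:
            return False
    return has_dist("ag2"), has_dist("pyautogen")

ag2_available, pyautogen_available = detect_autogen_flavor()
```

Keying both flags off distribution names (rather than `import autogen`) keeps `framework="autogen"` and `framework="ag2"` validations from interfering with each other.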

Copilot uses AI. Check for mistakes.
Comment on lines 29 to 33
try:
    import autogen
    AUTOGEN_AVAILABLE = True
except ImportError:
    pass

Copilot AI Apr 9, 2026


Same namespace-collision problem here: AG2 provides an autogen package, so AUTOGEN_AVAILABLE can become true even if pyautogen isn’t installed. Consider checking the pyautogen distribution explicitly (or otherwise distinguishing pyautogen vs ag2) so framework="autogen" and framework="ag2" validations don’t interfere with each other.

Comment on lines +515 to +519
def make_tool_fn(f):
    def tool_fn(**kwargs):
        return f(**kwargs) if callable(f) else str(f)
    tool_fn.__name__ = tool_name
    return tool_fn

Copilot AI Apr 9, 2026


The tool wrapper created here uses a generic tool_fn(**kwargs) signature and drops the wrapped tool’s real signature/type hints. AG2’s register_for_llm typically builds the tool schema from the callable’s signature/annotations, so this wrapper can prevent the LLM from seeing required parameters. Preserve the original callable’s signature/annotations (e.g., set wrapped.__signature__ / __annotations__, or avoid wrapping when possible).
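One way to keep the schema-relevant metadata, as the comment suggests: copy the wrapped callable's attributes with `functools.wraps` and set `__signature__` explicitly. This is a sketch under the assumption that AG2's schema builder inspects the callable; `make_tool_fn` here is illustrative, not the repo's exact code:

```python
import functools
import inspect

def make_tool_fn(f, tool_name):
    """Wrap a tool while preserving its real signature and annotations,
    so signature-based schema generators still see required parameters."""
    if not callable(f):
        raise TypeError(f"Tool '{tool_name}' is not callable.")

    @functools.wraps(f)  # copies __doc__, __annotations__, sets __wrapped__
    def tool_fn(*args, **kwargs):
        return f(*args, **kwargs)

    tool_fn.__name__ = tool_name
    tool_fn.__signature__ = inspect.signature(f)  # explicit, for inspect-based schemas
    return tool_fn

def add(x: int, y: int) -> int:
    return x + y

wrapped = make_tool_fn(add, "add_numbers")
print(inspect.signature(wrapped))  # (x: int, y: int) -> int
print(wrapped(2, 3))               # 5
```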

Comment on lines +19 to +20
# Ensure src is on path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../"))

Copilot AI Apr 9, 2026


sys.path.insert(..., "../../") resolves to the tests/ directory, not the directory that contains the praisonai/ package. This makes the test brittle (e.g., running from repo root won’t be able to import praisonai). Consider inserting the actual project root (the parent of tests/) or relying on editable install instead of path manipulation.
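A sketch of the suggested fix: walk up from the test file until leaving the tests/ tree, then insert that parent directory, rather than hard-coding a relative depth. This assumes the file sits somewhere under a directory literally named `tests`:

```python
import os
import sys

# Find the project root as the parent of the enclosing tests/ directory,
# instead of a hard-coded "../../" that depends on this file's depth.
here = os.path.abspath(os.path.dirname(__file__))
root = here
while os.path.basename(root) != "tests" and root != os.path.dirname(root):
    root = os.path.dirname(root)
project_root = os.path.dirname(root)  # parent of tests/ (filesystem root if not found)
sys.path.insert(0, project_root)
```

An editable install (`pip install -e .`) avoids the path manipulation entirely and is the more robust option.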

Comment on lines +180 to +184
with patch("praisonai.agents_generator.AG2_AVAILABLE", True), \
     patch("autogen.LLMConfig", create=True, return_value=mock_llm_config) as mock_llmcfg, \
     patch("autogen.AssistantAgent", create=True, return_value=mock_assistant), \
     patch("autogen.UserProxyAgent", create=True, return_value=mock_user_proxy), \
     patch("autogen.GroupChat", create=True, return_value=mock_groupchat), \

Copilot AI Apr 9, 2026


These tests patch autogen.* targets, but unittest.mock.patch will raise ModuleNotFoundError if the autogen module isn’t importable (common when optional AG2/AutoGen deps aren’t installed). Add a module-level pytest.importorskip("autogen") (or a conditional sys.modules["autogen"] stub) so the unit test suite can run without optional dependencies.
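The guard can take either form the comment suggests: a module-level `pytest.importorskip("autogen")`, or a stub module registered in `sys.modules` so `patch(..., create=True)` can resolve the `autogen` target without the optional dependency. The stub approach, sketched as a self-contained illustration (not the repo's code):

```python
import sys
import types
from unittest.mock import patch

# Register a stub so patch("autogen.LLMConfig", create=True) can import
# the module even when neither ag2 nor pyautogen is installed.
try:
    import autogen  # noqa: F401
except ImportError:
    sys.modules["autogen"] = types.ModuleType("autogen")

with patch("autogen.LLMConfig", create=True) as mock_cfg:
    mock_cfg.return_value = {"model": "gpt-4o-mini"}
    import autogen
    result = autogen.LLMConfig()

print(result)  # {'model': 'gpt-4o-mini'}
```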

Comment on lines +211 to +215
with patch("autogen.LLMConfig", create=True, return_value=m["llm_config"]), \
     patch("autogen.AssistantAgent", create=True, return_value=m["assistant"]), \
     patch("autogen.UserProxyAgent", create=True, return_value=m["user_proxy"]), \
     patch("autogen.GroupChat", create=True, return_value=m["groupchat"]), \
     patch("autogen.GroupChatManager", create=True, return_value=m["manager"]):

Copilot AI Apr 9, 2026


This test patches autogen.* without first ensuring the autogen module exists. If AG2 isn’t installed, patch("autogen....") will error instead of skipping. Guard these tests with pytest.importorskip("autogen") (and/or skip based on importlib.metadata.distribution("ag2")) before any such patch blocks run.

Comment on lines +4 to +6
# Install: pip install "praisonai[ag2]"
# Run: praisonai --framework ag2 examples/ag2/ag2_basic.yaml
# or praisonai run examples/ag2/ag2_basic.yaml --framework ag2

Copilot AI Apr 9, 2026


This example instructs users to pass --framework ag2, but the CLI currently restricts --framework choices to ["crewai", "autogen", "praisonai"] (praisonai/cli.py). Either update the CLI/UI to accept ag2, or adjust these run instructions to omit --framework and rely on framework: ag2 in the YAML.
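If the CLI route is chosen, the change is a one-word addition to the argparse choices. The snippet below is a self-contained sketch of that pattern, not the actual cli.py contents:

```python
import argparse

parser = argparse.ArgumentParser(prog="praisonai")
parser.add_argument("--framework",
                    choices=["crewai", "autogen", "praisonai", "ag2"],  # "ag2" added
                    default="praisonai")

args = parser.parse_args(["--framework", "ag2"])
print(args.framework)  # ag2
```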

Comment on lines +4 to +6
# Install: pip install "praisonai[ag2]"
# Run: praisonai --framework ag2 examples/ag2/ag2_multi_agent.yaml
# or praisonai run examples/ag2/ag2_multi_agent.yaml --framework ag2

Copilot AI Apr 9, 2026


This example tells users to pass --framework ag2, but the CLI currently does not list ag2 as an allowed --framework choice. Either update the CLI/UI framework choices to include ag2, or adjust these instructions to rely on framework: ag2 in the YAML (no --framework flag).

Comment on lines +10 to +12
# Run:
# praisonai --framework ag2 examples/ag2/ag2_bedrock.yaml
#

Copilot AI Apr 9, 2026


The run instructions use --framework ag2, but the CLI currently restricts --framework to ["crewai", "autogen", "praisonai"]. Either extend CLI/UI choices to include ag2, or update this example to omit the flag and rely on framework: ag2 in YAML.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for the ag2 framework, a community fork of AutoGen. This includes adding ag2 as an optional dependency, updating configuration examples for basic, multi-agent, and AWS Bedrock scenarios, and implementing a new _run_ag2 method to orchestrate agents within this framework. The _run_ag2 method handles LLM configuration, agent creation, and tool registration. New integration and unit tests have been added to ensure the functionality and backward compatibility. The review comments suggest improving error handling by catching specific exceptions instead of generic ones and refining the tool registration logic to ensure callable functions are always provided.

Comment on lines +515 to +518
def make_tool_fn(f):
    def tool_fn(**kwargs):
        return f(**kwargs) if callable(f) else str(f)
    tool_fn.__name__ = tool_name

high

The make_tool_fn function returns str(f) if f is not callable. This behavior is unexpected for a tool function, which should typically be callable. If f is not callable, it likely indicates a misconfiguration or an issue with the tool definition. It would be safer to raise an error or ensure f is always callable before wrapping it.

Suggested change

Before:
    def make_tool_fn(f):
        def tool_fn(**kwargs):
            return f(**kwargs) if callable(f) else str(f)
        tool_fn.__name__ = tool_name

After:
    def make_tool_fn(f):
        if not callable(f):
            raise TypeError(f"Tool '{tool_name}' is not callable.")
        def tool_fn(**kwargs):
            return f(**kwargs)
        tool_fn.__name__ = tool_name
        return tool_fn

Comment on lines +51 to +52
except Exception:
    pass

medium

Catching a generic Exception can hide specific issues and make debugging harder. It's better to catch importlib.metadata.PackageNotFoundError and ImportError explicitly, as these are the expected exceptions when a package or its components are not found.

Suggested change

Before:
    except Exception:
        pass

After:
    except (importlib.metadata.PackageNotFoundError, ImportError):
        pass

Comment on lines +42 to +43
except Exception:
    pass

medium

Catching a generic Exception can hide specific issues and make debugging harder. It's better to catch importlib.metadata.PackageNotFoundError and ImportError explicitly, as these are the expected exceptions when a package or its components are not found.

Suggested change

Before:
    except Exception:
        pass

After:
    except (importlib.metadata.PackageNotFoundError, ImportError):
        pass


Development

Successfully merging this pull request may close these issues.

Test Autonomous Triage Runner

3 participants