Docs: Document new HostedAgent / LocalAgent canonical backends + deprecate ManagedAgent provider= overload (PR #1550 + #1555) #259


Description


Source PR(s)

What changed in the SDK

Two new canonical backend classes were added to clearly separate "hosted runtime" from "local loop with optional cloud compute". The old ManagedAgent(provider=...) factory is now deprecated because provider= was overloaded to mean three different things: "hosted runtime", "LLM routing hint", and "compute provider".

New public API

| Class | Purpose | Source |
|---|---|---|
| HostedAgent / HostedAgentConfig | Entire agent loop runs on a cloud-managed runtime (currently only Anthropic) | src/praisonai/praisonai/integrations/hosted_agent.py |
| LocalAgent / LocalAgentConfig | Agent loop runs locally; tools optionally sandboxed via cloud compute= (e2b, modal, flyio, daytona, docker) | src/praisonai/praisonai/integrations/local_agent.py |

Both are exported from top-level praisonai and from praisonai.integrations:

from praisonai import HostedAgent, HostedAgentConfig, LocalAgent, LocalAgentConfig
# or
from praisonai.integrations import HostedAgent, HostedAgentConfig, LocalAgent, LocalAgentConfig

HostedAgentConfig aliases ManagedConfig; LocalAgentConfig aliases LocalManagedConfig — same fields, new names.
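The aliasing is plain name rebinding, not subclassing, so both names refer to the same type. A minimal sketch of the pattern (the dataclass fields here are illustrative stand-ins, not the real SDK fields):

```python
from dataclasses import dataclass

# Toy stand-in for the real ManagedConfig; field names are illustrative only.
@dataclass
class ManagedConfig:
    model: str = "claude-3-5-sonnet-latest"
    system: str = ""

# The SDK rebinds the class under the new name, so both names refer
# to the same type and isinstance checks pass either way.
HostedAgentConfig = ManagedConfig

cfg = HostedAgentConfig(model="claude-3-5-sonnet-latest")
print(HostedAgentConfig is ManagedConfig)  # True
print(isinstance(cfg, ManagedConfig))      # True
```

This is why existing ManagedConfig code continues to type-check and round-trip unchanged under the new names.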

Deprecation behaviour of ManagedAgent(provider=...)

| provider= value | New behaviour |
|---|---|
| "anthropic" | Returns AnthropicManagedAgent (no warning) — but new docs should recommend HostedAgent(provider="anthropic", ...) |
| "openai", "gemini", "ollama", "local" (explicitly passed) | DeprecationWarning — recommend LocalAgent(config=LocalAgentConfig(model=...)) |
| "e2b", "modal", "flyio", "daytona", "docker" | DeprecationWarning (returns LocalManagedAgent for back-compat) — recommend LocalAgent(compute=..., config=LocalAgentConfig(...)) |
| Unknown provider | Raises ValueError |
| provider=None (auto-detect) | Picks "anthropic" if ANTHROPIC_API_KEY/CLAUDE_API_KEY set, else "local"; no warning |
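The dispatch rules above can be sketched as a small pure-Python function. This is an illustration of the documented behaviour, not the SDK source (the real logic lives in managed_agents.py around lines 1075-1145):

```python
import os
import warnings

_COMPUTE_PROVIDERS = {"e2b", "modal", "flyio", "daytona", "docker"}
_LOCAL_LLM_HINTS = {"openai", "gemini", "ollama", "local"}

def resolve_backend(provider=None):
    """Illustrative sketch of the deprecation dispatch table, not the SDK code."""
    if provider is None:
        # Auto-detect: hosted if an Anthropic key is present, else local; no warning.
        has_key = os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("CLAUDE_API_KEY")
        return "hosted" if has_key else "local"
    if provider == "anthropic":
        return "hosted"  # no warning, but docs should steer to HostedAgent
    if provider in _LOCAL_LLM_HINTS:
        warnings.warn("use LocalAgent(config=LocalAgentConfig(model=...))",
                      DeprecationWarning)
        return "local"
    if provider in _COMPUTE_PROVIDERS:
        warnings.warn("use LocalAgent(compute=..., config=LocalAgentConfig(...))",
                      DeprecationWarning)
        return "local"
    raise ValueError(f"Unknown provider: {provider!r}")
```

The key point for doc authors: two of the five rows warn, one raises, and two stay silent.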

LocalAgent(provider=...) itself emits a DeprecationWarning and routes via provider_for_routing for back-compat, but the documented usage is LocalAgent(compute=..., config=LocalAgentConfig(model=...)).

Why this matters for users

The old pattern ManagedAgent(provider="modal") made it ambiguous whether the user wanted (a) Modal as a hosted runtime (does not exist), (b) Modal as a compute sandbox for tools, or (c) Modal as an LLM. The new split is unambiguous:

  • Want the whole loop in the cloud? → HostedAgent(provider="anthropic", ...)
  • Want a local loop with sandboxed tools? → LocalAgent(compute="modal", config=LocalAgentConfig(model="gpt-4o-mini"))
  • Want a local loop, no sandbox? → LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))

Documentation work required

Follow AGENTS.md strictly: SDK-first, agent-centric examples, progressive disclosure, beginner tone ("is it really this easy?"), Mermaid hero diagrams, Mintlify components, and the SDK-truth read cycle (READ → UNDERSTAND → DOCUMENT) for every page.

1) Create new page — docs/features/hosted-agent.mdx

Place in docs/features/ per the AI-agent folder rules (do not create in docs/concepts/).

Frontmatter:

---
title: "Hosted Agent"
sidebarTitle: "Hosted Agent"
description: "Run the entire agent loop on a cloud-managed runtime (Anthropic)"
icon: "cloud"
---

Content requirements:

  • One-sentence opener: "Hosted Agent runs the entire agent loop — model, tools, session — on Anthropic's managed cloud infrastructure."
  • Hero graph LR Mermaid diagram with the standard color scheme (#8B0000 agent, #189AB4 tools, #10B981 results, #6366F1 input, #F59E0B cloud step). Show the loop crossing into the cloud boundary.
  • Quick Start <Steps> — agent-centric, not class-centric:
    • Step 1: minimal one-liner using HostedAgent(provider="anthropic")
    • Step 2: with HostedAgentConfig(model="claude-3-5-sonnet-latest", system="...", tools=[{"type": "agent_toolset_20260401"}])
  • How It Works — sequence diagram showing User → Agent → HostedAgent → Anthropic Cloud. Mention that agent metadata (agent_id, agent_version, environment_id, session_id) is provided by Anthropic's API.
  • Configuration Options table — extract from ManagedConfig dataclass at src/praisonai/praisonai/integrations/managed_agents.py:200-235. Fields: name, model, system, description, tools, mcp_servers, skills, callable_agents, metadata, env_name, packages, networking, session_title, resources, vault_ids. Include exact types and defaults.
  • Common Patterns:
    • Streaming (agent.start(..., stream=True))
    • Multi-turn / session continuation
    • hosted.retrieve_session() for usage tracking (returns the unified SessionInfo schema — link to /docs/features/managed-agents-session-info)
    • hosted.list_sessions()
  • Best Practices <AccordionGroup> — when to choose Hosted over Local; how ANTHROPIC_API_KEY / CLAUDE_API_KEY are picked up; cost considerations.
  • Note callout: Currently only provider="anthropic" is supported. Any other provider raises ValueError with a hint to use LocalAgent instead.
  • Related <CardGroup> — link to Local Agent, Managed Agents (overview), Agents.

Reference example file (in PraisonAI repo): examples/python/managed-agents/provider/runtime_hosted_anthropic.py

2) Create new page — docs/features/local-agent.mdx

Place in docs/features/.

Frontmatter:

---
title: "Local Agent"
sidebarTitle: "Local Agent"
description: "Run the agent loop locally with any LLM and optional cloud sandboxing for tools"
icon: "desktop"
---

Content requirements:

  • One-sentence opener: "Local Agent runs the agent loop in your process and lets you pick any LLM via model=, with optional cloud compute= for sandboxing tool execution."
  • Hero graph LR Mermaid diagram showing local loop + optional compute sandbox.
  • Quick Start <Steps> — show the simplicity:
    • Step 1: smallest possible — LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))
    • Step 2: with compute="e2b" for sandboxed tools
    • Step 3: with Gemini (model="gemini/gemini-2.0-flash") and Ollama (model="ollama/llama3.2") showing the litellm prefix pattern
  • Decision-tree Mermaid diagram (per AGENTS.md — required when multiple options confuse users): "Should I set compute=?" → No: just local subprocess; Yes: pick e2b / modal / flyio / daytona / docker.
  • How It Works sequence diagram: User → Agent → LocalAgent → (LLM API + local subprocess or compute sandbox).
  • Configuration Options table — extract from LocalManagedConfig dataclass at src/praisonai/praisonai/integrations/managed_local.py:56-92. Fields: name, model, system, tools, max_turns, metadata, working_dir, env, packages, networking, host_packages_ok, session_title, on_tool_confirmation, on_custom_tool. Include exact types and defaults.
  • Important callout: LocalAgent(provider=...) is rejected with a DeprecationWarning — LLM choice goes in config.model=, sandbox choice goes in compute=. No provider= overload.
  • Common Patterns:
    • Per-LLM: OpenAI, Gemini, Ollama, Anthropic-via-litellm
    • Per compute backend (link out to existing managed-agents-{e2b,modal,docker,daytona} pages)
    • host_packages_ok=True + ManagedSandboxRequired security model
    • Custom tools via on_custom_tool callback
  • Best Practices <AccordionGroup> — pick compute=None for development; pick e2b/modal for untrusted code; never set host_packages_ok=True unless you understand the risk.
  • Related <CardGroup> — Hosted Agent, Managed Agents (overview), Tools.
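For Quick Start step 3, the litellm prefix pattern is just provider/model packed into one string. A hedged sketch of how such a string splits — the real routing is done inside litellm, and the bare-name default shown here is an assumption for illustration:

```python
def split_model(model: str):
    """Split a litellm-style model string into (provider_hint, model_name).

    "ollama/llama3.2" -> ("ollama", "llama3.2"). A bare name like
    "gpt-4o-mini" has no prefix; defaulting it to "openai" is an
    illustrative assumption, not litellm's documented behaviour.
    """
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    return "openai", model

print(split_model("gemini/gemini-2.0-flash"))  # ('gemini', 'gemini-2.0-flash')
print(split_model("ollama/llama3.2"))          # ('ollama', 'llama3.2')
```

Showing this shape in the docs helps readers see that Gemini and Ollama need the prefix while plain OpenAI model names do not.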

Reference example files (in PraisonAI repo):

  • examples/python/managed-agents/provider/runtime_local_openai.py
  • examples/python/managed-agents/provider/runtime_local_gemini.py
  • examples/python/managed-agents/provider/runtime_local_ollama.py
  • examples/python/managed-agents/provider/all_providers.py

3) Update existing page — docs/concepts/managed-agents.mdx

⚠️ This file is in docs/concepts/ (HUMAN-APPROVED ONLY per AGENTS.md). This issue is the explicit human instruction authorising the edit. The doc agent should make minimal, additive changes only.

Required edits:

  • Add a <Warning> callout near the top: ManagedAgent is the legacy factory. New code should use HostedAgent for cloud-managed runtimes or LocalAgent for local execution. The provider= parameter on ManagedAgent now emits DeprecationWarning for LLM and compute hints.
  • Replace the Quick Start "Basic Usage" snippet to import from the top level (from praisonai import ManagedAgent, ManagedConfig) and add a sibling tab/code-block showing the recommended HostedAgent / LocalAgent equivalent.
  • Add a new section "Migration from ManagedAgent" with a side-by-side table:
| Old | New |
|---|---|
| ManagedAgent(provider="anthropic", config=ManagedConfig(...)) | HostedAgent(provider="anthropic", config=HostedAgentConfig(...)) |
| ManagedAgent(provider="openai", config=LocalManagedConfig(model="gpt-4o")) | LocalAgent(config=LocalAgentConfig(model="gpt-4o")) |
| ManagedAgent(provider="ollama", config=LocalManagedConfig(model="llama3")) | LocalAgent(config=LocalAgentConfig(model="ollama/llama3")) |
| ManagedAgent(provider="modal", ...) | LocalAgent(compute="modal", config=LocalAgentConfig(...)) |
| ManagedAgent(provider="e2b", ...) | LocalAgent(compute="e2b", config=LocalAgentConfig(...)) |
| ManagedAgent() (auto-detect) | Still works, no warning |
  • Add a <Tip> noting HostedAgentConfig is ManagedConfig and LocalAgentConfig is LocalManagedConfig — same fields, new names — so users do not need to relearn anything.

4) Update docs.json navigation

Add the two new feature pages under the existing Features group (do not add to the Managed Agents group inside Concepts — that group is for compute-provider pages):

"docs/features/hosted-agent",
"docs/features/local-agent"

A natural placement is alongside other agent-flavour pages like docs/features/codeagent and docs/features/mathagent. The doc agent should pick a sensible existing sub-group (e.g. the one containing codeagent/mathagent at lines 277-278 of docs.json) and validate docs.json is valid JSON after the edit.
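The validate-after-edit step can be as simple as re-parsing the file. A sketch under stated assumptions — the flat group/pages nesting shown here is illustrative, and the real docs.json navigation structure may be deeper:

```python
import json

def add_feature_pages(docs_json_text: str, group_name: str, pages: list) -> str:
    """Append pages to the named navigation group and re-serialize.

    json.loads raises on malformed input/output, which is exactly the
    post-edit validity check the task asks for. The navigation shape
    here is a simplified assumption, not the real docs.json schema.
    """
    data = json.loads(docs_json_text)
    for group in data.get("navigation", []):
        if group.get("group") == group_name:
            group.setdefault("pages", []).extend(pages)
            break
    out = json.dumps(data, indent=2)
    json.loads(out)  # round-trip check: the edited file is still valid JSON
    return out

sample = '{"navigation": [{"group": "Features", "pages": ["docs/features/codeagent"]}]}'
updated = add_feature_pages(
    sample, "Features",
    ["docs/features/hosted-agent", "docs/features/local-agent"],
)
```

Running a parse like this after the edit is cheaper than finding out from a broken docs build.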

5) Examples (in PraisonAIDocs examples/ if your conventions copy from PraisonAI)

The four runtime example scripts and the updated local_basic.py / all_providers.py were already added/updated in PraisonAI by PR #1550 and #1555. If your docs site mirrors examples (check examples/python/ or similar), copy these files verbatim — they already include the skip-guard pattern from #1555 (env-var → SDK availability → service reachability) so docs builds in CI without credentials will not break.
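The skip-guard pattern from #1555 (env-var → SDK availability → service reachability) looks roughly like this in an example script. The function name, messages, and host are illustrative, not copied from the PR:

```python
import importlib.util
import os
import socket
import sys

def should_skip():
    """Return a skip reason string, or None if the example can run.

    Checks in the order the guard pattern prescribes: credentials
    first, then SDK import, then service reachability.
    """
    if not (os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("CLAUDE_API_KEY")):
        return "no API key set"
    if importlib.util.find_spec("praisonai") is None:
        return "praisonai not installed"
    try:
        # Hostname is an illustrative assumption for the reachability probe.
        socket.create_connection(("api.anthropic.com", 443), timeout=3).close()
    except OSError:
        return "service unreachable"
    return None

if __name__ == "__main__":
    reason = should_skip()
    if reason:
        print(f"SKIP: {reason}")
        sys.exit(0)  # exit cleanly so CI without credentials stays green
```

Because every guard exits with status 0, a docs CI run without credentials skips instead of failing.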


SDK source paths (truth source — read these first per AGENTS.md §1.1 / §1.2)

| File | Lines of interest |
|---|---|
| src/praisonai/praisonai/integrations/hosted_agent.py | Whole file (93 lines) — HostedAgent class + HostedAgentConfig = ManagedConfig alias |
| src/praisonai/praisonai/integrations/local_agent.py | Whole file (86 lines) — LocalAgent class + LocalAgentConfig = LocalManagedConfig alias + provider= deprecation |
| src/praisonai/praisonai/integrations/managed_agents.py | 200-235 for ManagedConfig dataclass; 1075-1145 for new ManagedAgent factory deprecation logic |
| src/praisonai/praisonai/integrations/managed_local.py | 56-92 for LocalManagedConfig dataclass |
| src/praisonai/praisonai/__init__.py | 27-32, 128-141 — top-level exports |
| src/praisonai/praisonai/integrations/__init__.py | 43-48, 112-125 — integrations exports |
| tests/unit/integrations/test_backend_semantics.py | All — semantic contracts the docs must reflect (esp. test_managed_agent_compute_provider_warnings, test_local_agent_rejects_provider_overload) |

Sample agent-centric Quick Start (use as a template)

This is the shape of the opener for both new pages — beginner-friendly, agent-first:

from praisonai import Agent, LocalAgent, LocalAgentConfig

agent = Agent(
    name="assistant",
    backend=LocalAgent(
        config=LocalAgentConfig(model="gpt-4o-mini"),
    ),
)

agent.start("Write hello.txt with the words 'hello world'")

And the hosted equivalent:

from praisonai import Agent, HostedAgent, HostedAgentConfig

agent = Agent(
    name="assistant",
    backend=HostedAgent(
        provider="anthropic",
        config=HostedAgentConfig(model="claude-3-5-sonnet-latest"),
    ),
)

agent.start("What is the capital of France? One word.")

Acceptance criteria

  • docs/features/hosted-agent.mdx created, follows full template from AGENTS.md §2 (frontmatter, hero diagram, Quick Start <Steps>, How It Works, Config Options table, Common Patterns, Best Practices <AccordionGroup>, Related <CardGroup>)
  • docs/features/local-agent.mdx created, same template, includes the option-choice Mermaid diagram for compute=
  • docs/concepts/managed-agents.mdx updated with deprecation <Warning> and migration table (additive only)
  • docs.json updated with both new pages under a sensible Features sub-group; file is valid JSON
  • All code samples copy-paste runnable; imports use top-level from praisonai import ... (per AGENTS.md §6.1 friendly-import rule)
  • No invented APIs — every parameter, type and default cross-checked against the SDK files listed above
  • Mermaid diagrams use the standard color scheme (#8B0000, #189AB4, #10B981, #F59E0B, #6366F1, white text, #7C90A0 strokes)
  • No emojis in docs unless already convention; no forbidden phrases ("In this section...", "As you can see...", etc.)
  • Branch: claude/admiring-euler-f454e (per task instructions); PR opened as draft against main

This issue was authored by an automated agent based on PR #1555 (cleanup) and its parent PR #1550 (the actual feature). The cleanup PR itself adds skip-guards on examples and renames a test — no doc impact in isolation. The feature work in #1550 is what this issue covers.

Metadata


    Labels

    bug (Something isn't working), claude (Trigger Claude Code analysis), documentation (Improvements or additions to documentation), enhancement (New feature or request), security
