Two new canonical backend classes were added to clearly separate "hosted runtime" from "local loop with optional cloud compute". The old `ManagedAgent(provider=...)` factory is now deprecated because `provider=` was overloaded to mean "hosted runtime", "LLM routing hint", and "compute provider" — three different things.
New public API

| Class | Purpose | Source |
| --- | --- | --- |
| `HostedAgent` / `HostedAgentConfig` | Entire agent loop runs on a cloud-managed runtime (currently only Anthropic) | `src/praisonai/praisonai/integrations/hosted_agent.py` |
| `LocalAgent` / `LocalAgentConfig` | Local loop with any LLM and optional `compute=` sandbox (e2b, modal, flyio, daytona, docker) | `src/praisonai/praisonai/integrations/local_agent.py` |

Both are exported from top-level `praisonai` and from `praisonai.integrations`: `HostedAgentConfig` aliases `ManagedConfig`; `LocalAgentConfig` aliases `LocalManagedConfig` — same fields, new names.

Deprecation behaviour of `ManagedAgent(provider=...)`

| `provider=` value | Behaviour |
| --- | --- |
| `"anthropic"` | Returns `AnthropicManagedAgent` (no warning) — but new docs should recommend `HostedAgent(provider="anthropic", ...)` |
| `"openai"`, `"gemini"`, `"ollama"`, `"local"` (explicitly passed) | `DeprecationWarning` — recommend `LocalAgent(config=LocalAgentConfig(model=...))` |
| `"e2b"`, `"modal"`, `"flyio"`, `"daytona"`, `"docker"` | `DeprecationWarning` (returns `LocalManagedAgent` for back-compat) — recommend `LocalAgent(compute=..., config=LocalAgentConfig(...))` |
| Unknown provider | Raises `ValueError` |
| `provider=None` (auto-detect) | Picks `"anthropic"` if `ANTHROPIC_API_KEY`/`CLAUDE_API_KEY` set, else `"local"`; no warning |

`LocalAgent(provider=...)` itself emits a `DeprecationWarning` and routes via `provider_for_routing` for back-compat, but the documented usage is `LocalAgent(compute=..., config=LocalAgentConfig(model=...))`.
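The dispatch rules in the table above can be sketched as a stand-alone function. This is a hedged illustration only: the real logic lives in `src/praisonai/praisonai/integrations/managed_agents.py`, and `resolve_backend` is a hypothetical name that mirrors the documented behaviour, not the SDK's actual API.

```python
import os
import warnings

# Provider strings that were historically accepted by ManagedAgent(provider=...).
_LLM_HINTS = {"openai", "gemini", "ollama", "local"}
_COMPUTE_HINTS = {"e2b", "modal", "flyio", "daytona", "docker"}

def resolve_backend(provider=None):
    """Return the backend family ManagedAgent(provider=...) would select."""
    if provider is None:
        # Auto-detect: prefer Anthropic when an API key is present; no warning.
        keyed = os.getenv("ANTHROPIC_API_KEY") or os.getenv("CLAUDE_API_KEY")
        return "anthropic" if keyed else "local"
    if provider == "anthropic":
        return "anthropic"  # hosted runtime, no warning
    if provider in _LLM_HINTS:
        warnings.warn(
            "provider= as an LLM hint is deprecated; "
            "use LocalAgent(config=LocalAgentConfig(model=...))",
            DeprecationWarning,
        )
        return "local"
    if provider in _COMPUTE_HINTS:
        warnings.warn(
            "provider= as a compute hint is deprecated; "
            "use LocalAgent(compute=..., config=LocalAgentConfig(...))",
            DeprecationWarning,
        )
        return "local"  # back-compat: a LocalManagedAgent is returned
    raise ValueError(f"unknown provider {provider!r}; use LocalAgent instead")
```

The three branches map one-to-one onto the three overloaded meanings of `provider=`, which is exactly why the split into `HostedAgent` and `LocalAgent` removes the ambiguity.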
Why this matters for users
The old pattern ManagedAgent(provider="modal") made it ambiguous whether the user wanted (a) Modal as a hosted runtime (does not exist), (b) Modal as a compute sandbox for tools, or (c) Modal as an LLM. The new split is unambiguous:
Want the whole loop in the cloud? → HostedAgent(provider="anthropic", ...)
Want local loop with sandboxed tools? → LocalAgent(compute="modal", config=LocalAgentConfig(model="gpt-4o-mini"))
Want local loop, no sandbox? → LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))
Documentation work required
Follow AGENTS.md strictly: SDK-first, agent-centric examples, progressive disclosure, beginner tone ("is it really this easy?"), Mermaid hero diagrams, Mintlify components, and the SDK-truth read cycle (READ → UNDERSTAND → DOCUMENT) for every page.
1) Create new page — docs/features/hosted-agent.mdx
Place in docs/features/ per the AI-agent folder rules (do not create in docs/concepts/).
Frontmatter:
---
title: "Hosted Agent"
sidebarTitle: "Hosted Agent"
description: "Run the entire agent loop on a cloud-managed runtime (Anthropic)"
icon: "cloud"
---
Content requirements:
One-sentence opener: "Hosted Agent runs the entire agent loop — model, tools, session — on Anthropic's managed cloud infrastructure."
Hero graph LR Mermaid diagram with the standard color scheme (#8B0000 agent, #189AB4 tools, #10B981 results, #6366F1 input, #F59E0B cloud step). Show the loop crossing into the cloud boundary.
Quick Start <Steps> — agent-centric, not class-centric:
Step 1: minimal one-liner using HostedAgent(provider="anthropic")
Step 2: with HostedAgentConfig(model="claude-3-5-sonnet-latest", system="...", tools=[{"type": "agent_toolset_20260401"}])
How It Works — sequence diagram showing User → Agent → HostedAgent → Anthropic Cloud. Mention that agent metadata (agent_id, agent_version, environment_id, session_id) is provided by Anthropic's API.
Configuration Options table — extract from ManagedConfig dataclass at src/praisonai/praisonai/integrations/managed_agents.py:200-235. Fields: name, model, system, description, tools, mcp_servers, skills, callable_agents, metadata, env_name, packages, networking, session_title, resources, vault_ids. Include exact types and defaults.
Common Patterns:
Streaming (agent.start(..., stream=True))
Multi-turn / session continuation
hosted.retrieve_session() for usage tracking (returns the unified SessionInfo schema — link to /docs/features/managed-agents-session-info)
hosted.list_sessions()
Best Practices <AccordionGroup> — when to choose Hosted over Local; how ANTHROPIC_API_KEY / CLAUDE_API_KEY are picked up; cost considerations.
Note callout: Currently only provider="anthropic" is supported. Any other provider raises ValueError with a hint to use LocalAgent instead.
Related <CardGroup> — link to Local Agent, Managed Agents (overview), Agents.
Reference example file (in PraisonAI repo):
examples/python/managed-agents/provider/runtime_hosted_anthropic.py
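The Configuration Options tables requested for both pages can be generated straight from the config dataclasses so documented types and defaults never drift from the SDK. A sketch, using a stand-in dataclass (`DemoConfig` and `config_table` are illustrative names; the real classes are `ManagedConfig` and `LocalManagedConfig`):

```python
import dataclasses
from typing import Optional

# Stand-in for ManagedConfig / LocalManagedConfig; fields are illustrative.
@dataclasses.dataclass
class DemoConfig:
    name: str = "assistant"
    model: str = "gpt-4o-mini"
    system: Optional[str] = None

def config_table(cls):
    """Render a markdown 'Configuration Options' table from a dataclass."""
    rows = ["| Field | Type | Default |", "| --- | --- | --- |"]
    for field in dataclasses.fields(cls):
        type_name = getattr(field.type, "__name__", str(field.type))
        rows.append(f"| `{field.name}` | `{type_name}` | `{field.default!r}` |")
    return "\n".join(rows)

print(config_table(DemoConfig))
```

Running the same helper against the real dataclasses at the source paths listed below would satisfy the "exact types and defaults" requirement mechanically.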
2) Create new page — docs/features/local-agent.mdx
Place in docs/features/.
Frontmatter:
---
title: "Local Agent"
sidebarTitle: "Local Agent"
description: "Run the agent loop locally with any LLM and optional cloud sandboxing for tools"
icon: "desktop"
---
Content requirements:
One-sentence opener: "Local Agent runs the agent loop in your process and lets you pick any LLM via model=, with optional cloud compute= for sandboxing tool execution."
Hero graph LR Mermaid diagram showing local loop + optional compute sandbox.
Quick Start <Steps> — show the simplicity:
Step 1: smallest possible — LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))
Step 2: with compute="e2b" for sandboxed tools
Step 3: with Gemini (model="gemini/gemini-2.0-flash") and Ollama (model="ollama/llama3.2") showing the litellm prefix pattern
Decision-tree Mermaid diagram (per AGENTS.md — required when multiple options confuse users): "Should I set compute=?" → No: just local subprocess; Yes: pick e2b / modal / flyio / daytona / docker.
How It Works sequence diagram: User → Agent → LocalAgent → (LLM API + local subprocess or compute sandbox).
Configuration Options table — extract from LocalManagedConfig dataclass at src/praisonai/praisonai/integrations/managed_local.py:56-92. Fields: name, model, system, tools, max_turns, metadata, working_dir, env, packages, networking, host_packages_ok, session_title, on_tool_confirmation, on_custom_tool. Include exact types and defaults.
Important callout: LocalAgent(provider=...) is deprecated and emits a DeprecationWarning — LLM choice goes in config.model=, sandbox choice goes in compute=. No provider= overload.
Common Patterns:
Per compute backend (link out to existing managed-agents-{e2b,modal,docker,daytona} pages)
host_packages_ok=True + ManagedSandboxRequired security model
Custom tools via on_custom_tool callback
Best Practices <AccordionGroup> — pick compute=None for development; pick e2b/modal for untrusted code; never set host_packages_ok=True unless you understand the risk.
Related <CardGroup> — link to Hosted Agent, Managed Agents (overview), Tools.
Reference example files (in PraisonAI repo):
examples/python/managed-agents/provider/runtime_local_openai.py
examples/python/managed-agents/provider/runtime_local_gemini.py
examples/python/managed-agents/provider/runtime_local_ollama.py
examples/python/managed-agents/provider/all_providers.py
3) Update existing page — docs/concepts/managed-agents.mdx
⚠️ This file is in docs/concepts/ (HUMAN-APPROVED ONLY per AGENTS.md). This issue is the explicit human instruction authorising the edit. The doc agent should make minimal, additive changes only.
Required edits:
Add a <Warning> callout near the top: ManagedAgent is the legacy factory. New code should use HostedAgent for cloud-managed runtimes or LocalAgent for local execution. The provider= parameter on ManagedAgent now emits DeprecationWarning for LLM and compute hints.
Replace the Quick Start "Basic Usage" snippet to import from the top level (from praisonai import ManagedAgent, ManagedConfig) and add a sibling tab/code-block showing the recommended HostedAgent / LocalAgent equivalent.
Add a new section "Migration from ManagedAgent" with a side-by-side table:

| Old (deprecated) | New (recommended) |
| --- | --- |
| `ManagedAgent(provider="anthropic", config=ManagedConfig(...))` | `HostedAgent(provider="anthropic", config=HostedAgentConfig(...))` |
| `ManagedAgent(provider="openai", config=LocalManagedConfig(model="gpt-4o"))` | `LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))` |
| `ManagedAgent(provider="ollama", config=LocalManagedConfig(model="llama3"))` | `LocalAgent(config=LocalAgentConfig(model="ollama/llama3"))` |
| `ManagedAgent(provider="modal", ...)` | `LocalAgent(compute="modal", config=LocalAgentConfig(...))` |
| `ManagedAgent(provider="e2b", ...)` | `LocalAgent(compute="e2b", config=LocalAgentConfig(...))` |
| `ManagedAgent()` (auto-detect) | |
Add a <Tip> noting HostedAgentConfig is ManagedConfig and LocalAgentConfig is LocalManagedConfig — same fields, new names — so users do not need to relearn anything.
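The alias relationship that the <Tip> describes can be demonstrated in a few lines. The classes below are stand-ins that mimic the SDK names; the real aliases live in hosted_agent.py and local_agent.py. The point: an alias is the same class bound to a new name, not a copy, so fields, defaults, and isinstance checks are interchangeable.

```python
import dataclasses

# Stand-in for the real ManagedConfig; fields here are illustrative.
@dataclasses.dataclass
class ManagedConfig:
    model: str = "claude-3-5-sonnet-latest"
    system: str = ""

HostedAgentConfig = ManagedConfig  # identical class object, new name

config = HostedAgentConfig(model="claude-3-5-sonnet-latest")
print(HostedAgentConfig is ManagedConfig)  # the very same class
print(isinstance(config, ManagedConfig))   # so instances interchange
```

This is why the migration table never asks users to change config fields, only the class name they import.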
4) Update docs.json navigation
Add the two new feature pages under the existing Features group (do not add to the Managed Agents group inside Concepts — that group is for compute-provider pages):
A natural placement is alongside other agent-flavour pages like docs/features/codeagent and docs/features/mathagent. The doc agent should pick a sensible existing sub-group (e.g. the one containing codeagent/mathagent at lines 277-278 of docs.json) and validate docs.json is valid JSON after the edit.
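A sketch of the docs.json edit plus the requested validity check. The navigation layout below is illustrative only: the real docs.json nests groups differently, so the doc agent should locate the actual sub-group containing codeagent/mathagent rather than assume this shape.

```python
import json

# Hypothetical, simplified docs.json structure for illustration.
docs = {
    "navigation": [
        {"group": "Features",
         "pages": ["features/codeagent", "features/mathagent"]},
    ]
}

def add_feature_pages(tree, group_name, new_pages):
    for group in tree["navigation"]:
        if group.get("group") == group_name:
            for page in new_pages:
                if page not in group["pages"]:
                    group["pages"].append(page)
    # Round-trip through the JSON codec to confirm the edit left valid JSON.
    return json.loads(json.dumps(tree))

updated = add_feature_pages(
    docs, "Features", ["features/hosted-agent", "features/local-agent"]
)
```

The round-trip at the end is the cheapest way to satisfy the "validate docs.json is valid JSON after the edit" acceptance criterion.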
5) Examples (in PraisonAIDocs examples/ if your conventions copy from PraisonAI)
The four runtime example scripts and the updated local_basic.py / all_providers.py were already added/updated in PraisonAI by PR #1550 and #1555. If your docs site mirrors examples (check examples/python/ or similar), copy these files verbatim — they already include the skip-guard pattern from #1555 (env-var → SDK availability → service reachability) so docs builds in CI without credentials will not break.
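The skip-guard pattern referenced above has the shape sketched below: check the env var, then SDK availability, then (optionally) service reachability, and skip with a reason if any stage fails. This is a hedged reconstruction, not the shipped guard from #1555; `skip_reason` and its signature are illustrative names.

```python
import importlib.util
import os

def skip_reason(env_var, module_name, ping=None):
    """Return why the example should be skipped, or None to run it."""
    if not os.getenv(env_var):
        return f"{env_var} is not set"
    if importlib.util.find_spec(module_name) is None:
        return f"{module_name} is not installed"
    if ping is not None:
        try:
            ping()  # e.g. a cheap request against the provider endpoint
        except Exception as exc:
            return f"service unreachable: {exc}"
    return None

reason = skip_reason("ANTHROPIC_API_KEY", "praisonai")
if reason:
    print(f"SKIP: {reason}")
```

Because each stage degrades to a clean skip rather than an exception, CI docs builds without credentials exit successfully instead of failing.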
SDK source paths (truth source — read these first per AGENTS.md §1.1 / §1.2)
src/praisonai/praisonai/integrations/hosted_agent.py — HostedAgent class + HostedAgentConfig = ManagedConfig alias
src/praisonai/praisonai/integrations/local_agent.py — LocalAgent class + LocalAgentConfig = LocalManagedConfig alias + provider= deprecation
src/praisonai/praisonai/integrations/managed_agents.py — lines 200-235 for ManagedConfig dataclass; 1075-1145 for the new ManagedAgent factory deprecation logic
src/praisonai/praisonai/integrations/managed_local.py — lines 56-92 for LocalManagedConfig dataclass
src/praisonai/praisonai/__init__.py — lines 27-32, 128-141 — top-level exports
src/praisonai/praisonai/integrations/__init__.py — lines 43-48, 112-125 — integrations exports
tests/unit/integrations/test_backend_semantics.py — All — semantic contracts the docs must reflect (esp. test_managed_agent_compute_provider_warnings, test_local_agent_rejects_provider_overload)
Sample agent-centric Quick Start (use as a template)
This is the shape of the opener for both new pages — beginner-friendly, agent-first:
```python
from praisonai import Agent, LocalAgent, LocalAgentConfig

agent = Agent(
    name="assistant",
    backend=LocalAgent(
        config=LocalAgentConfig(model="gpt-4o-mini"),
    ),
)
agent.start("Write hello.txt with the words 'hello world'")
```
```python
from praisonai import Agent, HostedAgent, HostedAgentConfig

agent = Agent(
    name="assistant",
    backend=HostedAgent(
        provider="anthropic",
        config=HostedAgentConfig(model="claude-3-5-sonnet-latest"),
    ),
)
agent.start("What is the capital of France? One word.")
```
Acceptance criteria
docs/features/hosted-agent.mdx created, follows full template from AGENTS.md §2 (frontmatter, hero diagram, Quick Start <Steps>, How It Works, Config Options table, Common Patterns, Best Practices <AccordionGroup>, Related <CardGroup>)
docs/features/local-agent.mdx created, same template, includes the option-choice Mermaid diagram for compute=
docs/concepts/managed-agents.mdx updated with deprecation <Warning> and migration table (additive only)
docs.json updated with both new pages under a sensible Features sub-group; file is valid JSON
All code samples copy-paste runnable; imports use top-level from praisonai import ... (per AGENTS.md §6.1 friendly-import rule)
No invented APIs — every parameter, type and default cross-checked against the SDK files listed above
Mermaid diagrams use the standard color scheme (#8B0000, #189AB4, #10B981, #F59E0B, #6366F1, white text, #7C90A0 strokes)
No emojis in docs unless already convention; no forbidden phrases ("In this section...", "As you can see...", etc.)
Branch: claude/admiring-euler-f454e (per task instructions); PR opened as draft against main
This issue was authored by an automated agent based on PR #1555 (cleanup) and its parent PR #1550 (the actual feature). The cleanup PR itself adds skip-guards on examples and renames a test — no doc impact in isolation. The feature work in #1550 is what this issue covers.
Source PR(s)
fix: ManagedAgent provider overload conflates hosted-runtime vs LLM-routing (merged 2026-04-25) — PraisonAI#1550
Fix: P1 follow-up to #1550 — failing test + example skip-guards (merged 2026-04-26) — PraisonAI#1555