Problem
LocalManagedAgent advertises a sandboxed execution story (compute="docker|e2b|modal|daytona|flyio|local", packages={"pip": [...]}, LocalManagedConfig.sandbox_type) but the actual code path runs tool calls in-process and installs pip packages into the caller's Python interpreter.
Concretely (evidence on main):
- praisonai/integrations/managed_local.py:463-478: _install_packages() runs subprocess.run([sys.executable, "-m", "pip", "install", "-q", *pip_pkgs], ...) on the host Python, regardless of whether a compute provider is attached.
- praisonai/integrations/managed_local.py:548-577: _execute_sync() calls self._ensure_agent() then agent.chat(prompt). It never touches self._compute. Tool calls (execute_command, read_file, etc.) run in-process.
- praisonai/integrations/managed_local.py:91: LocalManagedConfig.sandbox_type: str = "subprocess" is declared but never read anywhere in the module (grep confirms).
- PraisonAIDocs/docs/concepts/managed-agents.mdx:8 claims "Managed Agents run on cloud infrastructure, automatically provisioning compute resources and managing execution environments", which is true for AnthropicManagedAgent but false by default for LocalManagedAgent.
Consequence: a non-developer following the quickstart runs LLM-generated shell commands on their laptop, and arbitrary pip installs leak into their environment.
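To make the evidence concrete, the install path boils down to the following (paraphrased sketch, not the verbatim source; only the command shape is taken from the cited lines of managed_local.py):

```python
import sys


def build_host_pip_command(pip_pkgs: list[str]) -> list[str]:
    """Paraphrase of the command built in managed_local.py:463-478.

    It always targets the caller's interpreter (sys.executable) and never
    consults self._compute, so "pip install" mutates the host environment
    even when compute="docker" (or any other provider) was requested.
    """
    return [sys.executable, "-m", "pip", "install", "-q", *pip_pkgs]


print(build_host_pip_command(["requests"]))
# the real code then passes this command to subprocess.run(...) on the host
```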
Acceptance criteria
- LocalManagedAgent with packages=… and no compute=… raises ManagedSandboxRequired with an actionable message. Opt-out available via an explicit LocalManagedConfig(host_packages_ok=True) for developer workflows.
- When compute=… is set, _install_packages runs inside the compute instance (compute.execute(instance_id, "pip install …")), never on the host.
- When compute=… is set, every tool call routed through agent.chat(...) executes via compute.execute(instance_id, command) for built-in tools that produce shell commands; concretely, bridge the execute_command, list_files, search_file, and write_file tools to the compute provider by default.
- LocalManagedConfig.sandbox_type is either honoured (e.g. selects the compute provider) or removed with a short deprecation path.
- Docs (managed-agents.mdx + managed-agents-local.mdx) make the host-vs-sandbox story explicit in the first paragraph and in every quickstart example.
- Unit tests:
  - test_install_packages_without_compute_raises: assert ManagedSandboxRequired.
  - test_install_packages_with_compute_runs_in_sandbox: compute mock records pip install calls; host subprocess.run is not invoked.
  - test_tool_execution_routes_through_compute_when_attached: compute mock receives the tool's command.
- Real agentic test (gated behind RUN_REAL_AGENTIC=1): Agent(backend=LocalManagedAgent(compute="docker", config=LocalManagedConfig(model="gpt-4o-mini", packages={"pip": ["requests"]}))).start("Fetch https://example.com and print the title") completes end-to-end; requests is installed in the container, not on the host.
Implementation plan
- Add a ManagedSandboxRequired exception in praisonai/integrations/managed_agents.py (shared).
- In LocalManagedAgent._install_packages, short-circuit to self._compute.execute(self._compute_instance_id, "pip install ...") when compute is attached; otherwise raise unless host_packages_ok=True.
- Auto-call provision_compute() inside _ensure_agent() when self._compute is set and self._compute_instance_id is None.
- Introduce a thin _ComputeToolBridge that wraps the PraisonAI built-in shell-ish tools and delegates their exec step to compute.execute(...). Wire it in _resolve_tools() when compute is attached.
- Update LocalManagedConfig:
  - Add host_packages_ok: bool = False.
  - Either remove sandbox_type or make it a compute-provider alias (pick one, document it once).
- Docs: rewrite the "How it works" section of managed-agents.mdx to state clearly: Anthropic → cloud; Local → host unless compute=… is set.
- Tests per acceptance criteria.
Files
Modify:
src/praisonai/praisonai/integrations/managed_local.py
src/praisonai/praisonai/integrations/managed_agents.py (new exception)
src/praisonai/praisonai/integrations/compute/*.py (ensure execute() accepts pip install ...)
PraisonAIDocs/docs/concepts/managed-agents.mdx
PraisonAIDocs/docs/concepts/managed-agents-local.mdx
src/praisonai-agents/tests/managed/test_managed_factory.py (add failing tests)
Create:
src/praisonai-agents/tests/managed/test_managed_sandbox_safety.py
src/praisonai/tests/integration/test_managed_compute_wiring.py (gated real agentic)
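The two headline safety tests could be shaped roughly as below (pytest-style; the stand-in install_packages and ManagedSandboxRequired mimic what the real tests would import from praisonai.integrations.managed_local):

```python
from unittest.mock import MagicMock, patch


# Stand-ins for the real symbols described in this issue.
class ManagedSandboxRequired(RuntimeError):
    pass


def install_packages(pip_pkgs, compute=None, instance_id=None, host_packages_ok=False):
    if compute is not None:
        return compute.execute(instance_id, "pip install -q " + " ".join(pip_pkgs))
    if not host_packages_ok:
        raise ManagedSandboxRequired("packages= requires compute= or host_packages_ok=True")
    return "host-install"


def test_install_packages_without_compute_raises():
    try:
        install_packages(["requests"])
    except ManagedSandboxRequired:
        return
    raise AssertionError("expected ManagedSandboxRequired")


def test_install_packages_with_compute_runs_in_sandbox():
    compute = MagicMock()
    # Guard: the host interpreter's pip must never be invoked.
    with patch("subprocess.run") as host_run:
        install_packages(["requests"], compute=compute, instance_id="i-1")
    compute.execute.assert_called_once_with("i-1", "pip install -q requests")
    host_run.assert_not_called()


if __name__ == "__main__":
    test_install_packages_without_compute_raises()
    test_install_packages_with_compute_runs_in_sandbox()
    print("ok")
```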
Invariants
- Protocol-driven core untouched (changes live in wrapper).
- Lazy imports preserved.
- Backward-compatible: existing LocalManagedAgent(...) without compute= continues to work only when no packages= is given; an opt-out flag is available.
References
cc @claude, please pick this up per .windsurf/workflows/e2e-analysis-issue-pr-merge.md.