Commit d80165f ("update doc"), 1 parent: 25d1a67
File tree: 7 files changed, 511 additions, 0 deletions

README.md

Lines changed: 45 additions & 0 deletions

@@ -91,7 +91,52 @@ async def main():
    asyncio.run(main())
```
---

## Agent Model Tool Calling (OpenAI Tools)

Expose sandbox tools to the Agent in OpenAI Tools format, so the model can trigger tools that are then executed securely inside the sandbox.

````python
import asyncio, os, json

from openai import OpenAI

from ms_enclave.sandbox.manager import SandboxManagerFactory
from ms_enclave.sandbox.model import DockerSandboxConfig, SandboxType


async def demo():
    client = OpenAI(
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
        api_key=os.getenv("DASHSCOPE_API_KEY"),
    )
    async with SandboxManagerFactory.create_manager() as m:
        cfg = DockerSandboxConfig(image="python:3.11-slim", tools_config={"python_executor": {}, "shell_executor": {}})
        sid = await m.create_sandbox(SandboxType.DOCKER, cfg)

        tools = list((await m.get_sandbox_tools(sid)).values())
        messages = [{"role": "user", "content": "Print 'hello' in Python, then list /sandbox via shell."}]

        rsp = client.chat.completions.create(model="qwen-plus", messages=messages, tools=tools, tool_choice="auto")
        msg = rsp.choices[0].message
        messages.append(msg.model_dump())

        if getattr(msg, "tool_calls", None):
            for tc in msg.tool_calls:
                name = tc.function.name
                args = json.loads(tc.function.arguments or "{}")
                result = await m.execute_tool(sid, name, args)
                messages.append({"role": "tool", "content": result.model_dump_json(), "tool_call_id": tc.id, "name": name})
            final = client.chat.completions.create(model="qwen-plus", messages=messages)
            print(final.choices[0].message.content or "")
        else:
            print(msg.content or "")


asyncio.run(demo())
````

**Notes:**
- Use `get_sandbox_tools(sandbox_id)` to retrieve tool schemas (OpenAI-compatible)
- Pass `tools=...` to the model, then handle the returned `tool_calls` and execute them in the sandbox
- Call the model again to generate the final answer
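
For reference, each entry in the mapping returned by `get_sandbox_tools` follows the OpenAI function-calling schema. The outer `type`/`function`/`parameters` layout below is the OpenAI format; the description and parameter details are illustrative assumptions, not the library's exact output:

```python
import json

# Illustrative sketch of one OpenAI-format tool schema, as passed via `tools=...`.
# The description and parameter fields are assumed, not taken from ms_enclave.
python_executor_tool = {
    "type": "function",
    "function": {
        "name": "python_executor",
        "description": "Execute Python code inside the sandbox.",
        "parameters": {
            "type": "object",
            "properties": {
                "code": {"type": "string", "description": "Python source to run."}
            },
            "required": ["code"],
        },
    },
}

# A tool_call produced by the model references the schema's name and carries
# JSON-encoded arguments matching `parameters`:
arguments = json.dumps({"code": "print('hello')"})
print(python_executor_tool["function"]["name"], arguments)
```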
---
## Typical Usage Patterns & Examples

README_zh.md

Lines changed: 47 additions & 0 deletions

@@ -96,6 +96,53 @@ asyncio.run(main())
---

## Agent Model Tool Calling (OpenAI Tools)

Expose sandbox tools to the Agent in OpenAI Tools format; once the model triggers a tool, it is executed safely inside the sandbox.

````python
import asyncio, os, json

from openai import OpenAI

from ms_enclave.sandbox.manager import SandboxManagerFactory
from ms_enclave.sandbox.model import DockerSandboxConfig, SandboxType


async def demo():
    client = OpenAI(
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
        api_key=os.getenv("DASHSCOPE_API_KEY"),
    )
    async with SandboxManagerFactory.create_manager() as m:
        cfg = DockerSandboxConfig(image="python:3.11-slim", tools_config={"python_executor": {}, "shell_executor": {}})
        sid = await m.create_sandbox(SandboxType.DOCKER, cfg)

        tools = list((await m.get_sandbox_tools(sid)).values())
        messages = [{"role": "user", "content": "Print 'hello' in Python, then list /sandbox via shell."}]

        rsp = client.chat.completions.create(model="qwen-plus", messages=messages, tools=tools, tool_choice="auto")
        msg = rsp.choices[0].message
        messages.append(msg.model_dump())

        if getattr(msg, "tool_calls", None):
            for tc in msg.tool_calls:
                name = tc.function.name
                args = json.loads(tc.function.arguments or "{}")
                result = await m.execute_tool(sid, name, args)
                messages.append({"role": "tool", "content": result.model_dump_json(), "tool_call_id": tc.id, "name": name})
            final = client.chat.completions.create(model="qwen-plus", messages=messages)
            print(final.choices[0].message.content or "")
        else:
            print(msg.content or "")


asyncio.run(demo())
````

Notes:
- Use `get_sandbox_tools(sandbox_id)` to retrieve tool schemas (OpenAI-compatible)
- Pass `tools=...` to the model, handle the returned `tool_calls` and execute them in the sandbox
- Call the model again to generate the final answer

---

## Typical Usage Patterns & Examples
docs/en/docs/getting-started/quickstart.md

Lines changed: 151 additions & 0 deletions

@@ -153,3 +153,154 @@ if __name__ == '__main__':
```bash
python quickstart_app.py
```

---

## Method 3: Agent Tool Execution

When your Agent supports OpenAI Tools (function calling), you can expose sandbox tools as callable functions, allowing the model to trigger tools and execute them in the sandbox.

### Use Cases
- Need the LLM to autonomously decide when to run Python code or Shell commands
- Want to inject safely controlled code execution capabilities into the Agent

### Usage Steps
1) Create a manager and sandbox, and enable tools
2) Retrieve the sandbox's tool schemas (OpenAI-compatible format)
3) Call the model with `tools=...` and collect the returned `tool_calls`
4) Execute the corresponding tools in the sandbox and append tool messages
5) Call the model again to generate the final answer

### Code Example
````python
import asyncio
import json
import os
from typing import Any, Dict, List

from openai import OpenAI

from ms_enclave.sandbox.manager import SandboxManagerFactory
from ms_enclave.sandbox.model import DockerSandboxConfig, SandboxType


async def run_agent_with_sandbox() -> None:
    """
    Create a sandbox, bind its tools to an agent (qwen-plus via DashScope), and execute tool calls.

    Prints the final model output and a minimal summary of tool execution results.
    """
    client = OpenAI(
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
        api_key=os.environ.get("DASHSCOPE_API_KEY"),
    )

    async with SandboxManagerFactory.create_manager() as manager:
        config = DockerSandboxConfig(
            image="python:3.11-slim",
            tools_config={
                "python_executor": {},
                "shell_executor": {},
                "file_operation": {},
            },
            volumes={os.path.abspath("./output"): {"bind": "/sandbox/data", "mode": "rw"}},
        )

        sandbox_id = await manager.create_sandbox(SandboxType.DOCKER, config)

        # Fetch available tools from the sandbox (schemas are already OpenAI-compatible)
        available_tools = await manager.get_sandbox_tools(sandbox_id)

        messages: List[Dict[str, Any]] = [
            {
                "role": "system",
                "content": (
                    "You can run Python code and shell commands inside a managed sandbox using provided tools. "
                    "Always use tools to perform code execution or shell operations, then summarize results concisely."
                ),
            },
            {
                "role": "user",
                "content": (
                    "1) Run Python to print 'hi from sandbox' and compute 123456*654321.\n"
                    "2) Run a shell command to list the /sandbox/data directory.\n"
                    "Finally, summarize the outputs."
                ),
            },
        ]

        # First model call with tools bound
        completion = client.chat.completions.create(
            model="qwen-plus", messages=messages, tools=list(available_tools.values()), tool_choice="auto"
        )
        msg = completion.choices[0].message
        messages.append(msg.model_dump())

        # Handle tool calls; execute in the sandbox and feed results back to the model
        tool_summaries: List[str] = []
        if getattr(msg, "tool_calls", None):
            for call in msg.tool_calls:
                name = call.function.name
                args = json.loads(call.function.arguments or "{}")
                tool_result = await manager.execute_tool(sandbox_id, name, args)
                tool_summaries.append(f"{name} => {args} => {tool_result.status}")
                messages.append(
                    {
                        "role": "tool",
                        "content": tool_result.model_dump_json(),
                        "tool_call_id": call.id,
                        "name": name,
                    }
                )

            # Ask the model to produce the final answer after tool results are added
            final = client.chat.completions.create(model="qwen-plus", messages=messages)
            final_text = final.choices[0].message.content or ""
            print("Model output:" + "=" * 20)
            print(final_text)
        else:
            # If no tool calls were made, just print the model output
            print("Model output:" + "=" * 20)
            print(msg.content or "")

        # Minimal summary of executed tools
        if tool_summaries:
            print("Executed tools:" + "=" * 20)
            for s in tool_summaries:
                print(f"- {s}")


def main() -> None:
    """Entry point."""
    asyncio.run(run_agent_with_sandbox())


if __name__ == "__main__":
    main()
````

> Tip: Any model or service compatible with OpenAI Tools can use this pattern; pass the sandbox tool schemas as `tools`, execute each returned `tool_call`, and append its result before requesting the final answer.
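
The example above performs a single round of tool calls, but agents often need several rounds. Below is a minimal, framework-agnostic sketch of such a loop; `chat` and `execute` are hypothetical stand-ins for the OpenAI client call and `manager.execute_tool`, and the demo drives the loop with a scripted fake model so it runs standalone:

```python
import json
from typing import Any, Callable, Dict, List


def run_tool_loop(
    chat: Callable[[List[Dict[str, Any]]], Dict[str, Any]],
    execute: Callable[[str, Dict[str, Any]], str],
    messages: List[Dict[str, Any]],
    max_rounds: int = 5,
) -> str:
    """Call the model repeatedly, executing tool calls, until it answers in plain text."""
    for _ in range(max_rounds):
        msg = chat(messages)
        messages.append(msg)
        tool_calls = msg.get("tool_calls") or []
        if not tool_calls:
            return msg.get("content") or ""
        for tc in tool_calls:
            args = json.loads(tc["function"]["arguments"] or "{}")
            result = execute(tc["function"]["name"], args)
            messages.append(
                {"role": "tool", "content": result, "tool_call_id": tc["id"], "name": tc["function"]["name"]}
            )
    return ""


# Demo with a scripted fake model: one tool round, then a final answer.
script = iter([
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "call_1", "function": {"name": "python_executor", "arguments": '{"code": "print(2+2)"}'}}
    ]},
    {"role": "assistant", "content": "The result is 4."},
])
answer = run_tool_loop(lambda msgs: next(script), lambda name, args: "4", [{"role": "user", "content": "2+2?"}])
print(answer)
```

Swapping the fake `chat` for a real `client.chat.completions.create(...)` call (and converting its message object with `model_dump()`) yields a multi-round agent loop; `max_rounds` bounds runaway tool chains.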

Output Example:
```text
[INFO:ms_enclave] Local sandbox manager started
[INFO:ms_enclave] Created and started sandbox a3odo8es of type docker
[INFO:ms_enclave] [📦 a3odo8es] hi from sandbox
[INFO:ms_enclave] [📦 a3odo8es] hello.txt
Model output:====================
- Python printed: `hi from sandbox`
- Computed `123456 * 654321 = 80779853376`
- The `/sandbox/data` directory contains one file: `hello.txt`

Summary: The sandbox successfully executed the print and multiplication tasks, and the data directory listing revealed a single file named `hello.txt`.
Executed tools:====================
- python_executor => {'code': "print('hi from sandbox')\n123456 * 654321"} => success
- shell_executor => {'command': 'ls /sandbox/data'} => success
[INFO:ms_enclave] Cleaning up 1 sandboxes
[INFO:ms_enclave] Deleted sandbox a3odo8es
[INFO:ms_enclave] Local sandbox manager stopped
```

## Summary

- **For experiments, scripts, unit tests** -> Recommended: **SandboxFactory**.
- **For backend services, task scheduling, production environments** -> Recommended: **SandboxManagerFactory**.
- **Need the model to autonomously call tools** -> Use **SandboxManager** combined with OpenAI Tools.

docs/en/mkdocs.yml

Lines changed: 7 additions & 0 deletions

@@ -69,6 +69,13 @@ theme:

# Add social links (bottom right)
extra:
  alternate:
    - name: English
      link: https://ms-enclave.readthedocs.io/en/latest/
      lang: en
    - name: 中文
      link: https://ms-enclave.readthedocs.io/zh-cn/latest/
      lang: zh
  social:
    - icon: fontawesome/brands/github
      link: https://github.com/modelscope/ms-enclave
