diff --git a/.gitignore b/.gitignore
index 0f0447d..447456f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,4 +4,18 @@ tmp*
 __pycache__
 *.egg*
 node_modules/
-*.log
\ No newline at end of file
+*.log
+
+# Development environment
+venv/
+.env
+
+# AI assistant folders
+.claude/
+.cursor/
+.vscode/
+
+# Model weights (large files)
+*.bin
+*.safetensors
+gpt-oss-*/
\ No newline at end of file
diff --git a/README.md b/README.md
index 7d4f279..91e5237 100644
--- a/README.md
+++ b/README.md
@@ -135,6 +135,23 @@ This repository provides a collection of reference implementations:
 - On Linux: These reference implementations require CUDA
 - On Windows: These reference implementations have not been tested on Windows. Try using solutions like Ollama if you are trying to run the model locally.
 
+#### Windows Setup Notes
+
+If you're developing on Windows, you may need to install additional dependencies:
+
+```shell
+# Install Windows-compatible readline for interactive features
+pip install pyreadline3
+
+# Install numpy for PyTorch compatibility
+pip install numpy
+```
+
+For production inference on Windows, consider using:
+- [Ollama](https://ollama.com/) for local model serving
+- [LM Studio](https://lmstudio.ai/) for desktop applications
+- Cloud-based solutions like [Groq](https://groq.com/) or [Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai/)
+
 ### Installation
 
 If you want to try any of the code you can install it directly from [PyPI](https://pypi.org/project/gpt-oss/)
diff --git a/examples/README.md b/examples/README.md
new file mode 100644
index 0000000..15f16bb
--- /dev/null
+++ b/examples/README.md
@@ -0,0 +1,103 @@
+# GPT-OSS Examples
+
+This directory contains practical examples demonstrating how to use GPT-OSS models with different frameworks and tools.
+
+## 📁 Examples Overview
+
+### 🤖 **Agents SDK Examples**
+- **Python**: `agents-sdk-python/` - Example using OpenAI's Agents SDK with Python
+- **JavaScript**: `agents-sdk-js/` - Example using OpenAI's Agents SDK with TypeScript/JavaScript
+
+### 🎨 **Streamlit Chat Interface**
+- **Streamlit**: `streamlit/` - Interactive web-based chat interface using Streamlit
+
+## 🚀 Quick Start
+
+### Prerequisites
+- Python 3.12+
+- Node.js 18+ (for JavaScript examples)
+- GPT-OSS model running locally (via Ollama, vLLM, or other inference backend)
+
+### Running the Examples
+
+#### 1. Agents SDK (Python)
+```bash
+cd examples/agents-sdk-python
+pip install -r requirements.txt # if requirements.txt exists
+python example.py
+```
+
+#### 2. Agents SDK (JavaScript)
+```bash
+cd examples/agents-sdk-js
+npm install
+npm start
+```
+
+#### 3. Streamlit Chat Interface
+```bash
+cd examples/streamlit
+pip install streamlit requests
+streamlit run streamlit_chat.py
+```
+
+## 🔧 Configuration
+
+### Local Model Setup
+Most examples expect a GPT-OSS model running locally; a quick connectivity check is sketched after this list. You can use:
+
+- **Ollama**: `ollama run gpt-oss:20b`
+- **vLLM**: `vllm serve openai/gpt-oss-20b`
+- **Local Responses API**: Run the included responses API server
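+
+Once one of these backends is up, you can confirm the endpoint responds before launching an example. A minimal sketch, assuming an OpenAI-compatible server on Ollama's default port (adjust the base URL for vLLM or the Responses API server):
+
+```python
+from openai import OpenAI
+
+# The API key is required by the client but ignored by local backends.
+client = OpenAI(api_key="local", base_url="http://localhost:11434/v1")
+
+# Listing models is a cheap way to verify the server is reachable.
+for model in client.models.list():
+    print(model.id)
+```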
+
+### Environment Variables
+Some examples may require environment variables:
+```bash
+export OPENAI_API_KEY="local" # for local models
+export OPENAI_BASE_URL="http://localhost:11434/v1" # Ollama default
+```
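+
+The official OpenAI SDKs read both variables automatically, so exporting them is usually all the configuration you need. A minimal sketch (the model name is an assumption; match it to whatever your local server exposes):
+
+```python
+from openai import OpenAI
+
+# Picks up OPENAI_API_KEY and OPENAI_BASE_URL from the environment.
+client = OpenAI()
+
+response = client.chat.completions.create(
+    model="gpt-oss:20b",
+    messages=[{"role": "user", "content": "Say hello."}],
+)
+print(response.choices[0].message.content)
+```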
+
+## 📚 Example Details
+
+### Agents SDK Examples
+These examples demonstrate:
+- Setting up GPT-OSS with OpenAI's Agents SDK
+- Using function calling and tools
+- MCP (Model Context Protocol) integration
+- Streaming responses
+
+### Streamlit Chat Interface
+Features:
+- Interactive web-based chat
+- Model selection (large/small)
+- Reasoning effort control
+- Function calling support
+- Browser search integration
+- Debug mode for development
+
+## 🛠️ Troubleshooting
+
+### Common Issues
+
+1. **Connection Refused**: Make sure your local model server is running
+2. **Model Not Found**: Verify the model name matches your local setup
+3. **Port Conflicts**: Check that ports 11434 (Ollama) or 8000 (vLLM) are available
+
+### Getting Help
+- Check the main [README.md](../README.md) for setup instructions
+- Review the [awesome-gpt-oss.md](../awesome-gpt-oss.md) for additional resources
+- Open an issue on GitHub for bugs or questions
+
+## 🤝 Contributing
+
+We welcome improvements to these examples! Please:
+- Add clear comments and documentation
+- Include setup instructions
+- Test with different model backends
+- Follow the project's coding standards
+
+## 📖 Related Documentation
+
+- [Main README](../README.md) - Project overview and setup
+- [Tools Documentation](../gpt_oss/tools/) - Available tools and their usage
+- [Responses API](../gpt_oss/responses_api/) - API server implementation
diff --git a/examples/agents-sdk-js/README.md b/examples/agents-sdk-js/README.md
new file mode 100644
index 0000000..cb81963
--- /dev/null
+++ b/examples/agents-sdk-js/README.md
@@ -0,0 +1,180 @@
+# JavaScript Agents SDK Example
+
+This example demonstrates how to use GPT-OSS with OpenAI's Agents SDK in TypeScript/JavaScript.
+
+## 🚀 Quick Start
+
+### Prerequisites
+- Node.js 18+
+- GPT-OSS model running locally (Ollama, vLLM, etc.)
+- npm or yarn package manager
+
+### Installation
+
+1. **Install dependencies:**
+```bash
+npm install
+```
+
+2. **Install global dependencies (for MCP server):**
+```bash
+npm install -g npx
+```
+
+### Running the Example
+
+1. **Start your GPT-OSS model:**
+```bash
+# Using Ollama
+ollama run gpt-oss:20b
+
+# Using vLLM
+vllm serve openai/gpt-oss-20b --port 11434
+```
+
+2. **Run the example:**
+```bash
+npm start
+```
+
+## 🔧 Configuration
+
+### Environment Setup
+The example is configured to use a local model server:
+
+```typescript
+const openai = new OpenAI({
+  apiKey: "local",
+  baseURL: "http://localhost:11434/v1",
+});
+```
+
+### Model Configuration
+```typescript
+const agent = new Agent({
+  name: "My Agent",
+  instructions: "You are a helpful assistant.",
+  tools: [searchTool],
+  model: "gpt-oss:20b-test", // Model name for local server
+  mcpServers: [mcpServer],
+});
+```
+
+## 🛠️ Features Demonstrated
+
+### Function Calling
+The example includes a weather tool:
+```typescript
+const searchTool = tool({
+  name: "get_current_weather",
+  description: "Get the current weather in a given location",
+  parameters: z.object({
+    location: z.string(),
+  }),
+  execute: async ({ location }) => {
+    return `The weather in ${location} is sunny.`;
+  },
+});
+```
+
+### MCP (Model Context Protocol) Integration
+Filesystem access via MCP server:
+```typescript
+const mcpServer = new MCPServerStdio({
+  name: "Filesystem MCP Server, via npx",
+  fullCommand: `npx -y @modelcontextprotocol/server-filesystem ${samplesDir}`,
+});
+```
+
+### Streaming Responses
+Real-time response streaming:
+```typescript
+const result = await run(agent, input, {
+  stream: true,
+});
+
+for await (const event of result) {
+  // Process streaming events
+}
+```
+
+## 📝 Code Structure
+
+### Main Components
+1. **Client Setup**: OpenAI client configuration for local model
+2. **MCP Server**: Filesystem access server
+3. **Tool Definition**: Custom function calling tool with Zod validation
+4. **Agent Creation**: GPT-OSS agent with tools and MCP
+5. **Streaming Execution**: Real-time response processing
+
+### Event Types
+- `raw_model_stream_event`: Raw model responses and reasoning
+- `run_item_stream_event`: Tool calls and function executions
+
+### TypeScript Features
+- **Zod Validation**: Type-safe parameter validation
+- **Async/Await**: Modern JavaScript async patterns
+- **Type Safety**: Full TypeScript support
+
+## 🐛 Troubleshooting
+
+### Common Issues
+
+1. **"npx is not installed"**
+   ```bash
+   npm install -g npx
+   ```
+
+2. **Connection refused to localhost:11434**
+   - Ensure your model server is running
+   - Check the port number matches your setup
+
+3. **TypeScript compilation errors**
+   ```bash
+   # Check TypeScript version
+   npx tsc --version
+
+   # Install missing types
+   npm install @types/node
+   ```
+
+4. **Module resolution errors**
+   ```bash
+   # Clear npm cache
+   npm cache clean --force
+
+   # Reinstall dependencies
+   rm -rf node_modules package-lock.json
+   npm install
+   ```
+
+### Debug Mode
+Enable verbose logging:
+```typescript
+// Add to your code
+console.log('Event:', event);
+```
+
+## 📦 Package Scripts
+
+- `npm start`: Run the example with tsx
+- `npm test`: Run tests (placeholder)
+- `npx tsc`: Compile TypeScript
+- `npx tsx index.ts`: Run directly with tsx
+
+## 🔗 Related Documentation
+
+- [OpenAI Agents SDK](https://github.com/openai/agents) - Official SDK documentation
+- [Model Context Protocol](https://modelcontextprotocol.io/) - MCP specification
+- [Zod](https://zod.dev/) - TypeScript-first schema validation
+- [tsx](https://github.com/esbuild-kit/tsx) - TypeScript execution engine
+- [Main Examples README](../README.md) - Overview of all examples
+
+## 🤝 Contributing
+
+Improvements welcome! Please:
+- Add more tool examples
+- Enhance error handling
+- Add configuration options
+- Improve TypeScript types
+- Add unit tests
diff --git a/examples/agents-sdk-python/README.md b/examples/agents-sdk-python/README.md
new file mode 100644
index 0000000..7b4f72b
--- /dev/null
+++ b/examples/agents-sdk-python/README.md
@@ -0,0 +1,144 @@
+# Python Agents SDK Example
+
+This example demonstrates how to use GPT-OSS with OpenAI's Agents SDK in Python.
+
+## 🚀 Quick Start
+
+### Prerequisites
+- Python 3.12+
+- GPT-OSS model running locally (Ollama, vLLM, etc.)
+- Node.js (for MCP server)
+
+### Installation
+
+1. **Install Python dependencies:**
+```bash
+pip install -e .
+```
+
+2. **Install Node.js dependencies (for MCP server):**
+```bash
+npm install -g npx
+```
+
+### Running the Example
+
+1. **Start your GPT-OSS model:**
+```bash
+# Using Ollama
+ollama run gpt-oss:20b
+
+# Using vLLM
+vllm serve openai/gpt-oss-20b --port 11434
+```
+
+2. **Run the example:**
+```bash
+python example.py
+```
+
+## 🔧 Configuration
+
+### Environment Setup
+The example is configured to use a local model server:
+
+```python
+openai_client = AsyncOpenAI(
+    api_key="local",
+    base_url="http://localhost:11434/v1",
+)
+```
+
+### Model Configuration
+```python
+agent = Agent(
+    name="My Agent",
+    instructions="You are a helpful assistant.",
+    tools=[search_tool],
+    model="gpt-oss:20b-test",  # Model name for local server
+    mcp_servers=[mcp_server],
+)
+```
+
+## 🛠️ Features Demonstrated
+
+### Function Calling
+The example includes a weather tool:
+```python
+@function_tool
+async def search_tool(location: str) -> str:
+    return f"The weather in {location} is sunny."
+```
+
+### MCP (Model Context Protocol) Integration
+Filesystem access via MCP server:
+```python
+mcp_server = MCPServerStdio(
+    name="Filesystem MCP Server, via npx",
+    params={
+        "command": "npx",
+        "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
+    },
+)
+```
+
+### Streaming Responses
+Real-time response streaming:
+```python
+result = Runner.run_streamed(agent, user_input)
+async for event in result.stream_events():
+    ...  # Process streaming events
+```
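+
+The events carry different payloads depending on their type (the names match the Event Types list below). A dispatch sketch that continues the snippet above, assuming current `openai-agents` event names; exact payload classes may differ between SDK versions:
+
+```python
+from openai.types.responses import ResponseTextDeltaEvent
+
+async for event in result.stream_events():
+    if event.type == "raw_response_event":
+        # Raw model output; print text deltas as they arrive.
+        if isinstance(event.data, ResponseTextDeltaEvent):
+            print(event.data.delta, end="", flush=True)
+    elif event.type == "agent_updated_stream_event":
+        print(f"\n[agent changed to {event.new_agent.name}]")
+    elif event.type == "run_item_stream_event":
+        # Tool calls, tool outputs, and completed messages land here.
+        print(f"\n[{event.item.type}]")
+```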
+
+## 📝 Code Structure
+
+### Main Components
+1. **Client Setup**: OpenAI client configuration for local model
+2. **MCP Server**: Filesystem access server
+3. **Tool Definition**: Custom function calling tool
+4. **Agent Creation**: GPT-OSS agent with tools and MCP
+5. **Streaming Execution**: Real-time response processing
+
+### Event Types
+- `raw_response_event`: Raw model responses
+- `agent_updated_stream_event`: Agent state changes
+- `run_item_stream_event`: Tool calls and outputs
+
+## 🐛 Troubleshooting
+
+### Common Issues
+
+1. **"npx is not installed"**
+   ```bash
+   npm install -g npx
+   ```
+
+2. **Connection refused to localhost:11434**
+   - Ensure your model server is running
+   - Check the port number matches your setup
+
+3. **Model not found**
+   - Verify the model name matches your local server
+   - Check that the model is properly loaded
+
+### Debug Mode
+Enable tracing for detailed logs:
+```python
+# Remove this line to enable tracing
+set_tracing_disabled(True)
+```
+
+## 🔗 Related Documentation
+
+- [OpenAI Agents SDK](https://github.com/openai/agents) - Official SDK documentation
+- [Model Context Protocol](https://modelcontextprotocol.io/) - MCP specification
+- [Main Examples README](../README.md) - Overview of all examples
+- [GPT-OSS Tools](../../gpt_oss/tools/) - Available tools for integration
+
+## 🤝 Contributing
+
+Improvements welcome! Please:
+- Add more tool examples
+- Enhance error handling
+- Add configuration options
+- Improve documentation
diff --git a/examples/streamlit/README.md b/examples/streamlit/README.md
new file mode 100644
index 0000000..5dedd2d
--- /dev/null
+++ b/examples/streamlit/README.md
@@ -0,0 +1,190 @@
+# Streamlit Chat Interface
+
+This example provides an interactive web-based chat interface for GPT-OSS using Streamlit.
+
+## 🚀 Quick Start
+
+### Prerequisites
+- Python 3.12+
+- GPT-OSS model running locally (Ollama, vLLM, etc.)
+- Streamlit
+
+### Installation
+
+1. **Install dependencies:**
+```bash
+pip install streamlit requests
+```
+
+2. **Install GPT-OSS (if not already installed):**
+```bash
+pip install -e ../../  # installs the repo root; run this from examples/streamlit/
+```
+
+### Running the Example
+
+1. **Start your GPT-OSS model server:**
+```bash
+# Using vLLM (recommended)
+vllm serve openai/gpt-oss-20b --port 8000
+
+# Using Ollama (`ollama run` has no --port flag; set OLLAMA_HOST instead)
+OLLAMA_HOST=127.0.0.1:8000 ollama serve
+ollama run gpt-oss:20b
+
+# Using local Responses API server
+python -m gpt_oss.responses_api.serve --port 8000
+```
+
+2. **Run the Streamlit app:**
+```bash
+streamlit run streamlit_chat.py
+```
+
+3. **Open your browser** to the URL shown in the terminal (usually `http://localhost:8501`)
+
+## 🎨 Features
+
+### Model Selection
+- **Large Model**: Uses `localhost:8000` (gpt-oss-120b)
+- **Small Model**: Uses `localhost:8081` (gpt-oss-20b)
+
+### Chat Interface
+- **Interactive Chat**: Real-time conversation with the model
+- **Message History**: View and continue previous conversations
+- **Streaming Responses**: See responses as they're generated
+
+### Configuration Options
+
+#### Reasoning Effort
+- **Low**: Fast responses, minimal reasoning
+- **Medium**: Balanced speed and reasoning
+- **High**: Maximum reasoning effort
+
+#### Tools and Functions
+- **Browser Search**: Enable web search capabilities
+- **Function Calling**: Use custom functions
+- **Apply Patch**: File manipulation capabilities
+
+#### Generation Parameters
+- **Temperature**: Control response randomness (0.0-1.0)
+- **Max Output Tokens**: Limit response length (1000-20000)
+
+### Debug Features
+- **Debug Mode**: View raw conversation data
+- **JSON Output**: See the full conversation structure
+- **Tool Interactions**: Monitor function calls and responses
+
+## 🔧 Configuration
+
+### Environment Variables
+```bash
+# Optional: Set default model server URLs
+export GPTOSS_LARGE_MODEL_URL="http://localhost:8000/v1"
+export GPTOSS_SMALL_MODEL_URL="http://localhost:8081/v1"
+```
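+
+If you adapt the app to honor these variables, a small helper keeps the fallback defaults in one place. A sketch (the variable names above are this example's own convention, not something the OpenAI SDK reads automatically):
+
+```python
+import os
+
+def model_url(large: bool) -> str:
+    """Resolve the server URL from the environment, with local defaults."""
+    if large:
+        return os.environ.get("GPTOSS_LARGE_MODEL_URL", "http://localhost:8000/v1")
+    return os.environ.get("GPTOSS_SMALL_MODEL_URL", "http://localhost:8081/v1")
+```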
+
+### Custom Functions
+You can define custom functions in the sidebar:
+
+```json
+{
+  "type": "object",
+  "properties": {
+    "location": {
+      "type": "string",
+      "description": "The city and state, e.g. San Francisco, CA"
+    }
+  },
+  "required": ["location"]
+}
+```
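+
+Behind the scenes, a schema like this becomes the `parameters` field of a function tool. A minimal sketch of the equivalent Responses API request (the tool name and model are assumptions; match them to your setup):
+
+```python
+from openai import OpenAI
+
+client = OpenAI(api_key="local", base_url="http://localhost:8000/v1")
+
+response = client.responses.create(
+    model="gpt-oss-120b",
+    input="What's the weather in San Francisco?",
+    tools=[{
+        "type": "function",
+        "name": "get_weather",
+        "description": "Get the weather for a location",
+        "parameters": {  # the JSON schema from the sidebar goes here
+            "type": "object",
+            "properties": {
+                "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}
+            },
+            "required": ["location"],
+        },
+    }],
+)
+print(response.output)
+```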
**"Connection refused"** + - Ensure your model server is running + - Check the port numbers (8000 for large, 8081 for small) + - Verify the server supports the Responses API + +2. **"Model not found"** + - Check that your model server has the correct model loaded + - Verify the model name in your server configuration + +3. **Streamlit not starting** + ```bash + # Check Streamlit installation + streamlit --version + + # Reinstall if needed + pip install --upgrade streamlit + ``` + +4. **Browser search not working** + - Ensure you have an Exa API key set + - Check that the browser tool is properly configured + +### Debug Mode +Enable debug mode in the sidebar to see: +- Raw conversation data +- Tool interaction logs +- Model configuration details + +## 📊 Performance Tips + +### For Better Performance +- Use the smaller model for faster responses +- Set reasoning effort to "low" for quick interactions +- Limit max output tokens for shorter responses +- Use local model servers for lower latency + +### For Development +- Enable debug mode to monitor interactions +- Use the JSON output to understand the conversation flow +- Test with different model configurations + +## 🔗 Related Documentation + +- [Streamlit Documentation](https://docs.streamlit.io/) - Streamlit framework guide +- [GPT-OSS Main README](../../README.md) - Project overview +- [Responses API](../gpt_oss/responses_api/) - API server documentation +- [Tools Documentation](../gpt_oss/tools/) - Available tools + +## 🤝 Contributing + +Improvements welcome! Please: +- Add new UI features +- Enhance error handling +- Improve accessibility +- Add more configuration options +- Create custom themes diff --git a/gpt_oss/evals/README.md b/gpt_oss/evals/README.md index f0713dc..c8572e9 100644 --- a/gpt_oss/evals/README.md +++ b/gpt_oss/evals/README.md @@ -1,4 +1,157 @@ -# `gpt_oss.evals` +# GPT-OSS Evaluations -This module is a reincarnation of [simple-evals](https://github.com/openai/simple-evals) adapted for gpt-oss. It lets you -run GPQA and HealthBench against a runtime that supports Responses API on `localhost:8080/v1`. \ No newline at end of file +This module is a reincarnation of [simple-evals](https://github.com/openai/simple-evals) adapted for GPT-OSS. It provides evaluation frameworks for testing GPT-OSS model performance on various benchmarks. + +## 📊 Available Evaluations + +### 🧠 **GPQA (Graduate-Level Google-Proof Q&A)** +A challenging dataset of graduate-level questions across multiple domains. + +**Features:** +- 448 multiple-choice questions +- Graduate-level difficulty +- Multi-domain coverage +- Detailed reasoning evaluation + +### 🏥 **HealthBench** +A medical reasoning benchmark for evaluating healthcare-related capabilities. + +**Features:** +- Medical reasoning questions +- Clinical decision support scenarios +- Healthcare knowledge assessment + +## 🚀 Quick Start + +### Prerequisites +- GPT-OSS model running with Responses API +- Python 3.12+ +- Required dependencies: `pip install -e .[eval]` + +### Running Evaluations + +#### 1. Start Your Model Server +```bash +# Using vLLM +vllm serve openai/gpt-oss-20b --port 8080 + +# Using Ollama with Responses API +ollama run gpt-oss:20b --port 8080 + +# Using local Responses API server +python -m gpt_oss.responses_api.serve --port 8080 +``` + +#### 2. Run GPQA Evaluation +```bash +python -m gpt_oss.evals --eval gpqa --model gpt-oss-20b +``` + +#### 3. 
+
+### Visualization
+Use the provided analysis tools to visualize results:
+```bash
+python -m gpt_oss.evals.report --results results.json
+```
+
+## 🐛 Troubleshooting
+
+### Common Issues
+
+1. **Connection Refused**: Ensure model server is running on correct port
+2. **Model Not Found**: Verify model name matches your setup
+3. **Memory Issues**: Reduce batch size or use smaller model
+
+### Debug Mode
+Enable verbose logging:
+```bash
+python -m gpt_oss.evals --eval gpqa --verbose
+```
+
+## 📖 Related Documentation
+
+- [Main README](../../README.md) - Project overview
+- [Responses API](../responses_api/) - API server implementation
+- [Evaluation Types](types.py) - Evaluation interface definitions
+- [Simple Evals](https://github.com/openai/simple-evals) - Original evaluation framework
+
+## 🤝 Contributing
+
+We welcome evaluation improvements! Please:
+- Add new benchmark datasets
+- Improve evaluation metrics
+- Enhance result analysis
+- Document evaluation methodologies
\ No newline at end of file
diff --git a/gpt_oss/tools/README.md b/gpt_oss/tools/README.md
new file mode 100644
index 0000000..d4ad46b
--- /dev/null
+++ b/gpt_oss/tools/README.md
@@ -0,0 +1,211 @@
+# GPT-OSS Tools
+
+This directory contains the tools that GPT-OSS models can use during inference. These tools enable the models to perform actions like web browsing, code execution, and file manipulation.
+
+## 🛠️ Available Tools
+
+### 🌐 **Browser Tool** (`simple_browser/`)
+A web browsing tool that allows the model to search and read web pages.
+
+**Features:**
+- Web search functionality
+- Page content extraction
+- Scrolling through long pages
+- Citation support for answers
+
+**Usage:**
+```python
+from gpt_oss.tools.simple_browser import SimpleBrowserTool
+from gpt_oss.tools.simple_browser.backend import ExaBackend
+
+backend = ExaBackend(source="web")
+browser_tool = SimpleBrowserTool(backend=backend)
+```
+
+**⚠️ Note:** This is for educational purposes. Implement your own browsing environment for production use.
+
+### 🐍 **Python Tool** (`python_docker/`)
+A Python code execution tool that runs code in a Docker container.
+
+**Features:**
+- Safe code execution in isolated environment
+- Stateless execution model
+- Support for calculations and data processing
+- Chain-of-thought reasoning integration
+
+**Usage:**
+```python
+from gpt_oss.tools.python_docker.docker_tool import PythonTool
+
+python_tool = PythonTool()
+```
+
+**⚠️ Note:** Runs in a permissive Docker container. Implement proper security restrictions for production.
+
+### 📝 **Apply Patch Tool** (`apply_patch.py`)
+A tool for creating, updating, or deleting files locally.
+
+**Features:**
+- File creation and modification
+- Patch application
+- Safe file operations
+
+**Usage:**
+```python
+from gpt_oss.tools.apply_patch import apply_patch_tool
+```
+
+## 🔧 Tool Integration
+
+### Using Tools with Harmony Format
+Tools are integrated using the Harmony response format:
+
+```python
+from openai_harmony import SystemContent, Message, Conversation, Role
+
+# Create system message with tools
+system_content = SystemContent.new().with_tools([
+    browser_tool.tool_config,
+    python_tool.tool_config
+])
+
+system_message = Message.from_role_and_content(Role.SYSTEM, system_content)
+```
+
+### Tool Processing
+```python
+# Parse model output
+messages = encoding.parse_messages_from_completion_tokens(output_tokens, Role.ASSISTANT)
+last_message = messages[-1]
+
+# Route to appropriate tool
+if last_message.recipient.startswith("browser"):
+    response_messages = await browser_tool.process(last_message)
+elif last_message.recipient == "python":
+    response_messages = await python_tool.process(last_message)
+```
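+
+Routing is one half of the loop; the tool's reply then goes back into the conversation and the model is sampled again until it addresses the user instead of a tool. A schematic sketch of that outer loop (`generate` stands in for whatever inference call your backend provides, and the message-list handling is simplified for illustration):
+
+```python
+# Inside an async function, with `encoding` and the tools from above in scope.
+messages = list(conversation.messages)
+while True:
+    tokens = encoding.render_conversation_for_completion(
+        Conversation.from_messages(messages), Role.ASSISTANT
+    )
+    output_tokens = generate(tokens)  # hypothetical inference call
+    new_messages = encoding.parse_messages_from_completion_tokens(output_tokens, Role.ASSISTANT)
+    messages.extend(new_messages)
+    last_message = new_messages[-1]
+    if last_message.recipient is None:
+        break  # no tool recipient: this is the final answer for the user
+    if last_message.recipient.startswith("browser"):
+        messages.extend(await browser_tool.process(last_message))
+    elif last_message.recipient == "python":
+        messages.extend(await python_tool.process(last_message))
+```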
+
+## 🚀 Getting Started
+
+### Prerequisites
+- Docker (for Python tool)
+- Exa API key (for browser tool)
+- GPT-OSS model with Harmony format support
+
+### Environment Setup
+```bash
+# Set up environment variables
+export EXA_API_KEY="your_exa_api_key" # for browser tool
+export DOCKER_HOST="unix:///var/run/docker.sock" # for Python tool
+```
+
+### Basic Example
+```python
+import asyncio
+from gpt_oss.tools.simple_browser import SimpleBrowserTool
+from gpt_oss.tools.simple_browser.backend import ExaBackend
+from gpt_oss.tools.python_docker.docker_tool import PythonTool
+from openai_harmony import SystemContent, Message, Conversation, Role
+
+async def main():
+    # Initialize tools (the browser tool needs a search backend)
+    backend = ExaBackend(source="web")
+    browser_tool = SimpleBrowserTool(backend=backend)
+    python_tool = PythonTool()
+
+    # Create conversation with tools
+    system_content = SystemContent.new().with_tools([
+        browser_tool.tool_config,
+        python_tool.tool_config
+    ])
+
+    conversation = Conversation.from_messages([
+        Message.from_role_and_content(Role.SYSTEM, system_content),
+        Message.from_role_and_content(Role.USER, "What's the weather in San Francisco?")
+    ])
+
+    # Process with your model...
+
+asyncio.run(main())
+```
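+
+Where the comment says "process with your model", the conversation is rendered to tokens, completed by the model, and parsed back into messages. A sketch of the rendering step using `openai_harmony` (the completion call itself depends on your inference backend):
+
+```python
+from openai_harmony import HarmonyEncodingName, load_harmony_encoding
+
+encoding = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)
+prefill_tokens = encoding.render_conversation_for_completion(conversation, Role.ASSISTANT)
+# Feed prefill_tokens to your model, then parse the completion:
+# messages = encoding.parse_messages_from_completion_tokens(output_tokens, Role.ASSISTANT)
+```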
+
+## 🔒 Security Considerations
+
+### Browser Tool
+- Implement your own browsing environment
+- Add rate limiting and access controls
+- Consider content filtering
+
+### Python Tool
+- Use restricted Docker containers
+- Implement code execution limits
+- Add security sandboxing
+
+### Apply Patch Tool
+- Validate file paths and operations
+- Implement backup mechanisms
+- Add user confirmation for destructive operations
+
+## 📚 Advanced Usage
+
+### Custom Tool Development
+You can create custom tools by implementing the `Tool` interface:
+
+```python
+from gpt_oss.tools.tool import Tool
+from openai_harmony import Message
+
+class CustomTool(Tool):
+    @property
+    def name(self) -> str:
+        return "custom_tool"
+
+    async def _process(self, message: Message):
+        # Implement your tool logic here
+        yield Message(...)
+
+    def instruction(self) -> str:
+        return "Description of what this tool does"
+```
+
+### Tool Configuration
+Tools can be configured with different backends and settings:
+
+```python
+# Browser tool with custom backend
+from gpt_oss.tools.simple_browser.backend import CustomBackend
+
+backend = CustomBackend(
+    source="web",
+    max_results=10,
+    include_domains=["example.com"]
+)
+browser_tool = SimpleBrowserTool(backend=backend)
+```
+
+## 🐛 Troubleshooting
+
+### Common Issues
+
+1. **Docker Connection Error**: Ensure Docker is running and accessible
+2. **Exa API Error**: Verify your API key is valid and has sufficient credits
+3. **Tool Not Found**: Check that the tool is properly registered in the system message
+
+### Debug Mode
+Enable debug mode to see tool interactions:
+
+```python
+import logging
+logging.basicConfig(level=logging.DEBUG)
+```
+
+## 📖 Related Documentation
+
+- [Main README](../../README.md) - Project overview
+- [Harmony Format](https://github.com/openai/harmony) - Response format documentation
+- [Tool Interface](tool.py) - Base tool implementation
+- [Examples](../../examples/) - Usage examples
+
+## 🤝 Contributing
+
+We welcome tool improvements and new tool implementations! Please:
+- Follow the existing tool interface
+- Add comprehensive documentation
+- Include security considerations
+- Provide usage examples