diff --git a/examples/README.md b/examples/README.md
new file mode 100644
index 0000000..2b79718
--- /dev/null
+++ b/examples/README.md
@@ -0,0 +1,72 @@
+# gpt-oss Examples
+
+This directory contains practical examples demonstrating how to use gpt-oss models in different scenarios.
+
+## Available Examples
+
+### 🤖 Agents SDK Examples
+
+- **[JavaScript/TypeScript](./agents-sdk-js/)**: Use gpt-oss with the OpenAI Agents SDK in Node.js
+- **[Python](./agents-sdk-python/)**: Use gpt-oss with the OpenAI Agents SDK in Python
+
+These examples show how to create intelligent agents that can:
+
+- Use custom tools and functions
+- Integrate with MCP (Model Context Protocol) servers
+- Stream responses in real time
+- Display reasoning and tool calls
+
+### 💬 Streamlit Chat Interface
+
+- **[Streamlit Chat](./streamlit/)**: A web-based chat interface for gpt-oss
+
+This example demonstrates:
+
+- Real-time streaming chat interface
+- Configurable model parameters
+- Tool integration (functions and browser search)
+- Debug mode for API inspection
+- Responsive web design
+
+## Quick Start
+
+1. **Choose an example** based on your needs:
+
+   - Use the **Agents SDK** for building intelligent applications
+   - Use **Streamlit** for quick web interfaces
+
+2. **Set up a gpt-oss server**:
+
+   ```bash
+   # With Ollama (recommended for local development)
+   ollama pull gpt-oss:20b
+   ollama serve
+
+   # Or with vLLM
+   vllm serve openai/gpt-oss-20b
+   ```
+
+3. **Follow the specific setup instructions** in each example's README
+
+## Prerequisites
+
+- Python 3.12+ (for Python examples)
+- Node.js 18+ (for JavaScript examples)
+- A running gpt-oss server (Ollama, vLLM, etc.)
+- Basic familiarity with the chosen framework
+
+## Getting Help
+
+- Check the individual README files for detailed setup instructions
+- Ensure your gpt-oss server is running and accessible
+- Use debug modes to inspect API responses
+- Refer to the main [gpt-oss documentation](../README.md) for model details
+
+## Contributing
+
+Feel free to contribute new examples or improvements to existing ones! Each example should include:
+
+- Clear setup instructions
+- Prerequisites and dependencies
+- Usage examples
+- Troubleshooting tips
diff --git a/examples/agents-sdk-js/README.md b/examples/agents-sdk-js/README.md
new file mode 100644
index 0000000..4236958
--- /dev/null
+++ b/examples/agents-sdk-js/README.md
@@ -0,0 +1,65 @@
+# gpt-oss with OpenAI Agents SDK (JavaScript)
+
+This example demonstrates how to use gpt-oss models with the OpenAI Agents SDK in JavaScript/TypeScript.
+
+## Prerequisites
+
+- Node.js 18+ installed
+- Ollama installed and running locally
+- gpt-oss model downloaded in Ollama
+
+## Setup
+
+1. Install dependencies:
+
+```bash
+npm install
+```
+
+2. Make sure Ollama is running and you have the gpt-oss model:
+
+```bash
+# Install the gpt-oss-20b model
+ollama pull gpt-oss:20b
+
+# Start Ollama (if not already running)
+ollama serve
+```
+
+3. Run the example:
+
+```bash
+npm start
+```
+
+## What this example does
+
+This example creates a simple agent that:
+
+- Uses the gpt-oss-20b model via Ollama
+- Has a custom weather tool
+- Integrates with an MCP (Model Context Protocol) filesystem server
+- Streams responses in real time
+- Shows both reasoning and tool calls
+
+## Key features
+
+- **Real-time streaming**: See the model's reasoning and responses as they're generated
+- **Tool integration**: Demonstrates how to create and use custom tools
+- **MCP integration**: Shows how to connect to external services via MCP
+- **Harmony format**: Uses the harmony response format for better reasoning
+
+## Customization
+
+You can modify the example by:
+
+- Changing the model name in the agent configuration
+- Adding more tools to the `tools` array
+- Modifying the agent instructions
+- Adding different MCP servers
+
+## Troubleshooting
+
+- Make sure Ollama is running on `localhost:11434`
+- Ensure you have the correct model name (`gpt-oss:20b`)
+- Check that `npx` is available for the MCP filesystem server
diff --git a/examples/agents-sdk-js/index.ts b/examples/agents-sdk-js/index.ts
index 27cc854..fc703a7 100644
--- a/examples/agents-sdk-js/index.ts
+++ b/examples/agents-sdk-js/index.ts
@@ -57,7 +57,7 @@ const agent = new Agent({
   name: "My Agent",
   instructions: "You are a helpful assistant.",
   tools: [searchTool],
-  model: "gpt-oss:20b-test",
+  model: "gpt-oss:20b",
   mcpServers: [mcpServer],
 });
diff --git a/examples/agents-sdk-python/README.md b/examples/agents-sdk-python/README.md
new file mode 100644
index 0000000..2733c93
--- /dev/null
+++ b/examples/agents-sdk-python/README.md
@@ -0,0 +1,75 @@
+# gpt-oss with OpenAI Agents SDK (Python)
+
+This example demonstrates how to use gpt-oss models with the OpenAI Agents SDK in Python.
+
+## Prerequisites
+
+- Python 3.12+
+- Ollama installed and running locally
+- gpt-oss model downloaded in Ollama
+- `npx` available (for the MCP filesystem server)
+
+## Setup
+
+1. Install dependencies:
+
+```bash
+pip install -r requirements.txt
+```
+
+2. Make sure Ollama is running and you have the gpt-oss model:
+
+```bash
+# Install the gpt-oss-20b model
+ollama pull gpt-oss:20b
+
+# Start Ollama (if not already running)
+ollama serve
+```
+
+3. Run the example:
+
+```bash
+python example.py
+```
+
+## What this example does
+
+This example creates a simple agent that:
+
+- Uses the gpt-oss-20b model via Ollama
+- Has a custom weather tool
+- Integrates with an MCP (Model Context Protocol) filesystem server
+- Streams responses in real time
+- Shows both reasoning and tool calls
+
+## Key features
+
+- **Real-time streaming**: See the model's reasoning and responses as they're generated
+- **Tool integration**: Demonstrates how to create and use custom tools using `@function_tool`
+- **MCP integration**: Shows how to connect to external services via MCP
+- **Harmony format**: Uses the harmony response format for better reasoning
+- **Async support**: Full async/await support for better performance
+
+## Customization
+
+You can modify the example by:
+
+- Changing the model name in the agent configuration
+- Adding more tools using the `@function_tool` decorator
+- Modifying the agent instructions
+- Adding different MCP servers
+
+## Code structure
+
+- `main()`: Main async function that sets up the agent
+- `get_weather()`: Example function tool for weather queries
+- `prompt_user()`: Helper function for user input
+- MCP server setup for filesystem operations
+
+## Troubleshooting
+
+- Make sure Ollama is running on `localhost:11434`
+- Ensure you have the correct model name (`gpt-oss:20b`)
+- Check that `npx` is available for the MCP filesystem server
+- Verify Python 3.12+ is installed
diff --git a/examples/agents-sdk-python/example.py b/examples/agents-sdk-python/example.py
index af0be60..47ee130 100644
--- a/examples/agents-sdk-python/example.py
+++ b/examples/agents-sdk-python/example.py
@@ -62,7 +62,7 @@ async def get_weather(location: str) -> str:
         name="My Agent",
         instructions="You are a helpful assistant.",
         tools=[get_weather],
-        model="gpt-oss:20b-test",
+        model="gpt-oss:20b",
         mcp_servers=[mcp_server],
     )
diff --git a/examples/streamlit/README.md b/examples/streamlit/README.md
new file mode 100644
index 0000000..87df1a8
--- /dev/null
+++ b/examples/streamlit/README.md
@@ -0,0 +1,83 @@
+# gpt-oss Streamlit Chat Interface
+
+This example demonstrates how to create a web-based chat interface for gpt-oss models using Streamlit.
+
+## Prerequisites
+
+- Python 3.12+
+- A running gpt-oss server (vLLM, Ollama, or another compatible server)
+- Streamlit installed
+
+## Setup
+
+1. Install dependencies:
+
+```bash
+pip install streamlit requests
+```
+
+2. Start your gpt-oss server. For example, with Ollama:
+
+```bash
+# Install the gpt-oss-20b model
+ollama pull gpt-oss:20b
+
+# Start Ollama
+ollama serve
+```
+
+3. Run the Streamlit app:
+
+```bash
+streamlit run streamlit_chat.py
+```
+
+## Features
+
+This chat interface includes:
+
+- **Real-time streaming**: See responses as they're generated
+- **Reasoning display**: View the model's reasoning process
+- **Tool integration**: Use custom functions and browser search
+- **Configurable parameters**: Adjust temperature, reasoning effort, and more
+- **Debug mode**: View raw API responses for debugging
+- **Responsive design**: Clean, modern chat interface
+
+## Configuration
+
+The sidebar allows you to configure:
+
+- **Model selection**: Choose between different model sizes
+- **Instructions**: Customize the assistant's behavior
+- **Reasoning effort**: Set reasoning effort (low/medium/high)
+- **Functions**: Enable and configure custom function calls
+- **Browser search**: Enable web search capabilities
+- **Temperature**: Control response randomness
+- **Max output tokens**: Limit response length
+- **Debug mode**: Show raw API responses
+
+## Server Configuration
+
+The app expects a Responses API-compatible server running on:
+
+- `http://localhost:8081/v1/responses` (for small model)
+- `http://localhost:8000/v1/responses` (for large model)
+
+You can modify these URLs in the code to match your setup.
+
+## Customization
+
+You can customize the example by:
+
+- Adding new function tools
+- Modifying the UI layout
+- Adding authentication
+- Implementing different server backends
+- Adding file upload capabilities
+
+## Troubleshooting
+
+- Make sure your gpt-oss server is running and accessible
+- Check that the server URLs match your setup
+- Verify all dependencies are installed
+- Check the debug mode for API response details
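The Streamlit README in this diff points the app at local Responses API endpoints. As a minimal sketch of what a request to such an endpoint might look like (the URL and model name come from the READMEs; the `model`/`input`/`stream` field names follow the OpenAI Responses API, and the prompt text is made up), one could assemble the call with only the standard library:

```python
# Sketch: assembling a request for a local Responses API-compatible server.
# The URL and model name come from the README; actually sending the request
# is commented out because it requires a running gpt-oss server.
import json
from urllib import request

SMALL_MODEL_URL = "http://localhost:8081/v1/responses"  # small model, per the README

payload = {
    "model": "gpt-oss:20b",               # model name as served by Ollama
    "input": "Say hello in one sentence.",
    "stream": False,                      # set True to stream events instead
}

req = request.Request(
    SMALL_MODEL_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# with request.urlopen(req) as resp:      # uncomment with a live server
#     print(json.load(resp))

print(req.full_url, req.get_method())
```

Swapping in the `:8000` URL targets the large model instead; the Streamlit app performs the equivalent call with streaming enabled.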