Comprehensive documentation improvements for examples, tools, and evaluations #80

Open · wants to merge 6 commits into main
16 changes: 15 additions & 1 deletion .gitignore
@@ -4,4 +4,18 @@ tmp*
__pycache__
*.egg*
node_modules/
*.log

# Development environment
venv/
.env

# AI assistant folders
.claude/
.cursor/
.vscode/

# Model weights (large files)
*.bin
*.safetensors
gpt-oss-*/
17 changes: 17 additions & 0 deletions README.md
@@ -135,6 +135,23 @@ This repository provides a collection of reference implementations:
- On Linux: These reference implementations require CUDA
- On Windows: These reference implementations have not been tested. To run the model locally, try a solution such as Ollama.

#### Windows Setup Notes

If you're developing on Windows, you may need to install additional dependencies:

```shell
# Install Windows-compatible readline for interactive features
pip install pyreadline3

# Install numpy for PyTorch compatibility
pip install numpy
```

For production inference on Windows, consider using:
- [Ollama](https://ollama.com/) for local model serving
- [LM Studio](https://lmstudio.ai/) for desktop applications
- Cloud-based solutions like [Groq](https://groq.com/) or [Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai/)

### Installation

If you want to try any of the code, you can install it directly from [PyPI](https://pypi.org/project/gpt-oss/)
103 changes: 103 additions & 0 deletions examples/README.md
@@ -0,0 +1,103 @@
# GPT-OSS Examples

This directory contains practical examples demonstrating how to use GPT-OSS models with different frameworks and tools.

## 📁 Examples Overview

### 🤖 **Agents SDK Examples**
- **Python**: `agents-sdk-python/` - Example using OpenAI's Agents SDK with Python
- **JavaScript**: `agents-sdk-js/` - Example using OpenAI's Agents SDK with TypeScript/JavaScript

### 🎨 **Streamlit Chat Interface**
- **Streamlit**: `streamlit/` - Interactive web-based chat interface using Streamlit

## 🚀 Quick Start

### Prerequisites
- Python 3.12+
- Node.js 18+ (for JavaScript examples)
- GPT-OSS model running locally (via Ollama, vLLM, or other inference backend)

### Running the Examples

#### 1. Agents SDK (Python)
```bash
cd examples/agents-sdk-python
pip install -r requirements.txt # if requirements.txt exists
python example.py
```

#### 2. Agents SDK (JavaScript)
```bash
cd examples/agents-sdk-js
npm install
npm start
```

#### 3. Streamlit Chat Interface
```bash
cd examples/streamlit
pip install streamlit requests
streamlit run streamlit_chat.py
```

## 🔧 Configuration

### Local Model Setup
Most examples expect a GPT-OSS model running locally. You can use:

- **Ollama**: `ollama run gpt-oss:20b`
- **vLLM**: `vllm serve openai/gpt-oss-20b`
- **Local Responses API**: Run the included responses API server

### Environment Variables
Some examples may require environment variables:
```bash
export OPENAI_API_KEY="local" # for local models
export OPENAI_BASE_URL="http://localhost:11434/v1" # Ollama default
```
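
As a quick check that these variables are picked up, here is a minimal sketch using the official `openai` Python client, which reads `OPENAI_API_KEY` and `OPENAI_BASE_URL` from the environment; the model name is whatever your local server exposes:

```python
# Minimal sketch: verify the client reaches your local server.
# The openai client reads OPENAI_API_KEY and OPENAI_BASE_URL from the
# environment, so no hard-coded configuration is needed.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-oss:20b",  # adjust to match your local model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```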

## 📚 Example Details

### Agents SDK Examples
These examples demonstrate:
- Setting up GPT-OSS with OpenAI's Agents SDK
- Using function calling and tools
- MCP (Model Context Protocol) integration
- Streaming responses
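
For orientation, here is a minimal, hedged sketch of what the Python variant looks like, assuming the `openai-agents` package and an Ollama server on its default port (the tool and agent names are illustrative, not the exact contents of `example.py`):

```python
# Sketch only: a GPT-OSS agent with one function tool, pointed at a
# local OpenAI-compatible server. Assumes `pip install openai-agents`.
from agents import (
    Agent,
    Runner,
    function_tool,
    set_default_openai_api,
    set_default_openai_client,
    set_tracing_disabled,
)
from openai import AsyncOpenAI

# Route the SDK to the local server and use the Chat Completions API.
set_default_openai_client(
    AsyncOpenAI(base_url="http://localhost:11434/v1", api_key="local")
)
set_default_openai_api("chat_completions")
set_tracing_disabled(True)  # no hosted tracing for a local-only run

@function_tool
def get_current_weather(location: str) -> str:
    """Get the current weather in a given location."""
    return f"The weather in {location} is sunny."

agent = Agent(
    name="My Agent",
    instructions="You are a helpful assistant.",
    tools=[get_current_weather],
    model="gpt-oss:20b",
)

result = Runner.run_sync(agent, "What's the weather in Tokyo?")
print(result.final_output)
```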

### Streamlit Chat Interface
Features:
- Interactive web-based chat
- Model selection (large/small)
- Reasoning effort control
- Function calling support
- Browser search integration
- Debug mode for development
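
The core of such an interface is small. The following hedged sketch (not the shipped `streamlit_chat.py`, which adds model selection, reasoning control, and tools) shows a working chat loop against a local Ollama endpoint using only `streamlit` and `requests`:

```python
# Minimal Streamlit chat loop against a local OpenAI-compatible
# endpoint (assumed: Ollama on port 11434 serving gpt-oss:20b).
import requests
import streamlit as st

st.title("GPT-OSS Chat")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay prior turns on every rerun.
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)
    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",
        json={"model": "gpt-oss:20b", "messages": st.session_state.messages},
        timeout=120,
    )
    answer = resp.json()["choices"][0]["message"]["content"]
    st.session_state.messages.append({"role": "assistant", "content": answer})
    st.chat_message("assistant").write(answer)
```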

## 🛠️ Troubleshooting

### Common Issues

1. **Connection Refused**: Make sure your local model server is running
2. **Model Not Found**: Verify the model name matches your local setup
3. **Port Conflicts**: Check that ports 11434 (Ollama) or 8000 (vLLM) are available
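
To rule out the first and third issues quickly, a small hedged check (assuming the server exposes the OpenAI-compatible `/v1/models` route, which both Ollama and vLLM do) can probe the default ports:

```python
# Probe the default local ports and list the models each server reports.
import requests

for port in (11434, 8000):  # Ollama and vLLM defaults
    url = f"http://localhost:{port}/v1/models"
    try:
        data = requests.get(url, timeout=2).json()
        print(url, "->", [m["id"] for m in data.get("data", [])])
    except requests.RequestException as exc:
        print(url, "-> unreachable:", exc)
```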

### Getting Help
- Check the main [README.md](../README.md) for setup instructions
- Review the [awesome-gpt-oss.md](../awesome-gpt-oss.md) for additional resources
- Open an issue on GitHub for bugs or questions

## 🤝 Contributing

We welcome improvements to these examples! Please:
- Add clear comments and documentation
- Include setup instructions
- Test with different model backends
- Follow the project's coding standards

## 📖 Related Documentation

- [Main README](../README.md) - Project overview and setup
- [Tools Documentation](../gpt_oss/tools/) - Available tools and their usage
- [Responses API](../gpt_oss/responses_api/) - API server implementation
180 changes: 180 additions & 0 deletions examples/agents-sdk-js/README.md
@@ -0,0 +1,180 @@
# JavaScript Agents SDK Example

This example demonstrates how to use GPT-OSS with OpenAI's Agents SDK in TypeScript/JavaScript.

## 🚀 Quick Start

### Prerequisites
- Node.js 18+
- GPT-OSS model running locally (Ollama, vLLM, etc.)
- npm or yarn package manager

### Installation

1. **Install dependencies:**
```bash
npm install
```

2. **Install global dependencies (for MCP server):**
```bash
npm install -g npx
```

### Running the Example

1. **Start your GPT-OSS model:**
```bash
# Using Ollama
ollama run gpt-oss:20b

# Using vLLM
vllm serve openai/gpt-oss-20b --port 11434
```

2. **Run the example:**
```bash
npm start
```

## 🔧 Configuration

### Environment Setup
The example is configured to use a local model server:

```typescript
const openai = new OpenAI({
  apiKey: "local",
  baseURL: "http://localhost:11434/v1",
});
```

### Model Configuration
```typescript
const agent = new Agent({
  name: "My Agent",
  instructions: "You are a helpful assistant.",
  tools: [searchTool],
  model: "gpt-oss:20b-test", // Model name for local server
  mcpServers: [mcpServer],
});
```

## 🛠️ Features Demonstrated

### Function Calling
The example includes a weather tool:
```typescript
const searchTool = tool({
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  parameters: z.object({
    location: z.string(),
  }),
  execute: async ({ location }) => {
    return `The weather in ${location} is sunny.`;
  },
});
```

### MCP (Model Context Protocol) Integration
Filesystem access via MCP server:
```typescript
const mcpServer = new MCPServerStdio({
  name: "Filesystem MCP Server, via npx",
  fullCommand: `npx -y @modelcontextprotocol/server-filesystem ${samplesDir}`,
});
```

### Streaming Responses
Real-time response streaming:
```typescript
const result = await run(agent, input, {
  stream: true,
});

for await (const event of result) {
  // Event names match the "Event Types" list below
  if (event.type === "raw_model_stream_event") {
    // Raw model output and reasoning deltas
  } else if (event.type === "run_item_stream_event") {
    // Tool calls and function executions
  }
}
```

## 📝 Code Structure

### Main Components
1. **Client Setup**: OpenAI client configuration for local model
2. **MCP Server**: Filesystem access server
3. **Tool Definition**: Custom function calling tool with Zod validation
4. **Agent Creation**: GPT-OSS agent with tools and MCP
5. **Streaming Execution**: Real-time response processing

### Event Types
- `raw_model_stream_event`: Raw model responses and reasoning
- `run_item_stream_event`: Tool calls and function executions

### TypeScript Features
- **Zod Validation**: Type-safe parameter validation
- **Async/Await**: Modern JavaScript async patterns
- **Type Safety**: Full TypeScript support

## 🐛 Troubleshooting

### Common Issues

1. **"npx is not installed"**
```bash
npm install -g npx
```

2. **Connection refused to localhost:11434**
- Ensure your model server is running
- Check the port number matches your setup

3. **TypeScript compilation errors**
```bash
# Check TypeScript version
npx tsc --version

# Install missing types
npm install @types/node
```

4. **Module resolution errors**
```bash
# Clear npm cache
npm cache clean --force

# Reinstall dependencies
rm -rf node_modules package-lock.json
npm install
```

### Debug Mode
Enable verbose logging:
```typescript
// Add to your code
console.log('Event:', event);
```

## 📦 Package Scripts

- `npm start`: Run the example with tsx
- `npm test`: Run tests (placeholder)
- `npx tsc`: Compile TypeScript
- `npx tsx index.ts`: Run directly with tsx

## 🔗 Related Documentation

- [OpenAI Agents SDK](https://github.com/openai/agents) - Official SDK documentation
- [Model Context Protocol](https://modelcontextprotocol.io/) - MCP specification
- [Zod](https://zod.dev/) - TypeScript-first schema validation
- [tsx](https://github.com/esbuild-kit/tsx) - TypeScript execution engine
- [Main Examples README](../README.md) - Overview of all examples

## 🤝 Contributing

Improvements welcome! Please:
- Add more tool examples
- Enhance error handling
- Add configuration options
- Improve TypeScript types
- Add unit tests