docs: add README files for examples directory #46

Open · wants to merge 41 commits into main

Commits (41)

5204cb6
docs: add README files for examples directory
Danztee Aug 6, 2025
3834874
Update model names from gpt-oss:20b-test to gpt-oss:20b in examples
Danztee Aug 12, 2025
6d10597
evals: add chat completions API sampler (#59)
volsgd Aug 6, 2025
95d9db5
evals: log reasoning and extend max_tokens for chat completions (#62)
volsgd Aug 6, 2025
67aa38a
chat / api_server: do not include developer messages to reduce mismat…
volsgd Aug 7, 2025
ba809be
Fix typos 'lenght' -> 'length' (#78)
bodoque007 Aug 7, 2025
c5a6df7
fix f string errors in streamlit chat (#73)
NinoRisteski Aug 7, 2025
44e4935
Fixing typos and grammatical improvements. (#72)
IgnacioCorrecher Aug 7, 2025
d6c3ff7
fix: max_tokens handling in generate.py (#70)
Mirza-Samad-Ahmed-Baig Aug 7, 2025
be8001f
Support concurrent sampling from multiple Contexts (#83)
Maratyszcza Aug 7, 2025
60d7563
Update README.md
dkundel-openai Aug 8, 2025
791664f
fix packaging (#90)
LucasWilkinson Aug 8, 2025
a600b9a
Update README.md (#29)
shoumikhin Aug 10, 2025
c0dcb6e
Update README.md (#58)
hemenduroy Aug 10, 2025
66165dc
Fix typos and improve grammar in README (#61)
hasanerdemak Aug 10, 2025
5618b40
Update README.md (#71)
palenciavik Aug 10, 2025
a7588ad
Update README.md (#87)
genmnz Aug 10, 2025
4ba88ac
Update README.md (#41)
Buddhsen-tripathi Aug 10, 2025
d9c3a78
fix: typos across the codebase (#69)
bigint Aug 10, 2025
4e58cfa
[MINOR] fix: correct spelling error from "wnat" to "want" (#99)
jjestrada2 Aug 10, 2025
501cd97
a few typo fixes. (#102)
fujitatomoya Aug 10, 2025
c1871bb
Add API compatibility test (#114)
dkundel-openai Aug 11, 2025
2675cb8
Update awesome-gpt-oss.md
dkundel-openai Aug 11, 2025
738cee0
fix: Add channel parameter to PythonTool response handling (#33)
JustinTong0323 Aug 12, 2025
525a700
Fix: Corrected typos across 3 files in gpt-oss directory (#115)
CivaaBTW Aug 12, 2025
01d4eaa
fix editable build (#113)
heheda12345 Aug 12, 2025
b12d44c
docs: add table of contents to README.md (#106)
OkeyAmy Aug 12, 2025
eb190d4
fix: Markdown linting and cleanup (#107)
OkeyAmy Aug 12, 2025
047319d
docs: add docstrings to utility and helper functions (#97)
adarsh-crafts Aug 12, 2025
e65781c
feat: Add Gradio chat interface example (#89)
harshalmore31 Aug 12, 2025
1c80ba6
Feat: add command-line arguments for backend parameters (#86)
SyedaAnshrahGillani Aug 12, 2025
0189193
added GPTOSS_BUILD_METAL=1 for metal. (#84)
xiejw Aug 12, 2025
a179d03
chore: remove unused WeatherParams class and import (#82)
adarsh-crafts Aug 12, 2025
58348b4
refactor: rename search_tool for clarity (#81)
adarsh-crafts Aug 12, 2025
10281dd
fix invalid import in build-system-prompt.py (#32)
Om-Alve Aug 12, 2025
3876d35
Update simple_browser_tool.py (#40)
Shubhankar-Dixit Aug 12, 2025
1c0eaf1
triton implementation need install triton_kernels (#45)
sBobHuang Aug 12, 2025
a7659a9
bump version
dkundel-openai Aug 12, 2025
ae4be77
a few typo fixes. (#102)
fujitatomoya Aug 10, 2025
133452f
a few typo fixes. (#102)
fujitatomoya Aug 10, 2025
cb68bd0
Merge remote-tracking branch 'origin/main' into docs/add-example-readmes
Danztee Aug 12, 2025
72 changes: 72 additions & 0 deletions examples/README.md
@@ -0,0 +1,72 @@
# gpt-oss Examples

This directory contains practical examples demonstrating how to use gpt-oss models in different scenarios.

## Available Examples

### 🤖 Agents SDK Examples

- **[JavaScript/TypeScript](./agents-sdk-js/)**: Use gpt-oss with OpenAI Agents SDK in Node.js
- **[Python](./agents-sdk-python/)**: Use gpt-oss with OpenAI Agents SDK in Python

These examples show how to create intelligent agents that can:

- Use custom tools and functions
- Integrate with MCP (Model Context Protocol) servers
- Stream responses in real-time
- Display reasoning and tool calls

### 💬 Streamlit Chat Interface

- **[Streamlit Chat](./streamlit/)**: A web-based chat interface for gpt-oss

This example demonstrates:

- Real-time streaming chat interface
- Configurable model parameters
- Tool integration (functions and browser search)
- Debug mode for API inspection
- Responsive web design

## Quick Start

1. **Choose an example** based on your needs:

- Use **Agents SDK** for building intelligent applications
- Use **Streamlit** for quick web interfaces

2. **Set up a gpt-oss server**:

```bash
# With Ollama (recommended for local development)
ollama pull gpt-oss:20b
ollama serve

# Or with vLLM
vllm serve openai/gpt-oss-20b
```

3. **Follow the specific setup instructions** in each example's README
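Once a server is running, you can confirm it responds before diving into an example. This is a minimal sketch using only the standard library; it assumes Ollama's OpenAI-compatible endpoint on the default port (`localhost:11434`) — adjust `BASE_URL` for vLLM or other servers:

```python
# Smoke-test a local gpt-oss server via its OpenAI-compatible
# chat completions endpoint (assumes Ollama defaults).
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # assumption: default Ollama port


def build_chat_request(prompt: str, model: str = "gpt-oss:20b") -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def send_chat_request(prompt: str) -> str:
    """POST the payload and return the assistant's reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `send_chat_request("Say hello in one sentence.")` should return a short reply if the server is reachable; a connection error here means the server is not running or the URL is wrong.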

## Prerequisites

- Python 3.12+ (for Python examples)
- Node.js 18+ (for JavaScript examples)
- A running gpt-oss server (Ollama, vLLM, etc.)
- Basic familiarity with the chosen framework

## Getting Help

- Check the individual README files for detailed setup instructions
- Ensure your gpt-oss server is running and accessible
- Use debug modes to inspect API responses
- Refer to the main [gpt-oss documentation](../README.md) for model details

## Contributing

Feel free to contribute new examples or improvements to existing ones! Each example should include:

- Clear setup instructions
- Prerequisites and dependencies
- Usage examples
- Troubleshooting tips
65 changes: 65 additions & 0 deletions examples/agents-sdk-js/README.md
@@ -0,0 +1,65 @@
# gpt-oss with OpenAI Agents SDK (JavaScript)

This example demonstrates how to use gpt-oss models with the OpenAI Agents SDK in JavaScript/TypeScript.

## Prerequisites

- Node.js 18+ installed
- Ollama installed and running locally
- gpt-oss model downloaded in Ollama

## Setup

1. Install dependencies:

```bash
npm install
```

2. Make sure Ollama is running and you have the gpt-oss model:

```bash
# Install gpt-oss-20b model
ollama pull gpt-oss:20b

# Start Ollama (if not already running)
ollama serve
```

3. Run the example:

```bash
npm start
```

## What this example does

This example creates a simple agent that:

- Uses the gpt-oss-20b model via Ollama
- Has a custom weather tool
- Integrates with an MCP (Model Context Protocol) filesystem server
- Streams responses in real-time
- Shows both reasoning and tool calls

## Key features

- **Real-time streaming**: See the model's reasoning and responses as they're generated
- **Tool integration**: Demonstrates how to create and use custom tools
- **MCP integration**: Shows how to connect to external services via MCP
- **Harmony format**: Uses the harmony response format for better reasoning

## Customization

You can modify the example by:

- Changing the model name in the agent configuration
- Adding more tools to the `tools` array
- Modifying the agent instructions
- Adding different MCP servers

## Troubleshooting

- Make sure Ollama is running on `localhost:11434`
- Ensure you are using the correct model name (`gpt-oss:20b`)
- Check that npx is available for the MCP filesystem server
75 changes: 75 additions & 0 deletions examples/agents-sdk-python/README.md
@@ -0,0 +1,75 @@
# gpt-oss with OpenAI Agents SDK (Python)

This example demonstrates how to use gpt-oss models with the OpenAI Agents SDK in Python.

## Prerequisites

- Python 3.12+
- Ollama installed and running locally
- gpt-oss model downloaded in Ollama
- npx available (for MCP filesystem server)

## Setup

1. Install dependencies:

```bash
pip install -r requirements.txt
```

2. Make sure Ollama is running and you have the gpt-oss model:

```bash
# Install gpt-oss-20b model
ollama pull gpt-oss:20b

# Start Ollama (if not already running)
ollama serve
```

3. Run the example:

```bash
python example.py
```

## What this example does

This example creates a simple agent that:

- Uses the gpt-oss-20b model via Ollama
- Has a custom weather tool
- Integrates with an MCP (Model Context Protocol) filesystem server
- Streams responses in real-time
- Shows both reasoning and tool calls

## Key features

- **Real-time streaming**: See the model's reasoning and responses as they're generated
- **Tool integration**: Demonstrates how to create and use custom tools using `@function_tool`
- **MCP integration**: Shows how to connect to external services via MCP
- **Harmony format**: Uses the harmony response format for better reasoning
- **Async support**: Full async/await support for better performance

## Customization

You can modify the example by:

- Changing the model name in the agent configuration
- Adding more tools using the `@function_tool` decorator
- Modifying the agent instructions
- Adding different MCP servers
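As a sketch of adding another tool: the function below is a hypothetical `get_current_time` tool (not part of the example). In the real example you would decorate it with `@function_tool` from the Agents SDK and add it to the agent's tools; the sketch is kept dependency-free, and the timezone handling is a placeholder:

```python
# Hypothetical extra tool; in the example you would wrap this with
# @function_tool and register it alongside the weather tool.
from datetime import datetime, timezone


def get_current_time(city: str) -> str:
    """Return the current time, labeled for the given city.

    A real implementation would resolve the city's timezone;
    this placeholder always reports UTC.
    """
    now = datetime.now(timezone.utc).strftime("%H:%M UTC")
    return f"The current time in {city} is {now}"
```

Type hints and the docstring matter here: the SDK uses them to build the tool's schema and description that the model sees.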

## Code structure

- `main()`: Main async function that sets up the agent
- `search_tool()`: Example function tool for weather queries
- `prompt_user()`: Helper function for user input
- MCP server setup for filesystem operations

## Troubleshooting

- Make sure Ollama is running on `localhost:11434`
- Ensure you are using the correct model name (`gpt-oss:20b`)
- Check that npx is available for the MCP filesystem server
- Verify Python 3.12+ is installed
83 changes: 83 additions & 0 deletions examples/streamlit/README.md
@@ -0,0 +1,83 @@
# gpt-oss Streamlit Chat Interface

This example demonstrates how to create a web-based chat interface for gpt-oss models using Streamlit.

## Prerequisites

- Python 3.12+
- A running gpt-oss server (vLLM, Ollama, or other compatible server)
- Streamlit installed

## Setup

1. Install dependencies:

```bash
pip install streamlit requests
```

2. Start your gpt-oss server. For example, with Ollama:

```bash
# Install gpt-oss-20b model
ollama pull gpt-oss:20b

# Start Ollama
ollama serve
```

3. Run the Streamlit app:

```bash
streamlit run streamlit_chat.py
```

## Features

This chat interface includes:

- **Real-time streaming**: See responses as they're generated
- **Reasoning display**: View the model's reasoning process
- **Tool integration**: Use custom functions and browser search
- **Configurable parameters**: Adjust temperature, reasoning effort, and more
- **Debug mode**: View raw API responses for debugging
- **Responsive design**: Clean, modern chat interface

## Configuration

The sidebar allows you to configure:

- **Model selection**: Choose between different model sizes
- **Instructions**: Customize the assistant's behavior
- **Reasoning effort**: Set reasoning effort (low/medium/high)
- **Functions**: Enable and configure custom function calls
- **Browser search**: Enable web search capabilities
- **Temperature**: Control response randomness
- **Max output tokens**: Limit response length
- **Debug mode**: Show raw API responses

## Server Configuration

The app expects a Responses API compatible server running on:

- `http://localhost:8081/v1/responses` (for small model)
- `http://localhost:8000/v1/responses` (for large model)

You can modify these URLs in the code to match your setup.
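For orientation, this is a sketch of the kind of payload the app sends to a Responses-API-compatible server. Field names follow the OpenAI Responses API; the model id and defaults are assumptions, not values taken from the app's code:

```python
# Sketch of a Responses API request payload (field names per the
# OpenAI Responses API; model id and defaults are assumptions).
SMALL_MODEL_URL = "http://localhost:8081/v1/responses"


def build_responses_payload(
    user_text: str,
    instructions: str = "You are a helpful assistant.",
    temperature: float = 1.0,
    max_output_tokens: int = 1024,
    stream: bool = True,
) -> dict:
    return {
        "model": "gpt-oss-20b",  # assumption: server-side model id
        "instructions": instructions,
        "input": [{"role": "user", "content": user_text}],
        "temperature": temperature,
        "max_output_tokens": max_output_tokens,
        "stream": stream,
    }
```

The sidebar settings (temperature, reasoning effort, max output tokens) map onto fields like these, which is what debug mode lets you inspect in the raw responses.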

## Customization

You can customize the example by:

- Adding new function tools
- Modifying the UI layout
- Adding authentication
- Implementing different server backends
- Adding file upload capabilities

## Troubleshooting

- Make sure your gpt-oss server is running and accessible
- Check that the server URLs match your setup
- Verify all dependencies are installed
- Check the debug mode for API response details