docs: add README files for examples directory #46
**Danztee** wants to merge 41 commits into `openai:main` from `Danztee:docs/add-example-readmes`.
**41 commits:**

- docs: add README files for examples directory (Danztee, 3834874)
- Update model names from gpt-oss:20b-test to gpt-oss:20b in examples (Danztee, 6d10597)
- evals: add chat completions API sampler (#59) (volsgd, 95d9db5)
- evals: log reasoning and extend max_tokens for chat completions (#62) (volsgd, 67aa38a)
- chat / api_server: do not include developer messages to reduce mismat… (volsgd, ba809be)
- Fix typos 'lenght' -> 'length' (#78) (bodoque007, c5a6df7)
- fix f string errors in streamlit chat (#73) (NinoRisteski, 44e4935)
- Fixing typos and grammatical improvements. (#72) (IgnacioCorrecher, d6c3ff7)
- fix: max_tokens handling in generate.py (#70) (Mirza-Samad-Ahmed-Baig, be8001f)
- Support concurrent sampling from multiple Contexts (#83) (Maratyszcza, 60d7563)
- Update README.md (dkundel-openai, 791664f)
- fix packaging (#90) (LucasWilkinson, a600b9a)
- Update README.md (#29) (shoumikhin, c0dcb6e)
- Update README.md (#58) (hemenduroy, 66165dc)
- Fix typos and improve grammar in README (#61) (hasanerdemak, 5618b40)
- Update README.md (#71) (palenciavik, a7588ad)
- Update README.md (#87) (genmnz, 4ba88ac)
- Update README.md (#41) (Buddhsen-tripathi, d9c3a78)
- fix: typos across the codebase (#69) (bigint, 4e58cfa)
- [MINOR] fix: correct spelling error from "wnat" to "want" (#99) (jjestrada2, 501cd97)
- a few typo fixes. (#102) (fujitatomoya, c1871bb)
- Add API compatibility test (#114) (dkundel-openai, 2675cb8)
- Update awesome-gpt-oss.md (dkundel-openai, 738cee0)
- fix: Add channel parameter to PythonTool response handling (#33) (JustinTong0323, 525a700)
- Fix: Corrected typos across 3 files in gpt-oss directory (#115) (CivaaBTW, 01d4eaa)
- fix editable build (#113) (heheda12345, b12d44c)
- docs: add table of contents to README.md (#106) (OkeyAmy, eb190d4)
- fix: Markdown linting and cleanup (#107) (OkeyAmy, 047319d)
- docs: add docstrings to utility and helper functions (#97) (adarsh-crafts, e65781c)
- feat: Add Gradio chat interface example (#89) (harshalmore31, 1c80ba6)
- Feat: add command-line arguments for backend parameters (#86) (SyedaAnshrahGillani, 0189193)
- added GPTOSS_BUILD_METAL=1 for metal. (#84) (xiejw, a179d03)
- chore: remove unused WeatherParams class and import (#82) (adarsh-crafts, 58348b4)
- refactor: rename search_tool for clarity (#81) (adarsh-crafts, 10281dd)
- fix invalid import in build-system-prompt.py (#32) (Om-Alve, 3876d35)
- Update simple_browser_tool.py (#40) (Shubhankar-Dixit, 1c0eaf1)
- triton implementation need install triton_kernels (#45) (sBobHuang, a7659a9)
- bump version (dkundel-openai, ae4be77)
- a few typo fixes. (#102) (fujitatomoya, 133452f)
- a few typo fixes. (#102) (fujitatomoya, cb68bd0)
- Merge remote-tracking branch 'origin/main' into docs/add-example-readmes (Danztee)
# gpt-oss Examples

This directory contains practical examples demonstrating how to use gpt-oss models in different scenarios.

## Available Examples

### 🤖 Agents SDK Examples

- **[JavaScript/TypeScript](./agents-sdk-js/)**: Use gpt-oss with the OpenAI Agents SDK in Node.js
- **[Python](./agents-sdk-python/)**: Use gpt-oss with the OpenAI Agents SDK in Python

These examples show how to create intelligent agents that can:

- Use custom tools and functions
- Integrate with MCP (Model Context Protocol) servers
- Stream responses in real time
- Display reasoning and tool calls

### 💬 Streamlit Chat Interface

- **[Streamlit Chat](./streamlit/)**: A web-based chat interface for gpt-oss

This example demonstrates:

- A real-time streaming chat interface
- Configurable model parameters
- Tool integration (functions and browser search)
- A debug mode for API inspection
- Responsive web design

## Quick Start

1. **Choose an example** based on your needs:

   - Use the **Agents SDK** for building intelligent applications
   - Use **Streamlit** for quick web interfaces

2. **Set up a gpt-oss server**:

   ```bash
   # With Ollama (recommended for local development)
   ollama pull gpt-oss:20b
   ollama serve

   # Or with vLLM
   vllm serve openai/gpt-oss-20b
   ```

3. **Follow the specific setup instructions** in each example's README.
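Once a server is up, any OpenAI-compatible HTTP client can talk to it. As a minimal stdlib-only sketch (assuming Ollama's OpenAI-compatible endpoint at `http://localhost:11434/v1`; the `build_chat_request` helper is a hypothetical illustration, not code from the examples):

```python
import json
import urllib.request


def build_chat_request(prompt: str,
                       base_url: str = "http://localhost:11434/v1",
                       model: str = "gpt-oss:20b") -> urllib.request.Request:
    """Build a POST request for a local OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Actually sending the request requires a running server:
# with urllib.request.urlopen(build_chat_request("Hello!")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```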
## Prerequisites

- Python 3.12+ (for the Python examples)
- Node.js 18+ (for the JavaScript examples)
- A running gpt-oss server (Ollama, vLLM, etc.)
- Basic familiarity with the chosen framework

## Getting Help

- Check the individual README files for detailed setup instructions
- Ensure your gpt-oss server is running and accessible
- Use the debug modes to inspect API responses
- Refer to the main [gpt-oss documentation](../README.md) for model details

## Contributing

Feel free to contribute new examples or improvements to existing ones! Each example should include:

- Clear setup instructions
- Prerequisites and dependencies
- Usage examples
- Troubleshooting tips
# gpt-oss with OpenAI Agents SDK (JavaScript)

This example demonstrates how to use gpt-oss models with the OpenAI Agents SDK in JavaScript/TypeScript.

## Prerequisites

- Node.js 18+ installed
- Ollama installed and running locally
- The gpt-oss model downloaded in Ollama

## Setup

1. Install dependencies:

   ```bash
   npm install
   ```

2. Make sure Ollama is running and you have the gpt-oss model:

   ```bash
   # Install the gpt-oss-20b model
   ollama pull gpt-oss:20b

   # Start Ollama (if not already running)
   ollama serve
   ```

3. Run the example:

   ```bash
   npm start
   ```

## What this example does

This example creates a simple agent that:

- Uses the gpt-oss-20b model via Ollama
- Has a custom weather tool
- Integrates with an MCP (Model Context Protocol) filesystem server
- Streams responses in real time
- Shows both reasoning and tool calls

## Key features

- **Real-time streaming**: See the model's reasoning and responses as they are generated
- **Tool integration**: Demonstrates how to create and use custom tools
- **MCP integration**: Shows how to connect to external services via MCP
- **Harmony format**: Uses the harmony response format for better reasoning

## Customization

You can modify the example by:

- Changing the model name in the agent configuration
- Adding more tools to the `tools` array
- Modifying the agent instructions
- Adding different MCP servers

## Troubleshooting

- Make sure Ollama is running on `localhost:11434`
- Ensure the model name in the agent configuration matches the model you pulled (`gpt-oss:20b`)
- Check that npx is available for the MCP filesystem server
# gpt-oss with OpenAI Agents SDK (Python)

This example demonstrates how to use gpt-oss models with the OpenAI Agents SDK in Python.

## Prerequisites

- Python 3.12+
- Ollama installed and running locally
- The gpt-oss model downloaded in Ollama
- npx available (for the MCP filesystem server)

## Setup

1. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Make sure Ollama is running and you have the gpt-oss model:

   ```bash
   # Install the gpt-oss-20b model
   ollama pull gpt-oss:20b

   # Start Ollama (if not already running)
   ollama serve
   ```

3. Run the example:

   ```bash
   python example.py
   ```

## What this example does

This example creates a simple agent that:

- Uses the gpt-oss-20b model via Ollama
- Has a custom weather tool
- Integrates with an MCP (Model Context Protocol) filesystem server
- Streams responses in real time
- Shows both reasoning and tool calls

## Key features

- **Real-time streaming**: See the model's reasoning and responses as they are generated
- **Tool integration**: Demonstrates how to create and use custom tools with `@function_tool`
- **MCP integration**: Shows how to connect to external services via MCP
- **Harmony format**: Uses the harmony response format for better reasoning
- **Async support**: Full async/await support for better performance

## Customization

You can modify the example by:

- Changing the model name in the agent configuration
- Adding more tools with the `@function_tool` decorator
- Modifying the agent instructions
- Adding different MCP servers

## Code structure

- `main()`: Main async function that sets up the agent
- `search_tool()`: Example function tool for weather queries
- `prompt_user()`: Helper function for user input
- MCP server setup for filesystem operations
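A stripped-down sketch of that structure (stdlib only: the real example uses the Agents SDK's `@function_tool` decorator and an MCP server, both omitted here, and the function bodies below are placeholder assumptions):

```python
import asyncio


def search_tool(city: str) -> str:
    # Placeholder weather lookup; the real example wires this up as an
    # agent tool rather than returning a canned string.
    return f"The weather in {city} is sunny."


def prompt_user(question: str) -> str:
    # Helper for user input, stubbed so the sketch runs non-interactively.
    return f"{question} (awaiting user input)"


async def main() -> None:
    # In the real example this configures the agent, tools, and MCP server,
    # then streams the run; here we only exercise the helpers.
    print(search_tool("Berlin"))


asyncio.run(main())
```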
## Troubleshooting

- Make sure Ollama is running on `localhost:11434`
- Ensure the model name in the agent configuration matches the model you pulled (`gpt-oss:20b`)
- Check that npx is available for the MCP filesystem server
- Verify that Python 3.12+ is installed
# gpt-oss Streamlit Chat Interface

This example demonstrates how to create a web-based chat interface for gpt-oss models using Streamlit.

## Prerequisites

- Python 3.12+
- A running gpt-oss server (vLLM, Ollama, or another compatible server)
- Streamlit installed

## Setup

1. Install dependencies:

   ```bash
   pip install streamlit requests
   ```

2. Start your gpt-oss server. For example, with Ollama:

   ```bash
   # Install the gpt-oss-20b model
   ollama pull gpt-oss:20b

   # Start Ollama
   ollama serve
   ```

3. Run the Streamlit app:

   ```bash
   streamlit run streamlit_chat.py
   ```

## Features

This chat interface includes:

- **Real-time streaming**: See responses as they are generated
- **Reasoning display**: View the model's reasoning process
- **Tool integration**: Use custom functions and browser search
- **Configurable parameters**: Adjust temperature, reasoning effort, and more
- **Debug mode**: View raw API responses for debugging
- **Responsive design**: A clean, modern chat interface

## Configuration

The sidebar lets you configure:

- **Model selection**: Choose between different model sizes
- **Instructions**: Customize the assistant's behavior
- **Reasoning effort**: Set reasoning effort (low/medium/high)
- **Functions**: Enable and configure custom function calls
- **Browser search**: Enable web search capabilities
- **Temperature**: Control response randomness
- **Max output tokens**: Limit response length
- **Debug mode**: Show raw API responses
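Those sidebar knobs end up as fields in the request body sent to the server. A hedged sketch of such a payload (field names follow the Responses API; the exact shape the app sends may differ):

```python
import json

# Illustrative Responses API request body assembled from sidebar settings.
payload = {
    "model": "gpt-oss-20b",
    "instructions": "You are a helpful assistant.",   # assistant behavior
    "input": "Hello!",                                # user message
    "reasoning": {"effort": "medium"},                # low / medium / high
    "temperature": 0.7,                               # response randomness
    "max_output_tokens": 1024,                        # response length cap
    "stream": True,                                   # stream events back
}
print(json.dumps(payload, indent=2))
```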
## Server Configuration

The app expects a Responses-API-compatible server running on:

- `http://localhost:8081/v1/responses` (for the small model)
- `http://localhost:8000/v1/responses` (for the large model)

You can modify these URLs in the code to match your setup.
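For instance, a small helper along these lines could pick the endpoint by model size (the port mapping mirrors the defaults above; the function itself is a hypothetical illustration, not code from the app):

```python
# Hypothetical helper mirroring the default port mapping described above.
def responses_url(model_size: str) -> str:
    ports = {"small": 8081, "large": 8000}
    if model_size not in ports:
        raise ValueError(f"unknown model size: {model_size!r}")
    return f"http://localhost:{ports[model_size]}/v1/responses"


print(responses_url("small"))  # → http://localhost:8081/v1/responses
```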
## Customization

You can customize the example by:

- Adding new function tools
- Modifying the UI layout
- Adding authentication
- Implementing different server backends
- Adding file-upload capabilities

## Troubleshooting

- Make sure your gpt-oss server is running and accessible
- Check that the server URLs match your setup
- Verify that all dependencies are installed
- Use debug mode to inspect API response details