docs: Add comprehensive README files for examples #44

# GPT-OSS Examples

This directory contains practical examples demonstrating how to use gpt-oss models in different scenarios and with various frameworks.

## Available Examples

### [Streamlit Chat Interface](./streamlit/)

A web-based chat application built with Streamlit that provides:

- Interactive chat interface with conversation history
- Model selection and configuration options
- Function calling capabilities
- Real-time streaming responses

**Best for**: Quick prototyping, demos, and interactive testing of gpt-oss models.

### [Agents SDK Python](./agents-sdk-python/)

An advanced example using the OpenAI Agents SDK with:

- Async agent interactions
- Model Context Protocol (MCP) integration
- Custom function tools
- Filesystem operations
- Streaming event processing

**Best for**: Building sophisticated AI agents with tool capabilities and external integrations.

## Getting Started

Each example directory contains its own README with detailed setup and usage instructions. Generally, you'll need:

1. **A local gpt-oss server** running (using Ollama, vLLM, or the gpt-oss Responses API server)
2. **Python dependencies** specific to each example
3. **Additional tools** as specified in each example's requirements

## Common Setup

Most examples expect a local gpt-oss server compatible with OpenAI's API format. Here are quick setup options:

### Using Ollama

```bash
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
```

### Using the gpt-oss Responses API Server

```bash
python -m gpt_oss.responses_api.serve --checkpoint /path/to/checkpoint --port 11434
```

### Using vLLM

```bash
python -m vllm.entrypoints.openai.api_server --model openai/gpt-oss-20b --port 11434
```
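
### Verifying the Server

Whichever server you choose, you can sanity-check it from Python before running an example. This is a minimal sketch, assuming an OpenAI-compatible `/v1` endpoint on port 11434 and a model registered as `gpt-oss:20b` (adjust both for your setup):

```python
from openai import OpenAI

# Local servers typically ignore the API key, but the client requires one.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")

response = client.chat.completions.create(
    model="gpt-oss:20b",  # the model name as registered with your server
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this prints a greeting, the examples can connect using the same base URL.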

## Contributing Examples

We welcome contributions of new examples! If you've built something interesting with gpt-oss, consider:

1. Adding it to the [`awesome-gpt-oss.md`](../awesome-gpt-oss.md) file
2. Creating a pull request with a new example directory
3. Including a comprehensive README with setup instructions

## Support

For questions about these examples:

- Check the individual example README files
- Review the main [gpt-oss documentation](../README.md)
- Visit the [OpenAI Cookbook](https://cookbook.openai.com/topic/gpt-oss) for more guides

# Agents SDK Python Example

This example demonstrates how to use gpt-oss models with the OpenAI Agents SDK for Python, including integration with Model Context Protocol (MCP) servers.

## Features

- Async agent interaction with streaming responses
- Integration with MCP servers for enhanced capabilities
- Custom function tools
- Filesystem operations through MCP
- Real-time event processing

## Prerequisites

Before running this example, you need:

1. **Node.js and npm** - required for the MCP filesystem server
2. **Local gpt-oss server** - running on `http://localhost:11434`
3. **Python 3.12+** - as specified in the project configuration

## Installation

1. Install Node.js and npm if not already installed
2. Install the Python dependencies:

   ```bash
   pip install "openai-agents>=0.2.4"
   ```

Or if you prefer using the project file:

```bash
pip install -e .
```

## Running the Example

1. **Start your local gpt-oss server** (e.g., using Ollama):

   ```bash
   ollama run gpt-oss:20b
   ```

2. **Run the example**:

   ```bash
   python example.py
   ```

3. **Interact with the agent** by typing your message when prompted

## How It Works

The example sets up:

1. **OpenAI client**: configured to connect to your local gpt-oss server
2. **MCP server**: a filesystem operations server launched via npx
3. **Custom tools**: a sample weather search function
4. **Streaming agent**: processes responses in real time

A condensed sketch of this wiring appears below.
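
The sketch is hypothetical and condensed, not the literal contents of `example.py`; the model name, base URL, and sample tool are assumptions for a typical local setup:

```python
import asyncio

from openai import AsyncOpenAI
from agents import (
    Agent,
    OpenAIChatCompletionsModel,
    Runner,
    function_tool,
    set_tracing_disabled,
)
from agents.mcp import MCPServerStdio


@function_tool
def search_weather(city: str) -> str:
    """Sample tool: pretend to look up the weather for a city."""
    return f"The weather in {city} is sunny."


async def main() -> None:
    # Point the SDK at the local gpt-oss server instead of api.openai.com.
    client = AsyncOpenAI(base_url="http://localhost:11434/v1", api_key="local")
    set_tracing_disabled(True)  # no tracing backend in a local-only setup

    # Filesystem MCP server, launched as a subprocess via npx.
    async with MCPServerStdio(
        name="Filesystem Server",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
        },
    ) as mcp_server:
        agent = Agent(
            name="My Agent",
            instructions="You are a helpful assistant.",
            model=OpenAIChatCompletionsModel(model="gpt-oss:20b", openai_client=client),
            tools=[search_weather],
            mcp_servers=[mcp_server],
        )
        # run_streamed returns immediately; events arrive as the run progresses.
        result = Runner.run_streamed(agent, input="What files are in the current directory?")
        async for event in result.stream_events():
            if event.type == "run_item_stream_event":
                print("--", event.item.type)


asyncio.run(main())
```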

## Example Interaction

```
> Can you tell me about the files in the current directory?
Agent updated: My Agent
-- Tool was called
-- Tool output: [filesystem results]
-- Message output: I can see several files in your current directory...
=== Run complete ===
```

## Configuration

You can customize the example by:

- **Model**: change the model name in the `Agent` configuration (line 70)
- **Instructions**: modify the agent's system instructions (line 68)
- **Tools**: add custom function tools using the `@function_tool` decorator
- **Base URL**: update the OpenAI client base URL for different servers

## MCP Integration

This example uses the Model Context Protocol (MCP) to provide the agent with filesystem capabilities. The MCP server is automatically started and connected, allowing the agent to:

- Read and write files
- List directory contents
- Navigate the filesystem

## Error Handling

The example includes basic error handling:

- Checks for `npx` availability before running (sketched below)
- Graceful connection to MCP servers
- Async/await pattern for proper resource management
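
The pre-flight check could look like this minimal sketch (the exact code in `example.py` may differ):

```python
import shutil

# Fail fast if npx is missing; it is needed to launch the MCP filesystem server.
if shutil.which("npx") is None:
    raise RuntimeError(
        "npx not found. Install Node.js and npm to run the MCP filesystem server."
    )
```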

## Extending the Example

To add more capabilities:

1. **Add custom tools**:

   ```python
   @function_tool
   async def my_custom_tool(param: str) -> str:
       return f"Processed: {param}"
   ```

2. **Add more MCP servers**:

   ```python
   additional_mcp = MCPServerStdio(name="Another Server", params={...})
   agent = Agent(..., mcp_servers=[mcp_server, additional_mcp])
   ```

3. **Process different event types**:

   ```python
   async for event in result.stream_events():
       if event.type == "raw_response_event":
           # Handle low-level model response deltas
           pass
   ```

# Streamlit Chat Example

This example demonstrates how to create a web-based chat interface for gpt-oss models using Streamlit.

## Features

- Interactive chat interface with conversation history
- Model selection (large/small)
- Configurable reasoning effort (low/medium/high)
- Function calling support with custom tools
- Real-time streaming responses

## Prerequisites

Before running this example, you need:

1. **Local gpt-oss server running** - this example expects a local API server compatible with OpenAI's chat completions format
2. **Python packages** - install the required dependencies

## Installation

1. Install Streamlit and requests:

   ```bash
   pip install streamlit requests
   ```

2. Make sure you have a local gpt-oss server running (e.g., using Ollama, vLLM, or the gpt-oss Responses API server)

## Running the Example

1. Start your local gpt-oss server on `http://localhost:11434` (or modify the base URL in the code)

2. Run the Streamlit app:

   ```bash
   streamlit run streamlit_chat.py
   ```

3. Open your browser to the URL displayed (typically `http://localhost:8501`)

## Configuration

The app provides several configuration options in the sidebar (sketched after this list):

- **Model**: choose between "large" and "small" models
- **Instructions**: customize the system prompt for the assistant
- **Reasoning effort**: control the level of reasoning (low/medium/high)
- **Functions**: enable/disable function calling with a sample weather function
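
In Streamlit these options map naturally onto sidebar widgets. A minimal sketch, assuming the widget labels above (the names in `streamlit_chat.py` may differ):

```python
import streamlit as st

# Sidebar controls mirroring the options described above.
model = st.sidebar.selectbox("Model", ["large", "small"])
instructions = st.sidebar.text_area("Instructions", "You are a helpful assistant.")
effort = st.sidebar.radio("Reasoning effort", ["low", "medium", "high"], index=1)
use_functions = st.sidebar.checkbox("Enable functions", value=False)
```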

## Customization

You can customize this example by (see the sketch after this list):

- Modifying the base URL to point to your gpt-oss server
- Adding custom functions in the function properties section
- Changing the default system instructions
- Styling the interface with Streamlit components
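
As a starting point for such customizations, here is a minimal end-to-end chat loop in the same spirit as the app. It is a sketch, not the actual `streamlit_chat.py`; the base URL and model name are assumptions for a local server:

```python
import streamlit as st
from openai import OpenAI

# Point the client at your local gpt-oss server (adjust the URL as needed).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")

if "messages" not in st.session_state:
    st.session_state.messages = []  # history lives only for this session

# Replay the conversation so far.
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)

    # Stream the assistant's reply token by token.
    stream = client.chat.completions.create(
        model="gpt-oss:20b",
        messages=st.session_state.messages,
        stream=True,
    )
    with st.chat_message("assistant"):
        reply = st.write_stream(
            (chunk.choices[0].delta.content or "")
            for chunk in stream
            if chunk.choices
        )
    st.session_state.messages.append({"role": "assistant", "content": reply})
```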

## Notes

- This example assumes you're running a local gpt-oss server compatible with OpenAI's API format
- The function calling feature includes a sample weather function for demonstration (its schema is sketched below)
- Conversation history is maintained during the session but not persisted between sessions
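
For reference, a weather function in OpenAI's chat-completions tool format might look like the following sketch (the app's actual definition may differ):

```python
# Illustrative tool definition for a sample weather function.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Paris"},
            },
            "required": ["city"],
        },
    },
}

# Passed to the API as:
# client.chat.completions.create(..., tools=[weather_tool])
```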

Review comment: This should be port 8000 or 8081 to work with the UI.