docs: update example READMEs with port fix and improvements #133

Open · wants to merge 1 commit into `main`

8 changes: 4 additions & 4 deletions README.md
@@ -325,22 +325,22 @@ options:

We support [codex](https://github.com/openai/codex) as a client for gpt-oss. To run the 20b version, add this to `~/.codex/config.toml`:

-```
+```toml
disable_response_storage = true
show_reasoning_content = true

[model_providers.local]
name = "local"
-base_url = "http://localhost:11434/v1"
+base_url = "http://localhost:8000/v1"

[profiles.oss]
model = "gpt-oss:20b"
model_provider = "local"
```

-This will work with any chat completions-API compatible server listening on port 11434, like ollama. Start the server and point codex to the oss model:
+This will work with any chat-completions-API-compatible server listening on port 8000. If you're running the UI locally, it typically serves on http://localhost:8081. Start the server and point codex to the oss model:

-```
+```bash
ollama run gpt-oss:20b
codex -p oss
```
78 changes: 78 additions & 0 deletions examples/README.md
@@ -0,0 +1,78 @@
# gpt-oss Examples

This directory contains various examples demonstrating how to use gpt-oss in different scenarios and with different frameworks.

## Available Examples

### [Streamlit Chat](./streamlit/)
A simple chat interface built with Streamlit that connects to a local gpt-oss server.

**Features:**
- Interactive web-based chat interface
- Real-time responses
- Easy to customize and extend

### [Agents SDK - Python](./agents-sdk-python/)
Example using the OpenAI Agents SDK with Python to create an intelligent agent that can use tools and MCP servers.

**Features:**
- Tool integration (weather example)
- MCP server connectivity for filesystem operations
- Streaming responses
- Async/await support

### [Agents SDK - JavaScript/TypeScript](./agents-sdk-js/)
TypeScript example using the OpenAI Agents SDK to create an intelligent agent with tool calling capabilities.

**Features:**
- Tool integration
- MCP server connectivity
- TypeScript support
- Modern async/await patterns

### [Gradio Chat](./gradio/)
A simple chat interface using Gradio framework.

**Features:**
- Quick setup with Gradio
- Web-based interface
- Easy deployment

## Getting Started

1. **Start a local gpt-oss server** on `http://localhost:8000`
2. **Choose an example** from the directories above
3. **Follow the README** in each example directory for specific setup instructions

## Prerequisites

- Python 3.12+
- A running gpt-oss server (see main README for setup instructions)
- Framework-specific dependencies (listed in each example's README)

## Common Setup

Most examples assume you have a local gpt-oss server running. You can start one using:

```bash
# Using the responses API server
python -m gpt_oss.responses_api.serve --checkpoint gpt-oss-20b/original/ --port 8000

# Or using vLLM
vllm serve openai/gpt-oss-20b --port 8000

# Or using Ollama
ollama serve
ollama run gpt-oss:20b
```

If you're running the UI locally, it typically serves on `http://localhost:8081`.
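
Before launching an example, you can confirm the server is reachable. A minimal sketch, assuming an OpenAI-compatible `/v1` endpoint on port 8000:

```python
# Connectivity check (sketch): list the models exposed by a local
# OpenAI-compatible server listening on port 8000.
from openai import OpenAI

client = OpenAI(api_key="local", base_url="http://localhost:8000/v1")
print([model.id for model in client.models.list()])
```

If this prints at least one model id, the example apps should be able to connect using the same `base_url`.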

## Contributing

When adding new examples:
1. Create a new directory with a descriptive name
2. Include a comprehensive README.md with setup instructions
3. Ensure all dependencies are clearly listed
4. Test the example thoroughly
5. Update this main examples README to include your new example
123 changes: 123 additions & 0 deletions examples/agents-sdk-python/README.md
@@ -0,0 +1,123 @@
# Agents SDK Python Example

This example demonstrates how to use the OpenAI Agents SDK with Python to create an intelligent agent that can interact with tools and MCP servers.

## Prerequisites

- Python 3.12+
- Node.js and npm (for MCP server)
- A running gpt-oss server

## Installation

1. Install Python dependencies:

```bash
# the Agents SDK is published on PyPI as "openai-agents" (imported as "agents")
pip install openai openai-agents
```

2. Install Node.js dependencies for MCP server:

```bash
# npx ships with npm 5.2+, so this is usually already installed
npm install -g npx
```

## Configuration

The example is configured to connect to a local gpt-oss server. Update the configuration in `example.py` if needed:

```python
from openai import AsyncOpenAI

openai_client = AsyncOpenAI(
api_key="local",
base_url="http://localhost:8000/v1",
)
```
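
Depending on how `example.py` wires things up, the Agents SDK may also need to be pointed at this client globally. With the `openai-agents` package that typically looks like the following sketch (disabling tracing is optional, but it avoids the SDK attempting trace uploads when running fully locally):

```python
from agents import set_default_openai_client, set_tracing_disabled

# Route all agent runs through the local server instead of api.openai.com.
set_default_openai_client(openai_client)
# Tracing uploads need an OpenAI API key; skip them for a local-only setup.
set_tracing_disabled(True)
```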

## Running the Example

1. Start your local gpt-oss server on `http://localhost:8000`

2. Run the Python example:

```bash
python example.py
```

3. Enter your message when prompted and interact with the agent

## Features

### Tool Integration
The example includes a simple weather tool that demonstrates how to integrate custom functions:

```python
from agents import function_tool

@function_tool
async def get_weather(location: str) -> str:
return f"The weather in {location} is sunny."
```

### MCP Server Integration
The agent connects to a filesystem MCP server (see the sketch after this list) that allows it to:
- Read files
- Write files
- List directories
- Navigate the filesystem
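
A sketch of what that connection can look like, assuming the public `@modelcontextprotocol/server-filesystem` package run via `npx` (the exact names in `example.py` may differ):

```python
from agents.mcp import MCPServerStdio

# Launch the filesystem MCP server over stdio, scoped to the current directory.
mcp_server = MCPServerStdio(
    name="Filesystem MCP Server",
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
    },
)
```

In the SDK, the server is typically opened as an async context manager (`async with mcp_server:`) before being passed to the `Agent`.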

### Streaming Responses
The example demonstrates how to handle streaming responses and different event types (a condensed sketch follows this list):
- Tool calls
- Tool outputs
- Message outputs
- Agent updates
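
A condensed sketch of that event loop, assuming the `openai-agents` streaming API and that the client from the Configuration section has been set as the SDK default (names may differ slightly from `example.py`):

```python
import asyncio

from agents import Agent, Runner

async def main() -> None:
    agent = Agent(name="Assistant", instructions="You are a helpful assistant.")
    result = Runner.run_streamed(agent, input="What's the weather in Tokyo?")
    async for event in result.stream_events():
        if event.type == "run_item_stream_event":
            # Covers tool calls, tool outputs, and message outputs.
            print(event.item.type)
        elif event.type == "agent_updated_stream_event":
            # Fired when control hands off to a different agent.
            print("agent:", event.new_agent.name)

asyncio.run(main())
```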

## Customization

### Adding New Tools
You can add new tools by defining functions with the `@function_tool` decorator:

```python
@function_tool
async def my_custom_tool(param: str) -> str:
# Your tool logic here
return "Tool result"
```

### Different Models
Change the model by updating the agent configuration:

```python
agent = Agent(
name="My Agent",
instructions="You are a helpful assistant.",
tools=[get_weather],
model="gpt-oss:120b", # or other available models
mcp_servers=[mcp_server],
)
```

### Custom MCP Servers
You can connect to different MCP servers by modifying the server configuration:

```python
from agents.mcp import MCPServerStdio

mcp_server = MCPServerStdio(
name="Custom MCP Server",
params={
"command": "your-mcp-server-command",
"args": ["arg1", "arg2"],
},
)
```

## Troubleshooting

### npx not found
If you get an error about npx not being found:
```bash
npm install -g npx
```

### Connection errors
Ensure your gpt-oss server is running on the correct port (8000) and accessible.

### MCP server issues
The filesystem MCP server requires npx to be installed and accessible in your PATH.
56 changes: 56 additions & 0 deletions examples/streamlit/README.md
@@ -0,0 +1,56 @@
# Streamlit Chat Example

This example demonstrates how to create a simple chat interface using Streamlit and gpt-oss.

## Prerequisites

- Python 3.12+
- A running gpt-oss server
- Streamlit installed

## Installation

1. Install dependencies:

```bash
pip install streamlit openai
```

2. Ensure you have a local gpt-oss server running

## Running the Example

1. Start your local gpt-oss server on `http://localhost:8000` (or modify the base URL in the code)

2. Run the Streamlit application:

```bash
streamlit run streamlit_chat.py
```

3. Open your browser to the URL shown in the terminal (typically `http://localhost:8501`)

## Configuration

You can modify the base URL and other settings by editing the configuration in `streamlit_chat.py`:

```python
from openai import OpenAI

client = OpenAI(
api_key="local",
base_url="http://localhost:8000/v1",
)
```
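
For orientation, a minimal version of such a chat loop might look like the sketch below. This is illustrative, not the exact contents of `streamlit_chat.py`, and the model name is an assumption:

```python
import streamlit as st
from openai import OpenAI

client = OpenAI(api_key="local", base_url="http://localhost:8000/v1")

# Keep the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        stream = client.chat.completions.create(
            model="gpt-oss:20b",  # assumption: adjust to your server's model id
            messages=st.session_state.messages,
            stream=True,
        )
        reply = st.write_stream(
            chunk.choices[0].delta.content or "" for chunk in stream
        )
    st.session_state.messages.append({"role": "assistant", "content": reply})
```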

## Features

- Interactive chat interface
- Real-time responses from gpt-oss
- Simple and clean UI using Streamlit

## Customization

Feel free to modify the interface and add additional features such as:
- Chat history persistence
- Different model configurations
- Custom styling
- Tool integration