4 changes: 2 additions & 2 deletions in python/samples/getting_started/agents/ollama/README.md

@@ -40,8 +40,8 @@ Set the following environment variables:
 - `OLLAMA_HOST`: The base URL for your Ollama server (optional, defaults to `http://localhost:11434`)
   - Example: `export OLLAMA_HOST="http://localhost:11434"`
 
-- `OLLAMA_CHAT_MODEL_ID`: The model name to use
-  - Example: `export OLLAMA_CHAT_MODEL_ID="qwen2.5:8b"`
+- `OLLAMA_MODEL_ID`: The model name to use
+  - Example: `export OLLAMA_MODEL_ID="qwen2.5:8b"`
   - Must be a model you have pulled with Ollama
 
 ### For OpenAI Client with Ollama (`ollama_with_openai_chat_client.py`)
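For reference, a minimal sketch (not the sample's actual code) of what the OpenAI-client variant amounts to: the OpenAI SDK pointed at Ollama's OpenAI-compatible `/v1` endpoint, with the model taken from the renamed `OLLAMA_MODEL_ID` variable. The `llama3.2` fallback and the single-turn prompt are illustrative assumptions.

```python
import os

from openai import OpenAI

# Ollama exposes an OpenAI-compatible API under /v1; the API key is ignored
# by Ollama, but the SDK requires a non-empty value.
client = OpenAI(
    base_url=f"{os.environ.get('OLLAMA_HOST', 'http://localhost:11434')}/v1",
    api_key="ollama",
)

response = client.chat.completions.create(
    model=os.environ.get("OLLAMA_MODEL_ID", "llama3.2"),  # assumed fallback
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```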
@@ -12,7 +12,7 @@

 Ensure to install Ollama and have a model running locally before running the sample
 Not all Models support function calling, to test function calling try llama3.2 or qwen3:4b
-Set the model to use via the OLLAMA_CHAT_MODEL_ID environment variable or modify the code below.
+Set the model to use via the OLLAMA_MODEL_ID environment variable or modify the code below.
 https://ollama.com/
 
 """
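As a side note, a rough sketch of the kind of function calling this sample exercises, using the official `ollama` Python package directly rather than the agent framework; the `add_two_numbers` tool, the fallback model, and the happy-path handling are illustrative assumptions.

```python
import os

import ollama


def add_two_numbers(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b


model = os.environ.get("OLLAMA_MODEL_ID", "llama3.2")  # assumed fallback

# Recent ollama clients accept plain Python functions as tools and derive
# the JSON schema from the signature and docstring.
response = ollama.chat(
    model=model,
    messages=[{"role": "user", "content": "What is 10 + 10?"}],
    tools=[add_two_numbers],
)

for call in response.message.tool_calls or []:
    if call.function.name == "add_two_numbers":
        print("Tool result:", add_two_numbers(**call.function.arguments))
```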
@@ -12,7 +12,7 @@

 Ensure to install Ollama and have a model running locally before running the sample
 Not all Models support reasoning, to test reasoning try qwen3:8b
-Set the model to use via the OLLAMA_CHAT_MODEL_ID environment variable or modify the code below.
+Set the model to use via the OLLAMA_MODEL_ID environment variable or modify the code below.
 https://ollama.com/
 
 """
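A small, hedged sketch of requesting a reasoning trace from a thinking model with the plain `ollama` package; it assumes a recent Ollama server and client where the `think` option and the `thinking` field exist, plus the usual `OLLAMA_MODEL_ID` fallback. Older installations may not support this flag.

```python
import os

from ollama import chat

model = os.environ.get("OLLAMA_MODEL_ID", "qwen3:8b")  # assumed fallback

# think=True asks the model to return its reasoning separately from the
# final answer; this option assumes a recent Ollama release.
response = chat(
    model=model,
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
    think=True,
)

print("Thinking:", response.message.thinking)
print("Answer:", response.message.content)
```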
@@ -12,7 +12,7 @@

 Ensure to install Ollama and have a model running locally before running the sample.
 Not all Models support function calling, to test function calling try llama3.2
-Set the model to use via the OLLAMA_CHAT_MODEL_ID environment variable or modify the code below.
+Set the model to use via the OLLAMA_MODEL_ID environment variable or modify the code below.
 https://ollama.com/
 
 """
@@ -12,7 +12,7 @@

 Ensure to install Ollama and have a model running locally before running the sample
 Not all Models support multimodal input, to test multimodal input try gemma3:4b
-Set the model to use via the OLLAMA_CHAT_MODEL_ID environment variable or modify the code below.
+Set the model to use via the OLLAMA_MODEL_ID environment variable or modify the code below.
 https://ollama.com/
 
 """
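For orientation, a minimal sketch of multimodal input with the plain `ollama` package; the image path, prompt, and `gemma3:4b` fallback are illustrative assumptions rather than the sample's actual code.

```python
import os

import ollama

model = os.environ.get("OLLAMA_MODEL_ID", "gemma3:4b")  # assumed fallback

# The images field accepts local file paths (or raw bytes); the path below
# is a placeholder for any image on disk.
response = ollama.chat(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "Describe what is in this image.",
            "images": ["./example.jpg"],
        }
    ],
)
print(response.message.content)
```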
2 changes: 1 addition & 1 deletion in python/samples/getting_started/chat_client/README.md

@@ -35,6 +35,6 @@ Depending on which client you're using, set the appropriate environment variable

 **For Ollama client:**
 - `OLLAMA_HOST`: Your Ollama server URL (defaults to `http://localhost:11434` if not set)
-- `OLLAMA_CHAT_MODEL_ID`: The Ollama model to use for chat (e.g., `llama3.2`, `llama2`, `codellama`)
+- `OLLAMA_MODEL_ID`: The Ollama model to use for chat (e.g., `llama3.2`, `llama2`, `codellama`)
 
 > **Note**: For Ollama, ensure you have Ollama installed and running locally with at least one model downloaded. Visit [https://ollama.com/](https://ollama.com/) for installation instructions.
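To make the two variables concrete, a minimal sketch of a chat call that picks them up with the plain `ollama` package; the fallback model and the prompt are illustrative assumptions, not the sample's actual code.

```python
import os

from ollama import Client

# OLLAMA_HOST selects the server, OLLAMA_MODEL_ID selects an already-pulled model.
client = Client(host=os.environ.get("OLLAMA_HOST", "http://localhost:11434"))
model = os.environ.get("OLLAMA_MODEL_ID", "llama3.2")  # assumed fallback

response = client.chat(
    model=model,
    messages=[{"role": "user", "content": "Write a haiku about local models."}],
)
print(response.message.content)
```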