From 8dca5dd996515e65fb35290205d211e55d692481 Mon Sep 17 00:00:00 2001
From: Dina Suehiro Jones
Date: Thu, 8 Jan 2026 19:22:47 -0500
Subject: [PATCH] Fix Ollama model env var in documentation

Signed-off-by: Dina Suehiro Jones
---
 python/samples/getting_started/agents/ollama/README.md      | 4 ++--
 .../getting_started/agents/ollama/ollama_agent_basic.py     | 2 +-
 .../getting_started/agents/ollama/ollama_agent_reasoning.py | 2 +-
 .../getting_started/agents/ollama/ollama_chat_client.py     | 2 +-
 .../getting_started/agents/ollama/ollama_chat_multimodal.py | 2 +-
 python/samples/getting_started/chat_client/README.md        | 2 +-
 6 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/python/samples/getting_started/agents/ollama/README.md b/python/samples/getting_started/agents/ollama/README.md
index ac4b2cb3d0..2a10ae2f57 100644
--- a/python/samples/getting_started/agents/ollama/README.md
+++ b/python/samples/getting_started/agents/ollama/README.md
@@ -40,8 +40,8 @@ Set the following environment variables:
 
 - `OLLAMA_HOST`: The base URL for your Ollama server (optional, defaults to `http://localhost:11434`)
   - Example: `export OLLAMA_HOST="http://localhost:11434"`
-- `OLLAMA_CHAT_MODEL_ID`: The model name to use
-  - Example: `export OLLAMA_CHAT_MODEL_ID="qwen2.5:8b"`
+- `OLLAMA_MODEL_ID`: The model name to use
+  - Example: `export OLLAMA_MODEL_ID="qwen2.5:8b"`
   - Must be a model you have pulled with Ollama
 
 ### For OpenAI Client with Ollama (`ollama_with_openai_chat_client.py`)
diff --git a/python/samples/getting_started/agents/ollama/ollama_agent_basic.py b/python/samples/getting_started/agents/ollama/ollama_agent_basic.py
index 4d2a69b56b..a0c49acea4 100644
--- a/python/samples/getting_started/agents/ollama/ollama_agent_basic.py
+++ b/python/samples/getting_started/agents/ollama/ollama_agent_basic.py
@@ -12,7 +12,7 @@
 
 Ensure to install Ollama and have a model running locally before running the sample
 Not all Models support function calling, to test function calling try llama3.2 or qwen3:4b
-Set the model to use via the OLLAMA_CHAT_MODEL_ID environment variable or modify the code below.
+Set the model to use via the OLLAMA_MODEL_ID environment variable or modify the code below.
 https://ollama.com/
 """
 
diff --git a/python/samples/getting_started/agents/ollama/ollama_agent_reasoning.py b/python/samples/getting_started/agents/ollama/ollama_agent_reasoning.py
index e0ce24bb85..5821c2bcf0 100644
--- a/python/samples/getting_started/agents/ollama/ollama_agent_reasoning.py
+++ b/python/samples/getting_started/agents/ollama/ollama_agent_reasoning.py
@@ -12,7 +12,7 @@
 
 Ensure to install Ollama and have a model running locally before running the sample
 Not all Models support reasoning, to test reasoning try qwen3:8b
-Set the model to use via the OLLAMA_CHAT_MODEL_ID environment variable or modify the code below.
+Set the model to use via the OLLAMA_MODEL_ID environment variable or modify the code below.
 https://ollama.com/
 """
 
diff --git a/python/samples/getting_started/agents/ollama/ollama_chat_client.py b/python/samples/getting_started/agents/ollama/ollama_chat_client.py
index 5d7197d8f5..336a79c721 100644
--- a/python/samples/getting_started/agents/ollama/ollama_chat_client.py
+++ b/python/samples/getting_started/agents/ollama/ollama_chat_client.py
@@ -12,7 +12,7 @@
 
 Ensure to install Ollama and have a model running locally before running the sample.
 Not all Models support function calling, to test function calling try llama3.2
-Set the model to use via the OLLAMA_CHAT_MODEL_ID environment variable or modify the code below.
+Set the model to use via the OLLAMA_MODEL_ID environment variable or modify the code below.
 https://ollama.com/
 """
 
diff --git a/python/samples/getting_started/agents/ollama/ollama_chat_multimodal.py b/python/samples/getting_started/agents/ollama/ollama_chat_multimodal.py
index 724cecbe72..1b830c2692 100644
--- a/python/samples/getting_started/agents/ollama/ollama_chat_multimodal.py
+++ b/python/samples/getting_started/agents/ollama/ollama_chat_multimodal.py
@@ -12,7 +12,7 @@
 
 Ensure to install Ollama and have a model running locally before running the sample
 Not all Models support multimodal input, to test multimodal input try gemma3:4b
-Set the model to use via the OLLAMA_CHAT_MODEL_ID environment variable or modify the code below.
+Set the model to use via the OLLAMA_MODEL_ID environment variable or modify the code below.
 https://ollama.com/
 """
 
diff --git a/python/samples/getting_started/chat_client/README.md b/python/samples/getting_started/chat_client/README.md
index 293b454821..4b36865769 100644
--- a/python/samples/getting_started/chat_client/README.md
+++ b/python/samples/getting_started/chat_client/README.md
@@ -35,6 +35,6 @@ Depending on which client you're using, set the appropriate environment variable
 
 **For Ollama client:**
 - `OLLAMA_HOST`: Your Ollama server URL (defaults to `http://localhost:11434` if not set)
-- `OLLAMA_CHAT_MODEL_ID`: The Ollama model to use for chat (e.g., `llama3.2`, `llama2`, `codellama`)
+- `OLLAMA_MODEL_ID`: The Ollama model to use for chat (e.g., `llama3.2`, `llama2`, `codellama`)
 
 > **Note**: For Ollama, ensure you have Ollama installed and running locally with at least one model downloaded. Visit [https://ollama.com/](https://ollama.com/) for installation instructions.
\ No newline at end of file
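A quick way to check the rename end to end is to read the variables exactly as the updated docs describe them. The sketch below is not part of this patch: it assumes the `ollama` Python package (`pip install ollama`) and a running Ollama server. The `OLLAMA_HOST` default and the `OLLAMA_MODEL_ID` name come from the READMEs above; the `llama3.2` fallback is an arbitrary choice of a model you might have pulled.

import os

from ollama import Client

# Read the env vars documented in the READMEs above.
# The llama3.2 fallback is an assumption; any pulled model works.
host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
model = os.environ.get("OLLAMA_MODEL_ID", "llama3.2")

client = Client(host=host)
response = client.chat(
    model=model,
    messages=[{"role": "user", "content": "Reply with one short sentence."}],
)
print(response["message"]["content"])

Dict-style access on the response works across ollama-python releases; recent versions return typed objects that also allow attribute access (`response.message.content`).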