diff --git a/integrations/llama_stack.md b/integrations/llama_stack.md
index 2a475b1..fae5cfb 100644
--- a/integrations/llama_stack.md
+++ b/integrations/llama_stack.md
@@ -36,7 +36,7 @@ Below are example configurations for using the Llama-3.2-3B model:
 
 Ollama as the inference provider:
 
-```chat_generator = LlamaStackChatGenerator(model="llama3.2:3b")```
+```chat_generator = LlamaStackChatGenerator(model="ollama/llama3.2:3b")```
 
 vLLM as the inference provider:
 ```chat_generator = LlamaStackChatGenerator(model="meta-llama/Llama-3.2-3B")```
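
For reviewers, a minimal end-to-end sketch of the corrected configuration, assuming the `haystack-ai` package, the Llama Stack Haystack integration, and a Llama Stack server running locally with Ollama as its inference provider (the import path and `run` signature follow the integration's documented API; the prompt string is illustrative):

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.llama_stack import LlamaStackChatGenerator

# With Ollama as the inference provider, the model id needs the
# "ollama/" prefix so Llama Stack routes the request to that provider.
chat_generator = LlamaStackChatGenerator(model="ollama/llama3.2:3b")

# Send a single user message and print the model's reply.
response = chat_generator.run(messages=[ChatMessage.from_user("What is Llama Stack?")])
print(response["replies"][0].text)
```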