Commit 57da9c6

Felipe Campos Penha authored and committed

refactor: model name in various comments, strings etc.

1 parent 433b55d

File tree: 7 files changed (+12 / -12 lines)


initiatives/genai_red_team_handbook/sandboxes/RAG_local/Makefile
Lines changed: 2 additions & 2 deletions

```diff
@@ -21,7 +21,7 @@ help:
 	@echo " make format - Run black and isort formatters"
 	@echo " make mypy - Run mypy static type checker"
 	@echo ""
-	@echo " make ollama-pull - Pull llama3 model for Ollama"
+	@echo " make ollama-pull - Pull gpt-oss:20b model for Ollama"
 	@echo " make ollama-serve - Start Ollama server"
 	@echo " make ingest - Ingest all PDFs from data/documents/"
 	@echo " make upload PATH=x - Upload a file to data/documents/ (requires PATH argument)"
@@ -67,7 +67,7 @@ clean:
 	@echo "✅ Cleanup complete!"
 
 ollama-pull:
-	ollama pull llama3
+	ollama pull gpt-oss:20b
 	ollama pull nomic-embed-text
 
 ollama-serve:
```

initiatives/genai_red_team_handbook/sandboxes/RAG_local/README.md
Lines changed: 4 additions & 4 deletions

````diff
@@ -75,7 +75,7 @@ graph LR
 
     subgraph "External Services (Local Host)"
         Ollama[Ollama Server<br/>:11434]
-        Model[llama3 Model<br/>config/model.toml]
+        Model[gpt-oss:20b Model<br/>config/model.toml]
         FileStorage[File Storage<br/>data/documents]
     end
 
@@ -182,7 +182,7 @@ To use a different model, simply pull it with `ollama pull <model_name>` and upd
 Controls which LLM model to use:
 ```toml
 [default]
-model = "llama3" # Change to switch models
+model = "gpt-oss:20b" # Change to switch models
 
 [ollama]
 base_url = "http://host.containers.internal:11434/v1"
@@ -273,7 +273,7 @@ Run `make help` to see all commands:
 - `make mypy` - Run mypy type checker
 
 **Ollama:**
-- `make ollama-pull` - Pull llama3 model
+- `make ollama-pull` - Pull gpt-oss:20b model
 - `make ollama-serve` - Start Ollama (checks if already running)
 
 ## Testing the Mock API
@@ -290,7 +290,7 @@ curl -X POST http://localhost:8000/v1/chat/completions \
   -H "Content-Type: application/json" \
   -H "Authorization: Bearer sk-mock-key" \
   -d '{
-    "model": "llama3",
+    "model": "gpt-oss:20b",
     "messages": [{"role": "user", "content": "Hello!"}]
   }'
 ```
````
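The curl command in the README hunk above can also be expressed in Python. The sketch below is a hypothetical helper, not part of the sandbox; it only builds the same request (URL, headers, JSON body with the renamed model) using the standard library, without actually sending it.

```python
# Hypothetical helper: build the same chat-completions request the README's
# curl example sends to the mock API. Standard library only; nothing is sent.
import json
import urllib.request


def build_request(base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Return a POST request mirroring the curl example in the README."""
    payload = {
        "model": "gpt-oss:20b",
        "messages": [{"role": "user", "content": "Hello!"}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer sk-mock-key",
        },
        method="POST",
    )


req = build_request()
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

Calling `urllib.request.urlopen(req)` would send it, assuming the mock API is running on port 8000 as described in the README.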

initiatives/genai_red_team_handbook/sandboxes/RAG_local/app/mocks/openai.py
Lines changed: 1 addition & 1 deletion

```diff
@@ -49,7 +49,7 @@ class ChatCompletionRequest(BaseModel):
     """Request model for chat completions endpoint.
 
     Attributes:
-        model: Name of the model to use (e.g., "llama3").
+        model: Name of the model to use (e.g., "gpt-oss:20b").
         messages: List of message dictionaries with 'role' and 'content' keys.
         temperature: Sampling temperature between 0 and 2. Defaults to 0.7.
         max_tokens: Maximum number of tokens to generate. Defaults to None.
```

initiatives/genai_red_team_handbook/sandboxes/RAG_local/config/model.toml
Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@
 # To change which model is used, edit the [default] section below.
 
 [default]
-# CHANGE THIS to switch models (e.g., "llama3", "llama2", "mistral")
+# CHANGE THIS to switch models (e.g., "gpt-oss:20b", "llama2", "mistral")
 model = "gpt-oss:20b"
 
 [ollama]
```

initiatives/genai_red_team_handbook/sandboxes/llm_local/Makefile
Lines changed: 2 additions & 2 deletions

```diff
@@ -21,7 +21,7 @@ help:
 	@echo " make format - Run black and isort formatters"
 	@echo " make mypy - Run mypy static type checker"
 	@echo ""
-	@echo " make ollama-pull - Pull llama3 model for Ollama"
+	@echo " make ollama-pull - Pull gpt-oss:20b model for Ollama"
 	@echo " make ollama-serve - Start Ollama server"
 	@echo ""
 	@echo "Environment:"
@@ -65,7 +65,7 @@ clean:
 	@echo "✅ Cleanup complete!"
 
 ollama-pull:
-	ollama pull llama3
+	ollama pull gpt-oss:20b
 
 ollama-serve:
 	@echo "🔍 Checking if Ollama is running..."
```

initiatives/genai_red_team_handbook/sandboxes/llm_local/app/mocks/openai.py
Lines changed: 1 addition & 1 deletion

```diff
@@ -47,7 +47,7 @@ class ChatCompletionRequest(BaseModel):
     """Request model for chat completions endpoint.
 
     Attributes:
-        model: Name of the model to use (e.g., "llama3").
+        model: Name of the model to use (e.g., "gpt-oss:20b").
         messages: List of message dictionaries with 'role' and 'content' keys.
         temperature: Sampling temperature between 0 and 2. Defaults to 0.7.
         max_tokens: Maximum number of tokens to generate. Defaults to None.
```

initiatives/genai_red_team_handbook/sandboxes/llm_local/config/model.toml
Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@
 # To change which model is used, edit the [default] section below.
 
 [default]
-# CHANGE THIS to switch models (e.g., "llama3", "llama2", "mistral")
+# CHANGE THIS to switch models (e.g., "gpt-oss:20b", "llama2", "mistral")
 model = "gpt-oss:20b"
 
 [ollama]
```

0 commit comments
