Commit c2671fd

fix(minor docs update):
1 parent 56984c7 commit c2671fd

2 files changed: +21 −35 lines


docs/http_spec.md

Lines changed: 13 additions & 17 deletions
@@ -33,7 +33,7 @@ The `LLMSpec` class is the core of the HTTP specification. It provides the follo
 ### Methods
 
 - **`from_string(http_spec: str) -> LLMSpec`**: Parses an HTTP specification string into an `LLMSpec` object.
-- **`validate(prompt: str, encoded_image: str, encoded_audio: str, files: dict) -> None`**: Validates the request parameters based on the specified modality.
+- **`validate(prompt: str, encoded_image: str, encoded_audio: str, files: dict) -> null`**: Validates the request parameters based on the specified modality.
 - **`probe(prompt: str, encoded_image: str = "", encoded_audio: str = "", files: dict = {}) -> httpx.Response`**: Sends an HTTP request using the specified parameters.
 - **`verify() -> httpx.Response`**: Verifies the HTTP specification by sending a test request.
 
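Between these method signatures and the modality examples below, it may help to see the spec-string format in isolation: a "METHOD URL" line, header lines, a blank line, then the request body containing a `<<PROMPT>>` placeholder. A minimal, self-contained parsing sketch (illustrative only; `parse_spec` is a hypothetical stand-in, not the library's `LLMSpec.from_string`):

```python
# Hypothetical sketch of parsing the spec format used throughout these docs.
# Not the actual LLMSpec implementation.
def parse_spec(http_spec: str) -> dict:
    head, _, body = http_spec.strip().partition("\n\n")
    first, *header_lines = head.splitlines()
    method, url = first.split(maxsplit=1)
    headers = dict(line.split(": ", 1) for line in header_lines)
    return {"method": method, "url": url, "headers": headers, "body": body}

spec = parse_spec("""POST https://api.openai.com/v1/chat/completions
Authorization: Bearer sk-xxxxxxxxx
Content-Type: application/json

{"messages": [{"role": "user", "content": "<<PROMPT>>"}]}""")
print(spec["method"], spec["headers"]["Content-Type"])
```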
@@ -52,12 +52,11 @@ Authorization: Bearer sk-xxxxxxxxx
 Content-Type: application/json
 
 {
-    "model": "gpt-3.5-turbo",
-    "messages": [{"role": "user", "content": "<<PROMPT>>"}],
-    "temperature": 0.7
+  "model": "gpt-3.5-turbo",
+  "messages": [{"role": "user", "content": "<<PROMPT>>"}],
+  "temperature": 0.7
 }
 """
-
 spec = LLMSpec.from_string(http_spec)
 response = await spec.probe("What is the capital of France?")
 ```
@@ -71,12 +70,11 @@ Authorization: Bearer sk-xxxxxxxxx
 Content-Type: application/json
 
 {
-    "model": "gpt-4-vision-preview",
-    "messages": [{"role": "user", "content": "What is in this image? <<BASE64_IMAGE>>"}],
-    "temperature": 0.7
+  "model": "gpt-4-vision-preview",
+  "messages": [{"role": "user", "content": "What is in this image? <<BASE64_IMAGE>>"}],
+  "temperature": 0.7
 }
 """
-
 spec = LLMSpec.from_string(http_spec)
 encoded_image = encode_image_base64_by_url("https://example.com/image.jpg")
 response = await spec.probe("What is in this image?", encoded_image=encoded_image)
@@ -91,12 +89,11 @@ Authorization: Bearer sk-xxxxxxxxx
 Content-Type: application/json
 
 {
-    "model": "whisper-large-v3",
-    "messages": [{"role": "user", "content": "Transcribe this audio: <<BASE64_AUDIO>>"}],
-    "temperature": 0.7
+  "model": "whisper-large-v3",
+  "messages": [{"role": "user", "content": "Transcribe this audio: <<BASE64_AUDIO>>"}],
+  "temperature": 0.7
 }
 """
-
 spec = LLMSpec.from_string(http_spec)
 encoded_audio = encode_audio_base64_by_url("https://example.com/audio.mp3")
 response = await spec.probe("Transcribe this audio:", encoded_audio=encoded_audio)
@@ -111,12 +108,11 @@ Authorization: Bearer sk-xxxxxxxxx
 Content-Type: multipart/form-data
 
 {
-    "model": "gpt-3.5-turbo",
-    "messages": [{"role": "user", "content": "Process this file: <<FILE>>"}],
-    "temperature": 0.7
+  "model": "gpt-3.5-turbo",
+  "messages": [{"role": "user", "content": "Process this file: <<FILE>>"}],
+  "temperature": 0.7
 }
 """
-
 spec = LLMSpec.from_string(http_spec)
 files = {"file": ("document.txt", open("document.txt", "rb"))}
 response = await spec.probe("Process this file:", files=files)
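The probe requests shown in the hunks above can also be assembled by hand. A minimal standard-library sketch that builds (but does not send) the text-modality request; the URL and bearer token are the placeholder values from these docs, not working credentials:

```python
import json
import urllib.request

# Build, without sending, a request equivalent to the text-modality spec above,
# with <<PROMPT>> already substituted.
prompt = "What is the capital of France?"
body = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": prompt}],
    "temperature": 0.7,
}
req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": "Bearer sk-xxxxxxxxx",  # placeholder token from the docs
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
```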

docs/probe_data.md

Lines changed: 8 additions & 18 deletions
@@ -54,30 +54,23 @@ The `probe_data` module is a core component of the Agentic Security project, res
 
 - **Classes:**
   - `PromptSelectionInterface`: Abstract base class for prompt selection strategies.
-
     - Methods:
       - `select_next_prompt(current_prompt: str, passed_guard: bool) -> str`: Selects next prompt
       - `select_next_prompts(current_prompt: str, passed_guard: bool) -> list[str]`: Selects multiple prompts
-      - `update_rewards(previous_prompt: str, current_prompt: str, reward: float, passed_guard: bool) -> None`: Updates rewards
-
+      - `update_rewards(previous_prompt: str, current_prompt: str, reward: float, passed_guard: bool) -> null`: Updates rewards
   - `RandomPromptSelector`: Basic random selection with history tracking.
-
     - Parameters:
       - `prompts: list[str]`: List of available prompts
       - `history_size: int = 3`: Size of history to prevent cycles
-
   - `CloudRLPromptSelector`: Cloud-based RL implementation with fallback.
-
     - Parameters:
       - `prompts: list[str]`: List of available prompts
       - `api_url: str`: URL of RL service
       - `auth_token: str = AUTH_TOKEN`: Authentication token
       - `history_size: int = 300`: Size of history
       - `timeout: int = 5`: Request timeout
       - `run_id: str = ""`: Unique run identifier
-
   - `QLearningPromptSelector`: Local Q-learning implementation.
-
     - Parameters:
       - `prompts: list[str]`: List of available prompts
       - `learning_rate: float = 0.1`: Learning rate
@@ -86,13 +79,11 @@ The `probe_data` module is a core component of the Agentic Security project, res
       - `exploration_decay: float = 0.995`: Exploration decay rate
       - `min_exploration: float = 0.01`: Minimum exploration rate
       - `history_size: int = 300`: Size of history
-
-  - `Module`: Main class that uses CloudRLPromptSelector.
-
-    - Parameters:
-      - `prompt_groups: list[str]`: Groups of prompts
-      - `tools_inbox: asyncio.Queue`: Queue for tool communication
-      - `opts: dict = {}`: Configuration options
+  - **Module**: Main class that uses CloudRLPromptSelector.
+    - Parameters:
+      - `prompt_groups: list[str]`: Groups of prompts
+      - `tools_inbox: asyncio.Queue`: Queue for tool communication
+      - `opts: dict = {}`: Configuration options
 
 ## Usage Examples
 
@@ -119,10 +110,9 @@ from agentic_security.probe_data.modules.rl_model import QLearningPromptSelector
 
 prompts = ["What is AI?", "Explain machine learning"]
 selector = QLearningPromptSelector(prompts)
-
 current_prompt = "What is AI?"
-next_prompt = selector.select_next_prompt(current_prompt, passed_guard=True)
-selector.update_rewards(current_prompt, next_prompt, reward=1.0, passed_guard=True)
+next_prompt = selector.select_next_prompt(current_prompt, passed_guard=true)
+selector.update_rewards(current_prompt, next_prompt, reward=1.0, passed_guard=true)
 ```
 
 ## Conclusion
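The selector interface diffed above can be sketched end to end. The class below is a hypothetical, simplified stand-in for `QLearningPromptSelector` (it drops the history tracking and uses a plain epsilon-greedy policy with an incremental Q update); only the default parameter values are taken from the docs:

```python
import random

# Illustrative sketch only -- not the project's QLearningPromptSelector.
class QLearningSelector:
    def __init__(self, prompts, learning_rate=0.1, exploration_rate=1.0,
                 exploration_decay=0.995, min_exploration=0.01):
        self.prompts = prompts
        self.lr = learning_rate
        self.eps = exploration_rate
        self.decay = exploration_decay
        self.min_eps = min_exploration
        self.q = {p: 0.0 for p in prompts}  # one Q value per prompt

    def select_next_prompt(self, current_prompt: str, passed_guard: bool) -> str:
        # Epsilon-greedy: explore with probability eps, otherwise exploit.
        if random.random() < self.eps:
            choice = random.choice(self.prompts)
        else:
            choice = max(self.prompts, key=self.q.get)
        self.eps = max(self.min_eps, self.eps * self.decay)
        return choice

    def update_rewards(self, previous_prompt, current_prompt, reward, passed_guard):
        # Move the chosen prompt's Q value toward the observed reward.
        self.q[current_prompt] += self.lr * (reward - self.q[current_prompt])

selector = QLearningSelector(["What is AI?", "Explain machine learning"])
nxt = selector.select_next_prompt("What is AI?", passed_guard=True)
selector.update_rewards("What is AI?", nxt, reward=1.0, passed_guard=True)
```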
