Commit 290f3d4

Refine docs

1 parent 28f9312 commit 290f3d4

10 files changed: +284 additions, −139 deletions

agentfly/templates/templates.py — 10 additions & 6 deletions

```diff
@@ -1023,6 +1023,7 @@ def prompt(self, add_generation_prompt=False, tools=None) -> str:
             The prompt for the chat.
         """
         self.flags['add_generation_prompt'] = add_generation_prompt
+        tools = tools or self.tools
         prompt, _, _ = self.template.render(messages=self.messages, tools=tools, add_generation_prompt=add_generation_prompt)
         return prompt

@@ -1043,15 +1044,18 @@ def tokenize(self, tokenizer: PreTrainedTokenizer = None, add_generation_prompt=
             processor: The processor to use for the chat.

         Returns:
-            The tokenized messages, a dictionary with the following items:
-                - input_ids
-                - attention_mask
-                - labels
-                - action_mask
-                - multi_modal_inputs
+            inputs (dict): Inputs for helping training.
+                - input_ids
+                - attention_mask
+                - labels
+                - action_mask
+                - multi_modal_inputs
         """
         if tokenizer is None:
+            if self.tokenizer is None:
+                raise ValueError("Tokenizer is not set. Set it when initializing the chat or pass it as an argument.")
             tokenizer = self.tokenizer
+
         if tools is None:
             tools = self.tools
         return self.template.encode(messages=self.messages, tokenizer=tokenizer, return_tensors="pt", tools=tools, add_generation_prompt=add_generation_prompt, processor=processor)
```

docs/api_references/chat_template/advanced_features.md — 7 additions & 7 deletions

````diff
@@ -11,7 +11,7 @@ The Chat Template System provides advanced features for fine-grained control ove
 The system supports multiple strategies for where and how tools are integrated into prompts:

 ```python
-from agentfly.agents.templates.constants import ToolPlacement
+from agentfly.templates.constants import ToolPlacement

 # 1. SYSTEM placement - tools appear in system message
 system_placement = ToolPlacement.SYSTEM
@@ -31,7 +31,7 @@ Different strategies for formatting tool definitions:
 #### JSON Formatters

 ```python
-from agentfly.agents.templates.tool_policy import (
+from agentfly.templates.tool_policy import (
     JsonFormatter, JsonMinifiedFormatter, JsonIndentedFormatter, JsonCompactFormatter
 )

@@ -65,7 +65,7 @@ yaml_formatter = YamlFormatter()
 #### Custom Formatters

 ```python
-from agentfly.agents.templates.tool_policy import ToolFormatter
+from agentfly.templates.tool_policy import ToolFormatter

 class CustomToolFormatter(ToolFormatter):
     def format(self, tools):
@@ -101,7 +101,7 @@ custom_tool_policy = ToolPolicy(
 Process tool content before formatting:

 ```python
-from agentfly.agents.templates.tool_policy import ToolContentProcessor
+from agentfly.templates.tool_policy import ToolContentProcessor

 class ToolFilterProcessor(ToolContentProcessor):
     """Filter tools based on certain criteria"""
@@ -139,7 +139,7 @@ filtered_tool_policy = ToolPolicy(
 Fine-grained control over system message behavior:

 ```python
-from agentfly.agents.templates.system_policy import SystemPolicy
+from agentfly.templates.system_policy import SystemPolicy

 # Basic system policy
 basic_policy = SystemPolicy(
@@ -170,7 +170,7 @@ Transform system messages before rendering:
 #### Built-in Processors

 ```python
-from agentfly.agents.templates.system_policy import Llama32DateProcessor
+from agentfly.templates.system_policy import Llama32DateProcessor

 # Llama 3.2 date processor (adds current date)
 llama_date_policy = SystemPolicy(
@@ -183,7 +183,7 @@ llama_date_policy = SystemPolicy(
 #### Custom Content Processors

 ```python
-from agentfly.agents.templates.system_policy import SystemContentProcessor
+from agentfly.templates.system_policy import SystemContentProcessor

 class EnvironmentAwareProcessor(SystemContentProcessor):
     """Add environment information to system messages"""
````

docs/api_references/chat_template/basic_usage.md — 8 additions & 1 deletion

````diff
@@ -91,12 +91,13 @@ messages_with_image = [
         "role": "user",
         "content": [
             {"type": "text", "text": "What's in this image?"},
-            {"type": "image", "image": "/path/to/image.jpg"}
+            {"type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"}
         ]
     }
 ]

 chat = Chat(template="qwen2.5-vl", messages=messages_with_image)
+prompt = chat.prompt()
 ```

 ## Template Operations
@@ -119,12 +120,18 @@ prompt_with_tools = chat.prompt(tools=tools)
 Use `Chat.tokenize` method to tokenize the messages with the specified chat template.

 ```python
+from transformers import AutoTokenizer, AutoProcessor
+
+tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
+processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
 # Tokenize the conversation
 inputs = chat.tokenize(
     tokenizer=tokenizer,
     add_generation_prompt=True,
+    processor=processor,
     tools=tools
 )
+print(inputs.keys())

 # The result includes:
 # - input_ids: Token IDs
````

docs/api_references/chat_template/core_components.md — 5 additions & 5 deletions

```diff
@@ -32,7 +32,7 @@
 The Chat Template System is inspired by the art of building block toys - where complex structures are created by combining simple, standardized components. We identify some basic components from LLM's chat templates, and use them to form prompts from conversation messages. Below are some basic core compoenents:

-`system_template`: Specify system prompt is formatted in chat template.
+`system_template`: Specify how system prompt is formatted in chat template.

 `system_template_with_tools`: Specify how tools along with system prompt is formatted in chat template

@@ -72,9 +72,9 @@ tools = [
 <span class="system">System: You are a helpful assistant.</span>

-<span class="user">User: Hi, how are you today.</span>
+<span class="user">User: Hi, Can you help me search the information.</span>

-<span class="assistant">Assistant: I am good. How can I help you?</span>
+<span class="assistant">Assistant: tool call: search\ntool arguments: related query</span>

 <span class="tool">Tool: Searched inforamtion...</span>

@@ -120,7 +120,7 @@ The central class that manages:
 #### Chat
 Recommended class for user usage:
 - Store and format messages
-- Get formatted prompt
+- Get formatted prompts
 - Tokenize formatted prompt

 ### Advanced Features
@@ -170,6 +170,6 @@ def _register_vision_processor(self):
 **4. Jinja Template Generation**

 Templates can generate HuggingFace-compatible Jinja templates:
-- Enables use with external systems (vLLM, etc.)
+- Enables use with external systems (vLLM, transformers tokenizers, etc.)
 - Maintains consistency between Python and Jinja rendering
 - Supports complex logic through Jinja macros
```
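The building-block idea this diff documents (a `system_template` and per-role templates composed into a prompt) can be sketched with plain format strings. The templates below are hypothetical stand-ins, not agentfly's actual components:

```python
# Hypothetical per-role component templates, in the style the docs describe.
ROLE_TEMPLATES = {
    "system": "System: {content}\n",
    "user": "User: {content}\n",
    "assistant": "Assistant: {content}\n",
}

def render(messages, add_generation_prompt=False):
    """Concatenate per-role templates into a single prompt string."""
    prompt = "".join(
        ROLE_TEMPLATES[m["role"]].format(content=m["content"]) for m in messages
    )
    if add_generation_prompt:
        # Leave the prompt open so the model generates the assistant's turn.
        prompt += "Assistant:"
    return prompt
```

Real templates additionally handle tools, stop tokens, and multimodal content, but the composition principle is the same.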
