The agentic reasoning cycle that drives Arawn's behavior.
The agent loop is a repeated cycle of:
- Build context (system prompt + history + recall)
- Call LLM with available tools
- Execute any tool calls
- Repeat until LLM responds with text only
```rust
pub async fn turn(&self, session: &mut Session, message: &str) -> Result<AgentResponse> {
    // 1. Add user message to history
    session.add_message(Message::user(message));

    // 2. Recall relevant memories
    let recall = self.recall(message).await?;

    // 3. Build request with context
    let mut request = self.build_request(session, recall);
    let mut tools = Vec::new();

    // 4. Enter tool loop
    loop {
        let response = self.backend.complete(&request).await?;
        match response.content {
            Content::Text(text) => {
                // Done - return the final response
                return Ok(AgentResponse { text, tools, usage: response.usage });
            }
            Content::ToolUse(calls) => {
                // Execute tools, append results, continue the loop
                for call in calls {
                    let result = self.execute_tool(&call).await?;
                    tools.push(call);
                    request.messages.push(result);
                }
            }
        }
    }
}
```

The system prompt is assembled from:
- Bootstrap prompt — Core identity and behavior
- Tool documentation — Descriptions of available tools
- Workspace context — Current directory, project info
- Context preamble — Optional injected context (e.g., from parent agent)
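Assembling these sections amounts to ordered concatenation. The function below is a hypothetical sketch of that step; the parameter names and separator are assumptions, not Arawn's actual builder API.

```rust
// Hypothetical sketch: concatenate the four system prompt sections in order.
// The context preamble is optional and only appended when present.
fn build_system_prompt(
    bootstrap: &str,
    tool_docs: &str,
    workspace: &str,
    preamble: Option<&str>,
) -> String {
    let mut prompt = String::new();
    prompt.push_str(bootstrap);
    prompt.push_str("\n\n");
    prompt.push_str(tool_docs);
    prompt.push_str("\n\n");
    prompt.push_str(workspace);
    if let Some(extra) = preamble {
        // Injected context (e.g. from a parent agent) goes last.
        prompt.push_str("\n\n");
        prompt.push_str(extra);
    }
    prompt
}
```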
Session history includes:
- Previous user messages
- Previous assistant responses
- Tool call records with results
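A minimal model of that history might look like the sketch below; the `Message` variants and `Session` shape here are illustrative assumptions, not Arawn's real definitions.

```rust
// Illustrative sketch only: variant and field names are assumptions,
// not Arawn's actual `Message`/`Session` types.
#[derive(Debug, Clone)]
pub enum Message {
    User(String),
    Assistant(String),
    // A tool call record pairs the invocation with its result.
    ToolRecord { name: String, result: String },
}

#[derive(Default)]
pub struct Session {
    pub history: Vec<Message>,
}

impl Session {
    pub fn add_message(&mut self, msg: Message) {
        self.history.push(msg);
    }
}
```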
Before each turn, relevant memories are retrieved:
```rust
let query = RecallQuery {
    embedding: self.embed(message).await?,
    limit: 5,
    threshold: 0.6,
};
let memories = self.memory_store.recall(query).await?;
```

Recalled memories are injected as context before the user message.
When the LLM returns tool calls:
- Parse — Extract tool name and parameters from response
- Validate — Check tool exists and parameters match schema
- Execute — Run tool with parameters and context
- Format — Convert result to message format
- Append — Add tool call and result to conversation
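The parse/validate/execute/format/append pipeline can be sketched with a plain function registry; the `ToolFn` signature, `ToolCall` struct, and message formatting below are assumptions for illustration, not Arawn's dispatch code.

```rust
use std::collections::HashMap;

// Hypothetical tool signature: parameters in, result or error message out.
type ToolFn = fn(&str) -> Result<String, String>;

// Assumed shape of a parsed tool call (step 1 of the pipeline).
pub struct ToolCall {
    pub name: String,
    pub params: String,
}

fn handle_tool_call(
    registry: &HashMap<&str, ToolFn>,
    call: &ToolCall,
    conversation: &mut Vec<String>,
) -> Result<(), String> {
    // Validate: the tool must exist before we execute anything.
    let tool = registry
        .get(call.name.as_str())
        .ok_or_else(|| format!("unknown tool: {}", call.name))?;
    // Execute: a tool failure becomes an error message the LLM can read.
    let result = tool(&call.params).unwrap_or_else(|e| format!("error: {e}"));
    // Format + append: record the call and its result in the conversation.
    conversation.push(format!("[tool:{}] {}", call.name, result));
    Ok(())
}
```

Note that execution errors are folded into the result string rather than propagated, matching the "return error message to LLM" behavior described later.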
Tools receive context about their execution environment:
```rust
pub struct ToolContext {
    pub session_id: String,
    pub working_dir: PathBuf,
    pub config: Arc<Config>,
    pub memory_store: Option<Arc<MemoryStore>>,
}
```

The agent loop has safety limits:
| Limit | Default | Configurable | Purpose |
|---|---|---|---|
| Max iterations | 25 | `[agent.default].max_iterations` | Prevent runaway loops |
| Turn timeout | 300s | `[agent.default].timeout` | Kill hung turns |
| Shell timeout | 30s | `[tools.shell].timeout_secs` | Per-shell-command timeout |
| Web timeout | 30s | `[tools.web].timeout_secs` | Per-web-request timeout |
For streaming responses, the loop yields events:
```rust
pub enum AgentEvent {
    Text(String),                            // Partial text
    ToolStart { name: String, id: String },  // Tool execution starting
    ToolEnd { id: String, result: String },  // Tool execution complete
    Done,                                    // Turn complete
}
```

| Error Type | Behavior |
|---|---|
| Tool execution failure | Return error message to LLM |
| LLM API error | Bubble up to caller |
| Timeout | Cancel tool, return timeout message |
| Max iterations | Stop loop, return partial response |
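Returning to the streaming interface: a caller might consume the events by rendering partial text as it arrives and marking tool activity inline. The sketch below redeclares the event enum with assumed field types purely so the example is self-contained; it is not Arawn's consumer code.

```rust
// Event enum redeclared with assumed String fields for a self-contained sketch.
pub enum AgentEvent {
    Text(String),
    ToolStart { name: String, id: String },
    ToolEnd { id: String, result: String },
    Done,
}

// Illustrative consumer: fold a stream of events into displayable output.
fn render(events: Vec<AgentEvent>) -> String {
    let mut out = String::new();
    for event in events {
        match event {
            AgentEvent::Text(t) => out.push_str(&t),             // partial text
            AgentEvent::ToolStart { name, .. } => out.push_str(&format!("[{name}...]")),
            AgentEvent::ToolEnd { .. } => out.push_str("[done]"),
            AgentEvent::Done => break,                           // turn complete
        }
    }
    out
}
```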