
Commit 4d6d621

Merge branch '687-feat-pptx-parser' of https://github.com/deepsense-ai/ragbits into 687-feat-pptx-parser
2 parents: 2be019b + 530df04

File tree

3 files changed: +86 -0 lines changed


docs/api_reference/agents/index.md

Lines changed: 2 additions & 0 deletions
@@ -3,3 +3,5 @@
::: ragbits.agents.AgentOptions

::: ragbits.agents.Agent
+
+::: ragbits.agents.AgentResult

docs/how-to/agents/define_and_use_agents.md

Lines changed: 82 additions & 0 deletions
@@ -0,0 +1,82 @@

# How-To: Define and use agents with Ragbits

Ragbits [`Agent`][ragbits.agents.Agent] combines the reasoning power of LLMs with the ability to execute custom code through *tools*. This makes it possible to handle complex tasks by giving the model access to your own Python functions.

When using tool-enabled agents, the LLM reviews the system prompt and incoming messages to decide whether a tool should be called. Instead of just generating a text response, the model can choose to invoke a tool or combine both approaches.

Before using tools, you can check whether your selected model supports function calling with:
```python
import litellm

litellm.supports_function_calling(model="your-model-name")
```

If function calling is supported and tools are enabled, the agent interprets the user input, decides whether a tool is needed, executes it if necessary, and returns a final response enriched with tool results.

This response is encapsulated in an [`AgentResult`][ragbits.agents.AgentResult], which includes the model's output, additional metadata, conversation history, and any tool calls performed.

## How to build an agent with Ragbits

This guide walks you through building a simple agent that uses a `get_weather` tool to return weather data based on a location.

### Define a tool function

First, define the function you want your agent to call. It should take regular Python arguments and return a JSON-serializable result.

```python
import json

--8<-- "examples/agents/tool_use.py:31:48"
```
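
If you are reading this outside the rendered docs (where the `--8<--` directive pulls in the real implementation from `examples/agents/tool_use.py`), the sketch below shows roughly what such a tool could look like. The function name matches the example, but the mock temperatures and JSON shape are invented for illustration.

```python
import json


def get_weather(location: str) -> str:
    """Return mock weather data for the given location as a JSON string."""
    # Invented sample data for illustration; see tool_use.py for the actual implementation.
    mock_temperatures_c = {"Paris": 15, "Tokyo": 10, "London": 12}
    return json.dumps(
        {
            "location": location,
            "temperature_c": mock_temperatures_c.get(location, 20),
        }
    )
```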

### Define a prompt

Use a structured prompt to instruct the LLM. For details on writing prompts with Ragbits, see the [Guide to Prompting](https://ragbits.deepsense.ai/how-to/prompts/use_prompting/).

```python
from pydantic import BaseModel
from ragbits.core.prompt import Prompt

--8<-- "examples/agents/tool_use.py:51:70"
```
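
Again, the directive above embeds the real prompt from `examples/agents/tool_use.py`. As a rough, illustrative sketch (assuming the class-based prompt style from the Guide to Prompting; the class names and template wording are not the exact code from the example):

```python
from pydantic import BaseModel

from ragbits.core.prompt import Prompt


class WeatherPromptInput(BaseModel):
    """Input schema for the weather prompt."""

    location: str


class WeatherPrompt(Prompt[WeatherPromptInput, str]):
    """Prompt that asks the model about the weather in a given location."""

    system_prompt = "You are a helpful assistant that answers questions about the weather."
    user_prompt = "What is the weather like in {{ location }}?"
```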

### Run the agent

Create the agent, attach the prompt and tool, and run it:

```python
import asyncio
from ragbits.agents import Agent
from ragbits.core.llms import LiteLLM

--8<-- "examples/agents/tool_use.py:73:84"
```
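
The snippet above comes from the same example file. A minimal sketch of the equivalent code, reusing the `get_weather` tool and `WeatherPrompt` defined earlier (the model name mirrors the conversation history example below):

```python
import asyncio

from ragbits.agents import Agent
from ragbits.core.llms import LiteLLM


async def main() -> None:
    """Create the agent, attach the prompt and tool, and ask about Paris."""
    llm = LiteLLM(model_name="gpt-4o-2024-08-06", use_structured_output=True)
    agent = Agent(llm=llm, prompt=WeatherPrompt, tools=[get_weather])
    result = await agent.run(WeatherPromptInput(location="Paris"))
    print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```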

The result is an [`AgentResult`][ragbits.agents.AgentResult], which includes the model's output, additional metadata, conversation history, and any tool calls performed.

The complete code example is available in the [Ragbits repository](https://github.com/deepsense-ai/ragbits/blob/main/examples/agents/tool_use.py).

## Conversation history

[`Agent`][ragbits.agents.Agent]s can retain conversation context across multiple interactions by enabling the `keep_history` flag when initializing the agent. This is useful when you want the agent to understand follow-up questions without needing the user to repeat earlier details.

To enable this, set `keep_history=True` when constructing the agent. The full exchange, including messages, tool calls, and results, is stored and can be accessed via the `AgentResult.history` property.

### Example of context preservation

The following example demonstrates how an agent with history enabled maintains context between interactions:

```python
async def main() -> None:
    """Run the weather agent with conversation history."""
    llm = LiteLLM(model_name="gpt-4o-2024-08-06", use_structured_output=True)
    agent = Agent(llm=llm, prompt=WeatherPrompt, tools=[get_weather], keep_history=True)

    # First question establishes the weather context for Paris
    await agent.run(WeatherPromptInput(location="Paris"))

    # Follow-up question about Tokyo - the agent retains weather context
    response = await agent.run("What about Tokyo?")
    print(response)
```

In this scenario, the agent recognizes that the follow-up question "What about Tokyo?" refers to weather information due to the preserved conversation history. The expected output would be an `AgentResult` containing the response:

```python
AgentResult(content='The current temperature in Tokyo is 10°C.', ...)
```
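
Because `keep_history=True` was set, the stored exchange can also be inspected afterwards. A small sketch, assuming `AgentResult.history` is an iterable of message entries (the exact entry format is not shown in this guide):

```python
# Sketch: walk through the conversation stored on the result.
# Assumes `history` yields message entries that can be printed directly.
for entry in response.history:
    print(entry)
```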

## Streaming agent responses

For use cases where you want to process partial outputs from the LLM as they arrive (e.g., in chat UIs), the [`Agent`][ragbits.agents.Agent] class supports streaming through the `run_streaming()` method.

This method returns an `AgentResultStreaming` object — an async iterator that yields parts of the LLM response and tool-related events in real time.
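
A minimal consumption sketch, assuming the returned stream can be driven with `async for` and that text chunks print cleanly as they arrive (tool-related events may also be yielded in between); it reuses the prompt and tool defined earlier in this guide:

```python
import asyncio


async def stream_weather() -> None:
    """Print the agent's answer incrementally as it is generated."""
    llm = LiteLLM(model_name="gpt-4o-2024-08-06", use_structured_output=True)
    agent = Agent(llm=llm, prompt=WeatherPrompt, tools=[get_weather])

    # Each yielded item is printed as soon as it arrives; depending on the
    # Ragbits version, some items may be tool-call events rather than text.
    async for chunk in agent.run_streaming(WeatherPromptInput(location="Paris")):
        print(chunk, end="", flush=True)


if __name__ == "__main__":
    asyncio.run(stream_weather())
```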

mkdocs.yml

Lines changed: 2 additions & 0 deletions
@@ -37,6 +37,8 @@ nav:
  - "Setup guardrails": how-to/guardrails/use_guardrails.md
- Chatbots:
  - "Setup API & UI": how-to/chatbots/api.md
+ - Agents:
+   - "Define and use agents": how-to/agents/define_and_use_agents.md
- Evaluate:
  - "Evaluate pipelines": how-to/evaluate/evaluate.md
  - "Create custom evaluation pipeline": how-to/evaluate/custom_evaluation_pipeline.md
