Toller is a lightweight Python library designed to make your asynchronous calls
Just as the [Nova Scotia Duck Tolling Retriever](https://www.akc.org/dog-breeds/nova-scotia-duck-tolling-retriever/) lures and guides ducks, Toller "lures" unruly asynchronous tasks into well-managed, predictable flows, guiding the overall execution path and making concurrency easier to reason about.
## Why Toller?
Modern applications that integrate with numerous LLMs, vector databases, and other microservices face a constant challenge: external services can be unreliable. They might be temporarily down, enforce rate limits, or return transient errors.
Building robust applications in this environment means every external call needs careful handling, but repeating this logic for every API call leads to boilerplate, inconsistency, and often poorly managed asynchronous processes. **Toller was built to solve this.** It provides a declarative way to add these resilience patterns.
Toller offers this standard approach both for client-side calls and, potentially, for protecting server-side resources.
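To make the boilerplate concrete, here is the kind of hand-rolled retry loop that tends to get copy-pasted around a codebase when no resilience library is used (a minimal stdlib-only sketch; `TransientAPIError` and `call_with_retries` are illustrative names, not part of Toller's API):

```python
import asyncio

class TransientAPIError(Exception):
    """Stand-in for a rate-limit or 5xx error from an external service."""

async def call_with_retries(func, max_attempts=3, base_delay=0.01):
    # Ad-hoc retry with exponential backoff: the pattern Toller
    # aims to replace with a single declarative wrapper.
    for attempt in range(1, max_attempts + 1):
        try:
            return await func()
        except TransientAPIError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Back off before trying again: base_delay, 2x, 4x, ...
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))

async def demo():
    calls = {"n": 0}

    async def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise TransientAPIError("temporarily unavailable")
        return "ok"

    return await call_with_retries(flaky), calls["n"]

result, attempts = asyncio.run(demo())
print(result, attempts)  # ok 3
```

Multiply this loop by every external call site (and add circuit-breaking and rate-limit handling) and the maintenance cost becomes clear.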
## Features
```bash
pip install toller
```
## Usage and Examples
### Example 1: Basic Resilience for Generative AI Calls
<details open>
For a function that calls out to an LLM, we want to handle rate limits, retry on temporary server issues, and stop if the service is truly down.
```python
import asyncio
import random
from toller import TransientError, FatalError, MaxRetriesExceeded, OpenCircuitError

# Define potential API errors
class LLMRateLimitError(TransientError): pass
class LLMServerError(TransientError): pass
class LLMInputError(FatalError): pass  # e.g., prompt too long

# Simulate an LLM call
LLM_DOWN_FOR_DEMO = 0  # Counter for demoing circuit breaker
async def call_llm_api(prompt: str):
    global LLM_DOWN_FOR_DEMO
    print(f"LLM API: Processing '{prompt[:20]}...' (Attempt for this task)")