Commit e15d81a

readme: add install step
1 parent d2d1eaa commit e15d81a

File tree

1 file changed: +25 -14 lines changed

README.md

Lines changed: 25 additions & 14 deletions
@@ -1,6 +1,26 @@
 # ResilientLLM
 
-A simple but robust LLM integration layer designed to ensure reliable, seamless interactions across multiple APIs by intelligently handling failures and rate limits.
+A minimalist but robust LLM integration layer designed to ensure reliable, seamless interactions across multiple LLM providers by intelligently handling failures and rate limits.
+
+## Why Use ResilientLLM
+
+This library solves challenges in building production-ready AI Agents caused by:
+
+- Unstable network conditions
+- Inconsistent error handling
+- Unpredictable LLM API rate limit errors
+
+### Key Features
+
+- **Rate limiting**: You don’t need to calculate tokens or manage rate limits yourself.
+- **Token estimation**: The number of LLM tokens is estimated for each request and enforced.
+- **Retries, backoff, and circuit breaker**: All are handled internally by the `ResilientOperation`.
+
+## Installation
+
+```bash
+npm i resilient-llm
+```
 
 ## Quickstart
 
@@ -16,7 +36,7 @@ const llm = new ResilientLLM({
     requestsPerMinute: 60, // Limit to 60 requests per minute
     llmTokensPerMinute: 90000 // Limit to 90,000 LLM tokens per minute
   },
-  retries: 3, // Number of times to retry if req. fails for reasons possible to fix by retry
+  retries: 3, // Number of times to retry when a request fails with an error that a retry can fix
   backoffFactor: 2 // Increase delay between retries by this factor
 });
 
@@ -35,17 +55,7 @@ const conversationHistory = [
 })();
 ```
 
----
-
-### Key Points
-
-- **Rate limiting is automatic**: You don’t need to pass token counts or manage rate limits yourself.
-- **Token estimation**: The number of LLM tokens is estimated for each request and enforced.
-- **Retries, backoff, and circuit breaker**: All are handled internally by the `ResilientOperation`.
-
----
-
-### Advanced: With Custom Options
+### Advanced Options
 
 ```js
 const response = await llm.chat(
@@ -61,6 +71,7 @@ const response = await llm.chat(
 );
 ```
 
+
 ## Motivation
 
 ResilientLLM is a resilient, unified LLM interface featuring circuit breaker, token bucket rate limiting, caching, and adaptive retry with dynamic backoff support.
@@ -76,7 +87,7 @@ The final solution was to extract tiny LLM orchestration class out of all my AI
 This library solves my challenges in building production-ready AI Agents such as:
 - unstable network conditions
 - inconsistent error handling
-- unpredictable LLM API rate limit errrors
+- unpredictable LLM API rate limit errors
 
 This library aims to solve the same challenges for you by providing a resilient layer that intelligently manages failures and rate limits, enabling you (developers) to integrate LLMs confidently and effortlessly at scale.
 
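The hunks above only show fragments of the constructor options and the `chat` call, so here is a minimal end-to-end sketch assembled from them. It is a sketch only: the import style, the `rateLimitOptions` key name (the diff shows just the nested per-minute limits and a closing `},`), and the message shape passed to `chat` are assumptions, not confirmed by this commit.

```js
// Sketch only: assembled from the README fragments visible in this diff.
// The import style, the `rateLimitOptions` key name, and the message shape
// passed to `chat` are assumptions, not confirmed by the commit.
import ResilientLLM from 'resilient-llm';

const llm = new ResilientLLM({
  rateLimitOptions: {           // assumed name of the object closed by "}," in the diff
    requestsPerMinute: 60,      // limit to 60 requests per minute
    llmTokensPerMinute: 90000   // limit to 90,000 LLM tokens per minute
  },
  retries: 3,                   // retry only failures that a retry can actually fix
  backoffFactor: 2              // increase delay between retries by this factor
});

(async () => {
  const conversationHistory = [
    { role: 'user', content: 'Summarize this README in one sentence.' }
  ];
  const response = await llm.chat(conversationHistory);
  console.log(response);
})();
```

If those assumptions hold, the same `llm` instance applies the rate limits, token estimation, retries, and backoff described under Key Features to every `chat` call, with no extra bookkeeping in the calling code.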
