diff --git a/docs/codedocs/algorithms.md b/docs/codedocs/algorithms.md
new file mode 100644
index 0000000..8eb6791
--- /dev/null
+++ b/docs/codedocs/algorithms.md
@@ -0,0 +1,125 @@
+---
+title: "Algorithms"
+description: "How fixed window, sliding window, token bucket, and cached fixed window algorithms work internally."
+---
+
+Algorithms define how tokens are counted and when requests are allowed. In this library, algorithms are factory functions that return an object with `limit`, `getRemaining`, and `resetTokens` methods. The factories live in `src/single.ts` (single region) and `src/multi.ts` (multi region), and their Redis logic lives in `src/lua-scripts/`.
+
+Each algorithm receives a `Context` with a Redis client, a key prefix, and optional cache. The algorithm then calls `safeEval` from `src/hash.ts`, which uses `EVALSHA` with a precomputed hash and falls back to `EVAL` if the script isn’t loaded.
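
As a sketch, the `EVALSHA`-with-fallback pattern looks like this (a simplified illustration with a minimal client interface, not the actual `@upstash/redis` types or the real `safeEval` signature):

```ts
// Minimal sketch of the EVALSHA-with-fallback pattern. The interface below
// is illustrative; the real logic lives in src/hash.ts.
type ScriptInfo = { script: string; hash: string };

interface MinimalRedis {
  evalsha(hash: string, keys: string[], args: unknown[]): Promise<unknown>;
  eval(script: string, keys: string[], args: unknown[]): Promise<unknown>;
}

async function safeEvalSketch(
  redis: MinimalRedis,
  { script, hash }: ScriptInfo,
  keys: string[],
  args: unknown[]
): Promise<unknown> {
  try {
    // Fast path: run the script already cached on the server by its SHA1.
    return await redis.evalsha(hash, keys, args);
  } catch (error) {
    if (`${error}`.includes("NOSCRIPT")) {
      // Slow path: the server has not seen the script yet, send full source.
      return await redis.eval(script, keys, args);
    }
    throw error;
  }
}
```

After the `EVAL` fallback runs once, Redis caches the script, so subsequent calls take the fast path again.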
+
+```mermaid
+flowchart TD
+ A["limit(identifier)"] --> B{Algorithm}
+ B -->|fixedWindow| C[INCRBY + PEXPIRE]
+ B -->|slidingWindow| D[GET current + GET previous]
+ B -->|tokenBucket| E[HMGET refilledAt/tokens]
+ C --> F[Compare against limit]
+ D --> F
+ E --> F
+ F --> G[Return success/remaining/reset]
+```
+
+## Fixed window
+Fixed window is implemented in `src/single.ts` with `SCRIPTS.singleRegion.fixedWindow.*` in `src/lua-scripts/single.ts`. It increments a counter for the current window and rejects when the counter exceeds the limit. The Lua script sets the key’s expiration the first time it is created so each bucket is self‑cleaning.
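
The counting logic reduces to the following in-memory sketch (illustrative TypeScript for what the Lua script does atomically in Redis; the names are not the library's internals):

```ts
// In-memory sketch of fixed-window counting. In the library the increment
// and expiry run as one Lua script so they are atomic in Redis.
const windows = new Map<string, number>();

function fixedWindowLimit(
  identifier: string,
  tokens: number,
  windowMs: number,
  now: number
): { success: boolean; remaining: number } {
  // Requests are bucketed by the window they fall into.
  const bucket = Math.floor(now / windowMs);
  const key = `${identifier}:${bucket}`;
  const used = (windows.get(key) ?? 0) + 1; // INCRBY
  windows.set(key, used); // PEXPIRE makes the Redis key self-cleaning
  return { success: used <= tokens, remaining: Math.max(0, tokens - used) };
}
```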
+
+**Basic usage**
+```ts title="app/ratelimit.ts"
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const ratelimit = new Ratelimit({
+ redis: Redis.fromEnv(),
+ limiter: Ratelimit.fixedWindow(100, "1 m")
+});
+
+const res = await ratelimit.limit("api_key_123");
+```
+
+**Advanced usage (dynamic limits)**
+```ts title="app/dynamic.ts"
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const ratelimit = new Ratelimit({
+ redis: Redis.fromEnv(),
+ limiter: Ratelimit.fixedWindow(60, "1 m"),
+ dynamicLimits: true
+});
+
+await ratelimit.setDynamicLimit({ limit: 120 });
+const res = await ratelimit.limit("user_42");
+```
+
+## Sliding window
+Sliding window blends current and previous windows to reduce boundary bursts. The Lua script reads both buckets, weights the previous window by how far into the current window you are, and then calculates remaining tokens. See `SCRIPTS.singleRegion.slidingWindow.*` in `src/lua-scripts/single.ts`.
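
The weighting is simple arithmetic, sketched here outside of Lua (the function name is illustrative):

```ts
// Sketch of the sliding-window estimate: the previous window's count is
// weighted by how much of it still overlaps the sliding window.
function slidingWindowCount(
  currentCount: number,
  previousCount: number,
  windowMs: number,
  elapsedInCurrentMs: number
): number {
  // At the start of the current window the previous window counts fully;
  // at the end of it, the previous window counts for nothing.
  const previousWeight = 1 - elapsedInCurrentMs / windowMs;
  return currentCount + previousCount * previousWeight;
}
```

The request is allowed when this estimate stays below the configured limit, which is why bursts at window boundaries are damped compared to fixed window.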
+
+**Basic usage**
+```ts title="app/ratelimit.ts"
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const ratelimit = new Ratelimit({
+  redis: Redis.fromEnv(),
+  limiter: Ratelimit.slidingWindow(10, "10 s")
+});
+```
+
+**Edge case (refunds)**
+If you pass a negative `rate`, the algorithm treats it as a refund and skips cache blocking. This is handled in `src/single.ts` by checking `incrementBy > 0` before consulting the cache.
+
+```ts title="app/refund.ts"
+// `ratelimit` is the instance created above; a negative rate refunds a
+// token instead of consuming one.
+const res = await ratelimit.limit("order_77", { rate: -1 });
+```
+
+## Token bucket
+Token bucket in `src/single.ts` uses a Redis hash to store `refilledAt` and `tokens`. The Lua script refills tokens based on elapsed time, then decrements by the request rate. See `tokenBucketLimitScript` in `src/lua-scripts/single.ts`.
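
The refill-then-consume step can be sketched like this (an in-memory illustration of the Lua logic; the field and function names are assumptions):

```ts
// Sketch of token-bucket refill and consumption. In the library this state
// lives in a Redis hash (`refilledAt`, `tokens`) and is updated atomically.
interface Bucket {
  refilledAt: number; // timestamp of the last refill
  tokens: number;     // tokens currently available
}

function takeToken(
  bucket: Bucket,
  refillRate: number, // tokens added per interval
  intervalMs: number,
  maxTokens: number,
  now: number
): { success: boolean; bucket: Bucket } {
  // Refill for every full interval that has elapsed, capped at maxTokens.
  const intervals = Math.floor((now - bucket.refilledAt) / intervalMs);
  const tokens = Math.min(maxTokens, bucket.tokens + intervals * refillRate);
  const refilledAt = bucket.refilledAt + intervals * intervalMs;
  if (tokens < 1) return { success: false, bucket: { refilledAt, tokens } };
  return { success: true, bucket: { refilledAt, tokens: tokens - 1 } };
}
```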
+
+**Basic usage**
+```ts title="app/ratelimit.ts"
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const ratelimit = new Ratelimit({
+  redis: Redis.fromEnv(),
+  limiter: Ratelimit.tokenBucket(5, "10 s", 20)
+});
+```
+
+**Advanced usage (higher burst)**
+```ts title="app/burst.ts"
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+// Refill 2 tokens per second, but allow bursts of up to 10 requests.
+const ratelimit = new Ratelimit({
+  redis: Redis.fromEnv(),
+  limiter: Ratelimit.tokenBucket(2, "1 s", 10)
+});
+```
+
+## Cached fixed window
+`cachedFixedWindow` is a special case that requires an ephemeral cache. It checks the local cache first, increments it optimistically, and updates Redis in the background. This is implemented in `src/single.ts` and uses `cachedFixedWindow*` scripts in `src/lua-scripts/single.ts`.
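
The optimistic local path can be sketched as follows (illustrative only; `syncToRedis` stands in for the background Redis update the library performs):

```ts
// Sketch of the cached-fixed-window idea: count in a local Map first and
// push the increment to Redis off the critical path.
function cachedFixedWindowLimit(
  cache: Map<string, number>,
  identifier: string,
  tokens: number,
  windowMs: number,
  now: number,
  syncToRedis: (key: string) => Promise<void>
): { success: boolean; remaining: number; pending: Promise<void> } {
  const bucket = Math.floor(now / windowMs);
  const key = `${identifier}:${bucket}`;
  const used = (cache.get(key) ?? 0) + 1;
  cache.set(key, used); // optimistic local increment, no Redis round trip
  // Redis is updated in the background; await it via `pending`/waitUntil.
  const pending = syncToRedis(key);
  return { success: used <= tokens, remaining: Math.max(0, tokens - used), pending };
}
```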
+
+**Basic usage**
+```ts title="app/worker.ts"
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const cache = new Map();
+const ratelimit = new Ratelimit({
+  redis: Redis.fromEnv(),
+  limiter: Ratelimit.cachedFixedWindow(5, "5 s"),
+  ephemeralCache: cache
+});
+```
+
+**Advanced usage (fail fast)**
+```ts title="app/worker.ts"
+// Inside a request handler, using the `ratelimit` instance created above.
+try {
+  const res = await ratelimit.limit("ip:10.0.0.1");
+  if (!res.success) return new Response("blocked", { status: 429 });
+} catch (error) {
+  // cachedFixedWindow throws if no ephemeral cache was provided
+}
+```
+
+`cachedFixedWindow` requires a cache (`ephemeralCache`). If you create the `Ratelimit` instance inside a request handler, the cache resets on every request and you lose the speed benefits. Create the instance outside your handler in serverless or edge functions.
+
+**Fixed window vs sliding window**
+
+Fixed window is cheaper in Redis because it touches a single key per identifier, while sliding window reads two keys and applies a weighting step. That extra read means slightly higher latency, but it produces smoother limiting at window boundaries. If you expect burst traffic aligned to boundaries (cron jobs, marketing campaigns), sliding window reduces spikes. If cost and simplicity matter more than boundary behavior, fixed window is the pragmatic choice.
+**Token bucket trade-offs**
+
+Token bucket provides steady throughput and allows bursts by setting `maxTokens` larger than the refill rate, which is excellent for user‑driven traffic. Internally it stores a hash per identifier and updates both `refilledAt` and `tokens`, so it is more stateful than fixed or sliding windows. If you refund tokens with a negative `rate`, the bucket can exceed the refill rate temporarily, which is useful for compensating failures. The trade-off is extra logic and a more complex reset behavior compared to time-bucketed counters.
+**Cached fixed window trade-offs**
+
+Cached fixed window removes Redis from the critical path on cache hits, which is ideal for hot identifiers in edge environments. However, because the cache is local, consistency is best‑effort and depends on the lifecycle of the runtime. If you run multiple isolates or regions, each has its own cache and can allow more requests than expected until Redis updates converge. Use it only when you can tolerate soft limits and you run in a single isolate or a small number of replicas.
+
+
diff --git a/docs/codedocs/api-reference/analytics.md b/docs/codedocs/api-reference/analytics.md
new file mode 100644
index 0000000..c7ac104
--- /dev/null
+++ b/docs/codedocs/api-reference/analytics.md
@@ -0,0 +1,84 @@
+---
+title: "Analytics"
+description: "Analytics helper for recording and aggregating rate limit events."
+---
+
+`Analytics` in `src/analytics.ts` wraps `@upstash/core-analytics` and provides a higher‑level interface tailored to rate limit events. It is created automatically by `Ratelimit` when `analytics: true`, but you can also instantiate it directly for custom reporting.
+
+## Constructor
+```ts title="src/analytics.ts"
+new Analytics(config: AnalyticsConfig)
+```
+
+**Parameters**
+| Parameter | Type | Default | Description |
+|-----------|------|---------|-------------|
+| `redis` | `@upstash/redis` client | — | Redis REST client used for analytics storage. |
+| `prefix` | `string` | `@upstash/ratelimit` | Namespace for analytics keys. |
+
+## Methods
+### `extractGeo`
+```ts title="src/analytics.ts"
+extractGeo(req: { geo?: Geo; cf?: Geo }): Geo
+```
+Extracts geo metadata from either `req.geo` (Vercel) or `req.cf` (Cloudflare). If neither is present, returns an empty object.
+
+**Example**
+```ts title="app/geo.ts"
+const geo = analytics.extractGeo({ cf: { country: "US", city: "NYC" } });
+```
+
+### `record`
+```ts title="src/analytics.ts"
+record(event: Event): Promise<void>
+```
+Records a single event into the `events` table with identifier, time, success state, and optional geo data.
+
+**Example**
+```ts title="app/record.ts"
+await analytics.record({
+ identifier: "user_123",
+ time: Date.now(),
+ success: true,
+ country: "US"
+});
+```
+
+### `series`
+```ts title="src/analytics.ts"
+series(filter: TFilter, cutoff: number): Promise<Aggregate[]>
+```
+Aggregates counts over time for a given field (e.g. identifier, country).
+
+### `getUsage`
+```ts title="src/analytics.ts"
+getUsage(cutoff?: number): Promise<Record<string, { success: number; blocked: number }>>
+```
+Returns allowed vs blocked counts grouped by identifier.
+
+### `getUsageOverTime`
+```ts title="src/analytics.ts"
+getUsageOverTime(timestampCount: number, groupby: TFilter): Promise<Aggregate[]>
+```
+Aggregates usage over time for a given field.
+
+### `getMostAllowedBlocked`
+```ts title="src/analytics.ts"
+getMostAllowedBlocked(timestampCount: number, getTop?: number, checkAtMost?: number): Promise<Aggregate[]>
+```
+Returns top identifiers by allowed/blocked counts.
+
+## Usage with Ratelimit
+If you enable analytics in `Ratelimit`, the library calls `analytics.record` after each request and attaches the work to the `pending` promise in the response.
+
+```ts title="app/ratelimit.ts"
+const ratelimit = new Ratelimit({
+ redis: Redis.fromEnv(),
+ limiter: Ratelimit.slidingWindow(10, "10 s"),
+ analytics: true
+});
+```
+
+**Related**
+- [Request Lifecycle](../lifecycle)
+- [Ratelimit](./ratelimit)
diff --git a/docs/codedocs/api-reference/ip-deny-list.md b/docs/codedocs/api-reference/ip-deny-list.md
new file mode 100644
index 0000000..88b6832
--- /dev/null
+++ b/docs/codedocs/api-reference/ip-deny-list.md
@@ -0,0 +1,67 @@
+---
+title: "IpDenyList"
+description: "Helpers for managing the IP deny list and its refresh lifecycle."
+---
+
+`IpDenyList` is exported as a module (`export * as IpDenyList`) from `src/deny-list/ip-deny-list.ts`. It provides functions for refreshing and disabling the IP deny list stored in Redis. These helpers are primarily used internally when protection is enabled but can be called directly in operational workflows.
+
+## Functions
+### `updateIpDenyList`
+```ts title="src/deny-list/ip-deny-list.ts"
+updateIpDenyList(redis: Redis, prefix: string, threshold: number, ttl?: number): Promise<void>
+```
+
+**Parameters**
+| Parameter | Type | Default | Description |
+|-----------|------|---------|-------------|
+| `redis` | `Redis` | — | Redis REST client. |
+| `prefix` | `string` | — | Ratelimit key prefix (default is `@upstash/ratelimit`). |
+| `threshold` | `number` | — | Allowed range 1–8. An IP is denied only when at least `threshold` of the source blocklists flag it, so higher values include fewer IPs. |
+| `ttl` | `number` | computed | Optional TTL for the status key, otherwise time until next 2 AM UTC. |
+
+**Behavior**
+- Fetches a public IP list based on the `threshold` level.
+- Removes the old IP list from the combined deny list set.
+- Replaces the IP list set and makes it disjoint from custom deny list entries.
+- Updates a status key with TTL for future refresh checks.
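
The computed default TTL ("time until next 2 AM UTC") boils down to the following arithmetic (a sketch; the helper name is illustrative):

```ts
// Sketch: milliseconds from `now` until the next 2 AM UTC, used as the
// default TTL so the deny list is refreshed roughly once a day.
function msUntilNext2AMUtc(now: Date): number {
  const next = new Date(Date.UTC(
    now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate(), 2, 0, 0, 0
  ));
  if (next.getTime() <= now.getTime()) {
    // 2 AM already passed today; schedule the refresh for tomorrow.
    next.setUTCDate(next.getUTCDate() + 1);
  }
  return next.getTime() - now.getTime();
}
```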
+
+**Example**
+```ts title="app/ops.ts"
+import { IpDenyList } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const redis = Redis.fromEnv();
+await IpDenyList.updateIpDenyList(redis, "@upstash/ratelimit", 6);
+```
+
+### `disableIpDenyList`
+```ts title="src/deny-list/ip-deny-list.ts"
+disableIpDenyList(redis: Redis, prefix: string): Promise<void>
+```
+
+**Parameters**
+| Parameter | Type | Default | Description |
+|-----------|------|---------|-------------|
+| `redis` | `Redis` | — | Redis REST client. |
+| `prefix` | `string` | — | Ratelimit key prefix. |
+
+**Behavior**
+- Removes the IP list set from the combined deny list.
+- Deletes the IP list set.
+- Sets the status key to `disabled` with no TTL.
+
+**Example**
+```ts title="app/ops.ts"
+await IpDenyList.disableIpDenyList(redis, "@upstash/ratelimit");
+```
+
+## Errors
+### `ThresholdError`
+```ts title="src/deny-list/ip-deny-list.ts"
+class ThresholdError extends Error
+```
+Thrown when `threshold` is outside the allowed range of 1–8.
+
+**Related**
+- [Protection and Deny Lists](../protection-denylist)
+- [Ratelimit](./ratelimit)
diff --git a/docs/codedocs/api-reference/multi-region-ratelimit.md b/docs/codedocs/api-reference/multi-region-ratelimit.md
new file mode 100644
index 0000000..11eb824
--- /dev/null
+++ b/docs/codedocs/api-reference/multi-region-ratelimit.md
@@ -0,0 +1,68 @@
+---
+title: "MultiRegionRatelimit"
+description: "Multi-region rate limiter with background synchronization and low-latency reads."
+---
+
+`MultiRegionRatelimit` in `src/multi.ts` extends the base `Ratelimit` class but uses an array of Redis REST clients (one per region). Each request is issued to every region, the first response wins, and synchronization runs asynchronously.
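
The "first response wins" behavior can be sketched with `Promise.race` (illustrative only; the real implementation also reconciles state across regions with sync scripts):

```ts
// Sketch: issue the same check to every region, answer with the earliest
// response, and let the remaining requests finish via `pending`.
async function firstResponseWins<T>(
  regionRequests: Array<Promise<T>>
): Promise<{ result: T; pending: Promise<T[]> }> {
  const result = await Promise.race(regionRequests);
  // The slower regions keep running; callers can await them in background.
  const pending = Promise.all(regionRequests);
  return { result, pending };
}
```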
+
+## Constructor
+```ts title="src/multi.ts"
+new MultiRegionRatelimit(config: MultiRegionRatelimitConfig)
+```
+
+**Parameters**
+| Parameter | Type | Default | Description |
+|-----------|------|---------|-------------|
+| `redis` | `Redis[]` | — | Array of `@upstash/redis` clients, one per region. |
+| `limiter` | `Algorithm<MultiRegionContext>` | — | Algorithm factory, typically `MultiRegionRatelimit.fixedWindow` or `slidingWindow`. |
+| `prefix` | `string` | `@upstash/ratelimit` | Key prefix for Redis. |
+| `ephemeralCache` | `Map<string, number> \| false` | auto Map | Optional local cache to short‑circuit blocked identifiers. |
+| `timeout` | `number` | `5000` | Milliseconds to wait before returning a timeout response. |
+| `analytics` | `boolean` | `false` | Enable analytics submission. |
+| `dynamicLimits` | `boolean` | `false` | Not supported for multi‑region; ignored with a warning. |
+
+## Methods
+### `limit`
+```ts title="src/ratelimit.ts"
+limit(identifier: string, req?: LimitOptions): Promise<RatelimitResponse>
+```
+Behaves like the single‑region `limit`, but its `pending` promise also includes the background synchronization across regions.
+
+**Example**
+```ts title="app/edge.ts"
+const res = await ratelimit.limit("api_key_123");
+context.waitUntil(res.pending);
+```
+
+### `blockUntilReady`
+```ts title="src/ratelimit.ts"
+blockUntilReady(identifier: string, timeout: number): Promise<RatelimitResponse>
+```
+
+### `getRemaining`
+```ts title="src/ratelimit.ts"
+getRemaining(identifier: string): Promise<{ remaining: number; reset: number; limit: number }>
+```
+
+### `resetUsedTokens`
+```ts title="src/ratelimit.ts"
+resetUsedTokens(identifier: string): Promise<void>
+```
+
+### `setDynamicLimit` and `getDynamicLimit`
+These methods are inherited but not supported by multi‑region algorithms. If you enable `dynamicLimits` in the constructor you will receive a warning and the algorithms will ignore the dynamic limit key.
+
+## Static algorithm factories
+### `fixedWindow`
+```ts title="src/multi.ts"
+MultiRegionRatelimit.fixedWindow(tokens: number, window: Duration): Algorithm<MultiRegionContext>
+```
+
+### `slidingWindow`
+```ts title="src/multi.ts"
+MultiRegionRatelimit.slidingWindow(tokens: number, window: Duration): Algorithm<MultiRegionContext>
+```
+
+**Related**
+- [Ratelimit](./ratelimit)
+- [Multi-Region Consistency](../multi-region)
diff --git a/docs/codedocs/api-reference/ratelimit.md b/docs/codedocs/api-reference/ratelimit.md
new file mode 100644
index 0000000..ac579e9
--- /dev/null
+++ b/docs/codedocs/api-reference/ratelimit.md
@@ -0,0 +1,116 @@
+---
+title: "Ratelimit"
+description: "Single-region rate limiter class exported as Ratelimit (RegionRatelimit)."
+---
+
+`Ratelimit` is the single‑region limiter exported from `src/single.ts` as `RegionRatelimit`. It extends the core `Ratelimit` base class in `src/ratelimit.ts` and uses a single Upstash Redis REST instance. It supports fixed window, sliding window, token bucket, and cached fixed window algorithms.
+
+## Constructor
+```ts title="src/single.ts"
+new Ratelimit(config: RatelimitConfig)
+```
+
+**Parameters**
+| Parameter | Type | Default | Description |
+|-----------|------|---------|-------------|
+| `redis` | `@upstash/redis` client | — | Redis REST client used for all operations. |
+| `limiter` | `Algorithm<RegionContext>` | — | Algorithm factory, created with `Ratelimit.fixedWindow`, `slidingWindow`, `tokenBucket`, or `cachedFixedWindow`. |
+| `prefix` | `string` | `@upstash/ratelimit` | Key prefix for Redis. |
+| `ephemeralCache` | `Map<string, number> \| false` | auto Map | Local cache to short‑circuit blocked identifiers. Set to `false` to disable. |
+| `timeout` | `number` | `5000` | Milliseconds to wait before returning a timeout response. |
+| `analytics` | `boolean` | `false` | Enable analytics submission. |
+| `enableProtection` | `boolean` | `false` | Enable deny list checks on identifier, IP, user agent, country. |
+| `denyListThreshold` | `number` | `6` | Threshold for IP deny list updates. |
+| `dynamicLimits` | `boolean` | `false` | Enable dynamic limit override stored in Redis. |
+
+## Methods
+### `limit`
+```ts title="src/ratelimit.ts"
+limit(identifier: string, req?: LimitOptions): Promise<RatelimitResponse>
+```
+- **Parameters**
+ - `identifier`: identifier to rate limit (user ID, IP, API key).
+ - `req.rate`: optional token rate (positive consumes, negative refunds).
+ - `req.ip`, `req.userAgent`, `req.country`: used for deny list checks when protection is enabled.
+- **Returns**: `RatelimitResponse` with `success`, `remaining`, `reset`, and `pending`.
+
+**Example**
+```ts title="app/api/route.ts"
+const res = await ratelimit.limit("user_123", { rate: 1 });
+if (!res.success) return new Response("blocked", { status: 429 });
+```
+
+### `blockUntilReady`
+```ts title="src/ratelimit.ts"
+blockUntilReady(identifier: string, timeout: number): Promise<RatelimitResponse>
+```
+Blocks until the request can pass or the timeout is reached.
+
+**Example**
+```ts title="app/queue.ts"
+const res = await ratelimit.blockUntilReady("queue:item", 60_000);
+```
+
+### `getRemaining`
+```ts title="src/ratelimit.ts"
+getRemaining(identifier: string): Promise<{ remaining: number; reset: number; limit: number }>
+```
+Reads remaining tokens and reset timestamp without consuming tokens.
+
+**Example**
+```ts title="app/usage.ts"
+const { remaining, reset } = await ratelimit.getRemaining("user_123");
+```
+
+### `resetUsedTokens`
+```ts title="src/ratelimit.ts"
+resetUsedTokens(identifier: string): Promise<void>
+```
+Deletes keys for the identifier to reset usage.
+
+**Example**
+```ts title="app/admin.ts"
+await ratelimit.resetUsedTokens("user_123");
+```
+
+### `setDynamicLimit`
+```ts title="src/ratelimit.ts"
+setDynamicLimit(options: { limit: number | false }): Promise<void>
+```
+Overrides the default limit globally when `dynamicLimits` is enabled.
+
+**Example**
+```ts title="app/admin.ts"
+await ratelimit.setDynamicLimit({ limit: 120 });
+```
+
+### `getDynamicLimit`
+```ts title="src/ratelimit.ts"
+getDynamicLimit(): Promise<{ dynamicLimit: number | null }>
+```
+Returns the current global dynamic limit.
+
+## Static algorithm factories
+### `fixedWindow`
+```ts title="src/single.ts"
+Ratelimit.fixedWindow(tokens: number, window: Duration): Algorithm<RegionContext>
+```
+
+### `slidingWindow`
+```ts title="src/single.ts"
+Ratelimit.slidingWindow(tokens: number, window: Duration): Algorithm<RegionContext>
+```
+
+### `tokenBucket`
+```ts title="src/single.ts"
+Ratelimit.tokenBucket(refillRate: number, interval: Duration, maxTokens: number): Algorithm<RegionContext>
+```
+
+### `cachedFixedWindow`
+```ts title="src/single.ts"
+Ratelimit.cachedFixedWindow(tokens: number, window: Duration): Algorithm<RegionContext>
+```
+
+**Related**
+- [MultiRegionRatelimit](./multi-region-ratelimit)
+- [Algorithms](../algorithms)
diff --git a/docs/codedocs/architecture.md b/docs/codedocs/architecture.md
new file mode 100644
index 0000000..b158d39
--- /dev/null
+++ b/docs/codedocs/architecture.md
@@ -0,0 +1,42 @@
+---
+title: "Architecture"
+description: "How the library is structured internally and how a limit check flows through modules."
+---
+
+The library is organized around a small core `Ratelimit` class in `src/ratelimit.ts`, with algorithm factories in `src/single.ts` and `src/multi.ts`, and Lua scripts in `src/lua-scripts/` for atomic Redis operations. The entry point `src/index.ts` re-exports the public API as `Ratelimit`, `MultiRegionRatelimit`, `Analytics`, and type helpers.
+
+```mermaid
+graph TD
+ A[src/index.ts] --> B[src/single.ts]
+ A --> C[src/multi.ts]
+ A --> D[src/analytics.ts]
+ B --> E[src/ratelimit.ts]
+ C --> E
+ E --> F[src/cache.ts]
+ E --> G[src/deny-list/deny-list.ts]
+ B --> H[src/lua-scripts/single.ts]
+ C --> I[src/lua-scripts/multi.ts]
+ E --> J[src/lua-scripts/hash.ts]
+ J --> H
+ J --> I
+```
+
+**Key design decisions and why they exist**
+- **Algorithm factories return a uniform interface.** `Algorithm<TContext>` in `src/types.ts` defines `limit`, `getRemaining`, and `resetTokens`. Both `RegionRatelimit` (`src/single.ts`) and `MultiRegionRatelimit` (`src/multi.ts`) produce factories that match this shape so the core `Ratelimit` class can call them without caring about the algorithm type. This keeps the public API stable while allowing new algorithms to be added.
+- **Lua scripts for atomicity and performance.** All heavy operations happen in `src/lua-scripts/` to make Redis mutations and reads atomic. `safeEval` in `src/hash.ts` uses `EVALSHA` and falls back to loading the script when missing. This keeps network overhead low and avoids race conditions across concurrent requests.
+- **Ephemeral cache is optional and local.** `src/cache.ts` is a simple `Map` wrapper used to block identifiers without hitting Redis. This is valuable in serverless and edge contexts where repeated blocked requests are common and latency matters. The cache is intentionally ephemeral so it never becomes a source of truth.
+- **Protection (deny list) is layered.** `src/deny-list/deny-list.ts` checks a local deny list cache first, then consults Redis via a Lua script. This means known-bad identifiers can be rejected without a Redis read, and Redis becomes the source of truth for global deny lists.
+- **Analytics is decoupled.** `src/analytics.ts` wraps `@upstash/core-analytics` and is triggered from `Ratelimit.submitAnalytics`. This keeps the limit path fast and ensures analytics is asynchronous via the `pending` promise in the response.
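
The deny-list layering described above can be sketched as follows (illustrative names; `checkRedis` stands in for the Lua-script check in `src/deny-list/deny-list.ts`):

```ts
// Sketch of the layered deny-list check: a local cache answers first, and
// Redis is only consulted on a miss.
function isDeniedLayered(
  localDenyCache: Set<string>,
  checkRedis: (members: string[]) => Promise<string | undefined>,
  members: string[]
): Promise<string | undefined> {
  // Layer 1: known-bad values are rejected without a network round trip.
  const cached = members.find((m) => localDenyCache.has(m));
  if (cached) return Promise.resolve(cached);
  // Layer 2: Redis remains the source of truth for the global deny lists.
  return checkRedis(members);
}
```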
+
+**How a limit check flows**
+1. `Ratelimit.limit` in `src/ratelimit.ts` calls `getRatelimitResponse`, which builds a Redis key and collects deny list candidates (identifier, IP, user agent, country).
+2. If protection is enabled, `checkDenyList` in `src/deny-list/deny-list.ts` runs a Lua script to check all deny list sets. If a deny list value is found, the response is overridden before returning.
+3. The algorithm factory from `src/single.ts` or `src/multi.ts` executes `safeEval` in `src/hash.ts`, which runs the matching Lua script from `src/lua-scripts/`.
+4. The algorithm returns `{ success, remaining, reset, pending }`. For multi-region algorithms, `pending` includes background synchronization to reconcile state across regions.
+5. If analytics is enabled, `submitAnalytics` attaches another async task to `pending` and returns the final response immediately.
+
+**Data flow at a glance**
+- **Single region**: `Ratelimit.limit` → algorithm `limit` → Lua script → Redis → response → optional analytics.
+- **Multi region**: `Ratelimit.limit` → algorithm `limit` → Lua script in each region → first response wins → async sync to reconcile regions.
+
+The result is a small, composable architecture where the public API stays simple, and all heavy lifting is performed atomically at Redis with minimal overhead in the caller.
diff --git a/docs/codedocs/guides/cloudflare-workers.md b/docs/codedocs/guides/cloudflare-workers.md
new file mode 100644
index 0000000..d3af95a
--- /dev/null
+++ b/docs/codedocs/guides/cloudflare-workers.md
@@ -0,0 +1,73 @@
+---
+title: "Cloudflare Workers"
+description: "Run rate limiting inside a Cloudflare Worker with an ephemeral cache and waitUntil."
+---
+
+This guide demonstrates a Worker that uses `cachedFixedWindow` with an ephemeral cache for fast blocking, and uses `context.waitUntil` to keep analytics and sync work alive.
+
+
+
+### Install dependencies
+```bash
+npm install @upstash/ratelimit @upstash/redis
+```
+
+
+### Bind environment variables
+In `wrangler.toml` or the Cloudflare dashboard, set:
+```
+UPSTASH_REDIS_REST_URL=...
+UPSTASH_REDIS_REST_TOKEN=...
+```
+
+
+### Worker implementation
+```ts title="src/index.ts"
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis/cloudflare";
+
+export interface Env {
+ UPSTASH_REDIS_REST_URL: string;
+ UPSTASH_REDIS_REST_TOKEN: string;
+}
+
+const cache = new Map();
+
+export default {
+ async fetch(request: Request, env: Env, context: ExecutionContext) {
+ const ratelimit = new Ratelimit({
+ redis: Redis.fromEnv(env),
+ limiter: Ratelimit.cachedFixedWindow(5, "5 s"),
+ ephemeralCache: cache,
+ analytics: true
+ });
+
+ const res = await ratelimit.limit("identifier");
+ context.waitUntil(res.pending);
+
+ if (!res.success) {
+ return new Response("Too Many Requests", { status: 429 });
+ }
+ return new Response(`ok (remaining: ${res.remaining})`);
+ }
+};
+```
+
+
+
+**Why `cachedFixedWindow` here**
+- Hot request bursts are rejected in memory without waiting for Redis.
+- Redis is still updated to keep long‑term accounting correct.
+
+**Complete runnable behavior**
+- The first 5 requests per 5 seconds succeed.
+- Additional requests return `429` until the window resets.
+
+**When to use this pattern**
+This is a good default for edge endpoints where you expect bursts and want minimal latency. Because the cache is per‑isolate, it is most accurate when your Worker runs in a small number of isolates. If you have many isolates, consider `slidingWindow` or `fixedWindow` for stricter global accounting.
+
+
+`cachedFixedWindow` throws if you forget to pass `ephemeralCache`. Always create the `Map` outside the handler so it survives across requests while the Worker stays hot.
+
+**Troubleshooting**
+If `Redis.fromEnv(env)` fails, verify that the bindings are available in your Worker environment and that you are using the Cloudflare adapter from `@upstash/redis/cloudflare`. A missing adapter is the most common cause of runtime errors in Workers.
diff --git a/docs/codedocs/guides/enable-protection.md b/docs/codedocs/guides/enable-protection.md
new file mode 100644
index 0000000..481cd46
--- /dev/null
+++ b/docs/codedocs/guides/enable-protection.md
@@ -0,0 +1,71 @@
+---
+title: "Enable Protection (Deny List)"
+description: "Block abusive identifiers, IPs, or user agents with deny lists and protection mode."
+---
+
+Protection lets you automatically reject requests if the identifier or metadata appears in a deny list. This guide shows a minimal Next.js edge route handler that enables protection and passes request metadata to `limit()`.
+
+
+
+### Install dependencies
+```bash
+npm install @upstash/ratelimit @upstash/redis
+```
+
+
+### Configure environment variables
+```
+UPSTASH_REDIS_REST_URL=...
+UPSTASH_REDIS_REST_TOKEN=...
+```
+
+
+### Add a protected route
+```ts title="app/api/route.ts"
+export const runtime = "edge";
+export const dynamic = "force-dynamic";
+
+import { waitUntil } from "@vercel/functions";
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const ratelimit = new Ratelimit({
+ redis: Redis.fromEnv(),
+ limiter: Ratelimit.slidingWindow(10, "10 s"),
+ analytics: true,
+ enableProtection: true
+});
+
+export async function POST(request: Request) {
+ const body = await request.json();
+
+ const res = await ratelimit.limit(body.userId, {
+ ip: request.headers.get("x-forwarded-for") ?? "",
+ userAgent: request.headers.get("user-agent") ?? "",
+ country: request.headers.get("x-vercel-ip-country") ?? ""
+ });
+
+ waitUntil(res.pending);
+
+ if (!res.success) {
+ return new Response("Blocked", { status: 429 });
+ }
+
+ return new Response("ok");
+}
+```
+
+
+
+**Complete runnable behavior**
+- If the user ID, IP, user agent, or country is in the deny list, the request is rejected with `reason: "denyList"`.
+- Otherwise, the normal rate limit applies.
+
+**Operational tips**
+You can refresh the IP deny list manually using `IpDenyList.updateIpDenyList` when you rotate security policies or during incident response. Consider logging `res.reason` so you can differentiate between rate limit blocks and deny list blocks in your observability pipeline.
+
+
+Protection uses deny list data from Redis and may apply cached decisions for up to 60 seconds. If you remove an entry from the deny list and need immediate unblocking, restart the runtime or disable the deny list cache.
+
+**Security note**
+Protection is additive to rate limiting; it does not replace authentication or authorization. Use it to block known abusive identifiers and to reduce automated traffic while maintaining your normal access controls.
diff --git a/docs/codedocs/guides/nextjs-route-handlers.md b/docs/codedocs/guides/nextjs-route-handlers.md
new file mode 100644
index 0000000..cf42323
--- /dev/null
+++ b/docs/codedocs/guides/nextjs-route-handlers.md
@@ -0,0 +1,68 @@
+---
+title: "Next.js Route Handlers (Edge)"
+description: "Use Ratelimit in a Next.js route handler running on the Edge runtime."
+---
+
+This guide shows how to enforce rate limits in a Next.js route handler deployed to the Edge runtime. It uses `waitUntil` to keep analytics submission alive after the response is sent.
+
+
+
+### Install dependencies
+```bash
+npm install @upstash/ratelimit @upstash/redis
+```
+
+
+### Configure environment variables
+Set these in your Vercel project or `.env` file:
+```
+UPSTASH_REDIS_REST_URL=...
+UPSTASH_REDIS_REST_TOKEN=...
+```
+
+
+### Add a route handler
+```ts title="app/api/route.ts"
+export const runtime = "edge";
+export const dynamic = "force-dynamic";
+
+import { waitUntil } from "@vercel/functions";
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const ratelimit = new Ratelimit({
+ redis: Redis.fromEnv(),
+ limiter: Ratelimit.slidingWindow(10, "10 s"),
+ analytics: true
+});
+
+export async function GET() {
+ const { success, remaining, pending } = await ratelimit.limit("api");
+ waitUntil(pending);
+
+ if (!success) {
+ return new Response("Too Many Requests", { status: 429 });
+ }
+
+ return new Response(`ok (remaining: ${remaining})`);
+}
+```
+
+
+
+**Why this works**
+- `Ratelimit.slidingWindow` smooths boundary bursts with a weighted previous window.
+- `pending` includes analytics submission; Edge runtimes cancel background work unless you attach it to `waitUntil`.
+
+**Complete runnable behavior**
+- First 10 requests in a 10‑second window return `200`.
+- Subsequent requests return `429` until the window resets.
+
+**Notes on identifiers**
+Pick an identifier that matches your abuse surface. For public APIs, a stable API key or user ID is usually best. If you only have IPs, use a normalized IP string and be aware that shared NATs can cause unrelated users to share the same quota.
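
If you do fall back to IPs, a small normalization helper keeps equivalent values on one quota (a hypothetical helper, not part of the library):

```ts
// Sketch of a hypothetical identifier helper: derive a stable key from the
// x-forwarded-for header, which may hold a comma-separated proxy chain.
function identifierFromForwardedFor(forwardedFor: string | null): string {
  // The left-most entry is the original client; trim and lowercase it so
  // equivalent values map to the same Redis key.
  const ip = forwardedFor?.split(",")[0]?.trim().toLowerCase();
  return ip ? `ip:${ip}` : "ip:unknown";
}
```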
+
+
+If you want strict enforcement even during network issues, increase or disable `timeout`. The default is 5 seconds in `src/ratelimit.ts`.
+
+**Troubleshooting**
+If you always see `429`, verify that your identifier is not constant across users and that your deployment is not reusing the same identifier in tests. Also confirm that `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` are set in the Edge environment.
diff --git a/docs/codedocs/index.md b/docs/codedocs/index.md
new file mode 100644
index 0000000..240d78a
--- /dev/null
+++ b/docs/codedocs/index.md
@@ -0,0 +1,89 @@
+---
+title: "Getting Started"
+description: "Upstash Rate Limit is a connectionless, HTTP-based rate limiting library for serverless, edge, and browser environments using Upstash Redis REST."
+---
+
+Upstash Rate Limit is a connectionless, HTTP-based rate limiting library for serverless, edge, and browser environments using Upstash Redis REST.
+
+**The Problem**
+- Traditional rate limiters assume long-lived TCP connections and don’t fit serverless or edge runtimes.
+- You need predictable limits across multiple runtimes without deploying and operating your own Redis.
+- Cold starts and network latency make per-request rate checks expensive without caching.
+- You want optional analytics and protection (deny lists) without building extra pipelines.
+
+**The Solution**
+The library ships Redis-backed algorithms (fixed window, sliding window, token bucket, cached fixed window) and wraps them in a single `Ratelimit` API. It uses Upstash’s HTTP Redis to work in edge and serverless environments, adds an ephemeral cache to short‑circuit hot limits, and optionally records analytics or deny‑list decisions.
+
+```ts title="app/ratelimit.ts"
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+export const ratelimit = new Ratelimit({
+ redis: Redis.fromEnv(),
+ limiter: Ratelimit.slidingWindow(10, "10 s"),
+ analytics: true,
+ prefix: "@upstash/ratelimit"
+});
+```
+
+**Installation**
+
+```bash title="npm"
+npm install @upstash/ratelimit @upstash/redis
+```
+
+```bash title="pnpm"
+pnpm add @upstash/ratelimit @upstash/redis
+```
+
+```bash title="yarn"
+yarn add @upstash/ratelimit @upstash/redis
+```
+
+```bash title="bun"
+bun add @upstash/ratelimit @upstash/redis
+```
+
+**Quick Start**
+```ts title="app/api/route.ts"
+import { Ratelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const ratelimit = new Ratelimit({
+ redis: Redis.fromEnv(),
+ limiter: Ratelimit.fixedWindow(5, "10 s")
+});
+
+export async function GET() {
+ const { success, remaining, reset } = await ratelimit.limit("user_123");
+ if (!success) {
+ return new Response("Too Many Requests", { status: 429 });
+ }
+ return new Response(`ok (remaining: ${remaining})`);
+}
+```
+
+Expected output (first request):
+```
+ok (remaining: 4)
+```
+
+**Key Features**
+- HTTP‑based Redis access for serverless, edge, browser, and WebAssembly environments
+- Multiple algorithms behind a shared `Ratelimit` API
+- Multi‑region support with background synchronization
+- Optional analytics and protection (deny list)
+- Ephemeral cache for faster blocking in hot runtimes
+- Dynamic limits you can change at runtime
+
+
+**Next steps**
+- How modules interact and why they are designed this way
+- Algorithms, lifecycle, protection, and multi‑region flows
+- Full API docs for `Ratelimit` and helpers
+
diff --git a/docs/codedocs/lifecycle.md b/docs/codedocs/lifecycle.md
new file mode 100644
index 0000000..92f5654
--- /dev/null
+++ b/docs/codedocs/lifecycle.md
@@ -0,0 +1,76 @@
+---
+title: "Request Lifecycle"
+description: "What happens when you call limit(), including cache checks, timeouts, and async work."
+---
+
+This page explains the runtime lifecycle of a single `limit()` call, from cache checks to Redis scripts to analytics submission. The orchestration logic is in `src/ratelimit.ts`, while cache behavior lives in `src/cache.ts` and Lua scripts live in `src/lua-scripts/`.
+
+```mermaid
+sequenceDiagram
+ participant App
+ participant Ratelimit
+ participant Cache
+ participant Redis
+ participant Analytics
+ App->>Ratelimit: limit(identifier)
+ Ratelimit->>Cache: isBlocked(identifier)
+ alt blocked
+ Ratelimit-->>App: success=false reason=cacheBlock
+ else not blocked
+ Ratelimit->>Redis: EVALSHA (algorithm script)
+ Redis-->>Ratelimit: counters / remaining / reset
+ Ratelimit->>Analytics: record() (async)
+ Ratelimit-->>App: success / remaining / reset / pending
+ end
+```
+
+## Core flow in `Ratelimit.limit`
+`limit()` is implemented in `src/ratelimit.ts`. It builds a namespaced Redis key, optionally checks deny list values, executes the algorithm’s Lua script, and then attaches analytics work to the `pending` promise. This `pending` field is important in edge runtimes where you must explicitly keep background work alive.
+
+**Basic usage**
+```ts title="app/api/route.ts"
+const res = await ratelimit.limit("user_123");
+if (!res.success) {
+ return new Response("Too Many Requests", { status: 429 });
+}
+return new Response("ok");
+```
+
+**Edge usage with `pending`**
+```ts title="app/api/route.ts"
+import { waitUntil } from "@vercel/functions";
+
+const res = await ratelimit.limit("api");
+waitUntil(res.pending);
+```
+
+## Timeout behavior
+`Ratelimit.limit` wraps the actual work in a `Promise.race` with a timeout promise when `timeout` is set. If the timeout fires first, the response is treated as a successful request with `reason: "timeout"` and `reset: 0`. This is deliberate so you can choose availability over strict enforcement in failure scenarios.
+
+**Example with a short timeout**
+```ts title="app/ratelimit.ts"
+const ratelimit = new Ratelimit({
+ redis: Redis.fromEnv(),
+ limiter: Ratelimit.slidingWindow(10, "10 s"),
+ timeout: 250
+});
+```
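The race itself can be sketched in a few lines (a simplified illustration: the real code in `src/ratelimit.ts` returns more response fields and manages the timer alongside other work):

```typescript
type LimitResult = { success: boolean; reason?: "timeout" };

// Race the Redis-backed check against a timer. If the timer wins, fail open
// with reason "timeout" so the caller favors availability over strictness.
function limitWithTimeout(
  check: Promise<LimitResult>,
  timeoutMs: number
): Promise<LimitResult> {
  const timer = new Promise<LimitResult>((resolve) =>
    setTimeout(() => resolve({ success: true, reason: "timeout" }), timeoutMs)
  );
  return Promise.race([check, timer]);
}
```

Note the deliberate asymmetry: a timeout produces `success: true`, so a slow Redis call never blocks a request, it only stops being counted strictly.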
+
+## Cache behavior
+The optional `ephemeralCache` is created in the constructor of `Ratelimit` when you pass a `Map` or leave it undefined. `src/cache.ts` stores `identifier -> reset` pairs. If a request is already blocked and the reset time hasn’t passed, the cache short‑circuits the Redis call and responds immediately with `reason: "cacheBlock"`.
+
+When a request succeeds after a refund (negative `rate`), the cache is cleared for that identifier to prevent accidental blocking.
+
+If you instantiate `Ratelimit` inside a request handler in serverless runtimes, the cache is recreated on every request and provides no value. Create the instance outside the handler so the cache survives while the function is hot.
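The cache contract described above boils down to an `identifier -> reset` map. The class below is a minimal sketch of that idea (an assumption-based simplification, not a copy of `src/cache.ts`):

```typescript
// Sketch of an ephemeral blocked-identifier cache: remembers until when an
// identifier is blocked and answers isBlocked() without a Redis round trip.
class EphemeralCacheSketch {
  private readonly store = new Map<string, number>();

  // Record that `identifier` is blocked until the `reset` timestamp (ms).
  blockUntil(identifier: string, reset: number): void {
    this.store.set(identifier, reset);
  }

  isBlocked(identifier: string, now = Date.now()): boolean {
    const reset = this.store.get(identifier);
    if (reset === undefined) return false;
    if (reset < now) {
      // The window has passed; drop the stale entry so it can succeed again.
      this.store.delete(identifier);
      return false;
    }
    return true;
  }
}
```

Because the map lives in process memory, it only pays off when the instance (and therefore the map) outlives individual requests, which is exactly why the instance should be created at module scope.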
+
+## Design trade-offs
+
+Setting `timeout` to a low value favors availability, especially in edge or mobile environments where network jitter can cause Redis calls to slow down. However, timeouts return `success: true` with `reason: "timeout"`, which may allow requests that would otherwise be blocked. If your API is sensitive to abuse, prefer a higher timeout or disable it entirely. Consider using `blockUntilReady` for queues where delaying is acceptable.
+
+
+The cache reduces Redis traffic and latency for repeated blocked identifiers, which is ideal for bursty traffic. Because it is local and short‑lived, it is not a source of truth and can drift across regions or isolates. If you run multiple instances, each instance has its own cache and can allow slightly more requests than the global limit. Use it as an optimization, not a guarantee.
+
+
+`pending` collects background work such as multi‑region synchronization and analytics submission. If you ignore it in edge runtimes, the work may be canceled when the request completes. In Node.js you can usually ignore it, but in Cloudflare Workers or Vercel Edge you should call `context.waitUntil(pending)` or `waitUntil(pending)`. The trade-off is a small amount of additional work after the response is returned, which is usually acceptable for analytics and sync tasks.
+
+
diff --git a/docs/codedocs/multi-region.md b/docs/codedocs/multi-region.md
new file mode 100644
index 0000000..fdced97
--- /dev/null
+++ b/docs/codedocs/multi-region.md
@@ -0,0 +1,65 @@
+---
+title: "Multi-Region Consistency"
+description: "How multi-region rate limiting works and how background synchronization keeps regions aligned."
+---
+
+Multi‑region rate limiting is implemented in `src/multi.ts` as `MultiRegionRatelimit`. It uses multiple Redis instances (one per region) and combines their responses to enforce a global limit with low latency. The key idea is to accept the first region to respond, then synchronize the others asynchronously.
+
+```mermaid
+sequenceDiagram
+ participant App
+ participant RegionA
+ participant RegionB
+ participant RegionC
+ App->>RegionA: EVALSHA fixedWindow
+ App->>RegionB: EVALSHA fixedWindow
+ App->>RegionC: EVALSHA fixedWindow
+ Note over App: Promise.any resolves first
+ RegionB-->>App: first response
+ App-->>App: compute remaining
+ App-->>App: return response
+ App->>RegionA: sync missing IDs (async)
+ App->>RegionC: sync missing IDs (async)
+```
+
+## How it works internally
+- **Request IDs**: `randomId()` in `src/multi.ts` generates a short unique ID per request. This ID becomes a field in a Redis hash (`HSET`), which makes it possible to reconcile entries across regions.
+- **First response wins**: The algorithm uses `Promise.any` across region requests. This keeps latency low by returning as soon as any region responds.
+- **Background sync**: After the response is returned, the `pending` promise performs a synchronization pass that compares all region hashes and inserts missing request IDs so each region eventually converges.
+
+The fixed window implementation uses `SCRIPTS.multiRegion.fixedWindow.*` in `src/lua-scripts/multi.ts`, while the sliding window implementation uses `SCRIPTS.multiRegion.slidingWindow.*` and performs a weighted blend of current and previous windows, similar to the single‑region version.
+
+## Basic usage
+```ts title="app/ratelimit.ts"
+import { MultiRegionRatelimit } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const ratelimit = new MultiRegionRatelimit({
+ redis: [Redis.fromEnv(), Redis.fromEnv()],
+ limiter: MultiRegionRatelimit.fixedWindow(100, "1 m")
+});
+
+const res = await ratelimit.limit("api_key_123");
+```
+
+## Advanced usage (edge runtime with `pending`)
+```ts title="app/api/route.ts"
+import { waitUntil } from "@vercel/functions";
+
+const res = await ratelimit.limit("api_key_123");
+waitUntil(res.pending);
+```
+
+Multi‑region limiters use background synchronization for eventual consistency. If you ignore the `pending` promise in edge runtimes, regions can drift longer than expected, causing temporary limit inflation.
+
+## Design trade-offs
+
+Multi‑region mode optimizes latency by returning the first response, which is perfect for global applications. The trade‑off is that the limit is eventually consistent until the sync task completes. This is acceptable for many APIs but can be problematic when strict global quotas are required. If you need absolute consistency, use a single region or design the API to tolerate short bursts.
+
+
+Using request IDs stored in Redis hashes enables deterministic reconciliation, but it increases storage overhead compared to simple counters. For high throughput identifiers, the hash can grow during the window and requires careful TTL management, which the Lua scripts handle by setting expirations on first write. This approach is a pragmatic compromise: accurate per-request tracking with eventual cleanup. It also simplifies multi-region merging compared to vector clocks or distributed locks.
+
+
+Sliding window in multi‑region stores each request ID in the current bucket and reads both current and previous hashes for weighting. This gives you smoother boundaries, but it doubles the read path and increases hash size. In regions with high latency, the cost of reading both hashes can be noticeable. Use it when boundary spikes are a real concern and fixed window is not sufficient.
+
+
diff --git a/docs/codedocs/protection-denylist.md b/docs/codedocs/protection-denylist.md
new file mode 100644
index 0000000..2117be8
--- /dev/null
+++ b/docs/codedocs/protection-denylist.md
@@ -0,0 +1,59 @@
+---
+title: "Protection and Deny Lists"
+description: "How deny lists work, how IP lists are synced, and how protection alters responses."
+---
+
+Protection adds a deny‑list layer on top of normal rate limiting. When enabled, `Ratelimit.limit` checks the identifier and request metadata (IP, user agent, country) against Redis‑stored deny lists. The logic lives in `src/deny-list/deny-list.ts`, while IP list updates are in `src/deny-list/ip-deny-list.ts`.
+
+```mermaid
+flowchart TD
+ A["limit(identifier, options)"] --> B[checkDenyListCache]
+ B -->|hit| C[deny immediately]
+ B -->|miss| D[Lua checkDenyListScript]
+ D -->|denied| C
+ D -->|not denied| E[Run algorithm]
+ C --> F[Override response reason=denyList]
+```
+
+## How it works internally
+- **Local cache**: `checkDenyListCache` uses an in‑memory `Cache` to block denied values for 60 seconds. This prevents repeated Redis checks for known‑bad identifiers.
+- **Redis check**: `checkDenyList` runs `checkDenyListScript` (in `src/deny-list/scripts.ts`) which uses `SMISMEMBER` against a combined `all` set and checks the TTL of the IP deny list status key.
+- **IP list refresh**: If the status TTL returns `-2` (expired), the script marks it as `pending`, and `resolveLimitPayload` schedules `updateIpDenyList` to refresh the list asynchronously.
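The TTL-driven refresh decision in the last bullet can be expressed as a tiny state check. This helper is hypothetical (`ipListDecision` is not a library function); it only encodes standard Redis TTL semantics and the flow described above:

```typescript
// Redis TTL semantics: -2 = key missing/expired, -1 = key exists without
// expiry, >= 0 = seconds remaining.
type RefreshDecision = "refresh" | "in-progress" | "fresh";

function ipListDecision(statusTtl: number, statusValue?: string): RefreshDecision {
  // Status key expired: the IP deny list is stale, schedule updateIpDenyList.
  if (statusTtl === -2) return "refresh";
  // Another worker already marked the refresh as pending.
  if (statusValue === "pending") return "in-progress";
  return "fresh";
}
```

Marking the status as `pending` before the asynchronous refresh prevents a stampede of workers all fetching the external list at once.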
+
+## Basic usage
+```ts title="app/api/route.ts"
+const ratelimit = new Ratelimit({
+ redis: Redis.fromEnv(),
+ limiter: Ratelimit.slidingWindow(10, "10 s"),
+ enableProtection: true
+});
+
+const res = await ratelimit.limit("user_123", {
+ ip: "203.0.113.42",
+ userAgent: "my-app/1.0",
+ country: "US"
+});
+```
+
+## Advanced usage (manual IP list refresh)
+```ts title="app/ops.ts"
+import { IpDenyList } from "@upstash/ratelimit";
+import { Redis } from "@upstash/redis";
+
+const redis = Redis.fromEnv();
+await IpDenyList.updateIpDenyList(redis, "@upstash/ratelimit", 6);
+```
+
+Enabling protection adds Redis work to every request and may block based on IP, user agent, or country. If these values are unstable or missing (for example, mobile clients without a reliable IP), you can end up denying legitimate traffic. Consider limiting deny list checks to endpoints where you have consistent metadata.
+
+## Design trade-offs
+
+The IP deny list is sourced from a curated public list and refreshed when the status key expires. This keeps your list current without manual work, but it does introduce a dependency on external data availability. If the fetch fails, the update throws and the list is not refreshed, which can degrade protection quality. Use `updateIpDenyList` manually in ops workflows if you need deterministic updates.
+
+
+The local deny list cache improves performance by avoiding repeated Redis lookups for recently denied values. However, it also means a value stays blocked for at least 60 seconds even if you remove it from Redis. This is usually acceptable for abuse control, but if you need immediate unblocking, you must restart the runtime or avoid the cache. The trade-off is performance versus immediacy.
+
+
+The IP list uses a threshold from 1 to 8, where higher thresholds include only IPs that appear in more threat lists. Higher thresholds reduce false positives but may allow more suspicious traffic through. Lower thresholds block more aggressively but can impact legitimate users behind shared IPs or VPNs. Adjusting the threshold is a risk management decision that depends on the sensitivity of your application.
+
+
diff --git a/docs/codedocs/types.md b/docs/codedocs/types.md
new file mode 100644
index 0000000..a7326ca
--- /dev/null
+++ b/docs/codedocs/types.md
@@ -0,0 +1,83 @@
+---
+title: "Types"
+description: "Exported TypeScript types and interfaces for configuring and extending the library."
+---
+
+This page lists the types exported from the package entrypoint `src/index.ts`. These are the types you can import directly from `@upstash/ratelimit`.
+
+## `RatelimitConfig`
+Defined in `src/single.ts` as `RegionRatelimitConfig` and re-exported from the entrypoint as `RatelimitConfig`.
+
+```ts title="src/single.ts"
+export type RegionRatelimitConfig = {
+ redis: Redis;
+ limiter: Algorithm<RegionContext>;
+ prefix?: string;
+ ephemeralCache?: Map<string, number> | false;
+ timeout?: number;
+ analytics?: boolean;
+ cacheScripts?: boolean; // deprecated
+ enableProtection?: boolean;
+ denyListThreshold?: number;
+ dynamicLimits?: boolean;
+};
+```
+
+Use this when constructing the single‑region `Ratelimit` class.
+
+## `MultiRegionRatelimitConfig`
+```ts title="src/multi.ts"
+export type MultiRegionRatelimitConfig = {
+ redis: Redis[];
+ limiter: Algorithm<MultiRegionContext>;
+ prefix?: string;
+ ephemeralCache?: Map<string, number> | false;
+ timeout?: number;
+ analytics?: boolean;
+ cacheScripts?: boolean;
+ dynamicLimits?: boolean;
+};
+```
+
+Use this when constructing `MultiRegionRatelimit`. Note that `dynamicLimits` is ignored for multi‑region limiters.
+
+## `AnalyticsConfig`
+```ts title="src/analytics.ts"
+export type AnalyticsConfig = {
+ redis: Redis;
+ prefix?: string;
+};
+```
+
+You can pass this to `new Analytics()` if you want custom analytics aggregation outside of `Ratelimit`.
+
+## `Algorithm`
+```ts title="src/types.ts"
+export type Algorithm<TContext> = () => {
+ limit: (ctx: TContext, identifier: string, rate?: number) => Promise<RatelimitResponse>;
+ getRemaining: (ctx: TContext, identifier: string) => Promise<{ remaining: number; reset: number; limit: number }>;
+ resetTokens: (ctx: TContext, identifier: string) => Promise<void>;
+};
+```
+
+This is the shape returned by algorithm factories. It allows you to implement custom algorithms that plug into `Ratelimit` as long as they respect the same contract.
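A toy in-memory algorithm shows what satisfying that contract looks like (purely illustrative: a real implementation would talk to Redis through the `Context`, and `RatelimitResponse` carries more fields than shown here):

```typescript
// A minimal context: instead of a Redis client, just an in-memory counter map.
type ToyContext = { counters: Map<string, number> };

// Factory returning the { limit, getRemaining, resetTokens } contract.
const toyFixedWindow = (max: number) => ({
  async limit(ctx: ToyContext, identifier: string, rate = 1) {
    const used = (ctx.counters.get(identifier) ?? 0) + rate;
    ctx.counters.set(identifier, used);
    return { success: used <= max, limit: max, remaining: Math.max(0, max - used), reset: 0 };
  },
  async getRemaining(ctx: ToyContext, identifier: string) {
    const used = ctx.counters.get(identifier) ?? 0;
    return { remaining: Math.max(0, max - used), reset: 0, limit: max };
  },
  async resetTokens(ctx: ToyContext, identifier: string): Promise<void> {
    ctx.counters.delete(identifier);
  },
});
```

Note that the factory closes over its configuration (`max`) while the context is passed per call, which is what lets one algorithm instance serve many identifiers.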
+
+## `Duration`
+```ts title="src/duration.ts"
+export type Duration = `${number} ${"ms" | "s" | "m" | "h" | "d"}` | `${number}${"ms" | "s" | "m" | "h" | "d"}`;
+```
+
+Used by all algorithm factories to express window and interval sizes. Internally, `ms()` in `src/duration.ts` parses the string and converts it to milliseconds.
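A parser with the same idea can be sketched as below (an assumption: this mirrors the behavior of `ms()` but is not its exact code, and it is simplified to integer values):

```typescript
type Unit = "ms" | "s" | "m" | "h" | "d";

// Parse strings like "10 s", "1m", or "500 ms" into milliseconds; the space
// between value and unit is optional, matching both Duration template forms.
function toMilliseconds(duration: string): number {
  const match = /^(\d+)\s?(ms|s|m|h|d)$/.exec(duration.trim());
  if (!match) throw new Error(`Unable to parse duration: ${duration}`);
  const factors: Record<Unit, number> = {
    ms: 1,
    s: 1000,
    m: 60_000,
    h: 3_600_000,
    d: 86_400_000,
  };
  return Number(match[1]) * factors[match[2] as Unit];
}
```

Throwing on unknown formats is the useful part: a typo in a window string fails fast at construction time instead of silently producing a wrong limit.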
+
+## Import examples
+```ts title="app/types.ts"
+import type { RatelimitConfig, MultiRegionRatelimitConfig, Duration, Algorithm } from "@upstash/ratelimit";
+```
+
+These types are useful when you wrap the library in your own abstractions. For example, if you build a shared `createRatelimit()` helper across multiple services, typing the config ensures you pass the correct Redis client and algorithm factory.
+
+## Practical guidance
+- Use `RatelimitConfig` for single‑region usage in serverless and edge environments.
+- Use `MultiRegionRatelimitConfig` when you need lower latency across multiple regions and can tolerate eventual consistency.
+- Use `Algorithm` only when you implement a custom algorithm; most users should rely on the built‑ins.
+- Use `Duration` to keep window strings consistent and avoid invalid values (the parser throws on invalid formats).