CHANGELOG.md (+80 −2)
# Changelog

## [1.3.0] - 2026-02-27

### Added

#### Prompt Compression

Reduce token usage and costs by compressing prompts before sending them to AI providers. Three compression methods are available:
- **`toon`** (Free) - Converts JSON data to a compact notation format, achieving 30-60% token savings on structured data. Only activates when the prompt starts with `{` or `[` (pure JSON). Non-JSON input is returned unchanged.
- **`compact`** ($0.0001/use) - Advanced compression that intelligently reduces prompt length while preserving meaning. Works on any text type. Supports a configurable compression rate (0.3-0.7, default 0.5).
- **`combined`** ($0.0001/use) - Applies TOON first, then runs Compact on the result. For non-JSON input, behaves identically to `compact`. Best when you want maximum compression.

Prompt compression is opt-in and disabled by default. Security scanning always runs on the original text before compression is applied.
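The intuition behind JSON-to-compact-notation savings can be shown with a small sketch. This is an illustrative re-implementation, not LockLLM's actual TOON encoder, and `toCompactNotation` is a hypothetical helper name:

```typescript
// Illustrative sketch (NOT the real TOON encoder): a uniform array of
// objects is rewritten as one header row plus value rows, so repeated
// keys, quotes, and braces are paid for only once.
function toCompactNotation(json: string): string {
  const data = JSON.parse(json);
  if (!Array.isArray(data) || data.length === 0) return json; // non-array input unchanged
  const keys = Object.keys(data[0]);
  const header = keys.join(",");
  const rows = data.map((row: Record<string, unknown>) =>
    keys.map((k) => String(row[k])).join(",")
  );
  return `${header}\n${rows.join("\n")}`;
}

const prompt = JSON.stringify([
  { id: 1, name: "alice", role: "admin" },
  { id: 2, name: "bob", role: "user" },
  { id: 3, name: "carol", role: "user" },
]);

const compact = toCompactNotation(prompt);
console.log(compact.length < prompt.length); // compact form is shorter
```

The larger and more uniform the array, the bigger the win, which is why the savings figure applies specifically to structured data.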
**Proxy mode:**

```typescript
// TOON - compress structured JSON prompts (free)
const openai = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  proxyOptions: {
    compressionAction: 'toon'
  }
});

// Compact - compress any text with configurable rate
const openai2 = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  proxyOptions: {
    compressionAction: 'compact',
    compressionRate: 0.4 // Lower = more aggressive compression (0.3-0.7, default: 0.5)
  }
});

// Combined - TOON then Compact for maximum compression
const openai3 = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  proxyOptions: {
    compressionAction: 'combined'
  }
});
```
#### Smart Routing

Let LockLLM automatically select the best model for each request based on task type and complexity. Set `routeAction: 'auto'` to enable, or `routeAction: 'custom'` to use your own routing rules from the dashboard.
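As a rough mental model of "route by task type and complexity", auto-routing can be pictured as a classifier over the prompt. This sketch is purely illustrative: the thresholds, heuristics, and model names below are placeholder assumptions, not LockLLM's actual routing rules:

```typescript
// Purely illustrative complexity-based router; thresholds and model
// names are placeholder assumptions, not LockLLM's real logic.
function pickModel(prompt: string): string {
  const looksLikeCode = /```|function\s|class\s|def\s/.test(prompt);
  if (looksLikeCode) return "code-model";         // code tasks -> code-tuned model
  if (prompt.length > 2000) return "large-model"; // long/complex prompts -> larger model
  return "small-model";                           // simple requests -> cheap model
}

console.log(pickModel("What is 2 + 2?")); // "small-model"
```

The cost savings come from the last branch: most traffic is simple enough for a cheaper model, and routing only escalates when the request warrants it.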
README.md (+39 −16)
| **Custom Endpoints** | Configure custom URLs for any provider (self-hosted, Azure, private clouds) |
| **Custom Content Policies** | Define your own content rules in the dashboard and enforce them automatically across all providers |
| **AI Abuse Detection** | Detect bot-generated content, repetition attacks, and resource exhaustion from your end-users |
| **Smart Routing** | Automatically select the optimal model for each request based on task type and complexity to save costs |
| **PII Detection & Redaction** | Detect and automatically redact emails, phone numbers, SSNs, credit cards, and other personal information before they reach AI providers |
| **Prompt Compression** | Reduce token usage with TOON (JSON-to-compact-notation, free), Compact (advanced compression, $0.0001/use), or Combined (TOON then Compact for maximum reduction, $0.0001/use) |
| **Response Caching** | Cache identical LLM responses to reduce costs and latency on repeated queries |
| **Enterprise Privacy** | Provider keys encrypted at rest, prompts never stored |
| **Production Ready** | Battle-tested with automatic retries, timeouts, and error handling |
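To put the $0.0001/use Compact fee in perspective, here is a quick break-even sketch. The per-token price used below is an assumed example rate, not a quoted LockLLM or provider price:

```typescript
// Break-even sketch for the $0.0001/use Compact fee.
// ASSUMPTION: input tokens cost $2.50 per million (example rate only).
const feePerUse = 0.0001;               // Compact fee from the table above
const pricePerToken = 2.5 / 1_000_000;  // assumed example rate

// Tokens that must be removed for compression to pay for itself
const breakEvenTokens = feePerUse / pricePerToken;
console.log(breakEvenTokens); // tokens needed to break even (~40 at this rate)
```

At that example rate and the default 0.5 compression rate, prompts above roughly 80 tokens already save more than the fee costs; cheaper token prices push the break-even point higher.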
package.json (+1 −1)
```json
{
  "name": "@lockllm/sdk",
  "version": "1.3.0",
  "description": "Enterprise-grade AI security SDK providing real-time protection against prompt injection, jailbreaks, and adversarial attacks. Drop-in replacement for OpenAI, Anthropic, and 17+ providers with zero code changes. Includes REST API, proxy mode, browser extension, and webhook support. Free BYOK model with unlimited scanning."
}
```