# Changelog
## [1.1.0] - 2026-02-18
### Added
#### Custom Content Policy Enforcement
You can now enforce your own content rules on top of LockLLM's built-in security. Create custom policies in the [dashboard](https://www.lockllm.com/policies), and the SDK will automatically check prompts against them. When a policy is violated, you'll get a `PolicyViolationError` with the exact policy name, violated categories, and details.
```typescript
try {
  await openai.chat.completions.create({ ... });
} catch (error) {
  if (error instanceof PolicyViolationError) {
    console.log(error.violated_policies);
    // [{ policy_name: "No competitor mentions", violated_categories: [...] }]
  }
}
```
#### AI Abuse Detection
Protect your endpoints from automated misuse. When enabled, LockLLM detects bot-generated content, repetitive prompts, and resource exhaustion attacks. If abuse is detected, you'll get an `AbuseDetectedError` with confidence scores and detailed indicator breakdowns.
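As a sketch of how a caller might react to this error, assuming `AbuseDetectedError` carries a numeric confidence score and per-indicator scores (the field names below are illustrative, and a local stand-in class replaces the SDK import):

```typescript
// Local stand-in for the SDK's AbuseDetectedError; the real class and its
// field names may differ. This only illustrates the handling pattern.
class AbuseDetectedError extends Error {
  confidence: number;
  indicators: Record<string, number>;

  constructor(confidence: number, indicators: Record<string, number>) {
    super("abuse detected");
    this.confidence = confidence;
    this.indicators = indicators;
  }
}

// Block high-confidence abuse outright; log lower-confidence hits for review.
function handleAbuse(error: unknown): "block" | "log" {
  if (!(error instanceof AbuseDetectedError)) throw error;
  return error.confidence >= 0.9 ? "block" : "log";
}
```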
#### Insufficient Credits Error

The SDK now returns a dedicated `InsufficientCreditsError` when your balance is too low for a request. The error includes your `current_balance` and the `estimated_cost`, so you can handle billing gracefully in your application.
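For example, those two fields are enough to tell a user how many credits to top up before retrying. A minimal sketch, using a local stand-in class in place of the SDK import (the `current_balance` and `estimated_cost` field names come from the changelog text above):

```typescript
// Local stand-in for the SDK's InsufficientCreditsError.
class InsufficientCreditsError extends Error {
  current_balance: number;
  estimated_cost: number;

  constructor(current_balance: number, estimated_cost: number) {
    super("insufficient credits");
    this.current_balance = current_balance;
    this.estimated_cost = estimated_cost;
  }
}

// Credits the user must add before the request can succeed.
function creditsNeeded(error: InsufficientCreditsError): number {
  return Math.max(0, error.estimated_cost - error.current_balance);
}
```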
#### Scan Modes and Actions
New scan modes control exactly what gets checked, and new actions control what happens when threats are found.
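As a conceptual, SDK-independent sketch of the action side: the `allow_with_warning` name appears in the scan response fields later in this changelog, while `block` and `allow` are assumed names for the other outcomes.

```typescript
// Assumed action names; only "allow_with_warning" is confirmed by the changelog.
type ScanAction = "block" | "allow" | "allow_with_warning";

interface ScanResult {
  safe: boolean;
  warning?: string;
}

// Map a configured action and a scan result to what the proxy would do.
function applyAction(
  action: ScanAction,
  result: ScanResult,
): { proceed: boolean; warning?: string } {
  if (result.safe) return { proceed: true };
  switch (action) {
    case "block":
      return { proceed: false };
    case "allow":
      return { proceed: true };
    case "allow_with_warning":
      return { proceed: true, warning: result.warning ?? "potential injection" };
  }
}
```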
#### Wrapper-Level Proxy Options

All wrapper functions (`createOpenAI`, `createAnthropic`, `createGroq`, etc.) now accept a `proxyOptions` parameter so you can configure security behavior at initialization time instead of per-request.
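A hedged sketch of what initialization-time configuration might look like. The option names inside `proxyOptions` (`routeAction`, `cacheResponse`, `cacheTTL`) appear elsewhere in this changelog, but their exact placement, the `apiKey` field, and the TTL unit are assumptions:

```typescript
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  proxyOptions: {
    routeAction: 'auto',   // automatic model routing
    cacheResponse: true,   // response caching (on by default in proxy mode)
    cacheTTL: 300,         // assumed unit: seconds
  },
});
```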
#### Automatic Model Routing

Let LockLLM automatically select the best model for each request based on task type and complexity. Set `routeAction: 'auto'` to enable, or `routeAction: 'custom'` to use your own routing rules from the dashboard.
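For instance (a sketch; only the `routeAction` name and its `'auto'`/`'custom'` values come from the text above, while the surrounding shape is assumed):

```typescript
// Let LockLLM pick the model per request:
const auto = createOpenAI({ proxyOptions: { routeAction: 'auto' } });

// Or apply the routing rules you defined in the dashboard:
const custom = createOpenAI({ proxyOptions: { routeAction: 'custom' } });
```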
#### Response Caching
Reduce costs by caching identical LLM responses. Caching is enabled by default in proxy mode; disable it with `cacheResponse: false` or customize the TTL with `cacheTTL`.
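Conceptually, the cache behaves like a TTL-bounded map keyed by the request: identical requests within the TTL return the stored response instead of hitting the model. A minimal, SDK-independent sketch of that idea (not LockLLM's actual implementation):

```typescript
// Toy TTL cache illustrating the caching concept.
class TTLCache {
  private store = new Map<string, { value: string; expires: number }>();
  private ttlMs: number;

  constructor(ttlMs: number) {
    this.ttlMs = ttlMs;
  }

  get(key: string, now: number): string | undefined {
    const hit = this.store.get(key);
    if (hit && hit.expires > now) return hit.value; // cache hit within TTL
    this.store.delete(key); // expired or missing
    return undefined;
  }

  set(key: string, value: string, now: number): void {
    this.store.set(key, { value, expires: now + this.ttlMs });
  }
}
```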
#### Universal Proxy Mode
Access 200+ models without configuring individual provider API keys using `getUniversalProxyURL()`. Requests are billed against LockLLM credits instead of bring-your-own-key (BYOK) provider keys.
Responses expose metadata fields including `metadata.safe`, `metadata.routing`, `metadata.cache_status`, `metadata.credits_deducted`, and more.
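A sketch of the setup, assuming the standard `openai` client is pointed at the proxy via its `baseURL` option and that your LockLLM key replaces the provider key (the import path for `getUniversalProxyURL` is an assumption):

```typescript
import OpenAI from 'openai';
import { getUniversalProxyURL } from 'lockllm'; // package name assumed

const client = new OpenAI({
  baseURL: getUniversalProxyURL(),
  apiKey: process.env.LOCKLLM_API_KEY, // assumption: LockLLM key, not a provider key
});
```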
#### Expanded Scan Response
Scan responses now include richer data when using advanced features:
- `policy_warnings` - Which custom policies were violated and why
- `scan_warning` - Injection details when using `allow_with_warning`
- `abuse_warnings` - Abuse indicators when abuse detection is enabled
- `routing` - Task type, complexity score, and selected model when routing is enabled
### Changed
- The scan API is fully backward compatible; existing code works without changes. Internally, scan configuration is now sent via HTTP headers for better compatibility and caching behavior.
### Notes
- All new features are opt-in. Existing integrations continue to work without any changes.
- Custom policies, abuse detection, and routing are configured in the [LockLLM dashboard](https://www.lockllm.com/dashboard).