
Commit `e685986` - Updated documentation (parent: `c6aecaf`)

2 files changed: +242 additions, -22 deletions

CHANGELOG.md

Lines changed: 104 additions & 0 deletions
# Changelog

## [1.1.0] - 2026-02-18

### Added

#### Custom Content Policy Enforcement

You can now enforce your own content rules on top of LockLLM's built-in security. Create custom policies in the [dashboard](https://www.lockllm.com/policies), and the SDK will automatically check prompts against them. When a policy is violated, you'll get a `PolicyViolationError` with the exact policy name, violated categories, and details.

```typescript
try {
  await openai.chat.completions.create({ ... });
} catch (error) {
  if (error instanceof PolicyViolationError) {
    console.log(error.violated_policies);
    // [{ policy_name: "No competitor mentions", violated_categories: [...] }]
  }
}
```
#### AI Abuse Detection

Protect your endpoints from automated misuse. When enabled, LockLLM detects bot-generated content, repetitive prompts, and resource exhaustion attacks. If abuse is detected, you'll get an `AbuseDetectedError` with confidence scores and detailed indicator breakdowns.

```typescript
const openai = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  proxyOptions: {
    abuseAction: 'block' // Opt-in: block abusive requests
  }
});
```
#### Credit Balance Awareness

The SDK now returns a dedicated `InsufficientCreditsError` when your balance is too low for a request. The error includes your `current_balance` and the `estimated_cost`, so you can handle billing gracefully in your application.
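A minimal sketch of graceful handling. The `InsufficientCreditsError` below is a local stand-in carrying the same `current_balance` and `estimated_cost` fields for illustration; in real code you would import the error class from `@lockllm/sdk`:

```typescript
// Local stand-in mirroring the SDK error's fields (illustration only).
class InsufficientCreditsError extends Error {
  constructor(
    public current_balance: number,
    public estimated_cost: number,
  ) {
    super('Insufficient credits');
  }
}

// Turn the error into a user-facing message with the missing amount.
function describeShortfall(err: InsufficientCreditsError): string {
  const missing = Math.max(0, err.estimated_cost - err.current_balance);
  return `Need ${missing.toFixed(2)} more credits (balance ${err.current_balance}, cost ${err.estimated_cost})`;
}

try {
  // Simulate a request rejected for a low balance.
  throw new InsufficientCreditsError(1.5, 4.0);
} catch (error) {
  if (error instanceof InsufficientCreditsError) {
    console.log(describeShortfall(error));
    // "Need 2.50 more credits (balance 1.5, cost 4)"
  }
}
```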
#### Scan Modes and Actions

Control exactly what gets checked and what happens when threats are found:

- **Scan modes** - Choose `normal` (core security only), `policy_only` (custom policies only), or `combined` (both)
- **Actions per detection type** - Set `block` or `allow_with_warning` independently for core scans, custom policies, and abuse detection
- **Abuse detection** is opt-in - disabled by default; enable it with `abuseAction`

```typescript
const result = await lockllm.scan(
  { input: userPrompt, mode: 'combined', sensitivity: 'high' },
  { scanAction: 'block', policyAction: 'allow_with_warning', abuseAction: 'block' }
);
```
#### Proxy Options on All Wrappers

All wrapper functions (`createOpenAI`, `createAnthropic`, `createGroq`, etc.) now accept a `proxyOptions` parameter so you can configure security behavior at initialization time instead of per-request:

```typescript
const openai = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  proxyOptions: {
    scanMode: 'combined',
    scanAction: 'block',
    policyAction: 'block',
    routeAction: 'auto',  // Enable intelligent routing
    cacheResponse: true,  // Enable response caching
    cacheTTL: 3600        // Cache for 1 hour
  }
});
```
#### Intelligent Routing

Let LockLLM automatically select the best model for each request based on task type and complexity. Set `routeAction: 'auto'` to enable, or `routeAction: 'custom'` to use your own routing rules from the dashboard.
#### Response Caching

Reduce costs by caching identical LLM responses. Enabled by default in proxy mode - disable it with `cacheResponse: false` or customize the TTL with `cacheTTL`.
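Conceptually, the cache is a TTL-keyed map from request to response. A local sketch of that behavior (illustrative only; the real caching happens server-side in the proxy, not in the SDK):

```typescript
// Minimal TTL cache illustrating cacheTTL semantics (not the SDK implementation).
// The clock is injected so expiry can be demonstrated deterministically.
class TTLCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (hit.expires <= this.now()) {
      this.store.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return hit.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: this.now() + this.ttlMs });
  }
}

let t = 0;
const cache = new TTLCache<string>(3600_000, () => t); // cacheTTL: 3600 seconds
cache.set('prompt-hash', 'cached completion');
console.log(cache.get('prompt-hash')); // 'cached completion' (hit within TTL)
t = 3_600_001;
console.log(cache.get('prompt-hash')); // undefined (TTL elapsed, miss)
```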
#### Universal Proxy Mode

Access 200+ models without configuring individual provider API keys using `getUniversalProxyURL()`. Uses LockLLM credits instead of BYOK.

```typescript
import { getUniversalProxyURL } from '@lockllm/sdk';

const url = getUniversalProxyURL();
// 'https://api.lockllm.com/v1/proxy/chat/completions'
```
#### Proxy Response Metadata

New utilities to read detailed metadata from proxy responses - scan results, routing decisions, cache status, and credit usage:

```typescript
import { parseProxyMetadata } from '@lockllm/sdk';

const metadata = parseProxyMetadata(response.headers);
// metadata.safe, metadata.routing, metadata.cache_status, metadata.credits_deducted, etc.
```
#### Expanded Scan Response

Scan responses now include richer data when using advanced features:

- `policy_warnings` - Which custom policies were violated and why
- `scan_warning` - Injection details when using `allow_with_warning`
- `abuse_warnings` - Abuse indicators when abuse detection is enabled
- `routing` - Task type, complexity score, and selected model when routing is enabled
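A sketch of consuming these fields when running with `allow_with_warning`. The interfaces below are assumed minimal shapes for illustration only; real responses come back from `lockllm.scan()`:

```typescript
// Assumed minimal shapes for the expanded response fields (illustration only).
interface PolicyViolation { policy_name: string }
interface ScanResponseLike {
  safe: boolean;
  policy_warnings?: PolicyViolation[];
  scan_warning?: { injection: number };
  abuse_warnings?: { abuse_types: string[] };
}

// Flatten whichever warning fields are present into log-friendly strings.
function collectWarnings(res: ScanResponseLike): string[] {
  const out: string[] = [];
  for (const p of res.policy_warnings ?? []) out.push(`policy: ${p.policy_name}`);
  if (res.scan_warning) out.push(`injection risk: ${res.scan_warning.injection}`);
  for (const a of res.abuse_warnings?.abuse_types ?? []) out.push(`abuse: ${a}`);
  return out;
}

console.log(collectWarnings({
  safe: true,
  policy_warnings: [{ policy_name: 'No competitor mentions' }],
  scan_warning: { injection: 0.42 },
}));
// logs: [ 'policy: No competitor mentions', 'injection risk: 0.42' ]
```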
### Changed

- The scan API is fully backward compatible - existing code works without changes. Internally, scan configuration is now sent via HTTP headers for better compatibility and caching behavior.

### Notes

- All new features are opt-in. Existing integrations continue to work without any changes.
- Custom policies, abuse detection, and routing are configured in the [LockLLM dashboard](https://www.lockllm.com/dashboard).

---
## [1.0.1] - 2026-01-16

### Changed

README.md

Lines changed: 138 additions & 22 deletions
````diff
@@ -28,7 +28,7 @@ LockLLM is a state-of-the-art AI security ecosystem that detects prompt injectio
 - **Advanced ML Detection** - Models trained on real-world attack patterns for prompt injection and jailbreaks
 - **17+ Provider Support** - Universal coverage across OpenAI, Anthropic, Azure, Bedrock, Gemini, and more
 - **Drop-in Integration** - Replace existing SDKs with zero code changes - just change one line
-- **Completely Free** - BYOK (Bring Your Own Key) model with unlimited usage and no rate limits
+- **Free Unlimited Scanning** - BYOK (Bring Your Own Key) model with free unlimited scanning
 - **Privacy by Default** - Your data is never stored, only scanned in-memory and discarded

 ## Why LockLLM
````
````diff
@@ -79,6 +79,10 @@ LockLLM provides production-ready AI security that integrates seamlessly into yo
 | **Streaming Compatible** | Works seamlessly with streaming responses from any provider |
 | **Configurable Sensitivity** | Adjust detection thresholds (low/medium/high) per use case |
 | **Custom Endpoints** | Configure custom URLs for any provider (self-hosted, Azure, private clouds) |
+| **Custom Content Policies** | Define your own content rules in the dashboard and enforce them automatically across all providers |
+| **AI Abuse Detection** | Detect bot-generated content, repetition attacks, and resource exhaustion from your end-users |
+| **Intelligent Routing** | Automatically select the optimal model for each request based on task type and complexity to save costs |
+| **Response Caching** | Cache identical LLM responses to reduce costs and latency on repeated queries |
 | **Enterprise Privacy** | Provider keys encrypted at rest, prompts never stored |
 | **Production Ready** | Battle-tested with automatic retries, timeouts, and error handling |
````
````diff
@@ -400,6 +404,9 @@ const highResult = await lockllm.scan({
 import {
   LockLLMError,
   PromptInjectionError,
+  PolicyViolationError,
+  AbuseDetectedError,
+  InsufficientCreditsError,
   AuthenticationError,
   RateLimitError,
   UpstreamError
````
````diff
@@ -417,13 +424,19 @@ try {
     console.log("Injection confidence:", error.scanResult.injection);
     console.log("Request ID:", error.requestId);

-    // Log to security monitoring system
-    await logSecurityIncident({
-      type: 'prompt_injection',
-      confidence: error.scanResult.injection,
-      requestId: error.requestId,
-      timestamp: new Date()
-    });
+  } else if (error instanceof PolicyViolationError) {
+    // Custom policy violation detected
+    console.log("Policy violation:", error.violated_policies);
+
+  } else if (error instanceof AbuseDetectedError) {
+    // AI abuse detected (bot content, repetition, etc.)
+    console.log("Abuse detected:", error.abuse_details.abuse_types);
+    console.log("Confidence:", error.abuse_details.confidence);
+
+  } else if (error instanceof InsufficientCreditsError) {
+    // Not enough credits
+    console.log("Balance:", error.current_balance);
+    console.log("Cost:", error.estimated_cost);

   } else if (error instanceof AuthenticationError) {
     console.log("Invalid LockLLM API key");
````
````diff
@@ -587,7 +600,7 @@ interface LockLLMConfig {
 Scan a prompt for security threats before sending to an LLM.

 ```typescript
-await lockllm.scan(request: ScanRequest): Promise<ScanResponse>
+await lockllm.scan(request: ScanRequest, options?: ScanOptions): Promise<ScanResponse>
 ```

 **Request Parameters:**
````
````diff
@@ -596,6 +609,14 @@ await lockllm.scan(request: ScanRequest): Promise<ScanResponse>
 interface ScanRequest {
   input: string;                           // Required: Text to scan
   sensitivity?: 'low' | 'medium' | 'high'; // Optional: Detection level (default: 'medium')
+  mode?: 'normal' | 'policy_only' | 'combined'; // Optional: Scan mode (default: 'combined')
+  chunk?: boolean;                         // Optional: Force chunking for long texts
+}
+
+interface ScanOptions {
+  scanAction?: 'block' | 'allow_with_warning';   // Core injection behavior
+  policyAction?: 'block' | 'allow_with_warning'; // Custom policy behavior
+  abuseAction?: 'block' | 'allow_with_warning';  // Abuse detection (opt-in)
 }
 ```
````
````diff
@@ -605,8 +626,9 @@ interface ScanRequest {
 interface ScanResponse {
   safe: boolean;              // Whether input is safe (true) or malicious (false)
   label: 0 | 1;               // Classification: 0=safe, 1=malicious
-  confidence: number;         // Confidence score (0-1)
-  injection: number;          // Injection risk score (0-1, higher=more risky)
+  confidence?: number;        // Core injection confidence score (0-1)
+  injection?: number;         // Injection risk score (0-1, higher=more risky)
+  policy_confidence?: number; // Policy check confidence (in combined/policy_only mode)
   sensitivity: Sensitivity;   // Sensitivity level used for scan
   request_id: string;         // Unique request identifier

````
````diff
@@ -615,11 +637,20 @@ interface ScanResponse {
     input_chars: number;  // Number of characters processed
   };

-  debug?: {               // Only available with Pro plan
+  debug?: {
     duration_ms: number;  // Total processing time
     inference_ms: number; // ML inference time
     mode: 'single' | 'chunked';
   };
+
+  // Present when using policy_only or combined mode with allow_with_warning
+  policy_warnings?: PolicyViolation[];
+  // Present when core injection detected with allow_with_warning
+  scan_warning?: ScanWarning;
+  // Present when abuse detection is enabled and abuse found
+  abuse_warnings?: AbuseWarning;
+  // Present when intelligent routing is enabled
+  routing?: { task_type: string; complexity: number; selected_model?: string; };
 }
 ```
````
````diff
@@ -640,6 +671,15 @@ createGroq(config: GenericClientConfig): OpenAI
 interface GenericClientConfig {
   apiKey: string;     // Required: Your LockLLM API key
   baseURL?: string;   // Optional: Override proxy URL
+  proxyOptions?: {    // Optional: Security and routing configuration
+    scanMode?: 'normal' | 'policy_only' | 'combined';
+    scanAction?: 'block' | 'allow_with_warning';
+    policyAction?: 'block' | 'allow_with_warning';
+    abuseAction?: 'block' | 'allow_with_warning' | null;
+    routeAction?: 'disabled' | 'auto' | 'custom';
+    cacheResponse?: boolean;
+    cacheTTL?: number;
+  };
   [key: string]: any; // Optional: Provider-specific options
 }
 ```
````
````diff
@@ -656,6 +696,16 @@ const url = getProxyURL('openai');
 // Returns: 'https://api.lockllm.com/v1/proxy/openai'
 ```

+**Get universal proxy URL (non-BYOK, 200+ models):**
+
+```typescript
+function getUniversalProxyURL(): string
+
+// Example
+const url = getUniversalProxyURL();
+// Returns: 'https://api.lockllm.com/v1/proxy/chat/completions'
+```
+
 **Get all proxy URLs:**

 ```typescript
````
````diff
@@ -667,6 +717,34 @@ console.log(urls.openai); // 'https://api.lockllm.com/v1/proxy/openai'
 console.log(urls.anthropic); // 'https://api.lockllm.com/v1/proxy/anthropic'
 ```

+**Build LockLLM proxy headers:**
+
+```typescript
+import { buildLockLLMHeaders } from '@lockllm/sdk';
+
+const headers = buildLockLLMHeaders({
+  scanMode: 'combined',
+  scanAction: 'block',
+  policyAction: 'allow_with_warning',
+  abuseAction: 'block',
+  routeAction: 'auto'
+});
+// Returns: { 'x-lockllm-scan-mode': 'combined', ... }
+```
+
+**Parse proxy response metadata:**
+
+```typescript
+import { parseProxyMetadata } from '@lockllm/sdk';
+
+// Parse response headers from any proxy request
+const metadata = parseProxyMetadata(response.headers);
+console.log(metadata.safe);         // true/false
+console.log(metadata.scan_mode);    // 'combined'
+console.log(metadata.cache_status); // 'HIT' or 'MISS'
+console.log(metadata.routing);      // { task_type, complexity, selected_model, ... }
+```
+
 ## Error Types

 LockLLM provides typed errors for comprehensive error handling:
````
````diff
@@ -678,6 +756,9 @@ LockLLMError (base)
 ├── AuthenticationError (401)
 ├── RateLimitError (429)
 ├── PromptInjectionError (400)
+├── PolicyViolationError (403)
+├── AbuseDetectedError (400)
+├── InsufficientCreditsError (402)
 ├── UpstreamError (502)
 ├── ConfigurationError (400)
 └── NetworkError (0)
````
````diff
@@ -701,6 +782,32 @@ class RateLimitError extends LockLLMError {
   retryAfter?: number; // Milliseconds until retry allowed
 }

+class PolicyViolationError extends LockLLMError {
+  violated_policies: Array<{
+    policy_name: string;
+    violated_categories: Array<{ name: string }>;
+    violation_details?: string;
+  }>;
+}
+
+class AbuseDetectedError extends LockLLMError {
+  abuse_details: {
+    confidence: number;
+    abuse_types: string[];
+    indicators: {
+      bot_score: number;
+      repetition_score: number;
+      resource_score: number;
+      pattern_score: number;
+    };
+  };
+}
+
+class InsufficientCreditsError extends LockLLMError {
+  current_balance: number; // Current credit balance
+  estimated_cost: number;  // Estimated cost of the request
+}
+
 class UpstreamError extends LockLLMError {
   provider?: string;       // Provider name
   upstreamStatus?: number; // Provider's status code
````
````diff
@@ -732,13 +839,22 @@ LockLLM adds minimal latency while providing comprehensive security protection.

 ## Rate Limits

-LockLLM provides generous rate limits for all users, with the Free tier supporting most production use cases.
+LockLLM uses a 10-tier progressive system based on monthly usage. Higher tiers unlock faster rate limits and free monthly credits.
+
+| Tier | Max RPM | Monthly Spending Requirement |
+|------|---------|------------------------------|
+| **Tier 1** (Free) | 30 RPM | $0 |
+| **Tier 2** | 50 RPM | $10/month |
+| **Tier 3** | 100 RPM | $50/month |
+| **Tier 4** | 200 RPM | $100/month |
+| **Tier 5** | 500 RPM | $250/month |
+| **Tier 6** | 1,000 RPM | $500/month |
+| **Tier 7** | 2,000 RPM | $1,000/month |
+| **Tier 8** | 5,000 RPM | $3,000/month |
+| **Tier 9** | 10,000 RPM | $5,000/month |
+| **Tier 10** | 20,000 RPM | $10,000/month |

-| Tier | Requests per Minute | Best For |
-|------|---------------------|----------|
-| **Free** | 1,000 RPM | Most applications, startups, side projects |
-| **Pro** | 10,000 RPM | High-traffic applications, enterprise pilots |
-| **Enterprise** | Custom | Large-scale deployments, custom SLAs |
+See [pricing](https://www.lockllm.com/pricing) for full tier details and free monthly credits.

 **Smart Rate Limit Handling:**
````
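One way to honor the `retryAfter` hint (documented on `RateLimitError` under Error Types above) is a small backoff wrapper. A minimal sketch; `RateLimitError` here is a local stand-in exposing the same optional `retryAfter` field in milliseconds, not the SDK import:

```typescript
// Local stand-in mirroring the SDK's RateLimitError shape (illustration only).
class RateLimitError extends Error {
  constructor(public retryAfter?: number) { super('Rate limited'); }
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Retry a rate-limited call, honoring the server's retryAfter hint when present
// and falling back to exponential backoff otherwise.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (!(err instanceof RateLimitError) || attempt >= maxAttempts) throw err;
      await sleep(err.retryAfter ?? 2 ** attempt * 250);
    }
  }
}
```

In real code the wrapped `fn` would be the proxied completion call, and `RateLimitError` would be imported from `@lockllm/sdk`.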

````diff
@@ -924,17 +1040,17 @@ For non-JavaScript environments, use the REST API directly:

 **Scan Endpoint:**
 ```bash
-curl -X POST https://api.lockllm.com/scan \
-  -H "x-api-key: YOUR_LOCKLLM_API_KEY" \
+curl -X POST https://api.lockllm.com/v1/scan \
+  -H "Authorization: Bearer YOUR_LOCKLLM_API_KEY" \
   -H "Content-Type: application/json" \
-  -d '{"prompt": "Your text to scan", "sensitivity": "medium"}'
+  -d '{"input": "Your text to scan", "sensitivity": "medium"}'
 ```

 **Proxy Endpoints:**
 ```bash
 # OpenAI-compatible proxy
 curl -X POST https://api.lockllm.com/v1/proxy/openai/chat/completions \
-  -H "x-api-key: YOUR_LOCKLLM_API_KEY" \
+  -H "Authorization: Bearer YOUR_LOCKLLM_API_KEY" \
   -H "Content-Type: application/json" \
   -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'
 ```
````
