Merged
77 changes: 77 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,82 @@
# Changelog

## [1.2.0] - 2026-02-21

### Added

#### PII Detection and Redaction
Protect sensitive personal information in prompts before they reach AI providers. When enabled, LockLLM detects emails, phone numbers, SSNs, credit card numbers, and other PII entities. Choose how to handle detected PII with the `piiAction` option:

- **`block`** - Reject requests containing PII entirely. Throws a `PIIDetectedError` with entity types and count.
- **`strip`** - Automatically redact PII from prompts before forwarding to the AI provider. The redacted text is available via `redacted_input` in the scan response.
- **`allow_with_warning`** - Allow requests through but include PII metadata in the response for logging.

PII detection is opt-in and disabled by default.

```typescript
// Strip PII from prompts before they reach the provider
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
piiAction: 'strip' // Automatically redact PII before sending to AI
}
});

// Handle PII errors when using block mode
try {
await openai.chat.completions.create({ ... });
} catch (error) {
if (error instanceof PIIDetectedError) {
console.log(error.pii_details.entity_types); // ['email', 'phone_number']
console.log(error.pii_details.entity_count); // 3
}
}
```

#### Scan API PII Support
The scan endpoint now accepts a `piiAction` option alongside existing scan options:

```typescript
const result = await lockllm.scan(
{ input: 'My email is test@example.com' },
{ piiAction: 'block', scanAction: 'block' }
);

if (result.pii_result) {
console.log(result.pii_result.detected); // true
console.log(result.pii_result.entity_types); // ['email']
console.log(result.pii_result.entity_count); // 1
console.log(result.pii_result.redacted_input); // 'My email is [EMAIL]' (strip mode only)
}
```

#### Enhanced Proxy Response Metadata
Proxy responses now include additional fields for better observability:

- **PII detection metadata** - `pii_detected` object with detection status, entity types, count, and action taken
- **Blocked status** - `blocked` flag when a request was rejected by security checks
- **Sensitivity and label** - `sensitivity` level used and numeric `label` (0 = safe, 1 = unsafe)
- **Decoded detail fields** - `scan_detail`, `policy_detail`, and `abuse_detail` automatically decoded from base64 response headers
- **Extended routing metadata** - `estimated_original_cost`, `estimated_routed_cost`, `estimated_input_tokens`, `estimated_output_tokens`, and `routing_fee_reason`
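The SDK decodes the detail fields for you; purely as an illustration, the manual equivalent is a base64 JSON decode. The header name below is a hypothetical stand-in, not a documented name:

```typescript
// Sketch only: how a base64-encoded detail header could be decoded by hand.
// 'x-lockllm-scan-detail' is a hypothetical header name used for illustration.
function decodeDetailHeader(raw: string | null): unknown {
  if (!raw) return null;
  return JSON.parse(Buffer.from(raw, 'base64').toString('utf8'));
}

const scanDetail = decodeDetailHeader('eyJzYWZlIjp0cnVlfQ=='); // base64 of {"safe":true}
console.log(scanDetail); // { safe: true }
```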

#### Sensitivity Header Support
You can now set the detection sensitivity level via `proxyOptions` or `buildLockLLMHeaders`:

```typescript
const openai = createOpenAI({
apiKey: process.env.LOCKLLM_API_KEY,
proxyOptions: {
sensitivity: 'high' // 'low', 'medium', or 'high'
}
});
```

### Notes
- PII detection is opt-in. Existing integrations continue to work without changes.
- All new types (`PIIAction`, `PIIResult`, `PIIDetectedError`, `PIIDetectedErrorData`) are fully exported for TypeScript users.

---

## [1.1.0] - 2026-02-18

### Added
41 changes: 37 additions & 4 deletions README.md
@@ -10,7 +10,7 @@

**All-in-One AI Security for LLM Applications**

*Keep control of your AI. Detect prompt injection, jailbreaks, and adversarial attacks in real-time across 17+ providers with zero code changes.*
*Keep control of your AI. Detect prompt injection, jailbreaks, PII leakage, and adversarial attacks in real-time across 17+ providers with zero code changes.*

[Quick Start](#quick-start) · [Documentation](https://www.lockllm.com/docs) · [Examples](#examples) · [Benchmarks](https://www.lockllm.com) · [API Reference](#api-reference)

@@ -82,6 +82,7 @@ LockLLM provides production-ready AI security that integrates seamlessly into yo
| **Custom Content Policies** | Define your own content rules in the dashboard and enforce them automatically across all providers |
| **AI Abuse Detection** | Detect bot-generated content, repetition attacks, and resource exhaustion from your end-users |
| **Intelligent Routing** | Automatically select the optimal model for each request based on task type and complexity to save costs |
| **PII Detection & Redaction** | Detect and automatically redact emails, phone numbers, SSNs, credit cards, and other personal information before they reach AI providers |
| **Response Caching** | Cache identical LLM responses to reduce costs and latency on repeated queries |
| **Enterprise Privacy** | Provider keys encrypted at rest, prompts never stored |
| **Production Ready** | Battle-tested with automatic retries, timeouts, and error handling |
@@ -406,6 +407,7 @@ import {
PromptInjectionError,
PolicyViolationError,
AbuseDetectedError,
PIIDetectedError,
InsufficientCreditsError,
AuthenticationError,
RateLimitError,
@@ -433,6 +435,11 @@ try {
console.log("Abuse detected:", error.abuse_details.abuse_types);
console.log("Confidence:", error.abuse_details.confidence);

} else if (error instanceof PIIDetectedError) {
// Personal information detected (when piiAction is 'block')
console.log("PII found:", error.pii_details.entity_types);
console.log("Entity count:", error.pii_details.entity_count);

} else if (error instanceof InsufficientCreditsError) {
// Not enough credits
console.log("Balance:", error.current_balance);
@@ -548,7 +555,7 @@ LockLLM Security Gateway
3. **Error Response** - Detailed error returned with threat classification and confidence scores
4. **Logging** - Incident automatically logged in [dashboard](https://www.lockllm.com/dashboard) for review and monitoring

### Security & Privacy
### Privacy & Security

LockLLM is built with privacy and security as core principles. Your data stays yours.

@@ -617,6 +624,7 @@ interface ScanOptions {
scanAction?: 'block' | 'allow_with_warning'; // Core injection behavior
policyAction?: 'block' | 'allow_with_warning'; // Custom policy behavior
abuseAction?: 'block' | 'allow_with_warning'; // Abuse detection (opt-in)
piiAction?: 'strip' | 'block' | 'allow_with_warning'; // PII detection (opt-in)
}
```

@@ -651,6 +659,15 @@ interface ScanResponse {
abuse_warnings?: AbuseWarning;
// Present when intelligent routing is enabled
routing?: { task_type: string; complexity: number; selected_model?: string; };
// Present when PII detection is enabled
pii_result?: PIIResult;
}

interface PIIResult {
detected: boolean; // Whether PII was detected
entity_types: string[]; // Types of PII entities found (e.g., 'email', 'phone_number')
entity_count: number; // Number of PII entities found
redacted_input?: string; // Redacted text (only present when piiAction is 'strip')
}
```

@@ -676,7 +693,9 @@ interface GenericClientConfig {
scanAction?: 'block' | 'allow_with_warning';
policyAction?: 'block' | 'allow_with_warning';
abuseAction?: 'block' | 'allow_with_warning' | null;
piiAction?: 'strip' | 'block' | 'allow_with_warning' | null;
routeAction?: 'disabled' | 'auto' | 'custom';
sensitivity?: 'low' | 'medium' | 'high';
cacheResponse?: boolean;
cacheTTL?: number;
};
@@ -727,9 +746,10 @@ const headers = buildLockLLMHeaders({
scanAction: 'block',
policyAction: 'allow_with_warning',
abuseAction: 'block',
piiAction: 'strip',
routeAction: 'auto'
});
// Returns: { 'x-lockllm-scan-mode': 'combined', ... }
// Returns: { 'x-lockllm-scan-mode': 'combined', 'x-lockllm-pii-action': 'strip', ... }
```

**Parse proxy response metadata:**
Expand All @@ -743,6 +763,7 @@ console.log(metadata.safe); // true/false
console.log(metadata.scan_mode); // 'combined'
console.log(metadata.cache_status); // 'HIT' or 'MISS'
console.log(metadata.routing); // { task_type, complexity, selected_model, ... }
console.log(metadata.pii_detected); // { detected, entity_types, entity_count, action }
```
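For instance, a minimal check on the PII metadata. The literal values below are illustrative, matching the `pii_detected` shape shown above:

```typescript
// Illustrative value following the pii_detected shape documented above
const piiDetected = {
  detected: true,
  entity_types: ['email', 'phone_number'],
  entity_count: 2,
  action: 'strip',
};

const summary = piiDetected.detected
  ? `Redacted ${piiDetected.entity_count} PII entities: ${piiDetected.entity_types.join(', ')}`
  : 'No PII detected';
console.log(summary); // Redacted 2 PII entities: email, phone_number
```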

## Error Types
Expand All @@ -758,6 +779,7 @@ LockLLMError (base)
├── PromptInjectionError (400)
├── PolicyViolationError (403)
├── AbuseDetectedError (400)
├── PIIDetectedError (403)
├── InsufficientCreditsError (402)
├── UpstreamError (502)
├── ConfigurationError (400)
@@ -803,6 +825,13 @@ class AbuseDetectedError extends LockLLMError {
};
}

class PIIDetectedError extends LockLLMError {
pii_details: {
entity_types: string[]; // PII types found (e.g., 'email', 'phone_number')
entity_count: number; // Number of PII entities detected
};
}

class InsufficientCreditsError extends LockLLMError {
current_balance: number; // Current credit balance
estimated_cost: number; // Estimated cost of the request
@@ -908,7 +937,8 @@ const result = await lockllm.scan(
{
scanAction: 'block', // Block core injection attacks
policyAction: 'allow_with_warning', // Allow but warn on policy violations
abuseAction: 'block' // Enable abuse detection (opt-in)
abuseAction: 'block', // Enable abuse detection (opt-in)
piiAction: 'strip' // Redact PII from input (opt-in)
}
);

@@ -920,6 +950,7 @@ const openai = createOpenAI({
scanAction: 'block', // Block injection attacks
policyAction: 'block', // Block policy violations
abuseAction: 'allow_with_warning', // Detect abuse, don't block
piiAction: 'strip', // Automatically redact PII
routeAction: 'auto' // Enable intelligent routing
}
});
@@ -939,13 +970,15 @@ const openai = createOpenAI({
- `scanAction` - Controls core injection detection: `'block'` | `'allow_with_warning'`
- `policyAction` - Controls custom policy violations: `'block'` | `'allow_with_warning'`
- `abuseAction` - Controls abuse detection (opt-in): `'block'` | `'allow_with_warning'` | `null`
- `piiAction` - Controls PII detection (opt-in): `'strip'` | `'block'` | `'allow_with_warning'` | `null`
- `routeAction` - Controls intelligent routing: `'disabled'` | `'auto'` | `'custom'`

**Default Behavior (no headers):**
- Scan Mode: `combined` (check both core + policies)
- Scan Action: `allow_with_warning` (detect but don't block)
- Policy Action: `allow_with_warning` (detect but don't block)
- Abuse Action: `null` (disabled, opt-in only)
- PII Action: `null` (disabled, opt-in only)
- Route Action: `disabled` (no routing)

See [examples/advanced-options.ts](examples/advanced-options.ts) for complete examples.
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "@lockllm/sdk",
"version": "1.1.0",
"version": "1.2.0",
"description": "Enterprise-grade AI security SDK providing real-time protection against prompt injection, jailbreaks, and adversarial attacks. Drop-in replacement for OpenAI, Anthropic, and 17+ providers with zero code changes. Includes REST API, proxy mode, browser extension, and webhook support. Free BYOK model with unlimited scanning.",
"main": "./dist/index.js",
"module": "./dist/index.mjs",
35 changes: 35 additions & 0 deletions src/errors.ts
@@ -9,6 +9,7 @@ import type {
PolicyViolationErrorData,
AbuseDetectedErrorData,
InsufficientCreditsErrorData,
PIIDetectedErrorData,
} from './types/errors';

/**
@@ -189,6 +190,28 @@ export class AbuseDetectedError extends LockLLMError {
}
}

/**
* Error thrown when PII (personal information) is detected and action is block
*/
export class PIIDetectedError extends LockLLMError {
public readonly pii_details: {
entity_types: string[];
entity_count: number;
};

constructor(data: PIIDetectedErrorData) {
super({
message: data.message,
type: 'lockllm_pii_error',
code: 'pii_detected',
status: 403,
requestId: data.requestId,
});
this.name = 'PIIDetectedError';
this.pii_details = data.pii_details;
}
}

/**
* Error thrown when user has insufficient credits
*/
@@ -267,6 +290,18 @@ export function parseError(response: any, requestId?: string): LockLLMError {
});
}

// PII detected error
if (error.code === 'pii_detected' && error.pii_details) {
return new PIIDetectedError({
message: error.message,
type: error.type,
code: error.code,
status: 403,
requestId: error.request_id || requestId,
pii_details: error.pii_details,
});
}

// Abuse detected error
if (error.code === 'abuse_detected' && error.abuse_details) {
return new AbuseDetectedError({
4 changes: 4 additions & 0 deletions src/index.ts
@@ -18,6 +18,7 @@ export {
PromptInjectionError,
PolicyViolationError,
AbuseDetectedError,
PIIDetectedError,
InsufficientCreditsError,
UpstreamError,
ConfigurationError,
@@ -32,6 +33,7 @@ export type {
ScanMode,
ScanAction,
RouteAction,
PIIAction,
ProxyRequestOptions,
ProxyResponseMetadata,
} from './types/common';
@@ -44,6 +46,7 @@ export type {
PolicyViolation,
ScanWarning,
AbuseWarning,
PIIResult,
} from './types/scan';

export type {
@@ -52,6 +55,7 @@ export type {
PromptInjectionErrorData,
PolicyViolationErrorData,
AbuseDetectedErrorData,
PIIDetectedErrorData,
InsufficientCreditsErrorData,
} from './types/errors';

5 changes: 5 additions & 0 deletions src/scan.ts
@@ -89,6 +89,11 @@ export class ScanClient {
headers['x-lockllm-abuse-action'] = options.abuseAction;
}

// PII action: opt-in PII detection (null/undefined means disabled)
if (options?.piiAction !== undefined && options?.piiAction !== null) {
headers['x-lockllm-pii-action'] = options.piiAction;
}

// Build request body
const body: Record<string, any> = {
input: request.input,