**CHANGELOG.md** (77 additions, 0 deletions)
# Changelog

## [1.2.0] - 2026-02-21

### Added

#### PII Detection and Redaction

Protect sensitive personal information in prompts before it reaches AI providers. When enabled, LockLLM detects emails, phone numbers, SSNs, credit card numbers, and other PII entities. Choose how to handle detected PII with the `piiAction` option:

- **`block`** - Reject requests containing PII entirely. Throws a `PIIDetectedError` with entity types and count.
- **`strip`** - Automatically redact PII from prompts before forwarding to the AI provider. The redacted text is available via `redacted_input` in the scan response.
- **`allow_with_warning`** - Allow requests through but include PII metadata in the response for logging.

PII detection is opt-in and disabled by default.

```typescript
// Strip PII from requests automatically
const openai = createOpenAI({
  apiKey: process.env.LOCKLLM_API_KEY,
  proxyOptions: {
    piiAction: 'strip' // Automatically redact PII before sending to AI
  }
});
```
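The `block` mode can be sketched end-to-end. The following is an illustrative mock, assuming only what the changelog states (a `PIIDetectedError` carrying entity types and a count); the error-class shape and the client-side scanner are hypothetical stand-ins, since real detection happens in the LockLLM proxy rather than in your code:

```typescript
// Hypothetical sketch of the `piiAction: 'block'` control flow. The
// changelog only guarantees that a `PIIDetectedError` is thrown with
// entity types and a count; the class and scanner below are mock
// stand-ins, not the SDK's real implementation.
class PIIDetectedError extends Error {
  constructor(
    public entityTypes: string[], // e.g. ['EMAIL', 'SSN']
    public count: number          // total PII entities found
  ) {
    super(`Request blocked: ${count} PII entities (${entityTypes.join(', ')})`);
    this.name = 'PIIDetectedError';
  }
}

// Toy client-side scanner: only spots emails, unlike the real service.
function scanForPII(prompt: string): void {
  const emails = prompt.match(/[\w.+-]+@[\w-]+(\.[\w-]+)+/g) ?? [];
  if (emails.length > 0) {
    throw new PIIDetectedError(['EMAIL'], emails.length);
  }
}

let blocked = false;
try {
  scanForPII('Reset the password for jane.doe@example.com');
} catch (err) {
  if (err instanceof PIIDetectedError) {
    blocked = true; // log err.entityTypes / err.count, surface a safe message
  } else {
    throw err;
  }
}
console.log(blocked); // true
```

Catching the error type explicitly lets you return a user-facing message about the blocked request while re-throwing unrelated failures.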
**README.md** (37 additions, 4 deletions)
**All-in-One AI Security for LLM Applications**

*Keep control of your AI. Detect prompt injection, jailbreaks, PII leakage, and adversarial attacks in real-time across 17+ providers with zero code changes.*

| **Custom Content Policies** | Define your own content rules in the dashboard and enforce them automatically across all providers |
| **AI Abuse Detection** | Detect bot-generated content, repetition attacks, and resource exhaustion from your end-users |
| **Intelligent Routing** | Automatically select the optimal model for each request based on task type and complexity to save costs |
| **PII Detection & Redaction** | Detect and automatically redact emails, phone numbers, SSNs, credit cards, and other personal information before they reach AI providers |
| **Response Caching** | Cache identical LLM responses to reduce costs and latency on repeated queries |
| **Enterprise Privacy** | Provider keys encrypted at rest, prompts never stored |
| **Production Ready** | Battle-tested with automatic retries, timeouts, and error handling |
**package.json** (1 addition, 1 deletion)

```diff
 {
   "name": "@lockllm/sdk",
-  "version": "1.1.0",
+  "version": "1.2.0",
   "description": "Enterprise-grade AI security SDK providing real-time protection against prompt injection, jailbreaks, and adversarial attacks. Drop-in replacement for OpenAI, Anthropic, and 17+ providers with zero code changes. Includes REST API, proxy mode, browser extension, and webhook support. Free BYOK model with unlimited scanning."
 }
```