---
title: Data Privacy & Retention Guide
description: What data Claude Code sends to Anthropic servers and how to protect sensitive information
---
Critical: Everything you share with Claude Code is sent to Anthropic servers. This guide explains what data leaves your machine and how to protect sensitive information.
| Configuration | Retention Period | Training | How to Enable |
|---|---|---|---|
| Consumer (default) | 5 years | Yes | (default state) |
| Consumer (opt-out) | 30 days | No | claude.ai/settings |
| Team / Enterprise / API | 30 days | No (default) | Use Team, Enterprise plan, or API keys |
| ZDR (Zero Data Retention) | 0 days server-side | No | Appropriately configured API keys |
Immediate action: Disable training data usage to reduce retention from 5 years to 30 days.
When you use Claude Code, the following data is sent to Anthropic:
┌─────────────────────────────────────────────────────────────┐
│ YOUR LOCAL MACHINE │
├─────────────────────────────────────────────────────────────┤
│ • Prompts you type │
│ • Files Claude reads (including .env if not excluded!) │
│ • MCP server results (SQL queries, API responses) │
│ • Bash command outputs │
│ • Error messages and stack traces │
└───────────┬──────────────────┬──────────────┬───────────────┘
│ │ │
▼ HTTPS/TLS ▼ HTTPS ▼ HTTPS
┌───────────────────┐ ┌──────────────┐ ┌─────────────────────┐
│ ANTHROPIC API │ │ STATSIG │ │ SENTRY │
├───────────────────┤ ├──────────────┤ ├─────────────────────┤
│ • Your prompts │ │ • Latency, │ │ • Error logs │
│ • Model responses │ │ reliability│ │ • No code or │
│ • Retention per │ │ • No code or │ │ file paths │
│ your tier │ │ file paths │ │ │
└───────────────────┘ └──────────────┘ └─────────────────────┘
(opt-out: (opt-out:
DISABLE_ DISABLE_ERROR_
TELEMETRY=1) REPORTING=1)
| Scenario | Data Sent to Anthropic |
|---|---|
| You ask Claude to read `src/app.ts` | Full file contents |
| You run `git status` via Claude | Command output |
| MCP executes `SELECT * FROM users` | Query results with user data |
| Claude reads `.env` file | API keys, passwords, secrets |
| Error occurs in your code | Full stack trace with paths |
**Consumer (default)**

- Retention: 5 years
- Usage: Model improvement, training data
- Applies to: Free, Pro, Max plans with training setting ON

**Consumer (opt-out)**

- Retention: 30 days
- Usage: Safety monitoring, abuse prevention only
- How to enable:
  1. Go to https://claude.ai/settings/data-privacy-controls
  2. Disable "Allow model training on your conversations"
  3. Changes apply immediately

**Team / Enterprise / API**

- Retention: 30 days
- Usage: Safety monitoring, abuse prevention only
- Training: Not used for training by default (no opt-out needed)
- Applies to: Team plans, Enterprise plans, API users, third-party platforms, Claude Gov

**Zero Data Retention (ZDR)**

- Retention: 0 days server-side (local client cache may persist up to 30 days)
- Usage: None retained on Anthropic servers
- Requires: Appropriately configured API keys (see Anthropic documentation)
- Use cases: HIPAA (requires separate BAA), GDPR, PCI-DSS compliance, government contracts
Important: Data is encrypted in transit via TLS but is not encrypted at rest on Anthropic servers. Factor this into your security assessments.
Claude Code reads files to understand context. By default, this includes:
- `.env` and `.env.local` files (API keys, passwords)
- `credentials.json`, `secrets.yaml` (service accounts)
- SSH keys if in workspace scope
- Database connection strings
Mitigation: Block these files with `permissions.deny` (see Section 4).
When you configure database MCP servers (Neon, Supabase, PlanetScale):
Your Query: "Show me recent orders"
↓
MCP Executes: SELECT * FROM orders LIMIT 100
↓
Results Sent: 100 rows with customer names, emails, addresses
↓
Stored at Anthropic: According to your retention tier
Mitigation: Never connect production databases. Use dev/staging with anonymized data.
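One way to anonymize data before an MCP server ever sees it is to scrub a seed-data dump as you load it. The sketch below is illustrative only: the `anonymize_emails` helper, the regex, and the replacement address are assumptions to adapt to your own schema, not part of Claude Code.

```shell
#!/bin/bash
# Sketch: scrub email addresses from a seed-data dump before using it with MCP.
# The pattern and replacement are illustrative; extend for names, phones, etc.
anonymize_emails() {
  sed -E 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/user@example.com/g'
}

# Example: pipe a dump line through the filter
echo "alice.smith@corp.com ordered item 42" | anonymize_emails
# → user@example.com ordered item 42
```

The same filter can sit in a pipeline between `pg_dump` (or similar) and the file your dev database is restored from, so raw PII never lands in the workspace.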
Bash commands and their output are included in context:
```bash
# This output goes to Anthropic:
$ env | grep API
OPENAI_API_KEY=sk-abc123...
STRIPE_SECRET_KEY=sk_live_...
```

Mitigation: Use hooks to filter sensitive command outputs.
When you run /bug in Claude Code, your full conversation history (including all code, file contents, and potentially secrets) is sent to Anthropic for bug triage. This data is retained for 5 years, regardless of your training opt-out setting.
This is independent of your privacy preferences: even with training disabled and 30-day retention, bug reports follow their own 5-year retention policy.
Mitigation: Disable the command entirely if you work with sensitive codebases:
```bash
export DISABLE_BUG_COMMAND=1
```

Or add it to your shell profile (`~/.zshrc`, `~/.bashrc`) to make it permanent.
| Incident | Source |
|---|---|
| Claude reads `.env` by default | r/ClaudeAI, GitHub issues |
| DROP TABLE attempts on poorly configured MCP | r/ClaudeAI |
| Credentials exposed via environment variables | GitHub issues |
| Prompt injection via malicious MCP servers | r/programming |
- Visit https://claude.ai/settings/data-privacy-controls
- Toggle OFF "Allow model training"
- Retention reduces from 5 years to 30 days
In `.claude/settings.json`, use `permissions.deny` to block access to sensitive files:

```json
{
  "permissions": {
    "deny": [
      "Read(./.env*)",
      "Edit(./.env*)",
      "Write(./.env*)",
      "Bash(cat .env*)",
      "Bash(head .env*)",
      "Read(./secrets/**)",
      "Read(./**/credentials*)",
      "Read(./**/*.pem)",
      "Read(./**/*.key)",
      "Read(./**/service-account*.json)"
    ]
  }
}
```

Note: The old `excludePatterns` and `ignorePatterns` settings were deprecated in October 2025. Use `permissions.deny` instead.

Warning: `permissions.deny` has known limitations. For defense-in-depth, combine with security hooks and external secrets management.
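To double-check that deny rules are actually present in your settings file, a small script like the one below can help. It is a sketch, not an official Claude Code check: it assumes `jq` is installed, and `check_env_deny` is a hypothetical helper name.

```shell
#!/bin/bash
# Sketch: verify that a settings.json file contains .env deny rules.
check_env_deny() {
  local settings="$1"
  local count
  # List all deny rules (if any) and count those mentioning .env
  count=$(jq -r '.permissions.deny[]?' "$settings" | grep -c '\.env')
  if [ "$count" -gt 0 ]; then
    echo "OK: $count .env deny rule(s) found"
  else
    echo "MISSING: no .env deny rules" >&2
    return 1
  fi
}

# Usage (from your project root):
#   check_env_deny .claude/settings.json
```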
Create `.claude/hooks/PreToolUse.sh`:

```bash
#!/bin/bash
INPUT=$(cat)
TOOL_NAME=$(echo "$INPUT" | jq -r '.tool.name')

if [[ "$TOOL_NAME" == "Read" ]]; then
  FILE_PATH=$(echo "$INPUT" | jq -r '.tool.input.file_path')
  # Block reading sensitive files
  if [[ "$FILE_PATH" =~ \.env|credentials|secrets|\.pem|\.key ]]; then
    echo "BLOCKED: Attempted to read sensitive file: $FILE_PATH" >&2
    exit 2  # Block the operation
  fi
fi

exit 0  # Allow all other operations
```

Claude Code connects to third-party services for operational metrics (Statsig) and error logging (Sentry). These do not include your code or file paths, but you can disable them entirely:
| Variable | What it Disables |
|---|---|
| `DISABLE_TELEMETRY=1` | Statsig operational metrics (latency, reliability, usage patterns) |
| `DISABLE_ERROR_REPORTING=1` | Sentry error logging |
| `DISABLE_BUG_COMMAND=1` | The `/bug` command (prevents sending full conversation history) |
| `CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1` | All non-essential network traffic at once |
| `CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY=1` | Session quality surveys (note: surveys only send your numeric rating, never transcripts) |
Add these to your shell profile for permanent effect:
```bash
# In ~/.zshrc or ~/.bashrc
export DISABLE_TELEMETRY=1
export DISABLE_ERROR_REPORTING=1
export DISABLE_BUG_COMMAND=1
```

Note: When using Bedrock, Vertex, or Foundry providers, all non-essential traffic (telemetry, error reporting, bug command, surveys) is disabled by default.
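After reloading your profile, you can confirm the variables are visible to new processes. A minimal bash sketch (the `show_optouts` helper is hypothetical; it relies on bash indirect expansion, so run it in bash specifically):

```shell
#!/bin/bash
# Sketch: print the current value of each opt-out variable, or "unset".
show_optouts() {
  local v
  for v in DISABLE_TELEMETRY DISABLE_ERROR_REPORTING DISABLE_BUG_COMMAND; do
    printf '%s=%s\n' "$v" "${!v:-unset}"  # ${!v} = indirect expansion
  done
}

show_optouts
```

Any line that prints `unset` means that opt-out will not apply to Claude Code sessions started from this shell.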
| Rule | Rationale |
|---|---|
| Never connect production databases | All query results sent to Anthropic |
| Use read-only database users | Prevents DROP/DELETE/UPDATE accidents |
| Anonymize development data | Reduces PII exposure risk |
| Create minimal test datasets | Less data = less risk |
| Audit MCP server sources | Third-party MCPs may have vulnerabilities |
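The "never connect production databases" rule above can be partially automated with a pre-flight check. The sketch below greps an MCP config for production-looking references; the `.mcp.json` path, the `check_mcp_config` name, and the host patterns are assumptions to adapt to your setup.

```shell
#!/bin/bash
# Sketch: flag MCP server configs that appear to reference production.
check_mcp_config() {
  local config="$1"
  # "prod" / "production" in a connection string is a red flag
  if grep -qE 'prod(uction)?' "$config"; then
    echo "WARNING: production reference found in $config"
    return 1
  fi
  echo "OK: no production references in $config"
}

# Usage: check_mcp_config .mcp.json
```

Wiring this into CI or a pre-commit hook catches a production connection string before Claude Code ever runs with it.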
| Environment | Recommendation |
|---|---|
| Development | Opt-out + exclusions + anonymized data |
| Staging | Consider Enterprise API if handling real data |
| Production | NEVER connect Claude Code directly |
| Feature | Claude Code + MCP | Cursor | GitHub Copilot |
|---|---|---|---|
| Data scope sent | Full SQL results, files | Code snippets | Code snippets |
| Production DB access | Yes (via MCP) | Limited | Not designed for |
| Default retention | 5 years | Variable | 30 days |
| Training by default | Yes | Opt-in | Opt-in |
Key difference: MCP creates a unique attack surface because MCP servers are separate processes with independent network/filesystem access.
Consider a stricter tier (Enterprise API or ZDR) when any of the following apply:

- Handling PII (names, emails, addresses)
- Regulated industries (HIPAA, GDPR, PCI-DSS)
- Client data processing
- Government contracts
- Financial services
Before rollout, confirm:

- Data classification policy exists for your organization
- API tier matches data sensitivity requirements
- Team trained on privacy controls
- Incident response plan for potential data exposure
- Legal/compliance review completed
| Resource | URL |
|---|---|
| Privacy settings | https://claude.ai/settings/data-privacy-controls |
| Anthropic usage policy | https://www.anthropic.com/policies |
| Enterprise information | https://www.anthropic.com/enterprise |
| Terms of service | https://www.anthropic.com/legal/consumer-terms |
```bash
# Check current Claude config
claude /config

# Verify exclusions are loaded
claude /status

# Run privacy audit
./examples/scripts/audit-scan.sh
```

- Training opt-out enabled at claude.ai/settings
- `.env*` files blocked via `permissions.deny` in settings.json
- No production database connections via MCP
- Security hooks installed for sensitive file access
- Team aware of data flow to Anthropic
Disclaimer: This is not legal advice. Consult a qualified attorney for your specific situation.
When using AI code generation tools, discuss these points with your legal team:
| Consideration | What to Discuss |
|---|---|
| Ownership | Copyright status of AI-generated code remains legally unsettled in most jurisdictions |
| License contamination | Training data may include open-source code with copyleft licenses (GPL, AGPL) that could affect your codebase |
| Vendor indemnification | Some enterprise plans offer legal protection (e.g., Microsoft Copilot Enterprise includes IP indemnification) |
| Sector compliance | Regulated industries (healthcare, finance, government) may have additional IP requirements |
This guide focuses on Claude Code usage—not legal strategy. For IP guidance, consult specialized legal resources or your organization's legal counsel.
Anthropic published Claude's constitution in January 2026 (CC0 license - public domain). This document defines the value hierarchy that guides Claude's behavior:
Priority Order (used to resolve conflicts):
- Broadly safe - Never compromise human supervision and control
- Broadly ethical - Honesty, harm avoidance, good conduct
- Anthropic compliance - Internal guidelines and policies
- Genuinely helpful - Real utility for users and society
| Scenario | Expected Behavior |
|---|---|
| Security-sensitive requests | Claude prioritizes safety over helpfulness (may be more conservative) |
| Borderline biology/chemistry | May decline or ask for context to assess safety implications |
| Ethical conflicts | Will follow hierarchy: safety > ethics > compliance > utility |
- Training data source: Constitution is used to generate synthetic training examples
- Behavior specification: Reference document explaining intended vs. accidental outputs
- Audit & governance: Provides legal/ethical foundation for compliance reviews
- Your own agents: CC0 license allows reuse/adaptation for custom models
- Constitution full text: https://www.anthropic.com/constitution
- PDF version: https://www-cdn.anthropic.com/.../claudes-constitution.pdf
- Announcement: https://www.anthropic.com/news/claude-new-constitution
- Alignment research: https://alignment.anthropic.com/
- 2026-02: Fixed retention model (3 tiers to 4 tiers), added /bug command warning, telemetry opt-out variables, encryption-at-rest disclosure, updated ZDR conditions
- 2026-01: Added Claude's governance & constitutional AI framework section
- 2026-01: Added intellectual property considerations section
- 2026-01: Initial version - documenting retention policies and protective measures