
Add guide for customizing Presidio Docker images #1792

Open
SAIRAMSSSS wants to merge 3 commits into microsoft:main from SAIRAMSSSS:main

Conversation

@SAIRAMSSSS

Summary

This PR adds comprehensive documentation for building and customizing Presidio Docker images to support additional languages.

Changes

  • Created docs/docker_customization.md with detailed instructions on:
    • Modifying Dockerfiles for language support
    • Configuring YAML files for custom recognizers
    • Building and running custom Docker images
    • Common pitfalls and troubleshooting tips (memory issues, NLP recognizer warnings)
    • Complete examples with docker-compose

Addresses Issue

Closes #1663

This documentation fulfills the request for more elaborate instructions on building custom Docker images for Presidio, specifically covering:

  • Which YAML files to modify for multi-language support
  • Typical pitfalls when adding 10+ languages
  • How to resolve common warnings like "NLP recognizer is not in the list of recognizers"

Testing

Documentation has been reviewed for accuracy and completeness. All code examples follow Presidio's existing patterns.

This document provides a comprehensive guide on how to build and customize Presidio Docker images to support additional languages and configurations, including prerequisites, steps for modification, and troubleshooting tips.
@microsoft-github-policy-service
Contributor

@SAIRAMSSSS please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.

@microsoft-github-policy-service agree [company="{your company}"]

Options:

  • (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
@microsoft-github-policy-service agree
  • (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
@microsoft-github-policy-service agree company="Microsoft"
Contributor License Agreement

Contribution License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
and conveys certain license rights to Microsoft Corporation and its affiliates (“Microsoft”) for Your
contributions to Microsoft open source projects. This Agreement is effective as of the latest signature
date below.

  1. Definitions.
    “Code” means the computer software code, whether in human-readable or machine-executable form,
    that is delivered by You to Microsoft under this Agreement.
    “Project” means any of the projects owned or managed by Microsoft and offered under a license
    approved by the Open Source Initiative (www.opensource.org).
    “Submit” is the act of uploading, submitting, transmitting, or distributing code or other content to any
    Project, including but not limited to communication on electronic mailing lists, source code control
    systems, and issue tracking systems that are managed by, or on behalf of, the Project for the purpose of
    discussing and improving that Project, but excluding communication that is conspicuously marked or
    otherwise designated in writing by You as “Not a Submission.”
    “Submission” means the Code and any other copyrightable material Submitted by You, including any
    associated comments and documentation.
  2. Your Submission. You must agree to the terms of this Agreement before making a Submission to any
    Project. This Agreement covers any and all Submissions that You, now or in the future (except as
    described in Section 4 below), Submit to any Project.
  3. Originality of Work. You represent that each of Your Submissions is entirely Your original work.
    Should You wish to Submit materials that are not Your original work, You may Submit them separately
    to the Project if You (a) retain all copyright and license information that was in the materials as You
    received them, (b) in the description accompanying Your Submission, include the phrase “Submission
    containing materials of a third party:” followed by the names of the third party and any licenses or other
    restrictions of which You are aware, and (c) follow any other instructions in the Project’s written
    guidelines concerning Submissions.
  4. Your Employer. References to “employer” in this Agreement include Your employer or anyone else
    for whom You are acting in making Your Submission, e.g. as a contractor, vendor, or agent. If Your
    Submission is made in the course of Your work for an employer or Your employer has intellectual
    property rights in Your Submission by contract or applicable law, You must secure permission from Your
    employer to make the Submission before signing this Agreement. In that case, the term “You” in this
    Agreement will refer to You and the employer collectively. If You change employers in the future and
    desire to Submit additional Submissions for the new employer, then You agree to sign a new Agreement
    and secure permission from the new employer before Submitting those Submissions.
  5. Licenses.
  • Copyright License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license in the
    Submission to reproduce, prepare derivative works of, publicly display, publicly perform, and distribute
    the Submission and such derivative works, and to sublicense any or all of the foregoing rights to third
    parties.
  • Patent License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license under
    Your patent claims that are necessarily infringed by the Submission or the combination of the
    Submission with the Project to which it was Submitted to make, have made, use, offer to sell, sell and
    import or otherwise dispose of the Submission alone or with the Project.
  • Other Rights Reserved. Each party reserves all rights not expressly granted in this Agreement.
    No additional licenses or rights whatsoever (including, without limitation, any implied licenses) are
    granted by implication, exhaustion, estoppel or otherwise.
  6. Representations and Warranties. You represent that You are legally entitled to grant the above
    licenses. You represent that each of Your Submissions is entirely Your original work (except as You may
    have disclosed under Section 3). You represent that You have secured permission from Your employer to
    make the Submission in cases where Your Submission is made in the course of Your work for Your
    employer or Your employer has intellectual property rights in Your Submission by contract or applicable
    law. If You are signing this Agreement on behalf of Your employer, You represent and warrant that You
    have the necessary authority to bind the listed employer to the obligations contained in this Agreement.
    You are not expected to provide support for Your Submission, unless You choose to do so. UNLESS
    REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING, AND EXCEPT FOR THE WARRANTIES
    EXPRESSLY STATED IN SECTIONS 3, 4, AND 6, THE SUBMISSION PROVIDED UNDER THIS AGREEMENT IS
    PROVIDED WITHOUT WARRANTY OF ANY KIND, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY OF
    NONINFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
  7. Notice to Microsoft. You agree to notify Microsoft in writing of any facts or circumstances of which
    You later become aware that would make Your representations in this Agreement inaccurate in any
    respect.
  8. Information about Submissions. You agree that contributions to Projects and information about
    contributions may be maintained indefinitely and disclosed publicly, including Your name and other
    information that You submit with Your Submission.
  9. Governing Law/Jurisdiction. This Agreement is governed by the laws of the State of Washington, and
    the parties consent to exclusive jurisdiction and venue in the federal courts sitting in King County,
    Washington, unless no federal subject matter jurisdiction exists, in which case the parties consent to
    exclusive jurisdiction and venue in the Superior Court of King County, Washington. The parties waive all
    defenses of lack of personal jurisdiction and forum non-conveniens.
  10. Entire Agreement/Assignment. This Agreement is the entire agreement between the parties, and
    supersedes any and all prior agreements, understandings or communications, written or oral, between
    the parties relating to the subject matter hereof. This Agreement may be assigned by Microsoft.

Collaborator

@omri374 left a comment

Thanks! This is a great start! Left some comments for discussion.


Navigate to `presidio-analyzer/Dockerfile` and add your desired spaCy language models.

### Example: Adding Spanish Support
Collaborator

Presidio supports the installation of spacy, stanza and transformers models using the NLP config, so there is no need to explicitly add those to the Dockerfile. Have you given this a try?
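
For illustration, a minimal sketch of that configuration-driven approach (key names follow Presidio's NLP engine configuration format; `conf/default.yaml` is assumed as the target file and may differ between versions):

```yaml
# conf/default.yaml (assumed path) – models listed here are installed and loaded
# by Presidio; no explicit `spacy download` lines are needed in the Dockerfile
nlp_engine_name: spacy          # can also be stanza or transformers
models:
  - lang_code: en
    model_name: en_core_web_lg
  - lang_code: es
    model_name: es_core_news_md
```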

**Problem**: Adding 10+ languages at once can cause the Docker image to run out of memory during build or runtime.

**Solutions**:
- Use smaller spaCy models (e.g., `es_core_news_sm` instead of `es_core_news_lg`)
Collaborator

Please add a caveat about smaller models likely being less accurate in detecting PII in the text

docker run -d -p 5002:3000 --memory="4g" presidio-analyzer-custom:latest
```
- Build images with only the languages you actually need
- Consider using transformers models which can be more memory-efficient
Collaborator

Not sure this is true. Do you have a concrete example?

- `md` (medium): ~40MB, balanced
- `lg` (large): ~500MB+, most accurate but resource-intensive

**Recommendation**: Start with `md` models for a good balance.
Collaborator

Our recommendation is to start with the large models

# Install spaCy language models
RUN python -m spacy download en_core_web_lg
RUN python -m spacy download es_core_news_md
RUN python -m spacy download fr_core_news_md
Collaborator

If you download models but do not configure the NER model configuration, Presidio will ignore those models.
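
To illustrate the point above, a hedged sketch of the `ner_model_configuration` block that accompanies the model list in the NLP configuration file (field names assumed from Presidio's config format; verify against your version):

```yaml
# maps the NER model's entity labels to Presidio entity types; without this block
# and a matching `models` entry, downloaded models are not used at runtime
ner_model_configuration:
  model_to_presidio_entity_mapping:
    PER: PERSON
    PERSON: PERSON
    LOC: LOCATION
    GPE: LOCATION
    ORG: ORGANIZATION
  labels_to_ignore:
    - O
```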

Contributor

Copilot AI left a comment

Pull request overview

This PR adds a new documentation file providing guidance on building and customizing Presidio Docker images for multi-language support, addressing issue #1663.

Changes:

  • New comprehensive documentation file docs/docker_customization.md with instructions on Dockerfile modifications, YAML configurations, common pitfalls, and docker-compose examples

Comment on lines +84 to +90
## Step 6: Run Your Custom Image

Run the custom image:

```bash
docker run -d -p 5002:3000 presidio-analyzer-custom:latest
```
Copilot AI Jan 12, 2026

The Docker run command uses port 5002 for the external mapping but the Dockerfile default PORT environment variable is 3000 (as seen in the actual Dockerfile line 13). This creates confusion about which internal port the service is actually running on.

The documentation should be consistent with the actual Presidio Dockerfile which uses PORT=3000 by default. The command should either be:

  • docker run -d -p 5002:3000 presidio-analyzer-custom:latest (using default PORT=3000)
  • Or document that users can override the PORT environment variable if needed

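A sketch of the PORT-override option mentioned above, expressed in compose form (values are illustrative; the PORT variable is the one described in the comment):

```yaml
services:
  presidio-analyzer:
    image: presidio-analyzer-custom:latest
    environment:
      - PORT=8080          # override the container's default port (3000)
    ports:
      - "5002:8080"        # host:container – the container side must match PORT
```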
- [Presidio Analyzer Documentation](https://microsoft.github.io/presidio/analyzer/)
- [spaCy Language Models](https://spacy.io/models)
- [Presidio Custom Recognizers](https://microsoft.github.io/presidio/analyzer/adding_recognizers/)
- [Analyzer Engine Provider](https://microsoft.github.io/presidio/analyzer/analyzer_engine_provider/)
Copilot AI Jan 12, 2026

The link to "Analyzer Engine Provider" documentation appears to be inconsistent with the actual file name. The link uses /analyzer/analyzer_engine_provider/ (suggesting a directory), but the actual file in the repository is analyzer/analyzer_engine_provider.md (a single markdown file).

The correct link format should be:
[Analyzer Engine Provider](https://microsoft.github.io/presidio/analyzer/analyzer_engine_provider/)

This likely works in practice due to how MkDocs handles URLs, but it's better to be consistent with the actual file structure for clarity.

Suggested change
- [Analyzer Engine Provider](https://microsoft.github.io/presidio/analyzer/analyzer_engine_provider/)
- [Analyzer Engine Provider](https://microsoft.github.io/presidio/analyzer/analyzer_engine_provider.md)

For complex setups, use docker-compose.yml:

```yaml
version: '3.8'
Copilot AI Jan 12, 2026

The docker-compose.yml example uses version '3.8', but the actual Presidio docker-compose.yml files in the repository don't specify a version (which is the recommended practice for modern Docker Compose). The version field is deprecated in the latest Docker Compose specification.

Consider either:

  1. Removing the version: '3.8' line to follow current best practices
  2. Adding a note that the version field is optional in modern Docker Compose

This is a minor point but helps keep the documentation aligned with current Docker Compose conventions.

Suggested change
version: '3.8'

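For reference, a minimal sketch of the same compose file without the deprecated `version` key (service and image names are assumptions, not Presidio's published compose file):

```yaml
services:
  presidio-analyzer:
    build:
      context: ./presidio-analyzer
    image: presidio-analyzer-custom:latest
    ports:
      - "5002:3000"   # host port 5002 -> container port 3000 (the default PORT)
```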
docker run -d -p 5002:3000 --memory="4g" presidio-analyzer-custom:latest
```
- Build images with only the languages you actually need
- Consider using transformers models which can be more memory-efficient
Copilot AI Jan 12, 2026

The recommendation to "Consider using transformers models which can be more memory-efficient" may be misleading. Transformers models are typically more memory-intensive than smaller spaCy models, not less. The advantage of transformers is usually better accuracy for certain tasks, not memory efficiency.

If the intent is to suggest using a single multilingual transformer model instead of multiple language-specific spaCy models, this should be clarified. Otherwise, this recommendation could confuse users about the memory characteristics of transformers vs spaCy models.

Consider revising to: "Consider using a single multilingual transformers model instead of multiple language-specific spaCy models, which can reduce the total model size" or removing this point if it's not accurate for the intended use case.

Suggested change
- Consider using transformers models which can be more memory-efficient
- Consider using a single multilingual transformers model instead of multiple language-specific spaCy models, which can reduce the total model size
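
If the single-multilingual-model reading is the intended one, a hedged sketch of what that NLP configuration could look like (the key layout follows Presidio's transformers configuration format; the model name is only an example and should be replaced with a multilingual NER model of your choice):

```yaml
nlp_engine_name: transformers
models:
  - lang_code: en
    model_name:
      spacy: en_core_web_sm                                       # tokenization/lemmas only
      transformers: Davlan/bert-base-multilingual-cased-ner-hrl   # shared multilingual NER model
  - lang_code: es
    model_name:
      spacy: es_core_news_sm
      transformers: Davlan/bert-base-multilingual-cased-ner-hrl
```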

Comment on lines +138 to +154
WORKDIR /usr/bin/presidio-analyzer

# Copy requirements
COPY requirements.txt .
RUN pip install -r requirements.txt

# Install spaCy language models
RUN python -m spacy download en_core_web_lg
RUN python -m spacy download es_core_news_md
RUN python -m spacy download fr_core_news_md

# Copy application code
COPY . .

EXPOSE 3000

CMD ["python", "app.py"]
Copilot AI Jan 12, 2026

The Dockerfile example provided is incomplete and doesn't match the actual Presidio architecture. The real Presidio Dockerfile:

  1. Uses Python 3.12-slim (not 3.11-slim) as of the current version
  2. Uses Poetry for dependency management, not direct pip install
  3. Installs models via a configuration-driven approach using install_nlp_models.py, not direct spacy download commands
  4. Has a more complex structure with environment variables, non-root user setup, and health checks

This simplified example could mislead users into thinking they should manually add RUN python -m spacy download commands, when they should actually be modifying the YAML configuration files. Either remove this example or replace it with accurate guidance on modifying the NLP configuration file (e.g., conf/default.yaml or using a custom configuration file via build args).

Suggested change
WORKDIR /usr/bin/presidio-analyzer
# Copy requirements
COPY requirements.txt .
RUN pip install -r requirements.txt
# Install spaCy language models
RUN python -m spacy download en_core_web_lg
RUN python -m spacy download es_core_news_md
RUN python -m spacy download fr_core_news_md
# Copy application code
COPY . .
EXPOSE 3000
CMD ["python", "app.py"]
FROM python:3.12-slim
# Install system dependencies and Poetry
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl build-essential && \
    pip install --no-cache-dir poetry && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /usr/bin/presidio-analyzer
# Copy project metadata and install Python dependencies with Poetry
COPY pyproject.toml poetry.lock ./
RUN poetry install --no-dev --no-interaction --no-ansi
# Copy NLP configuration which controls which spaCy models are installed
# Customize this YAML file (or provide your own via build args) instead of
# adding `python -m spacy download ...` commands in the Dockerfile.
COPY conf ./conf
# Install NLP models based on the configuration file
RUN poetry run python -m presidio_analyzer.install_nlp_models --conf-file conf/default.yaml
# Copy application code
COPY . .
# Create and switch to a non-root user
RUN useradd -m presidio && chown -R presidio /usr/bin/presidio-analyzer
USER presidio
EXPOSE 3000
# Simple health check endpoint is expected at /health
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD curl -fsS http://localhost:3000/health || exit 1
# Start the analyzer service (command may vary by version; see official Dockerfile)
CMD ["poetry", "run", "gunicorn", "-b", "0.0.0.0:3000", "presidio_analyzer.entrypoints.app:app"]

### Modified Dockerfile

```dockerfile
FROM python:3.11-slim
Copilot AI Jan 12, 2026

The Dockerfile example specifies Python 3.11-slim, but the actual Presidio Dockerfile uses Python 3.12-slim (as seen in line 1 of the actual Dockerfile). While Python 3.11 is supported by Presidio according to the installation documentation, the example should match the current official Dockerfile to avoid confusion.

Consider updating to Python 3.12-slim to match the current official implementation, or add a note that users can use any supported Python version (3.10-3.13 according to installation.md).

Suggested change
FROM python:3.11-slim
FROM python:3.12-slim

Comment on lines +40 to +58
Navigate to `presidio-analyzer/Dockerfile` and add your desired spaCy language models.

### Example: Adding Spanish Support

In the Dockerfile, locate the section where spaCy models are downloaded and add:

```dockerfile
RUN python -m spacy download es_core_news_md
```

### Example: Adding Multiple Languages

```dockerfile
# Install language models
RUN python -m spacy download en_core_web_lg
RUN python -m spacy download es_core_news_md # Spanish
RUN python -m spacy download fr_core_news_md # French
RUN python -m spacy download de_core_news_md # German
```
Copilot AI Jan 12, 2026

The documentation incorrectly describes how to add language models to the Dockerfile. The actual Presidio Dockerfile uses a configuration-based approach where models are installed automatically via the install_nlp_models.py script that reads from configuration YAML files (like default.yaml), not by directly adding RUN python -m spacy download commands in the Dockerfile.

The correct approach is to modify the NLP configuration file (e.g., presidio_analyzer/conf/default.yaml) to specify which models should be installed. The Dockerfile already contains the logic to read this configuration and install the models automatically during the build process at line 36: RUN poetry run python install_nlp_models.py --conf_file ${NLP_CONF_FILE}

This section should be rewritten to reflect the actual architecture and direct users to modify the YAML configuration files instead.

Comment on lines +62 to +73
### Update Configuration File

Modify the recognizers configuration to support your languages. Edit `presidio-analyzer/presidio_analyzer/conf/default_recognizers.yaml`:

```yaml
# Add supported languages
supported_languages:
- en
- es
- fr
- de
```
Copilot AI Jan 12, 2026

This YAML configuration section is incorrect. The default_recognizers.yaml file already contains a supported_languages list at the top level (line 1-2 of the actual file), and it uses the format:

supported_languages: 
  - en

The documentation should clarify that users need to:

  1. Modify the NLP configuration file (e.g., presidio_analyzer/conf/default.yaml) to add language models
  2. Update the top-level supported_languages list in default_recognizers.yaml to include new language codes
  3. Optionally add or update individual recognizers with language-specific context words

The current documentation incorrectly suggests adding a supported_languages block under the recognizers configuration without explaining the proper structure.
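
A hedged sketch of that structure in `default_recognizers.yaml` (recognizer names and field layout assumed from the description above; only the top-level `supported_languages` list and one recognizer entry are shown):

```yaml
supported_languages:
  - en
  - es
recognizers:
  - name: CreditCardRecognizer
    type: predefined
    supported_languages:
      - language: en
        context: [credit, card, visa]
      - language: es
        context: [crédito, tarjeta]   # language-specific context words
```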

Comment on lines +107 to +119
### 2. Warning: NLP Recognizer Not in List

If you see warnings like:
```
UserWarning: NLP recognizer (e.g. SpacyRecognizer, StanzaRecognizer) is not in the list of recognizers for language en.
```

**Solution**: Ensure your language configuration matches your installed models:

1. Check `default_recognizers.yaml` includes your language
2. Verify the spaCy model is properly downloaded in the Dockerfile
3. Ensure the language code matches (e.g., 'en' for English, 'es' for Spanish)

Copilot AI Jan 12, 2026

The warning about "NLP recognizer is not in the list of recognizers" is misleading. This warning typically occurs when the NLP engine configuration (spacy models) doesn't match the recognizer registry configuration, not just when language configuration doesn't match installed models.

The solution provided is incomplete. Based on the actual Presidio architecture:

  1. The default_recognizers.yaml file controls which recognizers are loaded and which languages they support
  2. The NLP configuration file (e.g., default.yaml) controls which spaCy models are installed
  3. These two must be aligned: if you add Spanish support, you need BOTH the Spanish spaCy model in the NLP config AND Spanish language support declared in the recognizer registry

The documentation should clarify that this warning appears when recognizers are configured for a language but no NLP model is configured for that language in the NLP configuration file.
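
To make the alignment requirement concrete, a minimal sketch of the two files agreeing on a language code (file names and keys assumed; both must declare `es` for the warning to disappear):

```yaml
# conf/default.yaml – the NLP engine must load a model for `es`
nlp_engine_name: spacy
models:
  - lang_code: es
    model_name: es_core_news_lg
---
# conf/default_recognizers.yaml – the recognizer registry must also declare `es`
supported_languages:
  - es
```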

Comment on lines +32 to +37

The key files for customization are:

- `presidio-analyzer/Dockerfile`: Defines the analyzer Docker image
- `presidio-analyzer/presidio_analyzer/conf/default_recognizers.yaml`: Configures recognizers

Copilot AI Jan 12, 2026

The documentation doesn't mention that users need to update the NLP configuration file (default.yaml) to specify which language models to install. Based on the actual Presidio Dockerfile (line 36), models are installed by running install_nlp_models.py --conf_file ${NLP_CONF_FILE}, which reads from the configuration file.

For multi-language support, users should either:

  1. Modify presidio_analyzer/conf/default.yaml to add additional models, OR
  2. Create a custom NLP configuration file (e.g., spacy_multilingual.yaml which already exists in the repo) and pass it as a build arg

The current documentation focuses on modifying the Dockerfile directly, which is not the recommended approach according to the actual Presidio architecture.
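
A hedged sketch of the build-arg route in docker-compose form (the `NLP_CONF_FILE` arg name is taken from the comment above; the config path is an assumption and should be checked against the repository layout):

```yaml
services:
  presidio-analyzer:
    build:
      context: ./presidio-analyzer
      args:
        NLP_CONF_FILE: presidio_analyzer/conf/spacy_multilingual.yaml
    image: presidio-analyzer-custom:latest
    ports:
      - "5002:3000"
```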
