feat(C4): add unsafe deserialization prohibition for model artifacts (4.5.10) #635
Open
RicoKomenda wants to merge 1 commit into OWASP:main from
Conversation
Summary
Adds 4.5.10 to C4.5 (AI Workload Sandboxing & Validation) to address one of the most commonly exploited attack vectors against AI systems: arbitrary code execution via unsafe model artifact deserialization.
New control:
Level: 1
Why this is needed
Python's pickle format, widely used for serializing PyTorch model checkpoints, executes arbitrary Python code at load time. A malicious model file with a crafted pickle payload achieves remote code execution on any system that loads it -- no exploit is required beyond `torch.load(malicious_file)`. This has been demonstrated in multiple CVEs and is one of the most common supply chain attacks against ML systems. Hugging Face and PyTorch both document it as a known risk and recommend SafeTensors as a safer alternative.

No existing AISVS control addresses this. C4.5.1 requires external models to run in sandboxes, but that is post-load isolation: the code execution happens during `load()`, before the sandbox has any effect. C6.1.2 requires scanning for malicious layers, but generic malware scanners do not parse pickle opcodes; a format-aware scanner (e.g., picklescan, ModelScan) is specifically required.

Level 1 is appropriate: this is a prerequisite security control. Unsafe deserialization is the #1 documented model supply chain attack and is verifiable with tooling available today.
Changes
- `1.0/en/0x10-C04-Infrastructure.md`: add 4.5.10; fix MD060 separator rows and spacing on 4.5.1/4.5.2
- `1.0/en/0x93-Appendix-D_AI_Security_Controls_Inventory.md`: add entry to AD.12