
linux.pagecache.RecoverFs consuming massive memory until reaped #1975

@halpomeranz

Description

Describe the bug
I recently ran linux.pagecache.RecoverFs against an 8 GB memory sample. By the time Volatility was killed by the out-of-memory (OOM) killer, it had consumed over 90 GB of memory and produced 13,615 lines of output, but no tar.gz file.

Context
Volatility Version: 2.27.1
Operating System: Debian 12 (Bookworm)
Python Version: 3.11.2
Suspected Operating System: Debian 13
Command: vol -q -f memory.lime linux.pagecache.RecoverFs

To Reproduce
Steps to reproduce the behavior:

  1. Download sample image from https://deerrunassoc-my.sharepoint.com/:u:/g/personal/hal_deer-run_com/IQBYc4qlNKNASKeZwreVYtcIAQnSo3yEZPiyQk8CYtTkOvI?e=Wrk496
  2. Run "vol -q -f memory.lime linux.pagecache.RecoverFs"
  3. Watch memory usage climb using "top" or similar
  4. Wait for process to be reaped

Expected behavior
The plugin should not require over 90 GB of RAM to process an 8 GB image; memory usage should stay within some bounded multiple of the sample size.

I would further suggest that the plugin incrementally dump files as it recovers them, rather than accumulating everything before writing the archive. That way, even if the plugin aborts partway through, the analyst still gets partial results. See the sketch below.
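
For illustration only, here is a minimal sketch of the incremental approach using Python's standard tarfile module in streaming mode. The recover_incrementally function and the (path, bytes) iterator it consumes are hypothetical stand-ins for however the plugin actually surfaces recovered files, not its real interface; the point is just that a "w|gz" stream flushes each member to disk as it is added instead of buffering the whole archive in RAM.

```python
import io
import tarfile

def recover_incrementally(recovered_files, archive_path="recovered_fs.tar.gz"):
    # Open the tarball in streaming ("w|gz") mode: each member is compressed
    # and flushed to disk as it is added, so nothing accumulates in memory
    # and an aborted run still leaves a readable (if truncated) archive.
    with tarfile.open(archive_path, mode="w|gz") as tar:
        for path, data in recovered_files:
            info = tarfile.TarInfo(name=path.lstrip("/"))
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

if __name__ == "__main__":
    # Hypothetical stand-in for the plugin's page-cache walker: an iterator
    # yielding (path, file contents) pairs, one recovered file at a time.
    sample = iter([("/etc/hostname", b"debian13\n")])
    recover_incrementally(sample)
```

With something like this, a run killed by the OOM killer would still leave an archive containing every file written up to that point, which is exactly the partial-results behavior suggested above.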

Example output
The memory image is available above. Generate whatever output is most useful.

Additional information
N/A
