Feature Request: Support Shared Memory Option #370

Description

@nl-sm

I'm using the maxminddb Python library in a WSGI server with multiple worker processes that perform lookups against an MMDB database. I'd like each worker process to access the same shared memory in the backend. I can achieve this if I call the maxminddb.open_database function before the processes get forked (the preload option in gunicorn). However, this doesn't work once the file is updated: unless I restart the entire service, each worker has to reload the file separately and therefore allocates additional memory.
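For illustration, here's a minimal stdlib sketch of the preload pattern (Unix-only, since it uses os.fork). A plain mmap of a temp file stands in for a maxminddb MODE_MMAP reader; the point is that a mapping created in the parent before forking is inherited by the children rather than duplicated:

```python
import mmap
import os
import tempfile

# Stand-in for the MMDB file; the contents are arbitrary.
fd, path = tempfile.mkstemp()
os.write(fd, b"geoip-data" + b"\x00" * 4086)
os.close(fd)

# "Preload": the mapping is created once in the parent, as calling
# maxminddb.open_database(..., maxminddb.MODE_MMAP) before forking would.
with open(path, "rb") as f:
    db = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)

pid = os.fork()
if pid == 0:
    # Worker: uses the inherited mapping; no extra copy of the file.
    assert db[:10] == b"geoip-data"
    os._exit(0)

os.waitpid(pid, 0)
print(db[:10])  # b'geoip-data'
os.unlink(path)
```

The limitation described above follows directly: once the file on disk is replaced, each worker that re-opens it gets a new, separate mapping.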

I could update the code to use a single worker process with threaded workers, or run a separate process alongside the WSGI server that receives requests from the worker processes and performs the IP lookups, but I'd like to avoid that added complexity while still having multiple processes capable of performing lookups.

I think a shared memory option in the C library would be really useful: it would allow the reader to act more like a database connection pool, so separate processes could perform lookups against the same shared memory. I'm not very familiar with how shared memory and mmap work, but I did see some related discussion on this issue, so I thought I'd open a separate issue since it's something that would be useful (for me at least).
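One relevant property of mmap, sketched below with the stdlib (Unix-only; a temp file stands in for the MMDB file): read-only mappings of the same file created by independent processes are backed by the same page-cache pages, so re-opening the database per worker maps, rather than copies, the data (assuming a MODE_MMAP-style mapping, not MODE_MEMORY, which reads the whole file into each process):

```python
import mmap
import os
import tempfile

# Stand-in for the MMDB file.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 4096)
os.close(fd)

pid = os.fork()
if pid == 0:
    # Worker: creates its *own* mapping of the same file, as a worker
    # re-opening the database after an update would.
    with open(path, "rb") as f:
        child_map = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    os._exit(0 if child_map[:4] == b"xxxx" else 1)

with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
_, status = os.waitpid(pid, 0)
assert os.waitstatus_to_exitcode(status) == 0  # child saw the same bytes
print(m[:4])  # b'xxxx'
os.unlink(path)
```

Note this only shows that both processes can map the same file; whether the kernel actually shares physical pages between the mappings depends on the OS page cache, and a true shared-memory mode in the library would presumably need to manage that explicitly.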
